WorldWideScience

Sample records for "accurately estimate excess"

  1. Rosiglitazone: can meta-analysis accurately estimate excess cardiovascular risk given the available data? Re-analysis of randomized trials using various methodologic approaches

    Directory of Open Access Journals (Sweden)

    Friedrich Jan O

    2009-01-01

    […] although far from statistically significant. Conclusion: We have shown that alternative reasonable methodological approaches to the rosiglitazone meta-analysis can yield increased or decreased risks that are either statistically significant or not significant at the p = 0.05 level for both myocardial infarction and cardiovascular death. Completion of ongoing trials may help to generate more accurate estimates of rosiglitazone's effect on cardiovascular outcomes. However, given that almost all point estimates suggest harm rather than benefit and the availability of alternative agents, the use of rosiglitazone may greatly decline prior to more definitive safety data being generated.

  2. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan;

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. […] are collected within the building complex. Results indicate that InTraTime is superior with respect to metrics such as deployment cost, maintenance cost and estimation accuracy, yielding an average deviation from actual travel times of 11.7%. This accuracy was achieved despite using a minimal-effort setup […]

  3. Accurate pose estimation for forensic identification

    Science.gov (United States)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  4. Excess functions and estimation of the extreme-value index

    OpenAIRE

    Beirlant, Jan; Vynckier, Petra; Teugels, Josef L.

    1996-01-01

    A general class of estimators of the extreme-value index is generated using estimates of mean, median and trimmed excess functions. Special cases yield earlier proposals in the literature, such as Pickands' (1975) estimator. A particular restatement of the mean excess function yields an estimator which can be derived from the slope at the right upper tail from a generalized quantile plot. From this viewpoint algorithms can be constructed to search for the number of extremes needed to minimize...
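    Pickands' (1975) estimator mentioned in this record is built from three upper order statistics. A minimal sketch in Python; the Pareto sample and the choice of k are illustrative, not from the paper:

```python
import numpy as np

def pickands_estimator(x, k):
    """Pickands' (1975) estimator of the extreme-value index.

    Uses the three upper order statistics X_(n-k+1), X_(n-2k+1), X_(n-4k+1);
    requires 4*k <= n.
    """
    xs = np.sort(x)
    n = len(xs)
    if 4 * k > n:
        raise ValueError("need 4*k <= n")
    top = xs[n - k] - xs[n - 2 * k]
    bottom = xs[n - 2 * k] - xs[n - 4 * k]
    return np.log(top / bottom) / np.log(2.0)

# Heavy-tailed sample: classical Pareto with tail index 2,
# so the true extreme-value index is gamma = 1/2.
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, size=100_000) + 1.0
gamma_hat = pickands_estimator(sample, k=2_000)
```

For an exact Pareto tail the quantile ratio inside the logarithm converges to 2^gamma, so the estimate should sit near 0.5 here; in practice the choice of k (the number of extremes) drives the bias-variance trade-off the abstract alludes to.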

  5. Accurate estimator of correlations between asynchronous signals

    OpenAIRE

    Toth, Bence; Kertesz, Janos

    2008-01-01

    The estimation of the correlation between time series is often hampered by the asynchronicity of the signals. Cumulating data within a time window suppresses this source of noise but weakens the statistics. We present a method to estimate correlations without applying long time windows. We decompose the correlations of data cumulated over a long window using decay of lagged correlations as calculated from short window data. This increases the accuracy of the estimated correlation significantl...
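    The decomposition described here rests on a standard identity: the covariance of signals cumulated over a long window equals a triangular-weighted sum of lagged cross-covariances measured at the short scale. A toy numerical check with synthetic series (the series construction is illustrative, not the authors' data or estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 200_000, 10                 # fine-resolution length, coarse window size
z = rng.standard_normal(T + 5)
x = z[5:] + 0.3 * rng.standard_normal(T)              # correlated pair with
y = z[:T] + z[3:T + 3] + 0.3 * rng.standard_normal(T) # lagged dependence

# Direct: covariance of the series cumulated over long (coarse) windows
m = T // n
X = x[: m * n].reshape(m, n).sum(axis=1)
Y = y[: m * n].reshape(m, n).sum(axis=1)
direct = np.mean((X - X.mean()) * (Y - Y.mean()))

# Decomposed: triangular-weighted sum of short-scale lagged cross-covariances
def ccov(a, b, k):
    """Sample cross-covariance Cov(a_t, b_{t+k})."""
    if k < 0:
        return ccov(b, a, -k)
    return np.mean((a[: len(a) - k] - a.mean()) * (b[k:] - b.mean()))

decomposed = sum((n - abs(k)) * ccov(x, y, k) for k in range(-(n - 1), n))
```

The decomposed value uses many more fine-resolution samples per lag, which is the statistical advantage the abstract points to when long cumulation windows are avoided.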

  6. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  7. Star Position Estimation Improvements for Accurate Star Tracker Attitude Estimation

    OpenAIRE

    Delabie, Tjorven

    2015-01-01

    This paper presents several methods to improve the estimation of the star positions in a star tracker, using a Kalman Filter. The accuracy with which the star positions can be estimated greatly influences the accuracy of the star tracker attitude estimate. In this paper, a Kalman Filter with low computational complexity that can be used to estimate the star positions based on star tracker centroiding data and gyroscope data is discussed. The performance of this Kalman Filter can be increased...

  8. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponent is presented. Finally, we give some examples to verify the feasibility of our result.

  9. Accurate quantum state estimation via "Keeping the experimentalist honest"

    CERN Document Server

    Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  10. Efficient and Accurate Robustness Estimation for Large Complex Networks

    CERN Document Server

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computation effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
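    As a rough illustration of metric-induced robustness estimation, the sketch below runs a degree-centrality attack on a small synthetic graph and accumulates the normalized giant-component size. This is the baseline notion the paper speeds up, not the authors' sub-quadratic algorithm; the graph and its parameters are made up:

```python
import numpy as np
from collections import deque

def largest_component(adj, alive):
    """Size of the largest connected component among alive nodes (BFS)."""
    seen, best = set(), 0
    for s in range(len(adj)):
        if alive[s] and s not in seen:
            q, size = deque([s]), 0
            seen.add(s)
            while q:
                u = q.popleft()
                size += 1
                for v in adj[u]:
                    if alive[v] and v not in seen:
                        seen.add(v)
                        q.append(v)
            best = max(best, size)
    return best

# Erdos-Renyi-style random graph as a stand-in for a real-world network
rng = np.random.default_rng(6)
n, p = 200, 0.04
adj = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            adj[i].append(j)
            adj[j].append(i)

# Degree-centrality attack: repeatedly remove the highest-degree alive node,
# accumulating the normalized giant-component size (robustness R)
alive = [True] * n
total = 0.0
for _ in range(n):
    deg = [sum(alive[v] for v in adj[u]) if alive[u] else -1 for u in range(n)]
    alive[int(np.argmax(deg))] = False
    total += largest_component(adj, alive) / n
R = total / n
```

Each attack step here recomputes degrees and the giant component, which is exactly the quadratic-and-worse cost profile that motivates faster approximate estimators.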

  11. Accurate parameter estimation for unbalanced three-phase system.

    Science.gov (United States)

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
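    A sketch of the pipeline described above, assuming a balanced three-phase signal for simplicity; a grid search with parabolic refinement stands in for the paper's Newton-Raphson scheme, and all waveform parameters are illustrative:

```python
import numpy as np

def clarke(va, vb, vc):
    """alpha-beta (Clarke) transform: three phases -> one complex signal."""
    alpha = (2 * va - vb - vc) / 3.0
    beta = (vb - vc) / np.sqrt(3.0)
    return alpha + 1j * beta

# Synthetic balanced three-phase waveform (illustrative parameters)
fs, f0, A, phi = 5000.0, 50.2, 1.0, 0.3
t = np.arange(2048) / fs
noise = 0.01 * np.random.default_rng(2).standard_normal((3, t.size))
va = A * np.cos(2 * np.pi * f0 * t + phi) + noise[0]
vb = A * np.cos(2 * np.pi * f0 * t + phi - 2 * np.pi / 3) + noise[1]
vc = A * np.cos(2 * np.pi * f0 * t + phi + 2 * np.pi / 3) + noise[2]
s = clarke(va, vb, vc)   # for a balanced set this is A*exp(j(2*pi*f0*t + phi))

# NLS frequency estimate: for a cisoid model, minimizing the least-squares
# cost is equivalent to maximizing |mean(s * exp(-j*2*pi*f*t))|
def nls_cost(f):
    return -np.abs(np.mean(s * np.exp(-2j * np.pi * f * t)))

grid = np.linspace(45.0, 55.0, 1001)                      # coarse search
f_coarse = grid[np.argmin([nls_cost(f) for f in grid])]
h = grid[1] - grid[0]                                     # parabolic refinement
c0, cm, cp = nls_cost(f_coarse), nls_cost(f_coarse - h), nls_cost(f_coarse + h)
f_hat = f_coarse + 0.5 * h * (cm - cp) / (cm - 2 * c0 + cp)
```

Once the frequency is fixed, amplitude and phase follow in closed form from the complex mean used in the cost, which is why the nonlinear search is over frequency alone.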

  12. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    Directory of Open Access Journals (Sweden)

    Li C Xia

    Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or their variants, and often result in biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) by explicitly modeling read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using the Mixture Model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the data sets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
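    A toy version of the read-assignment mixture idea, with a made-up three-genome emission model standing in for real alignment likelihoods; EM recovers the mixing weights, which are then corrected for genome size, in the spirit of GRAMMy:

```python
import numpy as np

rng = np.random.default_rng(3)
true_mix = np.array([0.7, 0.2, 0.1])       # true read-origin proportions
genome_len = np.array([5e6, 2e6, 8e6])     # genome sizes (bp), illustrative

# Emission model: a read from genome g matches g with prob 0.9 and each
# other genome with prob 0.05 (a stand-in for alignment ambiguity)
Q = np.full((3, 3), 0.05)
np.fill_diagonal(Q, 0.9)

n_reads = 20000
origin = rng.choice(3, size=n_reads, p=true_mix)
flip = rng.random(n_reads) >= 0.9
obs = np.where(flip, (origin + rng.integers(1, 3, n_reads)) % 3, origin)
lik = Q[:, obs].T                          # lik[r, g] = P(read r | genome g)

# EM for the read-assignment mixture weights
a = np.ones(3) / 3
for _ in range(200):
    z = a * lik                            # E-step: responsibilities
    z /= z.sum(axis=1, keepdims=True)
    a = z.mean(axis=0)                     # M-step: updated mixing weights

# Genome relative abundance: correct the read share for genome size
abundance = (a / genome_len) / (a / genome_len).sum()
```

Dividing the estimated read share by genome length before renormalizing is the genome-size correction: a long genome attracts proportionally more reads per cell, so its raw read share overstates its abundance.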

  13. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  14. Evaluation of accurate eye corner detection methods for gaze estimation

    OpenAIRE

    Bengoechea, Jose Javier; Cerrolaza, Juan J.; Villanueva, Arantxa; Cabeza, Rafael

    2014-01-01

    Accurate detection of iris center and eye corners appears to be a promising approach for low cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance and feature based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We have demonstrated that a method based on a neural network presents the best performance even in light changin...

  15. Towards SI-traceable radio occultation excess phase processing with integrated uncertainty estimation for climate applications

    Science.gov (United States)

    Innerkofler, Josef; Pock, Christian; Kirchengast, Gottfried; Schwaerz, Marc; Jaeggi, Adrian; Schwarz, Jakob

    2016-04-01

    The GNSS Radio Occultation (RO) measurement technique is highly valuable for climate monitoring of the atmosphere as it provides accurate and precise measurements in the troposphere and stratosphere regions with global coverage, long-term stability, and virtually all-weather capability. The novel Reference Occultation Processing System (rOPS), currently under development at the WEGC at the University of Graz, aims to process raw RO measurements into essential climate variables, such as temperature, pressure, and tropospheric water vapor, in a way which is SI-traceable to the universal time standard and which includes rigorous uncertainty propagation. As part of this rOPS climate-quality processing system, accurate atmospheric excess phase profiles with new approaches integrating uncertainty propagation are derived from the raw occultation tracking data and orbit data. Regarding the latter, highly accurate orbit positions and velocities of the GNSS transmitter satellites and the RO receiver satellites in low Earth orbit (LEO) need to be determined, in order to enable high accuracy of the excess phase profiles. Using several representative test days of GPS orbit data from the CODE and IGS archives, which are available at accuracies of about 3 cm (position) / 0.03 mm/s (velocity), and employing Bernese 5.2 and Napeos 3.3.1 software packages for the LEO orbit determination of the CHAMP, GRACE, and MetOp RO satellites, we achieved robust SI-traced LEO orbit uncertainty estimates of about 5 cm (position) / 0.05 mm/s (velocity) for the daily orbits, including estimates of systematic uncertainty bounds and of propagated random uncertainties. For COSMIC RO satellites, we found decreased accuracy estimates near 10-15 cm (position) / 0.1-0.15 mm/s (velocity), since the characteristics of the small COSMIC satellite platforms and antennas provide somewhat less favorable orbit determination conditions.
We present the setup of how we (I) used the Bernese and Napeos package in mutual

  16. Accurate location estimation of moving object in wireless sensor networks

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    One of the central issues in wireless sensor networks is tracking the location of a moving object, which entails the overhead of storing data and demands accurate estimation of the target's location under energy constraints. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that reduces the number of sensor nodes required through accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  17. Accurate estimators of correlation functions in Fourier space

    Science.gov (United States)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

    Efficient estimators of Fourier-space statistics for large number of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for e.g. the power spectrum when desired systematic biases are well under per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher order interpolation kernels than the standard Cloud-In-Cell algorithm results in significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
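    A 1D sketch of Cloud-In-Cell assignment plus the interlacing step described above; the grid size, particle count, and the sign convention of the half-cell phase factor are illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def cic_deposit(pos, ng, boxsize=1.0, shift=0.0):
    """Cloud-In-Cell mass assignment onto a 1D periodic grid of ng cells.

    `shift` (in cell units) displaces the grid; shift=0.5 gives the
    interlaced grid offset by half a cell.
    """
    x = pos / boxsize * ng - shift
    i = np.floor(x).astype(int)
    frac = x - i
    rho = np.zeros(ng)
    np.add.at(rho, i % ng, 1.0 - frac)   # each particle splits its unit
    np.add.at(rho, (i + 1) % ng, frac)   # mass over two neighboring cells
    return rho

rng = np.random.default_rng(4)
pos = rng.random(10000)                  # particle positions in [0, 1)
ng = 64

rho = cic_deposit(pos, ng)
rho_half = cic_deposit(pos, ng, shift=0.5)   # grid interlaced by half a cell

# Combine the two grids in Fourier space; the half-cell phase factor is
# what cancels the odd alias images (sign depends on the shift convention)
k = np.fft.fftfreq(ng) * ng                  # integer wavenumbers
delta_k = 0.5 * (np.fft.fft(rho)
                 + np.exp(1j * np.pi * k / ng) * np.fft.fft(rho_half))
```

CIC splits each particle's unit mass between its two nearest cells, so both grids conserve total mass exactly; the interlaced combination then suppresses the leading alias contributions without enlarging the grid.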

  18. How utilities can achieve more accurate decommissioning cost estimates

    International Nuclear Information System (INIS)

    The number of commercial nuclear power plants that are undergoing decommissioning coupled with the economic pressure of deregulation has increased the focus on adequate funding for decommissioning. The introduction of spent-fuel storage and disposal of low-level radioactive waste into the cost analysis places even greater concern as to the accuracy of the fund calculation basis. The size and adequacy of the decommissioning fund have also played a major part in the negotiations for transfer of plant ownership. For all of these reasons, it is important that the operating plant owner reduce the margin of error in the preparation of decommissioning cost estimates. To date, all of these estimates have been prepared via the building block method. That is, numerous individual calculations defining the planning, engineering, removal, and disposal of plant systems and structures are performed. These activity costs are supplemented by the period-dependent costs reflecting the administration, control, licensing, and permitting of the program. This method will continue to be used in the foreseeable future until adequate performance data are available. The accuracy of the activity cost calculation is directly related to the accuracy of the inventory of plant system components, piping and equipment, and plant structural composition. Typically, it is left up to the cost-estimating contractor to develop this plant inventory. The data are generated by searching and analyzing property asset records, plant databases, piping and instrumentation drawings, piping system isometric drawings, and component assembly drawings. However, experience has shown that these sources may not be up to date, discrepancies may exist, there may be missing data, and the level of detail may not be sufficient. Again, typically, the time constraints associated with the development of the cost estimate preclude perfect resolution of the inventory questions. Another problem area in achieving accurate cost

  19. Accurate estimation of third-order moments from turbulence measurements

    Directory of Open Access Journals (Sweden)

    J. J. Podesta

    2009-02-01

    Politano and Pouquet's law, a generalization of Kolmogorov's four-fifths law to incompressible MHD, makes it possible to measure the energy cascade rate in incompressible MHD turbulence by means of third-order moments. In hydrodynamics, accurate measurement of third-order moments requires large amounts of data because the probability distributions of velocity-differences are nearly symmetric and the third-order moments are relatively small. Measurements of the energy cascade rate in solar wind turbulence have recently been performed for the first time, but without careful consideration of the accuracy or statistical uncertainty of the required third-order moments. This paper investigates the statistical convergence of third-order moments as a function of the sample size N. It is shown that the accuracy of the third moment ⟨(δv∥)³⟩ depends on the number of correlation lengths spanned by the data set, and a method of estimating the statistical uncertainty of the third moment is developed. The technique is illustrated using both wind tunnel data and solar wind data.
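    A sketch of the convergence question raised in this record: estimate a third-order moment of increments from a synthetic correlated signal and attach an uncertainty based on the effective number of independent samples. The AR(1) signal, lag, and autocorrelation cutoff are all illustrative choices, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic signal with a finite correlation length: AR(1) noise,
# correlation time roughly 1/(1 - phi) samples
N, phi = 200_000, 0.995
eps = rng.standard_normal(N)
v = np.empty(N)
v[0] = eps[0]
for i in range(1, N):
    v[i] = phi * v[i - 1] + eps[i]

lag = 100
dv = v[lag:] - v[:-lag]              # increments delta-v at a fixed lag
third = np.mean(dv ** 3)             # third-order moment estimate

# Uncertainty: divide the record length by the integrated correlation
# time of dv**3 to get an effective number of independent samples
y = dv ** 3 - third
c0 = np.mean(y * y)
tau = 1.0
for k in range(1, 5000):
    r = np.mean(y[:-k] * y[k:]) / c0
    if r < 0.05:                     # crude cutoff for the correlation tail
        break
    tau += 2.0 * r
n_eff = y.size / tau
std_err = np.sqrt(c0 / n_eff)
```

Because the synthetic signal is Gaussian, the true third moment is zero, and the estimate should be consistent with zero within a few standard errors; with genuinely skewed turbulence data the same bookkeeping quantifies how many correlation lengths are needed for convergence.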

  20. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Directory of Open Access Journals (Sweden)

    Gagandeep Singh

    2013-05-01

    This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes, is adopted for potential eye center candidates. The votes are distributed over a subset of pixels which lie in the direction opposite to the gradient direction, and the weighting of votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763

  1. Using inpainting to construct accurate cut-sky CMB estimators

    CERN Document Server

    Gruetjen, H F; Liguori, M; Shellard, E P S

    2015-01-01

    The direct evaluation of manifestly optimal, cut-sky CMB power spectrum and bispectrum estimators is numerically very costly, due to the presence of inverse-covariance filtering operations. This justifies the investigation of alternative approaches. In this work, we mostly focus on an inpainting algorithm that was introduced in recent CMB analyses to cure cut-sky suboptimalities of bispectrum estimators. First, we show that inpainting can equally be applied to the problem of unbiased estimation of power spectra. We then compare the performance of a novel inpainted CMB temperature power spectrum estimator to the popular apodised pseudo-$C_l$ (PCL) method and demonstrate, both numerically and with analytic arguments, that inpainted power spectrum estimates significantly outperform PCL estimates. Finally, we study the case of cut-sky bispectrum estimators, comparing the performance of three different approaches: inpainting, apodisation and a novel low-l leaning scheme. Providing an analytic argument why the loca...

  2. Simulation model accurately estimates total dietary iodine intake

    NARCIS (Netherlands)

    Verkaik-Kloosterman, J.; Veer, van 't P.; Ocke, M.C.

    2009-01-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed.

  3. Accurate photometric redshift probability density estimation - method comparison and application

    CERN Document Server

    Rau, Markus Michael; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-01-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which vastly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, that can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitudes less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular Neural Network code (ANNz). In our use case, this improvemen...

  4. Accurate walking and running speed estimation using wrist inertial data.

    Science.gov (United States)

    Bertschi, M; Celka, P; Delgado-Gonzalo, R; Lemay, M; Calvo, E M; Grossenbacher, O; Renevey, Ph

    2015-08-01

    In this work, we present an accelerometry-based device for robust running speed estimation integrated into a watch-like device. The estimation is based on inertial data processing, which consists of applying a leg-and-arm dynamic motion model to 3D accelerometer signals. This motion model requires a calibration procedure that can be done either on a known distance or on a constant speed period. The protocol includes walking and running speeds between 1.8 km/h and 19.8 km/h. Preliminary results based on eleven subjects are characterized by unbiased estimations with 2nd and 3rd quartiles of the relative error dispersion in the interval ±5%. These results are comparable to accuracies obtained with classical foot pod devices. PMID:26738169

  5. Techniques of HRV accurate estimation using a photoplethysmographic sensor

    OpenAIRE

    Álvarez Gómez, Laura

    2015-01-01

    The student will obtain a database of at least 20 volunteers, measuring 50 minutes of two-channel ECG, distal pulse measured at the finger, and breathing while listening to music. After the measurements, the student will develop algorithms to ascertain the proper processing of the pulse signal for estimating heart rate variability when compared to that obtained with the ECG. The effect of breathing on errors will be assessed, in order to facilitate the study of the heart rate variability (HRV)...

  6. Simulation model accurately estimates total dietary iodine intake.

    Science.gov (United States)

    Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C

    2009-07-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and iodization of industrially processed foods. To be able to take into account these uncertainties in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both former and current legislation, iodine intake was adequate for a large part of the Dutch population, but some young children may have had inadequate intakes. In the scenario with reduced salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased, unless many more foods will contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.

  7. How accurate are the time delay estimates in gravitational lensing?

    CERN Document Server

    Cuevas-Tello, Juan C.; Raychaudhury, Somak; Tino, Peter

    2006-01-01

    We present a novel approach to estimate the time delay between light curves of multiple images in a gravitationally lensed system, based on Kernel methods in the context of machine learning. We perform various experiments with artificially generated irregularly-sampled data sets to study the effect of the various levels of noise and the presence of gaps of various size in the monitoring data. We compare the performance of our method with various other popular methods of estimating the time delay and conclude, from experiments with artificial data, that our method is least vulnerable to missing data and irregular sampling, within reasonable bounds of Gaussian noise. Thereafter, we use our method to determine the time delays between the two images of quasar Q0957+561 from radio monitoring data at 4 cm and 6 cm, and conclude that if only the observations at epochs common to both wavelengths are used, the time delay gives consistent estimates, which can be combined to yield 408 ± 12 days. The full 6 cm dataset, ...

  8. Accurate Estimators of Correlation Functions in Fourier Space

    CERN Document Server

    Sefusatti, Emiliano; Scoccimarro, Roman; Couchman, Hugh

    2015-01-01

    Efficient estimators of Fourier-space statistics for large number of objects rely on Fast Fourier Transforms (FFTs), which are affected by aliasing from unresolved small scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for e.g. the power spectrum when desired systematic biases are well under per-cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud in Cell a...

  9. Accurate tempo estimation based on harmonic + noise decomposition

    Directory of Open Access Journals (Sweden)

    Bertrand David

    2007-01-01

    Full Text Available We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming stage searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as the tempo. Our proposal is validated using a manually annotated test base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
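    The periodicity-estimation stage can be caricatured by autocorrelating an onset-strength envelope; this toy stand-in omits the harmonic + noise decomposition, the accentuation measure and the dynamic-programming tracking described above:

```python
import numpy as np

def tempo_from_onsets(onset_env, fs, bpm_range=(60, 180)):
    """Pick the tempo whose period maximizes the autocorrelation of an
    onset-strength envelope sampled at fs frames per second. A toy stand-in
    for the salience + dynamic-programming stages of the paper."""
    ac = np.correlate(onset_env, onset_env, mode='full')[len(onset_env) - 1:]
    lo = int(fs * 60.0 / bpm_range[1])   # shortest period of interest (frames)
    hi = int(fs * 60.0 / bpm_range[0])   # longest period of interest (frames)
    best = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * fs / best

# synthetic onset envelope: one click every 50 frames at 100 frames/s = 120 BPM
fs = 100
env = np.zeros(fs * 10)
env[::50] = 1.0
bpm = tempo_from_onsets(env, fs)
print(bpm)   # → 120.0
```

    On real signals the autocorrelation also peaks at half and double tempo, which is why the paper tracks consistent periodicities over time instead of taking a single global maximum.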

  10. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  11. Accurate location estimation of moving object with energy constraint & adaptive update algorithms to save data

    CERN Document Server

    Semwal, Vijay Bhaskar; Bhaskar, Vinay S; Sati, Meenakshi

    2011-01-01

    In the research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data", one of the central issues in sensor networks is tracking the location of a moving object, which carries the overhead of saving data, while obtaining an accurate estimate of the target location under an energy constraint. We do not have any mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring; once we have this information, it can be fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that uses fewer sensor nodes, the number being reduced by an accurate estimation of the target location. We use a minimum of three sensor nodes to obtain an accurate position. We can extend it up to four or five to find a more accurate location ...

  12. 49 CFR Appendix G to Part 222 - Excess Risk Estimates for Public Highway-Rail Grade Crossings

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Excess Risk Estimates for Public Highway-Rail Grade Crossings G Appendix G to Part 222 Transportation Other Regulations Relating to Transportation... HIGHWAY-RAIL GRADE CROSSINGS Pt. 222, App. G Appendix G to Part 222—Excess Risk Estimates for...

  13. Revised estimates of influenza-associated excess mortality, United States, 1995 through 2005

    Directory of Open Access Journals (Sweden)

    Hossain Md Monir

    2008-12-01

    Full Text Available Abstract Background Excess mortality due to seasonal influenza is thought to be substantial. However, influenza may often not be recognized as the cause of death. Imputation methods are therefore required to assess the public health impact of influenza. The purpose of this study was to obtain estimates of monthly excess mortality due to influenza that are based on an epidemiologically meaningful model. Methods and Results U.S. monthly all-cause mortality, 1995 through 2005, was hierarchically modeled as a Poisson variable with a mean that depends linearly both on seasonal covariates and on influenza-certified mortality. The model also allowed for overdispersion to account for extra variation that is not captured by the Poisson error. The coefficient associated with influenza-certified mortality was interpreted as the ratio of total influenza mortality to influenza-certified mortality. Separate models were fitted for four age categories. Conclusion Annual estimates of influenza mortality were highly variable from year to year, but they were systematically lower than previously published estimates. The excellent fit of our model with the data suggests the validity of our estimates.
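    The paper's model is hierarchical Bayesian with overdispersion; as a minimal non-hierarchical analogue, an identity-link Poisson regression (so the influenza coefficient is directly the total-to-certified mortality ratio) can be fitted by iteratively reweighted least squares. All simulated numbers below are illustrative:

```python
import numpy as np

def poisson_identity_fit(X, y, n_iter=50):
    """Maximum-likelihood fit of an identity-link Poisson model E[y] = X @ beta
    by iteratively reweighted least squares (weights 1/mu). A bare-bones,
    non-hierarchical analogue of the mortality model described above."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting point
    for _ in range(n_iter):
        mu = np.maximum(X @ beta, 1e-6)              # keep the mean positive
        w = 1.0 / mu                                 # Poisson variance = mean
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

# simulate 120 months: seasonal baseline plus 2.5 total deaths per certified flu death
rng = np.random.default_rng(4)
t = np.arange(120)
flu_certified = rng.poisson(150, 120) * rng.integers(0, 2, 120)   # sporadic flu months
X = np.column_stack([np.ones(120),
                     np.cos(2 * np.pi * t / 12),
                     np.sin(2 * np.pi * t / 12),
                     flu_certified])
mu_true = 8000 + 500 * X[:, 1] + 300 * X[:, 2] + 2.5 * flu_certified
y = rng.poisson(mu_true).astype(float)
beta = poisson_identity_fit(X, y)
print(beta[3])   # ratio of total to certified influenza mortality, near 2.5
```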

  14. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators...

  15. Further result in the fast and accurate estimation of single frequency

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new fast and accurate method for estimating the frequency of a complex sinusoid in complex white Gaussian environments is proposed. The new estimator comprises applications of low-pass filtering, decimation, and frequency estimation by linear prediction. It is computationally efficient yet attains the Cramer-Rao bound at moderate signal-to-noise ratios, and it is well suited for real-time applications requiring precise frequency estimation. Simulation results are included to demonstrate the performance of the proposed method.
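    The filter/decimate/predict pipeline can be sketched for a single complex tone; the block-averaging low-pass and the coarse FFT stage below are illustrative simplifications, not the paper's actual filter design:

```python
import numpy as np

def estimate_freq(x, fs, decim=10):
    """Two-stage estimate of a single complex tone: FFT-peak coarse search, then
    mix to baseband, block-average and decimate (a crude low-pass), and refine
    via the phase of the lag-1 autocorrelation (first-order linear prediction)."""
    n = len(x)
    t = np.arange(n) / fs
    f_coarse = np.fft.fftfreq(n, 1.0 / fs)[np.argmax(np.abs(np.fft.fft(x)))]
    base = x * np.exp(-2j * np.pi * f_coarse * t)      # residual tone near DC
    m = (n // decim) * decim
    dec = base[:m].reshape(-1, decim).mean(axis=1)     # low-pass + decimate
    f_fine = np.angle(np.sum(dec[1:] * np.conj(dec[:-1]))) * (fs / decim) / (2 * np.pi)
    return f_coarse + f_fine

rng = np.random.default_rng(1)
fs, n, f0 = 1000.0, 1000, 123.456
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f0 * t) + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
f_est = estimate_freq(x, fs)
print(f_est)   # close to 123.456 Hz
```

    Decimation narrows the noise bandwidth before the linear-prediction step, which is why the residual frequency can be read off far more precisely than the FFT bin spacing.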

  16. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    Science.gov (United States)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique for coarse frequency estimation (locating the peak of the FFT amplitude) is more efficient than conventional searching methods. Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
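    The coarse stage can be sketched from zero-crossing spacings alone; the paper's fine stage (refinement around this guess) and its specific modified technique are omitted here, and the signal values are illustrative:

```python
import numpy as np

def coarse_zero_crossing(x, fs):
    """Coarse frequency estimate from the mean spacing of zero crossings of a
    real signal. Works when harmonics are weak enough not to add extra
    zero crossings; a fine search around this value would follow."""
    signs = np.signbit(x)
    c = np.nonzero(signs[1:] != signs[:-1])[0]
    # successive zero crossings are half a period apart on average
    return fs * (len(c) - 1) / (2.0 * (c[-1] - c[0]))

fs = 10_000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 440.0 * t) + 0.1 * np.sin(2 * np.pi * 880.0 * t)
f_c = coarse_zero_crossing(x, fs)
print(f_c)   # close to 440 Hz
```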

  17. Development of Classification and Story Building Data for Accurate Earthquake Damage Estimation

    Science.gov (United States)

    Sakai, Yuki; Fukukawa, Noriko; Arai, Kensuke

    We investigated a method of developing classification and story building data from a census population database in order to estimate earthquake damage more accurately, especially in urban areas, presuming that there is a correlation between the numbers of non-wooden or high-rise buildings and the population. We formulated equations estimating the numbers of wooden houses, low-to-mid-rise (1-9 story) and high-rise (over 10 story) non-wooden buildings in a 1 km mesh from night and daytime population databases, based on the building data we investigated and collected in 20 selected meshes in the Kanto area. We could accurately estimate the numbers of the three classified building types with the formulated equations, but in some special cases, such as apartment-block meshes, the estimated values differ considerably from the actual values.

  18. Extended Kalman Filter with a Fuzzy Method for Accurate Battery Pack State of Charge Estimation

    OpenAIRE

    Saeed Sepasi; Leon R. Roose; Marc M. Matsuura

    2015-01-01

    As the world moves toward greenhouse gas reduction, there is increasingly active work around Li-ion chemistry-based batteries as an energy source for electric vehicles (EVs), hybrid electric vehicles (HEVs) and smart grids. In these applications, the battery management system (BMS) requires an accurate online estimation of the state of charge (SOC) in a battery pack. This estimation is difficult, especially after substantial battery aging. In order to address this problem, this paper utilizes...

  19. On the accurate estimation of gap fraction during daytime with digital cover photography

    Science.gov (United States)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.

  20. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-01

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
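    The slice-integration idea — ellipse areas from two photographs, a density per slice, then summed mass, centre of mass and frontal-plane moment of inertia — can be sketched as follows (the inputs and the thin-slice approximation are assumptions, not the paper's full sectioned-ellipse model):

```python
import numpy as np

def segment_properties(widths, depths, densities, length):
    """Model a limb segment as a stack of elliptical slices (semi-axes from the
    frontal and sagittal photos) and integrate mass, centre of mass and the
    frontal-plane moment of inertia about the centre of mass. The inertia of
    each thin slice about its own centre is neglected."""
    n = len(widths)
    dz = length / n
    z = (np.arange(n) + 0.5) * dz                   # slice centres, proximal end = 0
    areas = np.pi * (widths / 2) * (depths / 2)     # ellipse area = pi * a * b
    masses = densities * areas * dz
    mass = masses.sum()
    com = np.sum(masses * z) / mass
    inertia = np.sum(masses * (z - com) ** 2)       # parallel-axis sum over slices
    return mass, com, inertia

# sanity check against a uniform circular cylinder (radius 5 cm, length 40 cm)
w = np.full(400, 0.10)
d = np.full(400, 0.10)
rho = np.full(400, 1000.0)
mass, com, inertia = segment_properties(w, d, rho, 0.4)
print(mass, com)   # mass ~ pi kg, centre of mass at mid-length
```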

  1. Simple, Fast and Accurate Photometric Estimation of Specific Star Formation Rate

    CERN Document Server

    Stensbo-Smidt, Kristoffer; Igel, Christian; Zirm, Andrew; Pedersen, Kim Steenstrup

    2015-01-01

    Large-scale surveys make huge amounts of photometric data available. Because of the sheer amount of objects, spectral data cannot be obtained for all of them. Therefore it is important to devise techniques for reliably estimating physical properties of objects from photometric information alone. These estimates are needed to automatically identify interesting objects worth a follow-up investigation as well as to produce the required data for a statistical analysis of the space covered by a survey. We argue that machine learning techniques are suitable to compute these estimates accurately and efficiently. This study considers the task of estimating the specific star formation rate (sSFR) of galaxies. It is shown that a nearest neighbours algorithm can produce better sSFR estimates than traditional SED fitting. We show that we can obtain accurate estimates of the sSFR even at high redshifts using only broad-band photometry based on the u, g, r, i and z filters from Sloan Digital Sky Survey (SDSS). We additionally...

  2. ACCURATE LOCATION ESTIMATION OF MOVING OBJECT WITH ENERGY CONSTRAINT & ADAPTIVE UPDATE ALGORITHMS TO SAVE DATA

    Directory of Open Access Journals (Sweden)

    Vijay Bhaskar Semwal

    2011-08-01

    Full Text Available In the research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data", one of the central issues in sensor networks is tracking the location of a moving object, which carries the overhead of saving data, while obtaining an accurate estimate of the target location under an energy constraint. We do not have any mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring; once we have this information, it can be fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that uses fewer sensor nodes, the number being reduced by an accurate estimation of the target location. We use a minimum of three sensor nodes to obtain an accurate position; this could be extended to four or five to find a more accurate location, but given the energy constraint we use three, with accurate estimation of the location helping us to reduce the number of sensor nodes. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object. The agent is mobile since it will choose the sensor closest to the object to stay at. The agent may invite some nearby slave sensors to cooperatively position the object and inhibit other irrelevant (i.e., farther) sensors from tracking it. As a result, the communication and sensing overheads are greatly reduced.
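    The "minimum of three sensor nodes" mentioned above corresponds to classical trilateration from range measurements; a linearized least-squares sketch with hypothetical anchor and target coordinates is:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Position from three (or more) range measurements: linearize
    ||p - a_i||^2 = r_i^2 against the first anchor and solve the resulting
    linear system in a least-squares sense."""
    a0 = anchors[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# three hypothetical sensor nodes and a target at (3, 7)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
target = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - target, axis=1)
pos = trilaterate(anchors, ranges)
print(pos)
```

    With noisy ranges, adding a fourth or fifth node overdetermines the system and the same least-squares solve averages the error down, which matches the accuracy/energy trade-off the abstract describes.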

  3. Accurate DOA Estimations Using Microstrip Adaptive Arrays in the Presence of Mutual Coupling Effect

    Directory of Open Access Journals (Sweden)

    Qiulin Huang

    2013-01-01

    Full Text Available A new mutual coupling calibration method is proposed for adaptive antenna arrays and is employed in the DOA estimations to calibrate the received signals. The new method is developed via the transformation between the embedded element patterns and the isolated element patterns. The new method is characterized by the wide adaptability of element structures such as dipole arrays and microstrip arrays. Additionally, the new method is suitable not only for the linear polarization but also for the circular polarization. It is shown that accurate calibration of the mutual coupling can be obtained for the incident signals in the 3 dB beam width and the wider angle range, and, consequently, accurate 1D and 2D DOA estimations can be obtained. Effectiveness of the new calibration method is verified by a linearly polarized microstrip ULA, a circularly polarized microstrip ULA, and a circularly polarized microstrip UCA.

  4. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Directory of Open Access Journals (Sweden)

    Beomsoo Hwang

    2015-04-01

    Full Text Available In exoskeletal robots, the quantification of the user’s muscular effort is important to recognize the user’s motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users’ muscular efforts accurately using a joint torque sensor, which contains the measurements of dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user’s limb accurately from the measured torque. The user’s limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user’s muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately in cases of relaxed and activated muscle conditions.

  5. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Science.gov (United States)

    Hwang, Beomsoo; Jeon, Doyoung

    2015-01-01

    In exoskeletal robots, the quantification of the user’s muscular effort is important to recognize the user’s motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users’ muscular efforts accurately using a joint torque sensor, which contains the measurements of dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user’s limb accurately from the measured torque. The user’s limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user’s muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately in cases of relaxed and activated muscle conditions. PMID:25860074
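    The core computation in these two records — subtracting modeled limb dynamics from the torque-sensor reading to isolate active muscular effort — can be sketched for a single joint. All parameter values below are illustrative, not the user-specific parameters identified in the paper, and the Coriolis terms vanish in this one-joint case:

```python
import numpy as np

def muscular_torque(tau_sensor, theta, alpha, m, l_c, inertia):
    """Recover active muscular torque by subtracting the modeled limb dynamics
    (inertial + gravitational terms of a single pendulum-like joint) from the
    joint torque sensor reading."""
    g = 9.81
    tau_dyn = inertia * alpha + m * g * l_c * np.sin(theta)
    return tau_sensor - tau_dyn

# a shank-like segment: the sensor reads dynamics plus 5 Nm of muscular effort
m, l_c, inertia = 3.5, 0.25, 0.06           # kg, m (to centre of mass), kg*m^2
theta, alpha = 0.4, 1.2                     # joint angle (rad), acceleration (rad/s^2)
tau_sensor = inertia * alpha + m * 9.81 * l_c * np.sin(theta) + 5.0
tau_m = muscular_torque(tau_sensor, theta, alpha, m, l_c, inertia)
print(tau_m)   # the 5 Nm of muscular effort is recovered
```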

  6. Accurately Estimating the State of a Geophysical System with Sparse Observations: Predicting the Weather

    CERN Document Server

    An, Zhe; Abarbanel, Henry D I

    2014-01-01

    Utilizing the information in observations of a complex system to make accurate predictions through a quantitative model when observations are completed at time $T$, requires an accurate estimate of the full state of the model at time $T$. When the number of measurements $L$ at each observation time within the observation window is larger than a sufficient minimum value $L_s$, the impediments in the estimation procedure are removed. As the number of available observations is typically such that $L \\ll L_s$, additional information from the observations must be presented to the model. We show how, using the time delays of the measurements at each observation time, one can augment the information transferred from the data to the model, removing the impediments to accurate estimation and permitting dependable prediction. We do this in a core geophysical fluid dynamics model, the shallow water equations, at the heart of numerical weather prediction. The method is quite general, however, and can be utilized in the a...

  7. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images

    Science.gov (United States)

    Lavoie, Benjamin R.; Okoniewski, Michal; Fear, Elise C.

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785

  8. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    Science.gov (United States)

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785

  9. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    International Nuclear Information System (INIS)

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volume of interests (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  10. Extended Kalman Filter with a Fuzzy Method for Accurate Battery Pack State of Charge Estimation

    Directory of Open Access Journals (Sweden)

    Saeed Sepasi

    2015-06-01

    Full Text Available As the world moves toward greenhouse gas reduction, there is increasingly active work around Li-ion chemistry-based batteries as an energy source for electric vehicles (EVs), hybrid electric vehicles (HEVs) and smart grids. In these applications, the battery management system (BMS) requires an accurate online estimation of the state of charge (SOC) in a battery pack. This estimation is difficult, especially after substantial battery aging. In order to address this problem, this paper utilizes SOC estimation of Li-ion battery packs using a fuzzy-improved extended Kalman filter (fuzzy-IEKF) for Li-ion cells, regardless of their age. The proposed approach introduces a fuzzy method with a new class and associated membership function that determines an approximate initial value applied to SOC estimation. Subsequently, the EKF method is used by considering the single unit model for the battery pack to estimate the SOC for following periods of battery use. This approach uses an adaptive model algorithm to update the model for each single cell in the battery pack. To verify the accuracy of the estimation method, tests are done on a LiFePO4 aged battery pack consisting of 120 cells connected in series with a nominal voltage of 432 V.
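    The EKF stage described above can be sketched as a one-state filter: a coulomb-counting process model corrected by a voltage measurement through a nonlinear OCV curve. The OCV polynomial, internal resistance and noise levels below are illustrative placeholders, not values from the paper, and the fuzzy initialization is replaced by a deliberately wrong initial guess:

```python
import numpy as np

class SocEKF:
    """One-state extended Kalman filter for battery SOC."""
    def __init__(self, soc0, capacity_as, r_int=0.05):
        self.s, self.P = soc0, 0.1        # state estimate and its variance
        self.Q_as, self.R_int = capacity_as, r_int
        self.q, self.r = 1e-7, 1e-3       # process / measurement noise variances

    @staticmethod
    def ocv(s):
        return 3.0 + 0.7 * s + 0.3 * s * s   # toy open-circuit-voltage curve

    def step(self, current, voltage, dt):
        # predict: coulomb counting (discharge current positive)
        self.s -= current * dt / self.Q_as
        self.P += self.q
        # update: linearize the OCV curve around the prediction
        h = 0.7 + 0.6 * self.s                          # dOCV/dSOC
        innov = voltage - (self.ocv(self.s) - self.R_int * current)
        k = self.P * h / (h * self.P * h + self.r)
        self.s += k * innov
        self.P *= (1.0 - k * h)
        return self.s

# 1 A discharge for 30 minutes; the filter starts from a wrong SOC of 0.5
rng = np.random.default_rng(5)
ekf = SocEKF(soc0=0.5, capacity_as=3600.0)
true_soc = 0.9
for _ in range(1800):
    true_soc -= 1.0 / 3600.0
    v = SocEKF.ocv(true_soc) - 0.05 * 1.0 + 0.01 * rng.standard_normal()
    est = ekf.step(current=1.0, voltage=v, dt=1.0)
print(est)   # close to the true final SOC of 0.4
```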

  11. An Accurate Method for the BDS Receiver DCB Estimation in a Regional Network

    Directory of Open Access Journals (Sweden)

    LI Xin

    2016-08-01

    Full Text Available An accurate approach for receiver differential code bias (DCB) estimation is proposed using BDS data obtained from a regional tracking network. In contrast to conventional methods for BDS receiver DCB estimation, the proposed method does not require a complicated ionosphere model, as long as the receiver DCB of one reference station is known. The main idea of this method is that the ionospheric delay normally depends strongly on the geometric range between the BDS satellite and the receiver. Therefore, the receiver DCBs of the non-reference stations in this regional area can be estimated using single differences (SD) with the reference stations. The numerical results show that the RMS error of these estimated BDS receiver DCBs over 30 days is about 0.3 ns. Additionally, after deduction of these estimated receiver DCBs and the known satellite DCBs, the extracted diurnal VTEC showed good agreement with the diurnal VTEC obtained from GIM interpolation, indicating the reliability of the estimated receiver DCBs.
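    The single-difference idea can be sketched numerically: for stations in a small regional network observing the same satellites, the ionospheric delay and the satellite DCBs are (approximately) common and cancel in the between-station difference, leaving the receiver DCB difference. The simulation below is an illustrative simplification, not BDS processing:

```python
import numpy as np

def receiver_dcb_single_diff(gf_ref, gf_rov, dcb_ref_ns):
    """Estimate a rover receiver's DCB from single differences of geometry-free
    code observations with a reference station whose DCB is known; common
    ionospheric delays and satellite DCBs cancel in the difference."""
    return dcb_ref_ns + np.mean(gf_rov - gf_ref)

# simulate 500 shared observations (all quantities in ns for simplicity)
rng = np.random.default_rng(6)
iono = rng.uniform(5.0, 30.0, 500)          # ionospheric delay, common to both
sat_dcb = rng.normal(0.0, 3.0, 500)         # satellite DCBs, common to both
dcb_ref, dcb_rov_true = 2.0, 3.5
gf_ref = iono + sat_dcb + dcb_ref + rng.normal(0.0, 0.3, 500)
gf_rov = iono + sat_dcb + dcb_rov_true + rng.normal(0.0, 0.3, 500)
dcb_est = receiver_dcb_single_diff(gf_ref, gf_rov, dcb_ref)
print(dcb_est)   # ≈ 3.5 ns
```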

  12. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    Science.gov (United States)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of any ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, SRK-II, etc., are all relatively accurate. However, for eyes that underwent refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may cause significant postoperative refractive error, which may cause poor satisfaction after cataract surgery. Although some methods have been carried out to solve this problem, such as the Haigis-L equation[1], or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method by accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopted the exactly measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and another post-LASIK patient agreed very well with their visual outcomes after cataract surgery.

  13. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT remained inaccurate for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator that accounts for the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed before the measurement by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  14. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program that identifies non-host sequences (of potential pathogen origin) and estimates their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences in a 20.1 million-read simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  15. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    Science.gov (United States)

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-01

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not presently available, making theoretical studies the most important source of information for optimizing phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include periodic local second-order Møller-Plesset perturbation theory augmented by higher-order corrections evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3 version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy of black-P, -151 meV/atom, is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397

  16. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    Science.gov (United States)

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting. Ten-minute seated rest periods were imposed during the workday to estimate the thermal component of HR (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 values estimated from measured HR and from corrected HR (thermal component removed) were compared with the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm) were observed, originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%), and the HR thermal component explained 74% of the variance in VO2 prediction error. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can thus be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by practitioners with inexpensive instruments. PMID:26851474
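
    The correction itself is straightforward to sketch. The example below is purely illustrative (the names and numbers are hypothetical, not the study's data): an individual linear HR-to-VO2 calibration from a step test is applied to working HR after the thermal component ΔHRT has been subtracted.

```python
import numpy as np

def calibrate(hr, vo2):
    """Least-squares fit of vo2 = a + b * hr from step-test data."""
    b, a = np.polyfit(hr, vo2, 1)  # polyfit returns [slope, intercept]
    return a, b

def predict_work_vo2(hr_work, dhrt, a, b):
    """Subtract the HR thermal component before applying the calibration."""
    return a + b * (np.asarray(hr_work) - dhrt)
```

    Using the raw working HR instead of the corrected one would inflate the prediction by b·ΔHRT, which is the overestimation effect described above.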

  17. Accurate satellite-derived estimates of the tropospheric ozone impact on the global radiation budget

    Directory of Open Access Journals (Sweden)

    J. Joiner

    2009-07-01

    Estimates of the radiative forcing due to anthropogenically produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites, as well as information from the GEOS-5 Data Assimilation System, to accurately estimate the radiative effect of tropospheric O3 for January and July 2005. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our derived radiative effect reflects the unadjusted (instantaneous) effect of the total tropospheric O3 rather than the anthropogenic component. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative budget. The estimates presented here can be used to evaluate various aspects of model-generated radiative forcing. For example, our derived cloud impact reduces the radiative effect of tropospheric ozone by ~16%, which is centered within the published range of model-produced cloud effects on unadjusted ozone radiative forcing.

  18. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets.

    Science.gov (United States)

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant "collective" variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum-distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
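
    The core idea, that the distribution of pairwise geodesic distances encodes the dimensionality, can be illustrated with a toy version. This is a deliberately simplified sketch, not the authors' estimator: it uses exact great-circle distances on a hypersphere instead of graph-based geodesic approximations, and fits the intrinsic dimension d by maximum likelihood against the d-sphere pairwise-distance density p(θ) ∝ sin^(d−1) θ.

```python
import numpy as np

def sphere_loglik(theta, d, n_grid=2000):
    """Log-likelihood of pairwise angles under p(t) ∝ sin^(d-1)(t) on (0, pi)."""
    t = np.linspace(1e-6, np.pi - 1e-6, n_grid)
    norm = np.sum(np.sin(t) ** (d - 1)) * (t[1] - t[0])  # numeric normalization
    return np.sum((d - 1) * np.log(np.sin(theta))) - theta.size * np.log(norm)

def estimate_id(points, candidates=range(1, 6)):
    """ML fit of the intrinsic dimension from pairwise great-circle distances
    of unit vectors (rows of `points`)."""
    g = np.clip(points @ points.T, -1.0, 1.0)
    theta = np.arccos(g[np.triu_indices(len(points), k=1)])
    theta = theta[(theta > 1e-3) & (theta < np.pi - 1e-3)]
    return max(candidates, key=lambda d: sphere_loglik(theta, d))
```

    For points drawn uniformly on the 2-sphere the fit recovers d = 2; the paper's estimator generalizes this principle to arbitrary manifolds via graph distances.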

  20. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    Science.gov (United States)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs j > i. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median, so MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
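
    The core of the estimator is easy to sketch. Below is a simplified MIDAS-style trend estimator (a hedged sketch, not the published algorithm): slopes are formed only from data pairs separated by approximately one year, then outliers are trimmed around the median using a MAD-based scale and the median is re-taken. The gap handling, uncertainty estimate, and other refinements of MIDAS are omitted.

```python
import numpy as np

def midas_like_trend(t, x, pair_span=1.0, tol=0.01):
    """Median of slopes over pairs separated by ~pair_span (years),
    with one pass of MAD-based outlier trimming."""
    t, x = np.asarray(t), np.asarray(x)
    slopes = []
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if abs(dt - pair_span) <= tol:      # first pair ~1 year apart
                slopes.append((x[j] - x[i]) / dt)
                break
            if dt > pair_span + tol:            # no suitable partner for i
                break
    slopes = np.array(slopes)
    med = np.median(slopes)
    mad = 1.4826 * np.median(np.abs(slopes - med))  # robust scale estimate
    kept = slopes[np.abs(slopes - med) <= 2 * mad]
    return np.median(kept) if kept.size else med
```

    On synthetic weekly data with an annual seasonal cycle and an undetected step, the one-year pairing cancels the seasonality exactly and the trimming discards the one-sided outliers from step-spanning pairs, so the true trend is recovered.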

  1. Infiltration-excess overland flow estimated by TOPMODEL for the conterminous United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This 5-kilometer resolution raster (grid) dataset for the conterminous United States represents the average percentage of infiltration-excess overland flow in total...

  2. Accurate estimation of the RMS emittance from single current amplifier data

    International Nuclear Information System (INIS)

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method that combines traditional data-reduction methods with statistical methods to obtain accurate estimates of the rms emittance. Rather than considering individual data points, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam, and is therefore subtracted from the data before the rms emittance is evaluated within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus once all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates of the average background and the rms emittance. Small, trendless variations within the plateaus allow the uncertainties of the estimates, caused by variations of the measured background outside the smallest acceptable exclusion boundary, to be determined. The robustness of the method is established with complementary variations of the exclusion boundary. A detailed comparison between traditional data-reduction methods and SCUBEEx is presented by analyzing two complementary sets of emittance data obtained with Lawrence Berkeley National Laboratory and ISIS H- ion sources
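
    A toy version of the exclusion-boundary idea can be demonstrated on synthetic data. This is an illustrative sketch with made-up numbers, not the SCUBEEx implementation: a Gaussian beam plus a uniform background is sampled on a phase-space grid, the mean density outside an elliptical boundary estimates the background, and the background-subtracted rms emittance is computed inside the boundary.

```python
import numpy as np

def rms_emittance(x, xp, w):
    """Density-weighted rms emittance sqrt(<x^2><x'^2> - <x x'>^2)."""
    w = np.clip(w, 0.0, None)
    s = w.sum()
    mx, mxp = (w * x).sum() / s, (w * xp).sum() / s
    cxx = (w * (x - mx) ** 2).sum() / s
    cpp = (w * (xp - mxp) ** 2).sum() / s
    cxp = (w * (x - mx) * (xp - mxp)).sum() / s
    return np.sqrt(cxx * cpp - cxp ** 2)

def emittance_with_exclusion(x, xp, d, a, b):
    """Estimate the background as the mean density outside the ellipse
    (x/a)^2 + (xp/b)^2 = 1, subtract it, and evaluate the rms emittance
    inside the boundary."""
    inside = (x / a) ** 2 + (xp / b) ** 2 <= 1.0
    background = d[~inside].mean()
    return rms_emittance(x[inside], xp[inside], d[inside] - background)
```

    In the full method the boundary is varied and the plateau of the inside emittance identifies the smallest acceptable boundary; here a single sufficiently large ellipse suffices for the synthetic case.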

  3. Validation of a wrist monitor for accurate estimation of RR intervals during sleep.

    Science.gov (United States)

    Renevey, Ph; Sola, J; Theurillat, P; Bertschi, M; Krauss, J; Andries, D; Sartori, C

    2013-01-01

    While the incidence of sleep disorders is continuously increasing in western societies, there is a clear demand for technologies to assess sleep-related parameters in ambulatory scenarios. The present study introduces a novel, accurate sensor concept for measuring RR intervals via the analysis of photo-plethysmographic signals recorded at the wrist. In a cohort of 26 subjects undergoing full-night polysomnography, the wrist device provided RR-interval estimates in agreement with RR intervals measured from standard electrocardiographic time series. The study showed an overall agreement between the two approaches of 0.05 ± 18 ms. The novel wrist sensor opens the door towards a new generation of comfortable and easy-to-use sleep monitors. PMID:24110980
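
    Conceptually, RR intervals follow from the pulse-to-pulse spacing of the PPG waveform. The sketch below is illustrative only and not the device's algorithm: it detects local maxima above the signal mean with a simple refractory period and takes successive peak spacings as RR-interval estimates.

```python
import numpy as np

def rr_intervals(ppg, fs, min_rr=0.4):
    """RR intervals (s) from PPG peaks; min_rr (s) enforces a refractory period."""
    ppg = np.asarray(ppg, dtype=float)
    # candidate peaks: local maxima above the signal mean
    cand = np.where((ppg[1:-1] > ppg[:-2]) & (ppg[1:-1] >= ppg[2:]) &
                    (ppg[1:-1] > ppg.mean()))[0] + 1
    peaks = []
    for c in cand:
        if not peaks or (c - peaks[-1]) >= min_rr * fs:
            peaks.append(c)
    return np.diff(peaks) / fs
```

    Real wrist PPG additionally requires motion-artifact rejection and interpolation around the true pulse maximum, which this sketch omits.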

  4. Quick and accurate estimation of the elastic constants using the minimum image method

    Science.gov (United States)

    Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.

    2015-04-01

    A method for determining elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting through the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that the contribution of long-range interactions cannot be ignored when determining the elastic constants, because doing so leads to erroneous results. In addition, the simulations reveal that including the interactions of each particle with all of its minimum-image neighbors, even for small systems, yields results very close to the elastic constants in the thermodynamic limit. This enables quick and accurate estimation of the elastic constants using very small samples.
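
    The minimum image convention at the heart of the method is simple to state: each particle interacts with the nearest periodic image of every other particle. A minimal sketch for a cubic box of side L (the generic textbook form, not the authors' code):

```python
import numpy as np

def minimum_image(dr, box):
    """Map a displacement vector to its nearest periodic image."""
    dr = np.asarray(dr, dtype=float)
    return dr - box * np.round(dr / box)

def min_image_distance(ri, rj, box):
    """Distance between ri and the nearest image of rj."""
    return float(np.linalg.norm(minimum_image(np.asarray(ri) - np.asarray(rj), box)))
```

    With this convention each pair separation is at most box/2 per coordinate, which is why truncating interactions beyond the minimum image discards the long-range contribution discussed above.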

  5. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Directory of Open Access Journals (Sweden)

    Alessandro Saccà

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying appropriate volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for assessing the biovolume of planktonic microorganisms, which works with any image analysis system that allows measurement of linear distances and estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' relationship between the volume of a sphere and that of the cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved highly precise for all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific to highly concave or branched shapes covers the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demands, the new method can conveniently be used in place of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
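
    The sphere-cylinder relation the method builds on can be written down directly. The sketch below is a simplified form (the paper's 'unellipticity' coefficient is omitted, so it is exact only for spheres and prolate spheroids of revolution): the volume is two-thirds of the projected area times the width along the minor axis.

```python
import math

def biovolume(area, minor_axis):
    """V ≈ (2/3) * projected area * minor-axis width.
    For a sphere this reproduces Archimedes' 2/3 ratio between the sphere
    and its circumscribing cylinder."""
    return 2.0 / 3.0 * area * minor_axis

# sanity check against a sphere of radius r: A = pi r^2, width 2r -> (4/3) pi r^3
r = 3.0
v = biovolume(math.pi * r**2, 2.0 * r)
```

    For a prolate spheroid with semi-axes a ≥ b rotated about the major axis, the projected area is πab and the minor-axis width is 2b, so the formula gives (4/3)πab², the exact volume.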

  6. Fast and Accurate Video PQoS Estimation over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Emanuele Viterbo

    2008-06-01

    This paper proposes a curve-fitting technique for fast and accurate estimation of the perceived quality of streaming media content delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS) is based on the well-known VQM objective metric, a powerful technique that is highly correlated with the more expensive and time-consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate the video PQoS in real time and rapidly adapt the content transmission through scalable video coding and bit rates in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.

  7. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    Science.gov (United States)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation coefficient Kd to the beam attenuation coefficient c (i.e., a multiple-scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the spaceborne lidar ocean subsurface measurement concept. Depolarization ratios of ocean subsurface backscatter were measured accurately, and the beam attenuation coefficients computed from these measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data collocated with in-water optical measurements.

  8. mBEEF: An accurate semi-local Bayesian error estimation density functional

    Science.gov (United States)

    Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas

    2014-04-01

    We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.

  9. Robust and Accurate Multiple-camera Pose Estimation Toward Robotic Applications

    Directory of Open Access Journals (Sweden)

    Yong Liu

    2014-09-01

    Pose estimation methods in robotics applications frequently suffer from inaccuracy due to a lack of correspondences and real-time constraints, and from instability across a wide range of viewpoints. In this paper, we present a novel approach for simultaneously estimating the poses of all the cameras in a multi-camera system, in which each camera is rigidly mounted, using only a few coplanar points. Instead of directly solving for the orientation and translation of the multi-camera system from the overlapping point correspondences among all the cameras, we employ homography, which can map image points to 3D coplanar reference points. In our method, we first establish the corresponding relations between the cameras from their Euclidean geometries and optimize the homographies of the cameras; then we solve for the orientation and translation from the optimal homographies. Results from simulations and real-case experiments show that our approach is accurate and robust for implementation in robotics applications. Finally, a practical implementation in a ping-pong robot is described to confirm the validity of our approach.

  10. Optimization of Correlation Kernel Size for Accurate Estimation of Myocardial Contraction and Relaxation

    Science.gov (United States)

    Honjo, Yasunori; Hasegawa, Hideyuki; Kanai, Hiroshi

    2012-07-01

    rates estimated using different kernel sizes were examined using the normalized mean-squared error of the estimated strain rate from the actual one obtained by the 1D phase-sensitive method. Compared with conventional kernel sizes, this result shows the possibility of the proposed correlation kernel to enable more accurate measurement of the strain rate. In in vivo measurement, the regional instantaneous velocities and strain rates in the radial direction of the heart wall were analyzed in detail at an extremely high temporal resolution (frame rate of 860 Hz). In this study, transition in contraction and relaxation was able to be detected by 2D tracking. These results indicate the potential of this method in the high-accuracy estimation of the strain rates and detailed analyses of the physiological function of the myocardium.

  11. Effectiveness of prediction equations in estimating energy expenditure sample of Brazilian and Spanish women with excess body weight

    Directory of Open Access Journals (Sweden)

    Eliane Lopes Rosado

    2014-03-01

    Objective: To assess the adequacy of predictive equations for the estimation of energy expenditure (EE), compared with EE measured by indirect calorimetry, in a sample of Brazilian and Spanish women with excess body weight. Methods: This is a cross-sectional study of 92 obese adult women [26 Brazilian (G1) and 66 Spanish (G2), aged 20-50]. Weight and height were measured during fasting for the calculation of body mass index and the predictive equations. EE was evaluated using open-circuit indirect calorimetry with a respiratory hood. Results: In G1 and G2, the estimates obtained by the Harris-Benedict, Schofield, FAO/WHO/ONU, and Henry & Rees equations did not differ from the EE measured by indirect calorimetry, which gave higher values than the equations proposed by Owen, Mifflin-St Jeor, and Oxford. For G1 and G2, the predictive equation closest to the indirect calorimetry value was FAO/WHO/ONU (7.9% and 0.46% underestimation, respectively), followed by Harris-Benedict (8.6% and 1.5% underestimation, respectively). Conclusion: The equations proposed by FAO/WHO/ONU, Harris-Benedict, Schofield, and Henry & Rees were adequate for estimating EE in a sample of Brazilian and Spanish women with excess body weight. The other equations underestimated EE.
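
    For reference, the equations compared above have simple closed forms. The coefficients below are the commonly published versions for adult women, shown for illustration only; the abstract itself does not list them, and the FAO/WHO/ONU (FAO/WHO/UNU) equation is given in its usual two adult age bands.

```python
def harris_benedict_women(W, H, A):
    """Harris-Benedict BMR, kcal/day (W in kg, H in cm, A in years)."""
    return 655.1 + 9.563 * W + 1.850 * H - 4.676 * A

def fao_who_women(W, A):
    """FAO/WHO/ONU BMR for women, kcal/day, by adult age band."""
    return 14.7 * W + 496 if 18 <= A < 30 else 8.7 * W + 829  # 30-60 years

def mifflin_st_jeor_women(W, H, A):
    """Mifflin-St Jeor REE for women, kcal/day."""
    return 10.0 * W + 6.25 * H - 5.0 * A - 161
```

    For a typical subject these forms already reproduce the ordering reported above, with Mifflin-St Jeor yielding lower values than Harris-Benedict and FAO/WHO/ONU.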

  12. Rapid estimation of excess mortality: nowcasting during the heatwave alert in England and Wales in June 2011

    OpenAIRE

    Helen K. Green; Andrews, Nick J; Bickler, Graham; Pebody, Richard G.

    2012-01-01

    Background A Heat-Health Watch system has been established in England and Wales since 2004 as part of the national heatwave plan following the 2003 European-wide heatwave. One important element of this plan has been the development of a timely mortality surveillance system. This article reports the findings and timeliness of a daily mortality model used to ‘nowcast’ excess mortality (utilising incomplete surveillance data to estimate the number of deaths in near-real time) during a heatwave a...

  13. Analytical estimation of control rod shadowing effect for excess reactivity measurement of High Temperature Engineering Test Reactor (HTTR)

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, Masaaki; Yamashita, Kiyonobu; Fujimoto, Nozomu; Nojiri, Naoki; Takeuchi, Mitsuo; Fujisaki, Shingo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment; Tokuhara, Kazumi; Nakata, Tetsuo

    1998-05-01

    The control rod shadowing effect was estimated analytically for the application of the fuel addition method to excess reactivity measurement of the High Temperature Engineering Test Reactor (HTTR). The movements of control rods in the fuel addition procedure were simulated in the analysis. The calculated excess reactivity obtained by the simulation depends on the combination of measuring and compensating control rods, varying from -10% to +50% relative to the excess reactivity calculated from the effective multiplication factor of the core with all control rods fully withdrawn. The shadowing effect is reduced by using plural measuring and compensating control rods, which lessens the deformation of the neutron flux during the measurement. As a result, the following combinations of control rods are recommended: 1) the thirteen control rods of the center, first, and second rings are used for the reactivity measurement, with the reactivity of each control rod measured using the other twelve for compensation; 2) the six control rods of the first ring are used for the reactivity measurement, with the reactivity of each control rod measured using the other five for compensation. (author)

  14. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  15. Self-estimation of Body Fat is More Accurate in College-age Males Compared to Females

    OpenAIRE

    HANCOCK, HALLEY L.; Jung, Alan P.; Petrella, John K.

    2012-01-01

    The objective was to determine the effect of gender on the ability to accurately estimate one’s own body fat percentage. Fifty-five college-age males and 99 college-age females participated. Participants estimated their own body fat percent before having their body composition measured using a BOD POD. Participants also completed a modified Social Physique Anxiety Scale (SPAS). Estimated body fat was significantly lower compared to measured body fat percent in females (26.8±5.6% vs. 30.2±7.0%...

  16. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via movi...

  17. Technical note: tree truthing: how accurate are substrate estimates in primate field studies?

    Science.gov (United States)

    Bezanson, Michelle; Watts, Sean M; Jobin, Matthew J

    2012-04-01

    Field studies of primate positional behavior typically rely on ground-level estimates of substrate size, angle, and canopy location. These estimates potentially influence the identification of positional modes by the observer recording behaviors. In this study we test ground-level estimates against direct measurements of support angles, diameters, and canopy heights in trees at La Suerte Biological Research Station in Costa Rica. After reviewing methods used by past researchers, we provide data collected within trees that are compared to estimates obtained from the ground. We climbed five trees and measured 20 supports. Four observers collected measurements of each support from different locations on the ground. Diameter estimates varied from the direct tree measures by 0-28 cm (mean: 5.44 ± 4.55). Substrate angles varied by 1-55° (mean: 14.76 ± 14.02). Height in the tree was best estimated using a clinometer, as estimates made with a two-meter reference placed by the tree varied by 3-11 m (mean: 5.31 ± 2.44). We determined that the best support size estimates were those generated relative to the size of the focal animal and divided into broader categories. Support angles were best estimated in 5° increments and then checked using a Haglöf clinometer in combination with a laser pointer. We conclude that three major factors should be addressed when estimating support features: observer error (e.g., experience and distance from the target), support deformity, and how support size and angle influence the positional mode selected by a primate individual. PMID:22371099

  18. Accurate single-observer passive coherent location estimation based on TDOA and DOA

    Institute of Scientific and Technical Information of China (English)

    Li Jing; Zhao Yongjun; Li Donghai

    2014-01-01

    This paper investigates the problem of target position estimation with a single-observer passive coherent location (PCL) system. An approach that combines angle with time difference of arrival (ATDOA) is used to estimate the location of a target. Compared with the TDOA-only method which needs two steps, the proposed method estimates the target position more directly. The constrained total least squares (CTLS) technique is applied in this approach. It achieves the Cramer-Rao lower bound (CRLB) when the parameter measurements are subject to small Gaussian-distributed errors. Performance analysis and the CRLB of this approach are also studied. Theory verifies that the ATDOA method gets a lower CRLB than the TDOA-only method with the same TDOA measuring error. It can also be seen that the position of the target affects the estimation precision. At the same time, the locations of transmitters affect the precision and its gradient direction. Compared with the TDOA, the ATDOA method can obtain more precise target position estimation. Furthermore, the proposed method accomplishes target position estimation with a single transmitter, while the TDOA-only method needs at least four transmitters to get the target position. Finally, the transmitters' position errors also affect the precision of estimation regularly.

  19. Accurate single-observer passive coherent location estimation based on TDOA and DOA

    Directory of Open Access Journals (Sweden)

    Li Jing

    2014-08-01

    Full Text Available This paper investigates the problem of target position estimation with a single-observer passive coherent location (PCL) system. An approach that combines angle with time difference of arrival (ATDOA) is used to estimate the location of a target. Compared with the TDOA-only method which needs two steps, the proposed method estimates the target position more directly. The constrained total least squares (CTLS) technique is applied in this approach. It achieves the Cramer–Rao lower bound (CRLB) when the parameter measurements are subject to small Gaussian-distributed errors. Performance analysis and the CRLB of this approach are also studied. Theory verifies that the ATDOA method gets a lower CRLB than the TDOA-only method with the same TDOA measuring error. It can also be seen that the position of the target affects the estimation precision. At the same time, the locations of transmitters affect the precision and its gradient direction. Compared with the TDOA, the ATDOA method can obtain more precise target position estimation. Furthermore, the proposed method accomplishes target position estimation with a single transmitter, while the TDOA-only method needs at least four transmitters to get the target position. Finally, the transmitters' position errors also affect the precision of estimation regularly.
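
    The geometric core of the ATDOA idea above can be sketched in a few lines. This is a simplified, noise-free illustration of the classical bistatic range equation for one transmitter, not the paper's CTLS estimator; the coordinates and names are hypothetical.

```python
import math

def bistatic_range(L, d, theta):
    """Receiver-to-target range r from the bistatic range sum L = r_t + r,
    the transmitter-receiver baseline d, and the angle theta at the receiver
    between the target and the transmitter (law of cosines)."""
    return (L ** 2 - d ** 2) / (2.0 * (L - d * math.cos(theta)))

def locate(L, d, bearing, theta):
    """Target position with the receiver at the origin, given the absolute
    bearing (DOA) of the target seen from the receiver."""
    r = bistatic_range(L, d, theta)
    return (r * math.cos(bearing), r * math.sin(bearing))

# Synthetic geometry: receiver at origin, transmitter at (10, 0), target at (3, 4).
tx, target = (10.0, 0.0), (3.0, 4.0)
r_true = math.hypot(*target)
r_t = math.hypot(target[0] - tx[0], target[1] - tx[1])
L = r_t + r_true                               # bistatic range sum (from TDOA)
bearing = math.atan2(target[1], target[0])     # DOA of the target
theta = bearing - math.atan2(tx[1], tx[0])     # angle target-receiver-transmitter
x, y = locate(L, 10.0, bearing, theta)
print(x, y)  # close to (3, 4)
```

    With exact measurements the range equation inverts the geometry in closed form; the CTLS machinery in the paper is what makes this robust once L and theta carry noise.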

  20. Accurate performance estimators for information retrieval based on span bound of support vector machines

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Support vector machines have met with significant success in the information retrieval field, especially in handling text classification tasks. Although various performance estimators for SVMs have been proposed, these focus only on accuracy, which is based on the leave-one-out cross validation procedure. Information-retrieval-related performance measures are always neglected in a kernel learning methodology. In this paper, we propose a set of information-retrieval-oriented performance estimators for SVMs, which are based on the span bound of the leave-one-out procedure. Experiments have shown that our proposed estimators are both effective and stable.

  1. Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter

    Science.gov (United States)

    Strano, Salvatore; Terzo, Mario

    2016-06-01

    The state estimation in hydraulic actuators is a fundamental tool for the detection of faults or a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear/linearization-based state estimation techniques is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent-Riccati-Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth-order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted, and comparisons with the widely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE-based technique for applications characterized by non-negligible nonlinearities such as dead zone and friction.

  2. The GFR and GFR decline cannot be accurately estimated in type 2 diabetics.

    Science.gov (United States)

    Gaspari, Flavio; Ruggenenti, Piero; Porrini, Esteban; Motterlini, Nicola; Cannata, Antonio; Carrara, Fabiola; Jiménez Sosa, Alejandro; Cella, Claudia; Ferrari, Silvia; Stucchi, Nadia; Parvanova, Aneliya; Iliev, Ilian; Trevisan, Roberto; Bossi, Antonio; Zaletel, Jelka; Remuzzi, Giuseppe

    2013-07-01

    There are no adequate studies that have formally tested the performance of different estimating formulas in patients with type 2 diabetes both with and without overt nephropathy. Here we evaluated the agreement between baseline GFRs, GFR changes at month 6, and long-term GFR decline measured by iohexol plasma clearance or estimated by 15 creatinine-based formulas in 600 type 2 diabetics followed for a median of 4.0 years. Ninety patients were hyperfiltering. The number of those identified by estimation formulas ranged from 0 to 24; 58 were not identified by any formula. Baseline GFR was significantly underestimated and a 6-month GFR reduction was missed in hyperfiltering patients. Long-term GFR decline was also underestimated by all formulas in the whole study group and in hyper-, normo-, and hypofiltering patients considered separately. Five formulas generated positive slopes in hyperfiltering patients. Baseline concordance correlation coefficients and total deviation indexes ranged from 32.1% to 92.6% and from 0.21 to 0.53, respectively. Concordance correlation coefficients between estimated and measured long-term GFR decline ranged from -0.21 to 0.35. The agreement between estimated and measured values was also poor within each subgroup considered separately. Thus, our study questions the use of any estimation formula to identify hyperfiltering patients and monitor renal disease progression and response to treatment in type 2 diabetics without overt nephropathy.
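
    The concordance correlation coefficients the abstract reports can be computed with Lin's standard formula, which penalises both poor correlation and systematic bias or scale shift. A minimal sketch follows; the GFR values are illustrative only, not the study's data.

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements
    (population moments, 1/n normalisation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    # Agreement = precision (correlation) times accuracy (bias correction).
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

measured  = [90.0, 75.0, 60.0, 45.0, 30.0]   # e.g. iohexol-measured GFR (hypothetical)
estimated = [80.0, 70.0, 58.0, 47.0, 35.0]   # e.g. formula-estimated GFR (hypothetical)
print(concordance_ccc(measured, measured))   # 1.0: perfect agreement with itself
print(concordance_ccc(measured, estimated))  # < 1: bias and scale shift are penalised
```

    Unlike Pearson's r, the CCC drops below 1 even for perfectly correlated series if one is shifted or rescaled, which is exactly the failure mode of estimating formulas the study describes.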

  3. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta)

    Directory of Open Access Journals (Sweden)

    Rahmann Sven

    2004-06-01

    Full Text Available Abstract Background In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) then inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions Phylogenetic analysis of Chlorophyceae with traditional methods (MP, NJ, ML and MrBayes) reveals seven well supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover the accuracy is significantly improved as shown by parametric bootstrap.

  4. A best-estimate plus uncertainty type analysis for computing accurate critical channel power uncertainties

    International Nuclear Information System (INIS)

    This paper provides a Critical Channel Power (CCP) uncertainty analysis methodology based on a Monte-Carlo approach. This Monte-Carlo method includes the identification of the sources of uncertainty and the development of error models for the characterization of epistemic and aleatory uncertainties associated with the CCP parameter. Furthermore, the proposed method provides a means to use actual operational data, leading to improvements over traditional methods (e.g., sensitivity analysis), which assume parametric models that may not accurately capture the possible complex statistical structures in the system inputs and responses. (author)
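
    The Monte-Carlo propagation step described above can be sketched generically: sample each uncertain input from its error model, push the samples through the response model, and read uncertainty bounds off the output distribution. Everything here is hypothetical, the response function is a linear placeholder standing in for a thermal-hydraulics code, and the coefficients and error magnitudes are invented for illustration.

```python
import random

def critical_channel_power(flow, inlet_temp, pressure):
    """Placeholder response model; a real CCP analysis would call a
    thermal-hydraulics code here. Coefficients are illustrative only."""
    return 7.0 + 0.004 * flow - 0.01 * (inlet_temp - 260.0) + 0.05 * (pressure - 10.0)

random.seed(42)
nominal = dict(flow=1000.0, inlet_temp=265.0, pressure=10.5)
samples = []
for _ in range(20000):
    # Sample each input from its error model (here: independent Gaussians).
    ccp = critical_channel_power(
        flow=random.gauss(nominal["flow"], 20.0),
        inlet_temp=random.gauss(nominal["inlet_temp"], 2.0),
        pressure=random.gauss(nominal["pressure"], 0.1),
    )
    samples.append(ccp)

samples.sort()
lo, med, hi = (samples[int(q * len(samples))] for q in (0.025, 0.5, 0.975))
print(lo, med, hi)  # two-sided 95% uncertainty band around the median CCP
```

    The appeal over sensitivity analysis, as the abstract notes, is that the sampled error models can be fit to actual operational data rather than assumed parametric forms.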

  5. Accurate channel estimation for MIMO-OFDM system by exploiting cyclic prefix

    Institute of Scientific and Technical Information of China (English)

    Yuan Qingsheng; He Chen; Wang Ping

    2007-01-01

    Multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems promise to provide a significant increase in system capacity for future wireless communication systems. Channel state information is required to achieve the high capacity of a MIMO-OFDM system. In this paper, an improved channel estimation scheme is proposed for MIMO-OFDM systems by making full use of the training sequence and the CP (cyclic prefix). The method improves the performance of the channel estimator by exploiting the redundant information in the CP. The theoretical mean square error (MSE) bound of the improved estimator is also derived. The effectiveness of the algorithm is demonstrated by simulation results for MIMO-OFDM systems with two transmit and two receive antennas, where the MSE gain is about 1 dB.
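
    For readers unfamiliar with the baseline the paper improves upon, the standard pilot-based least-squares (LS) channel estimate can be shown in miniature. This sketch is a simplified single-antenna, noise-free case, not the paper's CP-exploiting MIMO scheme; the pilot and channel values are invented.

```python
# Least-squares pilot-based channel estimation for one OFDM symbol:
# with known pilots X_k and received Y_k = H_k * X_k + noise, the
# per-subcarrier LS estimate is simply H_k = Y_k / X_k.
import cmath

K = 8                                                            # subcarriers (toy size)
pilots = [cmath.exp(1j * cmath.pi * k / 4) for k in range(K)]    # known unit-energy pilots
channel = [0.9 * cmath.exp(-1j * 0.3 * k) for k in range(K)]     # unknown true channel

received = [h * x for h, x in zip(channel, pilots)]              # noise-free for clarity
h_ls = [y / x for y, x in zip(received, pilots)]                 # LS estimate

mse = sum(abs(he - h) ** 2 for he, h in zip(h_ls, channel)) / K
print(mse)  # ~0 without noise; noise would raise it by sigma^2 / |X_k|^2
```

    The paper's contribution is to lower this MSE further by treating the cyclic prefix as extra observations of the same channel rather than discarding it.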

  6. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    Science.gov (United States)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence, neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence, this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
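
    The hash-table-binning idea translates directly into code: key each bin by its tuple of integer bin indices, so only occupied bins consume memory. This is a minimal Python sketch of the concept (the paper's BASH tables are a C++ implementation); the class name and the toy data are our own.

```python
from math import floor

class HashBinDensity:
    """Histogram density estimator whose bins live in a hash table, so
    memory scales with the number of occupied bins rather than with the
    full h**-d grid of a dense multi-dimensional array."""

    def __init__(self, points, h):
        self.h, self.n, self.d = h, len(points), len(points[0])
        self.counts = {}
        for p in points:
            key = tuple(floor(c / h) for c in p)
            self.counts[key] = self.counts.get(key, 0) + 1

    def __call__(self, q):
        """Density estimate at query point q: bin count / (n * bin volume)."""
        key = tuple(floor(c / self.h) for c in q)
        return self.counts.get(key, 0) / (self.n * self.h ** self.d)

# 20x20 regular grid of points in the unit square: uniform density 1.
pts = [(0.025 + 0.05 * i, 0.025 + 0.05 * j) for i in range(20) for j in range(20)]
est = HashBinDensity(pts, h=0.25)
print(est((0.1, 0.1)))   # 1.0: uniform density recovered exactly on this grid
print(est((5.0, 5.0)))   # 0.0: empty bins were never stored, costing no memory
```

    A dense array for the same estimator would need (1/h)**d cells regardless of occupancy, which is the exponential blow-up the abstract describes.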

  7. A Generalized Subspace Least Mean Square Method for High-resolution Accurate Estimation of Power System Oscillation Modes

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Peng; Zhou, Ning; Abdollahi, Ali

    2013-09-10

    A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.

  8. Fast and accurate haplotype frequency estimation for large haplotype vectors from pooled DNA data

    Directory of Open Access Journals (Sweden)

    Iliadis Alexandros

    2012-10-01

    Full Text Available Abstract Background Typically, the first phase of a genome wide association study (GWAS) includes genotyping across hundreds of individuals and validation of the most significant SNPs. Allelotyping of pooled genomic DNA is a common approach to reduce the overall cost of the study. Knowledge of haplotype structure can provide additional information to single locus analyses. Several methods have been proposed for estimating haplotype frequencies in a population from pooled DNA data. Results We introduce a technique for haplotype frequency estimation in a population from pooled DNA samples, focusing on datasets containing a small number of individuals per pool (2 or 3 individuals) and a large number of markers. We compare our method with the publicly available state-of-the-art algorithms HIPPO and HAPLOPOOL on datasets of varying number of pools and marker sizes. We demonstrate that our algorithm provides improvements in terms of accuracy and computational time over competing methods for large numbers of markers, while demonstrating comparable performance for smaller marker sizes. Our method is implemented in the "Tree-Based Deterministic Sampling Pool" (TDSPool) package, which is available for download at http://www.ee.columbia.edu/~anastas/tdspool. Conclusions Using a tree-based deterministic sampling technique we present an algorithm for haplotype frequency estimation from pooled data. Our method demonstrates superior performance in datasets with a large number of markers and could be the method of choice for haplotype frequency estimation in such datasets.

  9. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer;

    2016-01-01

    angle SD of 1.8°. Similar results are obtained on a straight vessel for both simulations and measurements, where the obtained angle biases are below 1.5° with SDs around 1°. Estimated velocity magnitudes are also kept under 10% bias and 5% relative SD in both simulations and measurements. An in vivo...

  10. Accurate estimation of influenza epidemics using Google search data via ARGO.

    Science.gov (United States)

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
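
    The core regression idea in ARGO, autoregressive terms plus exogenous search-volume regressors, can be sketched with a tiny ordinary-least-squares fit. This is an illustrative one-lag, one-regressor miniature solved via the 3x3 normal equations, not the published ARGO model; the synthetic series and coefficients are invented.

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):  # back substitution
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Synthetic flu series: y_t = 1.0 + 0.5*y_{t-1} + 0.3*search_t (noise-free).
search = [((7 * t) % 10) / 10.0 for t in range(60)]   # stand-in search-volume signal
y = [2.0]
for t in range(1, 60):
    y.append(1.0 + 0.5 * y[t - 1] + 0.3 * search[t])

# OLS on regressors [1, y_{t-1}, search_t] via the normal equations.
rows = [(1.0, y[t - 1], search[t]) for t in range(1, 60)]
targets = y[1:]
A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
b = [sum(r[i] * yt for r, yt in zip(rows, targets)) for i in range(3)]
intercept, ar1, beta = solve3(A, b)
print(intercept, ar1, beta)  # recovers ~1.0, ~0.5, ~0.3
```

    The full ARGO model uses many lags and many search terms with regularisation; the point of the sketch is only how an exogenous online-behavior signal enters the autoregression.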

  11. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    Science.gov (United States)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study, the age was predicted with a mean error of ~ 1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology has resulted in a prediction of the sample age that is far more accurate than any report in the literature.

  12. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    Science.gov (United States)

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
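
    The practical recipe the abstract motivates, bounding reliability without committing to a point estimate of the Weibull shape, can be illustrated with the classical zero-failure lower confidence bound swept over a range of shape values. The test parameters below are hypothetical, and the sweep-and-take-the-minimum step is a simplified stand-in for the paper's analytical treatment of the global minimum.

```python
def reliability_lower_bound(t_mission, t_test, n_units, beta, conf=0.90):
    """Classical zero-failure lower confidence bound on reliability: with n
    units all surviving a test of length t_test, R(t_test) >= (1-conf)**(1/n);
    a Weibull shape beta rescales the bound to the mission time via
    R(t) = R(t_test) ** ((t / t_test) ** beta)."""
    alpha = 1.0 - conf
    return alpha ** ((t_mission / t_test) ** beta / n_units)

# Without estimating beta, sweep a plausible range and keep the most
# conservative (smallest) bound over that range.
betas = [0.5 + 0.1 * i for i in range(31)]          # beta in [0.5, 3.5]
bounds = [reliability_lower_bound(500.0, 1000.0, 30, b) for b in betas]
conservative = min(bounds)
print(conservative)  # a beta-free lower bound on mission reliability
```

    Because the mission time here is shorter than the test time, the bound is monotone in beta and the minimum sits at the edge of the range; the paper's contribution is proving when and where a unique global minimum exists in general.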

  13. A novel method based on two cameras for accurate estimation of arterial oxygen saturation

    OpenAIRE

    Liu, He; Ivanov, Kamen; Wang, Yadong; Wang, Lei

    2015-01-01

    Background Photoplethysmographic imaging (PPGi) that is based on camera allows acquiring photoplethysmogram and measuring physiological parameters such as pulse rate, respiration rate and perfusion level. It has also shown potential for estimation of arterial oxygen saturation (SaO2). However, there are some technical limitations such as optical shunting, different camera sensitivity to different light spectra, different AC-to-DC ratios (the peak-to-peak amplitude to baseline ratio) of the PP...

  14. Plant DNA barcodes can accurately estimate species richness in poorly known floras.

    Directory of Open Access Journals (Sweden)

    Craig Costion

    Full Text Available BACKGROUND: Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. METHODOLOGY/PRINCIPAL FINDINGS: Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation, observed in some angiosperm families as an inversion, that obscures the monophyly of species. CONCLUSIONS/SIGNIFICANCE: We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.

  15. New Method for Accurate Parameter Estimation of Induction Motors Based on Artificial Bee Colony Algorithm

    OpenAIRE

    Jamadi, Mohammad; Merrikh-Bayat, Farshad

    2014-01-01

    This paper proposes an effective method for estimating the parameters of double-cage induction motors by using Artificial Bee Colony (ABC) algorithm. For this purpose the unknown parameters in the electrical model of asynchronous machine are calculated such that the sum of the square of differences between full load torques, starting torques, maximum torques, starting currents, full load currents, and nominal power factors obtained from model and provided by manufacturer is minimized. In orde...

  16. Accurate covariance estimation of galaxy-galaxy weak lensing: limitations of jackknife covariance

    CERN Document Server

    Shirasaki, Masato; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma

    2016-01-01

    We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations. We populate a real catalog of source galaxies into a light-cone simulation realization, simulate the lensing effect on each galaxy, and then identify lensing halos that are considered to host galaxies or clusters of interest. We use the mock catalog to study the error covariance matrix of galaxy-galaxy weak lensing and find that the super-sample covariance (SSC), which arises from density fluctuations with length scales comparable with or greater than a size of survey area, gives a dominant source of the sample variance. We then compare the full covariance with the jackknife (JK) covariance, the method that estimates the covariance from the resamples of the data itself. We show that, although the JK method gives an unbiased estimator of the covariance in the shot noise or Gaussian regime, it always over-estimates the true covariance in the sample variance regime, because the JK covariance turns out to be a...
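
    The jackknife (JK) covariance the abstract compares against is a short computation: delete one observation at a time, recompute the statistic, and scale the spread of the leave-one-out values. The sketch below applies it to the sample mean of 2-D data; the data values are invented, and real galaxy-galaxy lensing analyses delete spatial patches rather than single objects.

```python
def jackknife_cov_of_mean(data):
    """Jackknife covariance estimate for the sample mean of 2-D data:
    cov_jk = (n-1)/n * sum_i (m_i - m_bar)(m_i - m_bar)^T,
    where m_i is the mean with observation i deleted."""
    n = len(data)
    loo = []
    for i in range(n):
        rest = data[:i] + data[i + 1:]
        loo.append([sum(c) / (n - 1) for c in zip(*rest)])
    mbar = [sum(c) / n for c in zip(*loo)]
    cov = [[0.0, 0.0], [0.0, 0.0]]
    for m in loo:
        for a in range(2):
            for b in range(2):
                cov[a][b] += (m[a] - mbar[a]) * (m[b] - mbar[b])
    return [[(n - 1) / n * cov[a][b] for b in range(2)] for a in range(2)]

data = [(1.0, 2.0), (2.0, 1.5), (3.0, 3.5), (4.0, 3.0), (5.0, 5.0), (2.5, 2.0)]
cov_jk = jackknife_cov_of_mean(data)
print(cov_jk)  # symmetric 2x2 matrix with nonnegative diagonal
```

    For the sample mean, the jackknife reproduces the usual covariance of the mean (sample covariance divided by n) exactly; the abstract's point is that for correlated lensing data this internal resampling cannot see super-sample modes, so it misestimates the true covariance.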

  17. A Prototype-Based Gate-Level Cycle-Accurate Methodology for SoC Performance Exploration and Estimation

    Directory of Open Access Journals (Sweden)

    Ching-Lung Su

    2013-01-01

    Full Text Available A prototype-based SoC performance estimation methodology is proposed for consumer electronics design. Traditionally, prototypes are used for system verification before SoC tapeout, without accurate SoC performance exploration and estimation. This paper carefully models the SoC prototype as a performance estimator and explores the SoC performance. The prototype meets the gate-level cycle-accurate requirement, covering the effects of the embedded processor, on-chip bus structure, IP design, embedded OS, GUI systems, and application programs. The prototype configuration, chip post-layout simulation results, and the measured parameters of the SoC prototypes are merged to model a target SoC design. The system performance is examined according to the proposed estimation models, the profiling results of the application programs ported on the prototypes, and the timing parameters from the post-layout simulation of the target SoC. The experimental results show that the proposed method has an average error of only 2.08% for an MPEG-4 decoder SoC at simple profile level 2 specifications.

  18. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    Indian Academy of Sciences (India)

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited due to several error sources. The major error source is the ionosphere. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits the positional accuracy is instrumental biases. Calibration of these biases is particularly important in achieving the CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station for a one-month period, and the results confirm the validity of the algorithm. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias error is of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31, respectively. The results are found to be consistent over the period.
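
    The building block of such a bias estimation, a least-squares 4th-order polynomial fit in which a constant offset plays the role of the combined instrumental bias, can be sketched directly from the normal equations. The data below are synthetic and the setup is a simplification of the paper's single-station TEC model, not its actual algorithm.

```python
def polyfit4(xs, ys):
    """Least-squares 4th-order polynomial fit via the 5x5 normal equations,
    solved by Gaussian elimination with partial pivoting."""
    deg = 4
    m = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m + 1):
                M[r][c] -= f * M[col][c]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):  # back substitution
        coef[r] = (M[r][m] - sum(M[r][c] * coef[c] for c in range(r + 1, m))) / M[r][r]
    return coef

xs = [i / 20.0 for i in range(21)]        # normalised local time (synthetic)
true = [3.0, -1.0, 4.0, 0.5, -2.0]        # c0 plays the role of the bias term
ys = [sum(c * x ** k for k, c in enumerate(true)) for x in xs]
print(polyfit4(xs, ys))  # recovers the coefficients, including the offset c0
```

    In the real algorithm, the fit is performed on slant-TEC observations over a month of data, and the bias terms are separated per satellite-receiver pair rather than read off a single offset.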

  19. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    Science.gov (United States)

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

    Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle θ. We apply this to the localization of biopsy needles during image-guided interventions, splitting the problem into two parts and solving them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.

  20. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    Science.gov (United States)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implants' pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  1. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    Science.gov (United States)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.

  2. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    CERN Document Server

    Savanevych, V E; Sokovikova, N S; Bezkrovny, M M; Vavilova, I B; Ivashchenko, Yu M; Elenin, L V; Khlamov, S V; Movsesian, Ia S; Dashkova, A M; Pogorelov, A V

    2015-01-01

    We describe a new iterative method to estimate asteroid coordinates, based on a subpixel Gaussian model of a discrete object image. The method operates with continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The developed method, being more flexible in adapting to any form of the object image, has high measurement accuracy along with low computational complexity, owing to a maximum-likelihood procedure implemented to obtain the best fit instead of a least-squares method with the Levenberg-Marquardt algorithm for minimisation of the quadratic form. Since 2010, the method has been tested as the basis of our CoLiTec (Collection Light Technology) software, which has been installed at several observatories of the world with the ai...
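
    The subpixel idea can be illustrated by fitting a 2-D Gaussian to a noisy pixelated image and reading off a continuous centre. The sketch below uses a least-squares fit via `scipy.optimize.curve_fit` as a stand-in for the paper's maximum-likelihood procedure; all values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma, offset):
    # Isotropic 2-D Gaussian on a pixel grid, flattened for curve_fit.
    x, y = xy
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

# Synthetic object image with a known subpixel centre.
yy, xx = np.mgrid[0:15, 0:15]
img = gauss2d((xx, yy), 100.0, 7.3, 6.8, 1.6, 5.0).reshape(15, 15)
rng = np.random.default_rng(1)
img = img + rng.normal(0, 1.0, img.shape)  # photon/readout noise

popt, _ = curve_fit(gauss2d, (xx, yy), img.ravel(), p0=(80, 7, 7, 2, 0))
print(popt[1], popt[2])  # fitted centre, close to (7.3, 6.8)
```

    Even though each pixel only reports an integrated count, the fitted centre recovers the coordinates to a small fraction of a pixel, which is the core of subpixel astrometry.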

  3. A novel method to obtain accurate length estimates of carnivorous reef fishes from a single video camera

    Directory of Open Access Journals (Sweden)

    Gastón A. Trobbiani

    2015-03-01

    In recent years, technological advances have enhanced the use of baited underwater video (BUV) to monitor the diversity, abundance, and size composition of fish assemblages. However, attempts to use static single-camera devices to estimate fish length have been limited by high errors, originating from the variable distance between the fish and the reference scale included in the scene. In this work, we present a novel, simple method to obtain accurate length estimates of carnivorous fishes by using a single downward-facing camera baited video station. The distinctive feature is the inclusion of a mirrored surface at the base of the stand that allows correction of the apparent or "naive" length of the fish by the distance between the fish and the reference scale. We describe the calibration procedure and compare the performance (accuracy and precision) of this new technique with that of other single static camera methods. Overall, estimates were highly accurate (mean relative error = -0.6%) and precise (mean coefficient of variation = 3.3%), even in the range of those obtained with stereo-video methods.
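
    The mirror-based correction amounts to rescaling the naive length by the ratio of the fish's distance from the camera to the reference distance (similar triangles). A hedged sketch, with illustrative names and numbers not taken from the paper:

```python
def corrected_length(apparent_len_px, px_per_cm_at_ref, cam_to_ref_cm, cam_to_fish_cm):
    """Correct the 'naive' length of a fish for its distance from the reference scale.

    By similar triangles, an object at distance d subtends
    px_per_cm_at_ref * (cam_to_ref / d) pixels per cm, so the naive length must
    be rescaled by d / cam_to_ref. Names and values are illustrative only.
    """
    naive_cm = apparent_len_px / px_per_cm_at_ref
    return naive_cm * (cam_to_fish_cm / cam_to_ref_cm)

# A fish 10 cm above the base (camera 100 cm up) spans 300 px; 10 px/cm at the base.
print(corrected_length(300, 10.0, 100.0, 90.0))  # 27.0 cm, not the naive 30 cm
```

    The mirror supplies the missing quantity, the camera-to-fish distance, which a single camera cannot otherwise measure.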

  4. Evaluation of the goodness of fit of new statistical size distributions with consideration of accurate income inequality estimation

    OpenAIRE

    Masato Okamoto

    2012-01-01

    This paper compares the goodness-of-fit of two new types of parametric income distribution models (PIDMs), the kappa-generalized (kG) and double-Pareto lognormal (dPLN) distributions, with that of beta-type PIDMs using US and Italian data for the 2000s. The three-parameter kG model tends to estimate the Lorenz curve and income inequality indices more accurately when its likelihood value is similar to that of the beta-type PIDMs. For the first half of the 2000s in the USA, the kG outperforms the...
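
    For context, the inequality indices such models are judged on can be computed directly from samples; for a lognormal income distribution the Gini index has the closed form 2Φ(σ/√2) − 1, which a Monte-Carlo estimate should reproduce:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
sigma = 0.8
incomes = rng.lognormal(mean=10.0, sigma=sigma, size=200_000)

# Empirical Gini via the sorted-cumulative-share identity.
x = np.sort(incomes)
n = x.size
cum = np.cumsum(x)
gini = (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Closed form for a lognormal: Gini = 2*Phi(sigma/sqrt(2)) - 1 = erf(sigma/2).
gini_exact = erf(sigma / 2)
print(round(gini, 3), round(gini_exact, 3))
```

    A fitted PIDM is judged by how well index values like this, computed from the fitted parameters, match the values computed from the raw data.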

  5. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    Science.gov (United States)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules

  6. Toward an Accurate and Inexpensive Estimation of CCSD(T)/CBS Binding Energies of Large Water Clusters.

    Science.gov (United States)

    Sahu, Nityananda; Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R

    2016-07-21

    Owing to their steep scaling behavior, highly accurate CCSD(T) calculations, the contemporary gold standard of quantum chemistry, are prohibitively difficult for moderate- and large-sized water clusters even with high-end hardware. The molecular tailoring approach (MTA), a fragmentation-based technique, is found to be useful for enabling such high-level ab initio calculations. The present work reports the CCSD(T)-level binding energies of many low-lying isomers of large (H2O)n (n = 16, 17, and 25) clusters employing aug-cc-pVDZ and aug-cc-pVTZ basis sets within the MTA framework. Accurate estimation of the CCSD(T)-level binding energies [within 0.3 kcal/mol of the respective full calculation (FC) results] is achieved after applying the grafting procedure, a protocol for minimizing the errors in the MTA-derived energies arising from the approximate nature of MTA. The CCSD(T)-level grafting procedure presented here hinges upon the well-known fact that the MP2 method, which scales as O(N^5), can be a suitable starting point for approximating the highly accurate CCSD(T) energies, which scale as O(N^7). On account of requiring only an MP2-level FC on the entire cluster, the current methodology ultimately leads to a cost-effective solution for CCSD(T)-level accurate binding energies of large-sized water clusters, even at the complete basis set limit, utilizing off-the-shelf hardware. PMID:27351269
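
    The grafting step is simple arithmetic: assume the fragmentation error is similar at the MP2 and CCSD(T) levels, and correct the MTA CCSD(T) energy by the MP2 full-calculation difference. Illustrative energies (hartree), not values from the paper:

```python
def grafted_ccsdt(e_ccsdt_mta, e_mp2_mta, e_mp2_full):
    """Graft an MP2 full-calculation correction onto an MTA-level CCSD(T) energy.

    The idea from the abstract: the fragmentation (MTA) error is assumed similar
    at both levels, so E_CCSD(T)_graft = E_CCSD(T)_MTA + (E_MP2_FC - E_MP2_MTA).
    The numbers below are illustrative, not from the paper.
    """
    return e_ccsdt_mta + (e_mp2_full - e_mp2_mta)

print(grafted_ccsdt(-1220.4180, -1218.9025, -1218.9061))  # about -1220.4216
```

    The only full-cluster calculation required is the cheap MP2 one; the expensive CCSD(T) work stays at the fragment level.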

  7. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    Directory of Open Access Journals (Sweden)

    Abel B Minyoo

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics, and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47, the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold-standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.
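
    The transect (mark-re-sight) coverage estimate reduces to the proportion of collared dogs among those sighted. A simplified sketch with a normal-approximation confidence interval, using invented counts:

```python
from math import sqrt

def transect_coverage(collared_seen, total_seen):
    """Mark-re-sight estimate of vaccination coverage from a transect survey.

    Vaccinated dogs were marked with collars, so the fraction of collared dogs
    among all dogs sighted estimates coverage; a normal-approximation 95% CI
    is attached. A simplified sketch of the survey logic, not the paper's code.
    """
    p = collared_seen / total_seen
    se = sqrt(p * (1 - p) / total_seen)
    return p, (p - 1.96 * se, p + 1.96 * se)

cov, (lo, hi) = transect_coverage(124, 200)
print(round(cov, 2), round(lo, 2), round(hi, 2))  # 0.62 with 95% CI (0.55, 0.69)
```

    Counting puppies matters here because excluding them changes both the numerator and the denominator, which is exactly the bias the study quantifies.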

  8. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    Science.gov (United States)

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  10. Highly accurate and efficient self-force computations using time-domain methods: Error estimates, validation, and optimization

    CERN Document Server

    Thornburg, Jonathan

    2010-01-01

    If a small "particle" of mass $\mu M$ (with $\mu \ll 1$) orbits a Schwarzschild or Kerr black hole of mass $M$, the particle is subject to an $O(\mu)$ radiation-reaction "self-force". Here I argue that it's valuable to compute this self-force highly accurately (relative error of $\lesssim 10^{-6}$) and efficiently, and I describe techniques for doing this and for obtaining and validating error estimates for the computation. I use an adaptive-mesh-refinement (AMR) time-domain numerical integration of the perturbation equations in the Barack-Ori mode-sum regularization formalism; this is efficient, yet allows easy generalization to arbitrary particle orbits. I focus on the model problem of a scalar particle in a circular geodesic orbit in Schwarzschild spacetime. The mode-sum formalism gives the self-force as an infinite sum of regularized spherical-harmonic modes $\sum_{\ell=0}^\infty F_{\ell,\mathrm{reg}}$, with $F_{\ell,\mathrm{reg}}$ (and an "internal" error estimate) computed numerically for $\ell \lesssim 30$ and estimated ...
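
    The mode-sum tail beyond the last computed $\ell$ is typically estimated by fitting the known large-$\ell$ falloff and summing the remainder. A toy version with synthetic $\ell^{-2}$ modes, where the infinite sum is known exactly:

```python
import numpy as np

# Synthetic regularized modes falling off as c/l^2 (the generic large-l
# behaviour after mode-sum regularization); c = 3.0 is an arbitrary stand-in.
c = 3.0
l = np.arange(1, 31)
F_l = c / l**2

partial = F_l.sum()
# Estimate the l > 30 tail: fit c from the last computed mode, then sum the
# remainder of the 1/l^2 series numerically out to a large cutoff.
c_fit = F_l[-1] * l[-1]**2
tail = c_fit * sum(1.0 / k**2 for k in range(31, 100_000))
total = partial + tail
print(total, c * np.pi**2 / 6)  # both close to the exact infinite sum
```

    Real self-force tails need several fitted terms rather than a single power, but the truncate-fit-and-sum structure is the same.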

  11. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    Directory of Open Access Journals (Sweden)

    Kostelich Eric J

    2011-12-01

    Background: Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results: We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions: The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers: This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck).
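
    The ensemble state-estimation idea can be shown in one dimension: an ensemble Kalman update pulls a forecast ensemble toward a new observation in proportion to the forecast-to-observation error ratio. This scalar toy stands in for the LETKF without reproducing it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal stochastic ensemble Kalman filter update for a scalar state.
x_true = 4.0
ens = rng.normal(1.0, 2.0, size=50)        # biased prior forecast ensemble
obs_err = 0.5
y = x_true + rng.normal(0, obs_err)        # one noisy observation

P = np.var(ens, ddof=1)                    # ensemble (forecast) variance
K = P / (P + obs_err**2)                   # Kalman gain with H = 1
perturbed = y + rng.normal(0, obs_err, ens.size)  # perturbed observations
analysis = ens + K * (perturbed - ens)

print(np.mean(ens), np.mean(analysis))     # analysis mean pulled toward 4.0
```

    In the paper the state is a whole tumor-density field and the observations are images, but each grid point's update follows this same gain-weighted blend of forecast and data.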

  12. An approach to estimating and extrapolating model error based on inverse problem methods: towards accurate numerical weather prediction

    International Nuclear Information System (INIS)

    Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring the measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in the accurate model, so that NWP can be considered as an inverse problem to uncover this unknown error term. The inverse problem models can absorb long periods of observed data to generate model error correction procedures. They thus resolve the deficiency of NWP schemes that employ only the initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and spatially varying model errors in both the historical and forecast periods by using recent observations and analogue phenomena of the atmosphere. A numerical experiment on Burgers' equation illustrates the substantial forecast improvement obtained using the inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high-accuracy applications of NWP. (geophysics, astronomy, and astrophysics)

  13. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    Science.gov (United States)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin- and medium-bedded turbidites. Although such beds can form a significant part of the succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied: estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross-plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, recovering an estimated 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
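
    Hydrocarbon pore thickness is the sum over pay intervals of thickness × porosity × hydrocarbon saturation, which is where unnoticed thin beds silently drop out of the total. A sketch with invented layer values:

```python
def hydrocarbon_pore_thickness(layers):
    """Sum phi * (1 - Sw) * h over net-pay layers to get HPT in feet.

    Each layer is (thickness_ft, porosity_frac, water_saturation_frac).
    Values below are illustrative, not data from the North Brae study.
    """
    return sum(h * phi * (1.0 - sw) for h, phi, sw in layers)

layers = [
    (40.0, 0.22, 0.30),   # thick-bedded turbidite sand
    (6.0, 0.18, 0.45),    # thin-bedded sand, easily missed by log cutoffs
    (4.0, 0.17, 0.50),
]
print(round(hydrocarbon_pore_thickness(layers), 2))  # about 7.09 ft
```

    Dropping the two thin layers from this toy example loses roughly 13% of the HPT, the same kind of systematic underestimate the study quantifies.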

  14. The absolute lymphocyte count accurately estimates CD4 counts in HIV-infected adults with virologic suppression and immune reconstitution

    Directory of Open Access Journals (Sweden)

    Barnaby Young

    2014-11-01

    Introduction: The clinical value of monitoring CD4 counts in immune-reconstituted, virologically suppressed HIV-infected patients is limited. We investigated whether absolute lymphocyte counts (ALC) from an automated blood counting machine could accurately estimate CD4 counts. Materials and Methods: CD4 counts, ALC and HIV viral load (VL) were extracted from an electronic laboratory database for all patients in HIV care at the Communicable Diseases Centre, Tan Tock Seng Hospital, Singapore (2008–13). Virologic suppression was defined as consecutive suppressed HIV VLs, and immune reconstitution as a CD4 count >300 cells/mm3. CD4 counts were estimated using the CD4% from the first value >300 and an ALC 181–540 days later. Results: A total of 1215 periods of virologic suppression were identified from 1183 patients, with 2227 paired CD4-ALCs available for analysis. 98.3% of CD4 estimates were within 50% of the actual value, 83.3% within 25% and 40.5% within 10%. The error pattern was approximately symmetrically distributed around a mean of −6.5%, but significantly peaked and with mild positive skew (kurtosis 4.45, skewness 1.07). Causes for these errors were explored. Variability between lymphocyte counts measured by ALC and flow cytometry did not follow an apparent pattern, and contributed 32% of the total error (median absolute error 5.5%, IQR 2.6–9.3). The CD4% estimate was significantly lower than the actual value (t-test, p<0.0001). The magnitude of this difference was greater for lower values; above 25%, there was no significant difference. Precision of the CD4 estimate was similar as baseline CD4% increased; however, accuracy improved significantly: from a median 16% underestimation to 0% as baseline CD4% increased from 12 to 30. Above a CD4% baseline of 25, estimates of CD4 were within 25% of the actual value 90.2% of the time, with a median 2% underestimation. A robust (bisquare) linear regression model was developed to correct for the rise in CD4% with time, when baseline was 14–24
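
    The estimator itself is a one-liner: multiply the automated ALC by the most recently measured CD4 fraction. With illustrative numbers:

```python
def estimate_cd4(alc_cells_per_ul, last_cd4_percent):
    """Estimate an absolute CD4 count as ALC x CD4%.

    CD4% is taken from the most recent flow-cytometry result; ALC comes from a
    routine automated full blood count. Input values below are illustrative.
    """
    return alc_cells_per_ul * last_cd4_percent / 100.0

# ALC of 2100 cells/uL and a previously measured CD4% of 28:
print(estimate_cd4(2100, 28))  # 588.0 cells/mm3
```

    The study's error analysis then asks how far this product drifts from the flow-cytometry CD4 count as the CD4% ages and rises over time.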

  15. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

    Background: Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records, as compared to estimates of retention based on standardized paper-based or electronic cohort reports. Methods: Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter, with intent to estimate the number of patients retained on ART. Information on the time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results: Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART when comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among
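
    The pharmacy-stock estimate divides bottles dispensed over the quarter by the expected per-patient consumption. A sketch assuming one 30-day continuation bottle per patient per month, an assumption of this illustration rather than a figure from the paper:

```python
def patients_from_stock(bottles_start, bottles_received, bottles_end,
                        bottles_per_patient_per_quarter=3):
    """Estimate patients retained on ART from pharmacy stock-card counts.

    Bottles dispensed = opening stock + receipts - closing stock; dividing by
    the expected bottles per patient per quarter (one 30-day bottle per month,
    assumed here) approximates the number of patients retained.
    """
    dispensed = bottles_start + bottles_received - bottles_end
    return dispensed / bottles_per_patient_per_quarter

print(patients_from_stock(bottles_start=820, bottles_received=2400, bottles_end=705))
```

    Stock cards are already maintained for supply-chain purposes, which is why this estimate is far cheaper than compiling patient registers.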

  16. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    Directory of Open Access Journals (Sweden)

    Corina J. Logan

    2015-06-01

    There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder, for male and female great-tailed grackles. We found that while females had higher correlations than males, estimates of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. External skull measures had no predictive accuracy for CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex.
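
    The overlap argument can be reproduced with an ordinary regression prediction interval: if the interval for a new individual is wider than the spread between individuals, the proxy cannot distinguish them. The data below are simulated, chosen to mimic a weak skull-to-volume relationship:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated skull length (mm) vs endocranial volume (mL): a deliberately weak
# relationship, standing in for the paper's finding of poor predictive power.
skull = rng.normal(60.0, 3.0, 40)
volume = 0.05 * skull + rng.normal(4.0, 0.35, 40)

res = stats.linregress(skull, volume)
n = skull.size
resid = volume - (res.intercept + res.slope * skull)
s = np.sqrt(np.sum(resid**2) / (n - 2))          # residual standard error
x0 = 60.0                                        # a new bird's skull length
se_pred = s * np.sqrt(1 + 1 / n + (x0 - skull.mean())**2
                      / np.sum((skull - skull.mean())**2))
tcrit = stats.t.ppf(0.975, n - 2)
pred = res.intercept + res.slope * x0
lo_pi, hi_pi = pred - tcrit * se_pred, pred + tcrit * se_pred
print(lo_pi, hi_pi)  # a wide interval: individual birds cannot be told apart
```

    When nearly every bird's measured volume falls inside every other bird's prediction interval, the external measure carries no individual-level signal, which is the paper's conclusion.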

  17. Accurate recovery of 4D left ventricular deformations using volumetric B-splines incorporating phase based displacement estimates

    Science.gov (United States)

    Chen, Jian; Tustison, Nicholas J.; Amini, Amir A.

    2006-03-01

    In this paper, an improved framework for estimation of 3-D left-ventricular deformations from tagged MRI is presented. Contiguous short- and long-axis tagged MR images are collected and are used within a 4-D B-Spline based deformable model to determine 4-D displacements and strains. An initial 4-D B-spline model fitted to sparse tag line data is first constructed by minimizing a 4-D Chamfer distance potential-based energy function for aligning isoparametric planes of the model with tag line locations; subsequently, dense virtual tag lines based on 2-D phase-based displacement estimates and the initial model are created. A final 4-D B-spline model with increased knots is fitted to the virtual tag lines. From the final model, we can extract accurate 3-D myocardial deformation fields and corresponding strain maps which are local measures of non-rigid deformation. Lagrangian strains in simulated data are derived which show improvement over our previous work. The method is also applied to 3-D tagged MRI data collected in a canine.

  18. Accurate and rapid error estimation on global gravitational field from current GRACE and future GRACE Follow-On missions

    Institute of Scientific and Technical Information of China (English)

    Zheng Wei; Hsu Hou-Tse; Zhong Min; Yun Mei-Juan

    2009-01-01

    Firstly, the new combined error model of cumulative geoid height influenced by four error sources, including the inter-satellite range-rate of an interferometric laser (K-band) ranging system, the orbital position and velocity of a global positioning system (GPS) receiver and the non-conservative force of an accelerometer, is established from the perspective of the power spectrum principle in physics using the semi-analytical approach. Secondly, the accuracy of the global gravitational field is accurately and rapidly estimated based on the combined error model; the cumulative geoid height error is 1.985 × 10−1 m at degree 120 based on GRACE Level 1B measured observation errors of the year 2007 published by the US Jet Propulsion Laboratory (JPL), and the cumulative geoid height error is 5.825 × 10−2 m at degree 360 using GRACE Follow-On orbital altitude 250 km and inter-satellite range 50 km. The matching relationship of accuracy indexes from GRACE Follow-On key payloads is brought forward, and the dependability of the combined error model is validated. Finally, the feasibility of high-accuracy and high-resolution global gravitational field estimation from GRACE Follow-On is demonstrated based on different satellite orbital altitudes.

  20. A multilevel excess hazard model to estimate net survival on hierarchical data allowing for non-linear and non-proportional effects of covariates.

    Science.gov (United States)

    Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien

    2016-08-15

    The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    International Nuclear Information System (INIS)

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. (note)

  3. A method for estimating peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area

    Science.gov (United States)

    Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.

    2011-01-01

    Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph that are based on watershed characteristics of drainage area and basin-development factor (BDF) were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. Overall arithmetic means of volumetric runoff coefficient (0.41 dimensionless) and runoff coefficient (0.25 dimensionless) for the 24 watersheds were used to express the rational
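
    The gamma unit hydrograph referred to above has a compact closed form. The sketch below uses one common dimensionless parameterization (assumed here, with made-up parameter values), in which the ordinate equals the peak flow qp exactly at the time to peak Tp:

```python
import math

def gamma_uh(t, qp, Tp, K):
    """Gamma unit hydrograph ordinate, using the dimensionless form
    q(t) = qp * [(t/Tp) * exp(1 - t/Tp)]**K (assumed parameterization).
    K is the shape parameter; the peak q = qp occurs at t = Tp."""
    if t <= 0:
        return 0.0
    return qp * ((t / Tp) * math.exp(1.0 - t / Tp)) ** K

# Hypothetical values: peak flow 120 cfs, time to peak 1.5 h, shape 3
qp, Tp, K = 120.0, 1.5, 3.0
peak = gamma_uh(Tp, qp, Tp, K)                       # equals qp by construction
rising = gamma_uh(0.5 * Tp, qp, Tp, K) < peak        # below peak on the rising limb
falling = gamma_uh(3.0 * Tp, qp, Tp, K) < peak       # below peak on the recession
```

    The regression equations in the report predict qp and Tp from drainage area and basin-development factor; those coefficients are not reproduced here.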

  4. Excessive Daytime Sleepiness

    Directory of Open Access Journals (Sweden)

    Yavuz Selvi

    2016-06-01

    Excessive daytime sleepiness is one of the most common sleep-related patient symptoms, with prevalence in the community estimated to be as high as 18%. Patients with excessive daytime sleepiness are at risk of life-threatening road and work accidents and social maladjustment, show decreased academic and occupational performance, and have poorer health than comparable adults. Excessive daytime sleepiness is therefore a serious condition that requires prompt investigation, diagnosis and treatment. As with most medical conditions, evaluation of excessive daytime sleepiness begins with a precise history, and various objective and subjective tools have also been developed to assess it. The most common causes of excessive daytime sleepiness are insufficient sleep hygiene, chronic sleep deprivation, medical and psychiatric conditions, sleep disorders such as obstructive sleep apnea and narcolepsy, and medications. Treatment should address underlying contributors and promote sleep quantity by ensuring good sleep hygiene. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(2): 114-132]

  5. Volume estimation of the aortic sac after EVAR using 3-D ultrasound - a novel, accurate and promising technique

    DEFF Research Database (Denmark)

    Bredahl, K; Long, A; Taudorf, M;

    2013-01-01

    Volume estimation is more sensitive than diameter measurement for detection of aneurysm growth after endovascular aneurysm repair (EVAR), but this has only been confirmed on three-dimensional, reconstructed computer tomography (3-D CT). The potential of 3-D ultrasound (3-D US) for volume estimation...... in EVAR surveillance is unknown....

  6. Estimating Accurate Relative Spacecraft Angular Position from DSN VLBI Phases Using X-Band Telemetry or DOR Tones

    Science.gov (United States)

    Bagri, Durgadas S.; Majid, Walid

    2009-01-01

    At present, spacecraft angular position with the Deep Space Network (DSN) is determined using group delay estimates from very long baseline interferometry (VLBI) phase measurements employing differential one-way ranging (DOR) tones. As an alternative to this approach, we propose estimating the position of a spacecraft to half-fringe-cycle accuracy from the time variations between measured and calculated phases on DSN VLBI baseline(s) as the Earth rotates. Combining the fringe location of the target with the phase allows a high-accuracy spacecraft angular position estimate. This can be achieved using telemetry signals with a data rate of at least 4-8 Msamples/s, or DOR tones.
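
    The link between interferometer phase and angular position that motivates the half-fringe-cycle accuracy claim can be illustrated with the basic single-baseline relation. This is a simplified sketch that ignores the geometric projection of the baseline; the wavelength and baseline length below are only indicative:

```python
import math

def angular_offset(phase_rad, wavelength_m, baseline_m):
    """Angular offset (radians) corresponding to an interferometer phase,
    within one fringe: theta = (phi / 2*pi) * (lambda / B).
    Simplified geometry; the projected baseline is assumed equal to B."""
    return (phase_rad / (2.0 * math.pi)) * (wavelength_m / baseline_m)

# Roughly X-band (~3.6 cm wavelength) on a 10,000 km baseline (hypothetical values):
one_fringe = angular_offset(2.0 * math.pi, 0.036, 1.0e7)   # full fringe ~ 3.6e-9 rad
half = angular_offset(math.pi, 0.036, 1.0e7)               # half fringe
```

    The phase only pins down the position modulo one fringe; the proposed method resolves that ambiguity by tracking how the measured-minus-calculated phase evolves with Earth rotation.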

  7. Estimation of excess mortality due to long-term exposure to PM2.5 in Japan using a high-resolution model for present and future scenarios

    Science.gov (United States)

    Goto, Daisuke; Ueda, Kayo; Ng, Chris Fook Sheng; Takami, Akinori; Ariga, Toshinori; Matsuhashi, Keisuke; Nakajima, Teruyuki

    2016-09-01

    Particulate matter with a diameter of less than 2.5 μm, known as PM2.5, can affect human health, especially in elderly people. Because of the imminent aging of society in most developed countries, the human health impacts of PM2.5 must be evaluated. In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan with a high-resolution stretched grid system (∼10 km for the high-resolution model, HRM) for the present (2000) and the future (2030, as prescribed by the Representative Concentration Pathway 4.5, RCP4.5). We also used the same model with a low-resolution uniform grid system (∼100 km for the low-resolution model, LRM). These calculations were conducted by nudging meteorological fields obtained from an atmosphere-ocean coupled model and providing the emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 among the elderly (over 65 years old) based on different minimum PM2.5 concentration (MINPM) levels to account for uncertainty, using the simulated PM2.5 distributions in a concentration-response function to express the health effect. As a result, we estimated the excess mortality for all of Japan to be 31,300 (95% confidence interval: 20,700 to 42,600) people in 2000 and 28,600 (95% confidence interval: 19,000 to 38,700) people in 2030 using the HRM with a MINPM of 5.8 μg/m3. In contrast, the LRM resulted in underestimates of approximately 30% (PM2.5 concentrations in 2000 and 2030), approximately 60% (excess mortality in 2000) and approximately 90% (excess mortality in 2030) compared to the HRM results. We also found that the uncertainty in the MINPM value, especially for low PM2.5 concentrations in the future (2030), can cause large variability in the estimates, ranging from 0 (MINPM of 15 μg/m3 in both HRM and LRM) to 95,000 (MINPM of 0 μg/m3 in HRM) people.
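
    The role of the MINPM threshold in a calculation like this can be sketched with the standard log-linear concentration-response form used in air-quality health impact assessment. The functional form is a common convention, and the beta, population, and rate values below are invented for illustration (they are not the study's inputs):

```python
import math

def excess_deaths(pop, baseline_rate, conc, min_conc, beta):
    """Excess mortality from long-term PM2.5 exposure using a log-linear
    concentration-response function (standard assumed form):
    RR = exp(beta * (C - MINPM)) for C above the threshold, else RR = 1."""
    c_eff = max(conc - min_conc, 0.0)       # no effect attributed below MINPM
    rr = math.exp(beta * c_eff)             # relative risk at this concentration
    af = (rr - 1.0) / rr                    # population attributable fraction
    return pop * baseline_rate * af         # expected excess deaths

# Hypothetical inputs: 1M elderly, 2%/yr baseline mortality, 15 ug/m3 vs MINPM 5.8
d = excess_deaths(1_000_000, 0.02, 15.0, 5.8, 0.006)
zero = excess_deaths(1_000_000, 0.02, 5.0, 5.8, 0.006)   # below threshold -> 0
```

    Raising MINPM zeroes out the contribution of all grid cells below the threshold, which is why the paper's estimates range from 0 to 95,000 people across MINPM choices.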

  8. A strategy for an accurate estimation of the basal permittivity in the Martian North Polar Layered Deposits

    CERN Document Server

    Lauro, S E; Pettinelli, E; Soldovieri, F; Cantini, F; Rossi, A P; Orosei, R

    2016-01-01

    The paper investigates the Mars subsurface by means of data collected by the Mars Advanced Radar for Subsurface and Ionosphere Sounding, which operates at frequencies of a few MHz. A data processing strategy that combines a simple inversion model with an accurate procedure for data selection is presented. This strategy mitigates the theoretical and practical difficulties of the inverse problem arising from inaccurate knowledge of the parameters describing both the scenario under investigation and the radiated electromagnetic field impinging on the Mars surface. The results presented in this paper show that it is possible to reliably retrieve the electromagnetic properties of deeper structures if such a strategy is carefully applied. An example is given here, where the analysis of the data collected on Gemina Lingula, a region of the North Polar Layered Deposits, allowed us to retrieve permittivity values for the basal unit in agreement with those usually associated with Earth basalt...

  9. Methodological extensions of meta-analysis with excess relative risk estimates. Application to risk of second malignant neoplasms among childhood cancer survivors treated with radiotherapy

    International Nuclear Information System (INIS)

    Although radiotherapy is recognized as an established risk factor for second malignant neoplasms (SMNs), the dose response of SMNs following radiotherapy has not been well characterized. In our previous meta-analysis of the risks of SMNs occurring among children who have received radiotherapy, the small number of eligible studies precluded a detailed evaluation. Therefore, to increase the number of eligible studies, we developed a method of calculating excess relative risk (ERR) per Gy estimates from studies for which the relative risk estimates for several dose categories were available. Comparing the calculated ERR with that described in several original papers validated the proposed method. This enabled us to increase the number of studies, which we used to conduct a meta-analysis. The overall ERR per Gy estimate of radiotherapy over 26 relevant studies was 0.60 (95% CI: 0.30-1.20), which is smaller than the corresponding estimate for atomic bomb survivors exposed to radiation as young children (1.7; 95% CI: 1.1-2.5). A significant decrease in ERR per Gy with increase in age at exposure (0.85 times per annual increase) was observed in the meta-regression. Heterogeneity was suggested by Cochran's Q statistic (P < 0.001), which may be partly accounted for by age at exposure. (author)
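
    Pooling ERR-per-Gy estimates across studies in the presence of heterogeneity (as flagged by Cochran's Q above) is commonly done with the DerSimonian-Laird random-effects method. A minimal sketch with invented study values, not the paper's data:

```python
import math

def dl_random_effects(estimates, ses):
    """DerSimonian-Laird random-effects pooling (generic method).
    Returns pooled estimate, its SE, Cochran's Q, and between-study variance tau^2."""
    w = [1.0 / se ** 2 for se in ses]                            # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))  # Cochran's Q
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # method-of-moments tau^2
    w_star = [1.0 / (se ** 2 + tau2) for se in ses]              # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    return pooled, se_pooled, q, tau2

# Made-up ERR/Gy estimates and standard errors from three hypothetical studies
pooled, se_p, q, tau2 = dl_random_effects([0.4, 0.8, 1.5], [0.2, 0.3, 0.6])
```

    The paper's extension is upstream of this step: converting category-specific relative risks into a usable ERR-per-Gy estimate so more studies qualify for the pooling.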

  10. Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system.

    Science.gov (United States)

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances from sources such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, its accuracy degrades over time due to drift. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimates, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the pedestrian's waist and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
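
    The accelerometer/magnetometer part of such a fusion can be sketched with classic tilt-compensated heading. This is only the measurement-side building block, with NED-style axes assumed (sign conventions differ between devices); the gyroscope and quaternion UKF fusion described in the paper are omitted:

```python
import math

def tilt_compensated_heading(acc, mag):
    """Heading (degrees, 0-360) from accelerometer and magnetometer readings.
    Assumes NED-style body axes; this is an illustrative sketch, not the
    paper's UKF, and sign conventions must be checked per device."""
    ax, ay, az = acc
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    mx, my, mz = mag
    # rotate the magnetometer reading into the horizontal plane
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(-yh, xh)) % 360.0

# Level device, horizontal field component along +x ("north"): heading ~ 0 deg
h0 = tilt_compensated_heading((0.0, 0.0, 9.81), (30.0, 0.0, -20.0))
h90 = tilt_compensated_heading((0.0, 0.0, 9.81), (0.0, -30.0, -20.0))
```

    A UKF then blends this magnetometer-derived heading with integrated gyroscope rates, which is what suppresses both magnetic disturbances and gyro drift.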

  11. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    Science.gov (United States)

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  12. An Improved Method Based on the Universal Ridge Estimate for the Excess Shrinkage of the Ridge Estimate

    Institute of Scientific and Technical Information of China (English)

    刘文卿

    2011-01-01

    Ridge estimation is an effective method for dealing with multicollinearity in multiple linear regression and yields a biased shrinkage estimate. Compared with the ordinary least squares estimate, the ridge estimate reduces the mean squared error of the parameter estimates but increases the residual sum of squares, worsening the fit. This paper proposes an improved method, based on the universal ridge estimate, to correct the excess shrinkage of the ridge estimate; it improves the goodness of fit and reduces the increase in the residual sum of squares relative to the ridge estimate.
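
    The trade-off described here (smaller coefficient norm and mean squared error, larger residual sum of squares) is easy to demonstrate with the ordinary ridge estimator beta_hat = (X'X + kI)^(-1) X'y. The sketch below shows plain ridge regression on synthetic near-collinear data, not the paper's universal-ridge refinement:

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimate (X'X + kI)^(-1) X'y; k = 0 recovers ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=50)      # near-collinear columns
y = X @ np.array([1.0, 2.0, 1.0]) + 0.1 * rng.normal(size=50)

b_ols = ridge(X, y, 0.0)
b_ridge = ridge(X, y, 1.0)
# ridge shrinks: the coefficient vector is no longer than the OLS one...
shrunk = np.linalg.norm(b_ridge) <= np.linalg.norm(b_ols)
# ...but its residual sum of squares is at least as large as the OLS RSS
rss_ols = float(np.sum((y - X @ b_ols) ** 2))
rss_ridge = float(np.sum((y - X @ b_ridge) ** 2))
```

    The paper's contribution is choosing the amount of shrinkage more carefully so that the RSS penalty is smaller than with a standard ridge constant.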

  13. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    Science.gov (United States)

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework of simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information is given. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable by using only one scan HRR and GMTI measurements, we designed an architecture to run the motion and structure filters in parallel by using multi-scan measurements. Moreover, to improve the estimation accuracy in large noise and/or false alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors have been significantly reduced by using the template information.

  14. A reliable and accurate portable device for rapid quantitative estimation of iodine content in different types of edible salt

    Directory of Open Access Journals (Sweden)

    Kapil Yadav

    2015-01-01

    Background: Continuous monitoring of salt iodization to ensure the success of the Universal Salt Iodization (USI) program can be significantly strengthened by the use of a simple, safe, and rapid method of salt iodine estimation. This study assessed the validity of iCheck Iodine, a new portable device developed by BioAnalyt GmbH to estimate the iodine content in salt. Materials and Methods: Validation of the device was conducted in the laboratory of the South Asia regional office of the International Council for Control of Iodine Deficiency Disorders (ICCIDD). The validity of the device was assessed using device-specific indicators, comparison of the iCheck Iodine device with iodometric titration, and comparison between iodine estimation using 1 g and 10 g of salt by iCheck Iodine, based on 116 salt samples procured from various small-, medium-, and large-scale salt processors across India. Results: The intra- and interassay imprecision for 10 parts per million (ppm), 30 ppm, and 50 ppm concentrations of iodized salt was 2.8%, 6.1%, and 3.1%, and 2.4%, 2.2%, and 2.1%, respectively. Interoperator imprecision was 6.2%, 6.3%, and 4.6% for salt with iodine concentrations of 10 ppm, 30 ppm, and 50 ppm, respectively. The correlation coefficient between measurements by the two methods was 0.934, and the correlation coefficient between measurements using 1 g and 10 g of iodized salt by the iCheck Iodine device was 0.983. Conclusions: The iCheck Iodine device is reliable and provides a valid method for the quantitative estimation of the iodine content of iodized salt fortified with potassium iodate, both in the field setting and for different types of salt.

  15. HIV Excess Cancers JNCI

    Science.gov (United States)

    In 2010, an estimated 7,760 new cancers were diagnosed among the nearly 900,000 Americans known to be living with HIV infection. According to the first comprehensive study in the United States, approximately half of these cancers were in excess of what wo

  16. How have ART treatment programmes changed the patterns of excess mortality in people living with HIV? Estimates from four countries in East and Southern Africa

    Directory of Open Access Journals (Sweden)

    Emma Slaymaker

    2014-04-01

    Background: Substantial falls in the mortality of people living with HIV (PLWH) have been observed since the introduction of antiretroviral therapy (ART) in sub-Saharan Africa. However, access to and uptake of ART have been variable in many countries. We report the excess deaths observed in PLWH before and after the introduction of ART, using data from five longitudinal studies in Malawi, South Africa, Tanzania, and Uganda, members of the network for Analysing Longitudinal Population-based HIV/AIDS data on Africa (ALPHA). Methods: Individual data from five demographic surveillance sites that conduct HIV testing were used to estimate mortality attributable to HIV, calculated as the difference between the mortality rates in PLWH and HIV-negative people. Excess deaths in PLWH were standardized for age and sex differences and summarized over the periods before and after ART became generally available. An exponential regression model was used to explore differences in the impact of ART across the sites. Results: 127,585 adults across the five sites contributed a total of 487,242 person-years. Before the introduction of ART, HIV-attributable mortality ranged from 45 to 88 deaths per 1,000 person-years. Following ART availability, this fell to 14-46 deaths per 1,000 person-years. Exponential regression modeling showed a reduction of more than 50% (HR = 0.43, 95% CI: 0.32-0.58) in mortality at ages 15-54 across all five sites, compared to the period before ART was available. Discussion: Excess mortality in adults living with HIV has fallen by over 50% in five communities in sub-Saharan Africa since the advent of ART. However, mortality rates in adults living with HIV are still 10 times higher than in HIV-negative people, indicating that substantial further reductions are possible. This analysis shows differences in the impact across the sites, and contrasts with developed countries where mortality among PLWH on ART can be
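
    The core quantity in this study, mortality attributable to HIV as a rate difference per 1,000 person-years, is a one-line calculation. A crude (unstandardized) sketch with invented counts; the study additionally standardizes for age and sex:

```python
def excess_mortality(deaths_hiv, py_hiv, deaths_neg, py_neg):
    """Excess (HIV-attributable) mortality: rate in PLWH minus rate in
    HIV-negative people, expressed per 1,000 person-years. Crude version."""
    rate_hiv = 1000.0 * deaths_hiv / py_hiv
    rate_neg = 1000.0 * deaths_neg / py_neg
    return rate_hiv - rate_neg

# Hypothetical counts: 600 deaths over 10,000 py among PLWH,
# 150 deaths over 30,000 py among HIV-negative people
excess = excess_mortality(600, 10_000, 150, 30_000)   # 60 - 5 = 55 per 1,000 py
```

    Comparing this quantity across calendar periods before and after ART roll-out gives the pre/post contrast (45-88 falling to 14-46 per 1,000 person-years) reported above.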

  17. Shorter sampling periods and accurate estimates of milk volume and components are possible for pasture based dairy herds milked with automated milking systems.

    Science.gov (United States)

    Kamphuis, Claudia; Burke, Jennie K; Taukiri, Sarah; Petch, Susan-Fay; Turner, Sally-Anne

    2016-08-01

    Dairy cows grazing pasture and milked using automated milking systems (AMS) have lower milking frequencies than indoor-fed cows milked using AMS. Therefore, milk recording intervals used for herd testing indoor-fed cows may not be suitable for cows on pasture-based farms. We hypothesised that accurate standardised 24 h estimates could be determined for AMS herds with milk recording intervals shorter than the Gold Standard (48 h), but that the optimum milk recording interval would depend on the herd average for milking frequency. The Gold Standard protocol was applied on five commercial dairy farms with AMS between December 2011 and February 2013. From 12 milk recording test periods, involving 2211 cow-test days and 8049 cow milkings, standardised 24 h estimates for milk volume and milk composition were calculated for the Gold Standard protocol and compared with those collected during nine alternative sampling scenarios, including six shorter sampling periods and three in which a fixed number of milk samples per cow were collected. The results indicate that a 48 h milk recording protocol is unnecessarily long for collecting accurate estimates during milk recording on pasture-based AMS farms. Collection of only two milk samples per cow was optimal in terms of high concordance correlation coefficients for milk volume and components and a low proportion of missed cow-test days. Further research is required to determine the effects of diurnal variation in milk composition on standardised 24 h estimates for milk volume and components before a protocol based on a fixed number of samples could be considered. Based on the results of this study, New Zealand has adopted a split protocol for herd testing based on the average milking frequency for the herd (NZ Herd Test Standard 8100:2015). PMID:27600967
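
    The agreement metric used above, Lin's concordance correlation coefficient (CCC), penalizes both scatter and systematic offset between a shortened protocol and the Gold Standard. A minimal implementation with toy data (not the study's measurements):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements,
    e.g. shortened-protocol vs. Gold Standard 24 h estimates.
    CCC = 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx + sy + (mx - my) ** 2)

perfect = ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # identical series -> 1.0
shifted = ccc([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])   # same shape, constant offset -> < 1
```

    Unlike Pearson's r (which is 1.0 for both pairs above), the CCC drops when one protocol is systematically biased relative to the other, which is exactly the failure mode a shortened sampling period might introduce.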

  18. A relative risk estimation of excessive frequency of malignant tumors in population due to discharges into the atmosphere from fossil-fuel power plants and nuclear power plants

    International Nuclear Information System (INIS)

    Exposure of the population (doses to the lungs, bone and whole body) due to fossil-fuel power plants (FFPPs) is estimated using the example of a large modern coal-fired FFPP, taking into account the content of 226Ra, 228Ra, 210Pb, 210Po, 40K and 232Th in the fly ash as well as radon discharges. Doses produced by these radionuclides for individuals living within 18 km of the FFPP are given, together with the mean collective doses over the agricultural territory of the country. These values are compared with published data on doses due to atmospheric discharges of inert radioactive gases, 60Co, 137Cs, 90Sr and 131I from nuclear power plants (NPPs). The total exposure risk to the nearby population from coal FFPP fly ash is found to be about two orders of magnitude greater than the risk to individuals from NPP discharges under normal operating conditions. Doses from the discharges of oil-fired FFPPs are an order of magnitude lower than those from coal-fired FFPPs. The risk of excess cancer frequency due to chemical carcinogens contained in FFPP discharges, including some metals, is discussed. It is noted that a more complete evaluation of the risk from NPPs requires data on population doses from all stages of nuclear fuel production and radioactive waste disposal, as well as predicted collective doses per NPP power unit due to an accident.

  19. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    International Nuclear Information System (INIS)

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques, and to date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.

  20. An Accurate Correlation Estimation Method for InSAR

    Institute of Scientific and Technical Information of China (English)

    韦海军; 黄海风; 朱炬波; 梁甸农

    2011-01-01

    The InSAR correlation, a measure of the similarity of interferometric SAR image pairs, is an elemental parameter for InSAR applications. In practice, correlation is estimated by comparing the radar returns across several nearby image pixels. The estimate is biased by the underlying topographic interferometric fringes and by the finite number of pixel samples. We present here a method for correcting this bias, resulting in significantly more accurate correlation estimates. First, we remove the underlying topographic fringe patterns using phases simulated from an existing coarse DEM. A second-kind statistics method is then applied to remove the remaining bias. We demonstrate the value of our method using data collected over the Etna volcano by the SIR-C/XSAR sensors.
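
    The first step described above, removing simulated topographic phase before computing the sample correlation (coherence), can be sketched directly. Synthetic data; the second-kind-statistics bias correction is not reproduced here:

```python
import numpy as np

def coherence(s1, s2, topo_phase):
    """Sample coherence magnitude of an interferometric pair after compensating
    the (simulated) topographic phase:
    gamma = |sum(s1 * conj(s2 * exp(j*topo)))| / sqrt(sum|s1|^2 * sum|s2|^2)."""
    s2_flat = s2 * np.exp(1j * topo_phase)          # remove topographic fringes
    num = np.abs(np.sum(s1 * np.conj(s2_flat)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return float(num / den)

rng = np.random.default_rng(1)
n = 256
s1 = rng.normal(size=n) + 1j * rng.normal(size=n)   # synthetic master image pixels
topo = np.linspace(0.0, 8.0 * np.pi, n)             # synthetic topographic fringe ramp
s2 = s1 * np.exp(-1j * topo)                        # identical scene under the fringe
gamma_raw = coherence(s1, s2, np.zeros(n))          # biased low by the fringe
gamma_flat = coherence(s1, s2, topo)                # fringe removed -> coherence ~ 1
```

    Without fringe removal the phase ramp decorrelates the estimate even though the two images are perfectly correlated; with the DEM-simulated phase removed, the estimator recovers the true coherence.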

  1. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations.

    OpenAIRE

    McMillan, K; Bostani, M; McCollough, C; McNitt-Gray, M

    2015-01-01

    PURPOSE: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. METHODS: For 10 patients who received clinically-indicated chest (n=5) and ab...

  2. An Accurate FOA and TOA Estimation Algorithm for the Galileo Search and Rescue Signal

    Institute of Scientific and Technical Information of China (English)

    王堃; 吴嗣亮; 韩月涛

    2011-01-01

    To meet the high-precision requirements for frequency-of-arrival (FOA) and time-of-arrival (TOA) estimation in the Galileo search and rescue (SAR) system, and considering that the message bit width is unknown in actually received beacons, a new FOA and TOA estimation algorithm is proposed that combines a multi-dimensional joint maximum likelihood estimation algorithm with a barycenter calculation algorithm. The principle of the algorithm is derived after the signal model is introduced, and the concrete realization of the estimation algorithm is given. Monte Carlo simulation results and measurement results show that at the carrier-to-noise-ratio threshold of 34.8 dBHz, the root-mean-square errors of the FOA and TOA estimates of this algorithm are within 0.03 Hz and 9.5 μs respectively, which is better than the system requirements of 0.05 Hz and 11 μs. This algorithm has been applied to the Galileo Medium-altitude Earth Orbit Local User Terminal (MEOLUT station).
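
    The barycenter idea, refining a coarse peak location by a magnitude-weighted centroid of neighbouring bins, can be illustrated on a simple FOA problem. This sketch shows only the barycenter refinement of an FFT peak with synthetic data; the paper combines it with multi-dimensional joint maximum likelihood estimation:

```python
import numpy as np

def foa_estimate(signal, fs, halfwidth=3):
    """Frequency estimate: coarse FFT peak refined by the barycenter (centroid)
    of nearby spectral magnitude bins. Illustrative sketch only."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal))
    k0 = int(np.argmax(spec))                       # coarse peak bin
    ks = np.arange(max(k0 - halfwidth, 0), min(k0 + halfwidth + 1, len(spec)))
    k_bar = np.sum(ks * spec[ks]) / np.sum(spec[ks])  # magnitude-weighted centroid
    return float(k_bar * fs / n)                    # refined frequency in Hz

fs, f_true = 1000.0, 123.4                          # hypothetical sample rate and tone
t = np.arange(4096) / fs
x = np.cos(2.0 * np.pi * f_true * t)
f_hat = foa_estimate(x, fs)                         # sub-bin accuracy from the centroid
```

    The centroid recovers a fractional-bin frequency well inside the FFT bin width (fs/n ≈ 0.24 Hz here), which is the same mechanism that lets the beacon processor beat the raw spectral resolution.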

  3. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    Science.gov (United States)

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS: values of PDMS-to-air partition ratios or coefficients (KPDMS-Air) and time to equilibrium for a range of SVOCs. Measured values of KPDMS-Air,Exp at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air,Exp and estimates made using the pp-LFER model (log KPDMS-Air,pp-LFER) and the COSMOtherm program (log KPDMS-Air,COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating the anticipated better performance of the pp-LFER model. Calculations made using measured KPDMS-Air,Exp values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. PMID:27179237
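
    The "time to 25% of equilibrium capacity" figures quoted above follow from the usual first-order (one-compartment) uptake model for passive samplers. A sketch of that model, which is an assumption here (the paper's full calculation uses the measured partition coefficients and sampler geometry):

```python
import math

def fraction_of_equilibrium(t, t_eq25):
    """Fraction of equilibrium capacity reached by time t, assuming first-order
    uptake f(t) = 1 - exp(-k*t); the rate constant k is recovered from t_eq25,
    the time at which 25% of equilibrium is reached."""
    k = -math.log(0.75) / t_eq25        # k such that f(t_eq25) = 0.25
    return 1.0 - math.exp(-k * t)

# With a hypothetical 1-day time to 25% (as for alpha-HCH above):
f1 = fraction_of_equilibrium(1.0, 1.0)       # 25% by construction
f_long = fraction_of_equilibrium(20.0, 1.0)  # essentially equilibrated
```

    Because k scales inversely with KPDMS-Air, compounds with very large partition coefficients (like TTBPP) take orders of magnitude longer to approach equilibrium, hence the ∼1 day to ∼500 years range.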

  4. How accurately are maximal metabolic equivalents estimated based on the treadmill workload in healthy people and asymptomatic subjects with cardiovascular risk factors?

    Science.gov (United States)

    Maeder, M T; Muenzer, T; Rickli, H; Brunner-La Rocca, H P; Myers, J; Ammann, P

    2008-08-01

    Maximal exercise capacity expressed as metabolic equivalents (METs) is rarely directly measured (measured METs; mMETs) but estimated from maximal workload (estimated METs; eMETs). We assessed the accuracy of predicting mMETs by eMETs in asymptomatic subjects. Thirty-four healthy volunteers without cardiovascular risk factors (controls) and 90 patients with at least one risk factor underwent cardiopulmonary exercise testing using individualized treadmill ramp protocols. The equation of the American College of Sports Medicine (ACSM) was employed to calculate eMETs. Despite a close correlation between eMETs and mMETs (patients: r = 0.82, controls: r = 0.88; p < 0.001 for both), eMETs were higher than mMETs in both patients [11.7 (8.9 - 13.4) vs. 8.2 (7.0 - 10.6) METs; p < 0.001] and controls [17.0 (16.2 - 18.2) vs. 15.6 (14.2 - 17.0) METs; p < 0.001]. The absolute [2.5 (1.6 - 3.7) vs. 1.3 (0.9 - 2.1) METs; p < 0.001] and the relative [28 (19 - 47) vs. 9 (6 - 14) %; p < 0.001] difference between eMETs and mMETs was higher in patients. In patients, ratio limits of agreement of 1.33 (×/÷ 1.40) between eMETs and mMETs were obtained, whereas the ratio limits of agreement were 1.09 (×/÷ 1.13) in controls. The ACSM equation significantly overestimates mMETs even in young and fit subjects, and the overestimation is markedly more pronounced in older and less fit subjects with cardiovascular risk factors.

  5. Is the predicted postoperative FEV1 estimated by planar lung perfusion scintigraphy accurate in patients undergoing pulmonary resection? Comparison of two processing methods

    International Nuclear Information System (INIS)

    Estimation of postoperative forced expiratory volume in 1 s (FEV1) with radionuclide lung scintigraphy is frequently used to define functional operability in patients undergoing lung resection. We conducted a study to outline the reliability of planar quantitative lung perfusion scintigraphy (QLPS) with two different processing methods to estimate the postoperative lung function in patients with resectable lung disease. Forty-one patients with a mean age of 57±12 years who underwent either a pneumonectomy (n=14) or a lobectomy (n=27) were included in the study. QLPS with Tc-99m macroaggregated albumin was performed. Three equal zones were generated for each lung [zone method (ZM)], and more precise regions of interest were drawn according to their anatomical shape in the anterior and posterior projections [lobe mapping method (LMM)] for each patient. The predicted postoperative (ppo) FEV1 values were compared with actual FEV1 values measured on postoperative day 1 (pod1 FEV1) and day 7 (pod7 FEV1). The mean of preoperative FEV1 and ppoFEV1 values was 2.10±0.57 and 1.57±0.44 L, respectively. The mean of pod1 FEV1 (1.04±0.30 L) was lower than ppoFEV1 (p<0.05). PpoFEV1 values predicted by both the zone and LMMs overestimated the actual measured lung volumes in patients undergoing pulmonary resection in the early postoperative period. LMM is not superior to ZM. (author)
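    The underlying prediction is the standard perfusion-based formula: the preoperative FEV1 is scaled by the fraction of total perfusion that remains after resection. A minimal sketch (the example numbers are illustrative, not this study's patient data):

```python
def ppo_fev1(preop_fev1_l, resected_perfusion_fraction):
    """Predicted postoperative FEV1 (L) from quantitative perfusion
    scintigraphy: preoperative FEV1 times the perfusion fraction that
    remains after resection."""
    return preop_fev1_l * (1.0 - resected_perfusion_fraction)

# Illustrative: preop FEV1 of 2.10 L, resected region carrying 25% of perfusion
predicted = ppo_fev1(2.10, 0.25)  # 1.575 L
```

The ZM and LMM approaches differ only in how the resected perfusion fraction is read off the scan (equal thirds per lung vs. anatomically drawn lobar regions).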

  6. An Accurate Model of Project Budget Estimation Based on SVM-PCA and Chaos Post-Processing

    Institute of Scientific and Technical Information of China (English)

    周建永

    2014-01-01

    In traditional estimation methods, a linear model is used and the budget estimates are poor. An accurate project budget estimation model based on SVM-PCA with chaos post-processing is proposed. On top of the SVM model, PCA is used to filter out redundant input information and preserve the contribution rate of the input data. The output is then passed to a post-processing stage in which the residual characteristics of the data are smoothed using chaos-based processing, ensuring the accuracy of the project budget estimation model. Finally, a group of 5 projects covering 10 classes of elements was used as the experimental target; the results show that with the proposed model the core consumption of each project is estimated accurately, demonstrating good practical value for engineering applications.

  7. The Characteristics of Excess Kurtosis and Heavy Tails of Stock Market Indices and the Estimation of Risk Measures

    Institute of Scientific and Technical Information of China (English)

    杨昕

    2012-01-01

    The empirical analysis of four stock market indices (DOW, Nasdaq, S&P500 and FTSE100) illustrates that the distributions of the log returns exhibit excess kurtosis and heavy tails, and that the Logistic distribution fits the returns very well. Estimation formulas for the risk measures VaR and CVaR based on the Logistic distribution are given, and the estimated risk measures of the log returns of the four stock market indices are reported.
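    For the Logistic distribution the quantile function is available in closed form, so VaR is a one-line formula and CVaR follows by integrating the quantile over the tail. A sketch for a loss variable X ~ Logistic(mu, s); this is the generic closed form, not necessarily the exact parameterization used in the paper:

```python
import math

def logistic_var(mu, s, alpha):
    """VaR at confidence level alpha: the alpha-quantile of Logistic(mu, s),
    Q(p) = mu + s*ln(p/(1-p))."""
    return mu + s * math.log(alpha / (1.0 - alpha))

def logistic_cvar(mu, s, alpha):
    """CVaR (expected shortfall): the mean loss beyond the alpha-quantile.
    Closed form from integrating the logistic quantile over (alpha, 1):
    CVaR = mu + s * (-a*ln a - (1-a)*ln(1-a)) / (1-a)."""
    entropy = -alpha * math.log(alpha) - (1.0 - alpha) * math.log(1.0 - alpha)
    return mu + s * entropy / (1.0 - alpha)
```

For mu = 0, s = 1 and alpha = 0.95 this gives VaR = ln 19 ≈ 2.944 and CVaR ≈ 3.970; CVaR always exceeds VaR at the same level.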

  8. Excess wind power

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2005-01-01

    Expansion of wind power is an important element in Danish climate change abatement policy. Starting from an already high penetration of approximately 20%, however, momentary excess production will become an important issue in the future. Through energy systems analyses using the EnergyPLAN model and economic...

  9. Addiction as excessive appetite.

    Science.gov (United States)

    Orford, J

    2001-01-01

    The excessive appetite model of addiction is summarized. The paper begins by considering the forms of excessive appetite which a comprehensive model should account for: principally, excessive drinking, smoking, gambling, eating, sex and a diverse range of drugs including at least heroin, cocaine and cannabis. The model rests, therefore, upon a broader concept of what constitutes addiction than the traditional, more restricted, and arguably misleading definition. The core elements of the model include: very skewed consumption distribution curves; restraint, control or deterrence; positive incentive learning mechanisms which highlight varied forms of rapid emotional change as rewards, and wide cue conditioning; complex memory schemata; secondary, acquired emotional regulation cycles, of which 'chasing', 'the abstinence violation effect' and neuroadaptation are examples; and the consequences of conflict. These primary and secondary processes, occurring within diverse sociocultural contexts, are sufficient to account for the development of a strong attachment to an appetitive activity, such that self-control is diminished, and behaviour may appear to be disease-like. Giving up excess is a natural consequence of conflict arising from strong and troublesome appetite. There is much supportive evidence that change occurs outside expert treatment, and that when it occurs within treatment the change processes are more basic and universal than those espoused by fashionable expert theories. PMID:11177517

  10. Reducing Excessive Television Viewing.

    Science.gov (United States)

    Jason, Leonard A.; Rooney-Rebeck, Patty

    1984-01-01

    A youngster who excessively watched television was placed on a modified token economy: earned tokens were used to activate the television for set periods of time. Positive effects resulted in the child's school work, in the amount of time his family spent together, and in his mother's perception of family social support. (KH)

  11. The otherness of sexuality: excess.

    Science.gov (United States)

    Stein, Ruth

    2008-03-01

    The present essay, the second of a series of three, aims at developing an experience-near account of sexuality by rehabilitating the idea of excess and its place in sexual experience. It is suggested that various types of excess, such as excess of excitation (Freud), the excess of the other (Laplanche), excess beyond symbolization and the excess of the forbidden object of desire (Leviticus; Lacan) work synergistically to constitute the compelling power of sexuality. In addition to these notions, further notions of excess touch on its transformative potential. Such notions address excess that shatters psychic structures and that is actively sought so as to enable new ones to evolve (Bersani). Work is quoted that regards excess as a way of dealing with our lonely, discontinuous being by using the "excessive" cosmic energy circulating through us to achieve continuity against death (Bataille). Two contemporary analytic thinkers are engaged who deal with the object-relational and intersubjective vicissitudes of excess.

  12. An Accurate Estimate of the Take-off Weight of Fighter Aircraft at the Conceptual Design Stage

    Institute of Scientific and Technical Information of China (English)

    杨华保

    2001-01-01

    I developed a software tool for estimating the take-off weight of fighter aircraft from tactical and technical requirements at the conceptual design stage. Based on a classification of the tactical and technical requirement indicators, a weight-analysis method is used to analyse the requirements. Take-off weight consists of empty weight, fuel weight and payload weight. Estimating take-off weight is an iterative process, as many factors, including empty weight, fuel weight and the tactical and technical requirements themselves, are related to take-off weight. In the software, empty weight is calculated from take-off weight by a formula deduced from statistical data relating empty weight to take-off weight. To estimate fuel weight, the mission profile is divided into several basic segments and the fuel consumption of each segment is calculated. The software makes the iterative process of accurately estimating the take-off weight of a fighter aircraft straightforward. Verification against data for existing fighter types confirms preliminarily that the method is feasible and the results are satisfactory.
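    The iterative sizing loop described above amounts to one-dimensional root finding: the take-off weight W must satisfy W·(1 − We/W − Wf/W) = payload, with the empty-weight fraction We/W taken from a statistical fit. A minimal sketch using bisection; the power-law coefficients and mission fuel fraction below are illustrative placeholders, not the paper's statistical data:

```python
def takeoff_weight(payload_kg, fuel_fraction, a, c,
                   w_lo=1000.0, w_hi=100000.0):
    """Solve W*(1 - a*W**c - fuel_fraction) = payload for take-off weight W
    by bisection. a, c parameterize a statistical empty-weight fraction
    We/W = a*W**c (hypothetical fit, in the spirit of conceptual-design
    sizing correlations). Assumes the residual changes sign on [w_lo, w_hi]."""
    def residual(w):
        return w * (1.0 - a * w ** c - fuel_fraction) - payload_kg
    for _ in range(200):
        mid = 0.5 * (w_lo + w_hi)
        if residual(mid) > 0.0:
            w_hi = mid
        else:
            w_lo = mid
    return 0.5 * (w_lo + w_hi)

# Illustrative: 2000 kg payload, 30% mission fuel fraction,
# empty-weight fraction We/W = 2.34 * W**-0.13 (placeholder coefficients)
w_to = takeoff_weight(2000.0, 0.30, 2.34, -0.13)
```

Bisection is used rather than naive fixed-point substitution because the substitution map can diverge when the empty-weight fraction is a steep function of W.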

  13. Excess flow shutoff valve

    Energy Technology Data Exchange (ETDEWEB)

    Kiffer, Micah S.; Tentarelli, Stephen Clyde

    2016-02-09

    Excess flow shutoff valve comprising a valve body, a valve plug, a partition, and an activation component where the valve plug, the partition, and activation component are disposed within the valve body. A suitable flow restriction is provided to create a pressure difference between the upstream end of the valve plug and the downstream end of the valve plug when fluid flows through the valve body. The pressure difference exceeds a target pressure difference needed to activate the activation component when fluid flow through the valve body is higher than a desired rate, and thereby closes the valve.

  14. Organic nutrients and excess nitrogen in the North Atlantic subtropical gyre

    Directory of Open Access Journals (Sweden)

    A. Landolfi

    2008-02-01

    To enable an accurate estimate of total excess nitrogen (N) in the North Atlantic, a new tracer, TNxs, is defined which includes the contribution of organic nutrients to the assessment of N:P stoichiometric anomalies. We estimate the spatial distribution of TNxs within the North Atlantic using data from a trans-Atlantic section across 24.5° N conducted in 2004. We then employ three different approaches to infer rates of total excess nitrogen accumulation using pCFC-12 derived ventilation ages (a TNxs vertical integration, a one-end-member and a two-end-member mixing model). Despite some variability among the different methods, the dissolved organic nutrient fraction always contributes about half of the TNxs accumulation, which is on the order of 9.38±4.18×10^11 mol N y^-1. Here we suggest that neglecting organic nutrients in stoichiometric balances of the marine N and P inventories can lead to systematic errors when estimating a nitrogen excess or deficit relative to the Redfield ratio in the oceans. For the North Atlantic, the inclusion of the organic fraction leads to an upward revision of the N supply by N2 fixation to 10.2±6.9×10^11 mol N y^-1. This enhanced estimate of nitrogen fixation reconciles the geochemical estimates of N2 fixation derived from excess nitrate with the direct estimates from N2 fixation measurements.
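    The stoichiometric bookkeeping behind a TNxs-style tracer can be sketched as follows, assuming the Redfield N:P ratio of 16 and omitting any constant offset the paper's exact definition may carry; the concentrations used below are illustrative, not section data:

```python
REDFIELD_N_TO_P = 16.0

def n_excess(nitrate, phosphate, don=0.0, dop=0.0):
    """Nitrogen excess relative to Redfield stoichiometry (umol/kg).
    With don/dop (dissolved organic N and P) included this is a TNxs-style
    total; with the zero defaults it reduces to the conventional
    inorganic-only anomaly. No constant offset is applied (assumption)."""
    return (nitrate + don) - REDFIELD_N_TO_P * (phosphate + dop)

# Illustrative water parcel: organic pools shift the inferred excess
inorganic_only = n_excess(20.0, 1.2)                    # 0.8 umol/kg
with_organics = n_excess(20.0, 1.2, don=5.0, dop=0.1)   # 4.2 umol/kg
```

Because DON is typically large relative to DOP in N-normalized terms, including the organic pools raises the inferred excess, which is the paper's central point.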

  15. Abundance, Excess, Waste

    Directory of Open Access Journals (Sweden)

    Rox De Luca

    2016-02-01

    Her recent work focuses on the concepts of abundance, excess and waste. These concerns translate directly into vibrant and colourful garlands that she constructs from discarded plastics collected on Bondi Beach where she lives. The process of collecting is fastidious, as is the process of sorting and grading the plastics by colour and size. This initial gathering and sorting process is followed by threading the components onto strings of wire. When completed, these assemblages stand in stark contrast to the ease of disposability associated with the materials that arrive on the shoreline as evidence of our collective human neglect and destruction of the environment around us. The contrast is heightened by the fact that the constructed garlands embody the paradoxical beauty of our plastic waste byproducts, while also evoking the ways by which those byproducts similarly accumulate in randomly assorted patterns across the oceans and beaches of the planet.

  16. The High Price of Excessive Alcohol Consumption

    Centers for Disease Control (CDC) Podcasts

    2011-10-17

    This podcast is based on the October 2011 release of a report estimating the economic cost of excessive drinking. Excessive alcohol consumption cost the U. S. $223.5 billion in 2006, or about $1.90 per drink. Over three-quarters (76%) of these costs were due to binge drinking, defined as consuming 4 or more alcoholic beverages per occasion for women or 5 or more drinks per occasion for men.  Created: 10/17/2011 by National Center for Chronic Disease Prevention and Health Promotion.   Date Released: 10/17/2011.

  17. Attempted suicide: prognostic factors and estimated excess mortality

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Leal Vidal

    2013-01-01

    This retrospective cohort study aimed to analyze the epidemiological profile of individuals who attempted suicide from 2003 to 2009 in Barbacena, Minas Gerais State, Brazil, to calculate the mortality rate from suicide and other causes, and to estimate the risk of death in these individuals. Data were collected from police reports and death certificates. Survival analysis was performed and Cox multiple regression was used. Among the 807 individuals who attempted suicide, there were 52 deaths: 12 by suicide, 10 from external causes, and 30 from other causes. Ninety percent of suicide deaths occurred within 24 months after the attempt. Risk of death was significantly greater in males, married individuals, and individuals over 60 years of age. The standardized mortality ratio showed excess mortality by suicide. The findings showed that the mortality rate among patients who had attempted suicide was higher than expected in the general population, indicating the need to improve health care for these individuals.

  18. Changing guards: time to move beyond body mass index for population monitoring of excess adiposity.

    Science.gov (United States)

    Tanamas, S K; Lean, M E J; Combet, E; Vlassopoulos, A; Zimmet, P Z; Peeters, A

    2016-07-01

    With the obesity epidemic, and the effects of aging populations, human phenotypes have changed over two generations, possibly more dramatically than in other species previously. As obesity is an important and growing hazard for population health, we recommend a systematic evaluation of the optimal measure(s) for population-level excess body fat. Ideal measure(s) for monitoring body composition and obesity should be simple, as accurate and sensitive as possible, and provide good categorization of related health risks. Combinations of anthropometric markers or predictive equations may facilitate better use of anthropometric data than single measures to estimate body composition for populations. Here, we provide new evidence that increasing proportions of aging populations are at high health-risk according to waist circumference, but not body mass index (BMI), so continued use of BMI as the principal population-level measure substantially underestimates the health-burden from excess adiposity.
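    The paper's point, that waist circumference and BMI can classify the same person differently, is easy to illustrate. A minimal sketch using the conventional BMI ≥ 30 kg/m² obesity cutoff and the commonly used waist-circumference action levels (102 cm for men, 88 cm for women); the example person is hypothetical:

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def high_risk_bmi(weight_kg, height_m, cutoff=30.0):
    """Obesity by the conventional BMI >= 30 kg/m^2 cutoff."""
    return bmi(weight_kg, height_m) >= cutoff

def high_risk_waist(waist_cm, sex):
    """High-risk waist circumference by the commonly used action levels:
    >= 102 cm for men, >= 88 cm for women."""
    return waist_cm >= (102.0 if sex == "male" else 88.0)

# Illustrative older adult: flagged by waist circumference but not by BMI
discordant = high_risk_waist(105.0, "male") and not high_risk_bmi(75.0, 1.70)
```

This discordance grows with age as muscle mass is replaced by central fat at roughly constant weight, which is why the authors argue BMI alone underestimates the burden.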

  19. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    International Nuclear Information System (INIS)

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  20. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna, for which the radiated power density is known in a finite number of points on the far-field sphere, is presented. The main application of the formula is the determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
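    The quantity being estimated is the standard one: D = 4πU_max/P_rad, with P_rad obtained by integrating the sampled power density over the far-field sphere. A brute-force numerical sketch using simple midpoint quadrature, not the paper's spherical-wave-expansion formula:

```python
import math

def directivity(u, n_theta=361, n_phi=64):
    """Directivity D = 4*pi*U_max / P_rad from a power pattern u(theta, phi)
    sampled on a midpoint grid over the far-field sphere (crude quadrature
    stand-in for the paper's exact spherical-wave formula)."""
    d_t = math.pi / n_theta
    d_p = 2.0 * math.pi / n_phi
    p_rad = 0.0
    u_max = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_t
        sin_t = math.sin(theta)
        for j in range(n_phi):
            val = u(theta, (j + 0.5) * d_p)
            u_max = max(u_max, val)
            p_rad += val * sin_t * d_t * d_p
    return 4.0 * math.pi * u_max / p_rad

# Hertzian dipole pattern U ~ sin^2(theta): the classic D = 1.5 check case
d_dipole = directivity(lambda t, p: math.sin(t) ** 2)
```

A Hertzian dipole recovers the textbook value D = 1.5 (1.76 dBi); the paper's contribution is achieving comparable accuracy from far fewer samples via the spherical wave expansion.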

  1. Excess Molar Volumes and Viscosities of Binary Mixture of Diethyl Carbonate+Ethanol at Different Temperatures

    Institute of Scientific and Technical Information of China (English)

    MA Peisheng; LI Nannan

    2005-01-01

    The purpose of this work was to report excess molar volumes and dynamic viscosities of the binary mixture of diethyl carbonate (DEC) + ethanol. Densities and viscosities of the binary mixture of DEC + ethanol at temperatures of 293.15 K-343.15 K and atmospheric pressure were determined over the entire composition range. Densities of the binary mixture were measured using a vibrating U-shaped sample tube densimeter. Viscosities were determined using an Ubbelohde suspended-level viscometer. Densities are accurate to 1.0×10^-5 g·cm^-3, and viscosities are reproducible within ±0.003 mPa·s. From these data, excess molar volumes and deviations in viscosity were calculated. The positive excess molar volumes and negative deviations in viscosity for the DEC + ethanol system are due to strong specific interactions. All excess molar volumes and deviations in viscosity were fitted to the Redlich-Kister polynomial equation. The fitting parameters are presented, and the average and standard deviations were also calculated. The correlation errors are very small, which shows that the correlated equations are valuable for estimating densities and viscosities of the binary mixture.
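    The reported quantities follow directly from the measured densities. A minimal sketch of the excess-molar-volume calculation and the Redlich-Kister form used for fitting; the density and coefficient values in the example are illustrative, not the paper's measurements:

```python
def excess_molar_volume(x1, m1, m2, rho_mix, rho1, rho2):
    """Excess molar volume V^E (cm^3/mol) of a binary mixture from measured
    densities (g/cm^3) and molar masses (g/mol): real mixture molar volume
    minus the ideal (mole-fraction-weighted) molar volume."""
    x2 = 1.0 - x1
    v_real = (x1 * m1 + x2 * m2) / rho_mix
    v_ideal = x1 * m1 / rho1 + x2 * m2 / rho2
    return v_real - v_ideal

def redlich_kister(x1, coeffs):
    """Redlich-Kister polynomial: V^E = x1*(1-x1) * sum_k A_k*(2*x1 - 1)**k.
    Vanishes at both pure-component limits by construction."""
    return x1 * (1.0 - x1) * sum(a * (2.0 * x1 - 1.0) ** k
                                 for k, a in enumerate(coeffs))

# Illustrative equimolar point (M_DEC = 118.13, M_ethanol = 46.07 g/mol;
# the mixture density 0.910 g/cm^3 is a placeholder, not a measured value)
ve_example = excess_molar_volume(0.5, 118.13, 46.07, 0.910, 0.975, 0.789)
```

A mixture density below the ideal value yields a positive V^E, matching the sign reported for DEC + ethanol.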

  2. An accurate and fast pose estimation algorithm for the on-board camera of a mobile robot

    Institute of Scientific and Technical Information of China (English)

    唐庆顺; 吴春富; 李国栋; 王小龙; 周风余

    2015-01-01

    An accurate and fast pose estimation problem for the on-board camera of a mobile robot is investigated. First, the special properties of the pose of a mobile robot's on-board camera are analyzed. Second, an auxiliary rotation matrix is constructed from the on-board camera's equivalent rotation axis and used to transform the original essential matrix and homography matrix into a simplified form that can be decomposed through elementary mathematical operations. Finally, simulation experiments verify the algorithm's speed, accuracy and robustness. The experimental results show that, compared with traditional algorithms, the proposed algorithm achieves higher accuracy and faster computation, together with robustness to disturbance of the on-board camera's equivalent rotation axis. In addition, the number of possible solutions is reduced by half, and the unique rotation angle of the mobile robot can be determined except when the 3D planar scene structure is perpendicular to the ground, which greatly facilitates pose control of the mobile robot.

  3. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    Joseph DeVeto

    2004-01-01

    Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students not only speak more accurately, but also more fluently. That technique is dictations.

  4. Accurate Finite Difference Algorithms

    Science.gov (United States)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  5. Student estimations of peer alcohol consumption

    DEFF Research Database (Denmark)

    Stock, Christiane; Mcalaney, John; Pischke, Claudia;

    2014-01-01

    BACKGROUND: The Social Norms Approach, with its focus on positive behaviour and its consensus orientation, is a health promotion intervention of relevance to the context of a Health Promoting University. In particular, the approach could assist with addressing excessive alcohol consumption. AIM: This article aims to discuss the link between the Social Norms Approach and the Health Promoting University, and to analyse estimations of peer alcohol consumption among European university students. METHODS: A total of 4392 students from universities in six European countries and Turkey were asked to report their own typical alcohol consumption per day and to estimate the same for their peers of the same sex. Students were classified as accurate or inaccurate estimators of peer alcohol consumption. Socio-demographic factors and personal alcohol consumption were examined as predictors of an accurate estimation...

  6. Does Excessive Pronation Cause Pain?

    DEFF Research Database (Denmark)

    Mølgaard, Carsten Møller; Olesen Gammelgaard, Christian; Nielsen, R. G.;

    2008-01-01

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient, they can cause excessive pronation of the foot. The current treatment consists of antipronation shoes or insoles, most recently studied by Kulce DG et al. (2007). So far there have been no randomized controlled studies documenting the effect of this treatment. Therefore, the authors wanted to investigate whether it is possible to measure the effect of treatment with insoles.

  7. Widespread Excess Ice in Arcadia Planitia, Mars

    CERN Document Server

    Bramson, Ali M; Putzig, Nathaniel E; Sutton, Sarah; Plaut, Jeffrey J; Brothers, T Charles; Holt, John W

    2015-01-01

    The distribution of subsurface water ice on Mars is a key constraint on past climate, while the volumetric concentration of buried ice (pore-filling versus excess) provides information about the process that led to its deposition. We investigate the subsurface of Arcadia Planitia by measuring the depth of terraces in simple impact craters and mapping a widespread subsurface reflection in radar sounding data. Assuming that the contrast in material strengths responsible for the terracing is the same dielectric interface that causes the radar reflection, we can combine these data to estimate the dielectric constant of the overlying material. We compare these results to a three-component dielectric mixing model to constrain composition. Our results indicate a widespread, decameters-thick layer that is excess water ice ~10^4 km^3 in volume. The accumulation and long-term preservation of this ice is a challenge for current Martian climate models.
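    The dielectric-constant estimate combines the radar reflection's two-way delay with the independently measured terrace depth, via depth = c·t/(2√ε). A minimal sketch; the 40 m depth is illustrative, and ε ≈ 3.15 is the commonly quoted value for pure water ice:

```python
import math

C = 299792458.0  # speed of light in vacuum, m/s

def dielectric_constant(two_way_delay_s, depth_m):
    """Real dielectric constant of the overburden inferred from a subsurface
    radar reflection's two-way travel time and an independent depth estimate
    (here, the depth of crater terraces): eps = (c*t / (2*d))**2."""
    return (C * two_way_delay_s / (2.0 * depth_m)) ** 2

# Illustrative: a reflector at 40 m depth under material with eps of water ice
delay = 2.0 * 40.0 * math.sqrt(3.15) / C   # forward model the delay
eps = dielectric_constant(delay, 40.0)      # recovers ~3.15
```

A recovered ε near 3 points to nearly pure (excess) ice, whereas ice-cemented regolith or rock would give substantially higher values; the paper refines this with a three-component mixing model.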

  8. Does excessive pronation cause pain?

    DEFF Research Database (Denmark)

    Olesen, Christian Gammelgaard; Nielsen, R.G.; Rathleff, M.;

    2008-01-01

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient, they can cause excessive pronation of the foot. The current treatment consists of antipronation training, which pronation patients often receive if they are in pain, but whether it changes foot posture has not been documented. Therefore, the authors wanted to investigate whether it is possible to measure a change in foot posture after a given treatment.

  9. Does Excessive Pronation Cause Pain?

    DEFF Research Database (Denmark)

    Olesen, Christian Gammelgaard; Nielsen, RG; Rathleff, M;

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient, they can cause excessive pronation of the foot. The current treatment is often given if the patient is in pain, but its effect has not been documented. Therefore, the authors wanted to investigate whether it is possible to measure a change in foot posture after a given treatment.

  10. Resolution of genetic map expansion caused by excess heterozygosity in plant recombinant inbred populations.

    Science.gov (United States)

    Truong, Sandra K; McCormick, Ryan F; Morishige, Daryl T; Mullet, John E

    2014-10-01

    Recombinant inbred populations of many plant species exhibit more heterozygosity than expected under the Mendelian model of segregation. This segregation distortion causes the overestimation of recombination frequencies and consequent genetic map expansion. Here we build upon existing genetic models of differential zygotic viability to model a heterozygote fitness term and calculate expected genotypic proportions in recombinant inbred populations propagated by selfing. We implement this model using the existing open-source genetic map construction code base for R/qtl to estimate recombination fractions. Finally, we show that accounting for excess heterozygosity in a sorghum recombinant inbred mapping population shrinks the genetic map by 213 cM (a 13% decrease corresponding to 4.26 fewer recombinations per meiosis). More accurate estimates of linkage benefit linkage-based analyses used in the identification and utilization of causal genetic variation. PMID:25128435
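    The effect of heterozygote excess on segregation can be illustrated with a simple selfing recursion in which the heterozygote class is reweighted by a relative viability term and the population renormalized. This is a toy version of the differential-zygotic-viability idea, not the R/qtl implementation described in the paper:

```python
def self_once(p_aa, p_ab, p_bb, het_fitness=1.0):
    """One selfing generation with differential heterozygote viability.
    Mendelian selfing sends Aa -> 1/4 AA, 1/2 Aa, 1/4 aa; heterozygotes
    are then reweighted by het_fitness and frequencies renormalized."""
    aa = p_aa + p_ab / 4.0
    ab = (p_ab / 2.0) * het_fitness
    bb = p_bb + p_ab / 4.0
    total = aa + ab + bb
    return aa / total, ab / total, bb / total

def heterozygosity(generations, het_fitness=1.0):
    """Expected heterozygote frequency after selfing an F1 for n generations."""
    g = (0.0, 1.0, 0.0)  # F1: fully heterozygous at the locus
    for _ in range(generations):
        g = self_once(*g, het_fitness=het_fitness)
    return g[1]
```

Under the Mendelian model heterozygosity halves each generation (1/64 by F7); a heterozygote fitness above 1 leaves excess heterozygosity, which, if unmodeled, is read as extra recombination and inflates the map.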

  11. EXCESS FUNCTIONS OF RAYS ON COMPLETE NONCOMPACT MANIFOLDS

    Institute of Scientific and Technical Information of China (English)

    徐森林; 徐栩

    2003-01-01

    This paper gives an estimate of excess functions of rays on complete noncompact manifolds. Using this estimate, the authors obtain the results in [3] as corollaries, which assert that a complete manifold is diffeomorphic to Rn under certain curvature and pinching conditions. Finally, they obtain a refinement of these results with an extra Ricci condition.

  12. Pricing Excess-of-loss Reinsurance Contracts Against Catastrophic Loss

    OpenAIRE

    J. David Cummins; Lewis, Christopher M.; Phillips, Richard D.

    1998-01-01

    This paper develops a pricing methodology and pricing estimates for the proposed Federal excess-of- loss (XOL) catastrophe reinsurance contracts. The contracts, proposed by the Clinton Administration, would provide per-occurrence excess-of-loss reinsurance coverage to private insurers and reinsurers, where both the coverage layer and the fixed payout of the contract are based on insurance industry losses, not company losses. In financial terms, the Federal government would be selling earthqua...
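    The payoff structure of a per-occurrence XOL layer tied to industry losses is mechanical: losses above the attachment point are reimbursed up to the layer limit. A minimal sketch with an actuarially fair (pure-premium) price over a discrete scenario set; all dollar figures and probabilities are illustrative, not the paper's pricing estimates:

```python
def xol_payout(industry_loss, attachment, limit):
    """Per-occurrence excess-of-loss payout: the slice of the loss above
    the attachment point, capped at the layer limit."""
    return min(max(industry_loss - attachment, 0.0), limit)

def pure_premium(scenarios, attachment, limit):
    """Actuarially fair price: probability-weighted expected layer payout.
    scenarios: iterable of (industry_loss, probability) pairs."""
    return sum(p * xol_payout(loss, attachment, limit)
               for loss, p in scenarios)

# Illustrative layer: $25B excess of $25B of industry losses
price = pure_premium([(0.0, 0.95), (30e9, 0.04), (80e9, 0.01)], 25e9, 25e9)
```

A full pricing model would add loading for risk and capital costs on top of this expected-loss term; the paper's contribution is estimating the loss distribution itself.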

  13. Syndromes that Mimic an Excess of Mineralocorticoids.

    Science.gov (United States)

    Sabbadin, Chiara; Armanini, Decio

    2016-09-01

    Pseudohyperaldosteronism is characterized by a clinical picture of hyperaldosteronism with suppression of renin and aldosterone. It can be due to endogenous or exogenous substances that mimic the effector mechanisms of aldosterone, leading not only to alterations of electrolytes and hypertension, but also to an increased inflammatory reaction in several tissues. Enzymatic defects of adrenal steroidogenesis (deficiency of 17α-hydroxylase and 11β-hydroxylase), mutations of the mineralocorticoid receptor (MR) and alterations of expression or saturation of 11-hydroxysteroid dehydrogenase type 2 (apparent mineralocorticoid excess syndrome, Cushing's syndrome, excessive intake of licorice, grapefruits or carbenoxolone) are the main causes of pseudohyperaldosteronism. In these cases treatment with dexamethasone and/or MR blockers is useful not only to normalize blood pressure and electrolytes, but also to prevent the deleterious effects of prolonged over-activation of MR in epithelial and non-epithelial tissues. Genetic alterations of the sodium channel (Liddle's syndrome) or of the sodium-chloride co-transporter (Gordon's syndrome) cause abnormal sodium and water reabsorption in the distal renal tubules and hypertension. Treatment with amiloride and thiazide diuretics can respectively reverse the clinical picture and the renin-aldosterone system. Finally, many other more common situations can lead to an acquired pseudohyperaldosteronism, such as volume expansion due to exaggerated water and/or sodium intake, and the use of drugs such as contraceptives, corticosteroids, β-adrenergic agonists and NSAIDs. In conclusion, syndromes or situations that mimic aldosterone excess are not rare, and an accurate personal and pharmacological history is mandatory for a correct diagnosis and for avoiding unnecessary tests and mistreatments. PMID:27251484

  14. Excessive or unwanted hair in women

    Science.gov (United States)

    Hypertrichosis; Hirsutism; Hair - excessive (women); Excessive hair in women; Hair - women - excessive or unwanted ... Women normally produce low levels of male hormones (androgens). If your body makes too much of this ...

  15. Determination of Enantiomeric Excess of Glutamic Acids by Lab-made Capillary Array Electrophoresis

    Institute of Scientific and Technical Information of China (English)

    Jun WANG; Kai Ying LIU; Li WANG; Ji Ling BAI

    2006-01-01

    Simulated enantiomeric excess of glutamic acid was determined by a lab-made sixteen-channel capillary array electrophoresis system with a confocal fluorescent rotary scanner. The experimental results indicated that the capillary array electrophoresis method can accurately determine the enantiomeric excess of glutamic acid and can be used as a high-throughput screening system for combinatorial asymmetric catalysis.
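
    The quantity screened for in this record reduces to a simple function of the two enantiomer signals. A minimal sketch (the function name and the equal-response-factor assumption are illustrative, not from the paper):

```python
def enantiomeric_excess(area_d: float, area_l: float) -> float:
    """Enantiomeric excess (%) from the two enantiomer peak areas.

    ee = 100 * |[D] - [L]| / ([D] + [L]), assuming each peak area is
    proportional to concentration with equal detector response factors.
    """
    total = area_d + area_l
    if total <= 0:
        raise ValueError("total peak area must be positive")
    return 100.0 * abs(area_d - area_l) / total

# e.g. a 75:25 mixture of the two enantiomers gives 50% ee
print(enantiomeric_excess(75.0, 25.0))  # → 50.0
```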

  16. Severe rhabdomyolysis after excessive bodybuilding.

    Science.gov (United States)

    Finsterer, J; Zuntner, G; Fuchs, M; Weinberger, A

    2007-12-01

    A 46-year-old male subject performed excessive physical exertion for 4-6 h per day over 5 days in a studio for body builders. He had not been practicing sport prior to this training and denied the use of any aiding substances. Despite muscle aching after only 1 day, he continued the exercises. After the last day, he noticed tiredness and cessation of urine production. Two days after discontinuation of the training, a Herpes simplex infection occurred. Because of acute renal failure, he required hemodialysis. Tendon reflexes were absent and creatine kinase (CK) values reached up to 208 274 U/L (normal: <170 U/L). After 2 weeks, CK had almost normalized and, after 4 weeks, hemodialysis was discontinued. Excessive muscle training may result in severe, hemodialysis-dependent rhabdomyolysis. Triggering factors may be a prior low fitness level, viral infection, or subclinical metabolic myopathy.

  17. Diphoton excess through dark mediators

    Science.gov (United States)

    Chen, Chien-Yi; Lefebvre, Michel; Pospelov, Maxim; Zhong, Yi-Ming

    2016-07-01

    Preliminary ATLAS and CMS results from the first 13 TeV LHC run have encountered an intriguing excess of events in the diphoton channel around the invariant mass of 750 GeV. We investigate a possibility that the current excess is due to a heavy resonance decaying to light metastable states, which in turn give displaced decays to very highly collimated e⁺e⁻ pairs. Such decays may pass the photon selection criteria, and successfully mimic the diphoton events, especially at low counts. We investigate two classes of such models, characterized by the following underlying production and decay chains: gg → S → A′A′ → (e⁺e⁻)(e⁺e⁻) and qq̄ → Z′ → sa → (e⁺e⁻)(e⁺e⁻), where at the first step a heavy scalar, S, or vector, Z′, resonance is produced that decays to light metastable vectors, A′, or (pseudo-)scalars, s and a. Setting the parameters of the models to explain the existing excess, and taking the ATLAS detector geometry into account, we marginalize over the properties of heavy resonances in order to derive the expected lifetimes and couplings of metastable light resonances. We observe that in the case of A′, the suggested range of masses and mixing angles ε is within reach of several new-generation intensity frontier experiments.

  18. Diphoton Excess through Dark Mediators

    CERN Document Server

    Chen, Chien-Yi; Pospelov, Maxim; Zhong, Yi-Ming

    2016-01-01

    Preliminary ATLAS and CMS results from the first 13 TeV LHC run have encountered an intriguing excess of events in the diphoton channel around the invariant mass of 750 GeV. We investigate a possibility that the current excess is due to a heavy resonance decaying to light metastable states, which in turn give displaced decays to very highly collimated $e^+e^-$ pairs. Such decays may pass the photon selection criteria, and successfully mimic the diphoton events, especially at low counts. We investigate two classes of such models, characterized by the following underlying production and decay chains: $gg \\to S\\to A'A'\\to (e^+e^-)(e^+e^-)$ and $q\\bar q \\to Z' \\to sa\\to (e^+e^-)(e^+e^-)$, where at the first step a heavy scalar, $S$, or vector, $Z'$, resonances are produced that decay to light metastable vectors, $A'$, or (pseudo-)scalars, $s$ and $a$. Setting the parameters of the models to explain the existing excess, and taking the ATLAS detector geometry into account, we marginalize over the properties of heav...

  19. 12 CFR 925.23 - Excess stock.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Excess stock. 925.23 Section 925.23 Banks and... BANKS Stock Requirements § 925.23 Excess stock. (a) Sale of excess stock. Subject to the restriction in paragraph (b) of this section, a member may purchase excess stock as long as the purchase is approved by...

  20. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure is subject to many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures affecting the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system is developed with a quartz-oscillation-type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
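
    The two-differential-pressure principle described above can be sketched as follows. The variable names and the constant-cross-section tank are illustrative assumptions only; a real accountancy tank uses a measured level-to-volume calibration curve and the error corrections discussed in the report:

```python
G = 9.80665  # standard gravity, m/s^2

def solution_density(dp_density_pa: float, tube_separation_m: float) -> float:
    """Density from the differential pressure between two dip-tubes whose
    tips are a known vertical distance apart: rho = dP / (g * dh)."""
    return dp_density_pa / (G * tube_separation_m)

def solution_level(dp_level_pa: float, density_kg_m3: float) -> float:
    """Liquid level above the lower dip-tube tip: h = dP / (rho * g)."""
    return dp_level_pa / (density_kg_m3 * G)

def solution_volume(level_m: float, tank_area_m2: float) -> float:
    """Volume for an idealized constant-cross-section tank; in practice
    this step is replaced by a tank calibration curve."""
    return tank_area_m2 * level_m
```

    A round trip with a hypothetical 1400 kg/m³ solution and dip-tube tips 0.1 m apart recovers the assumed density and level from the two pressure readings.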

  1. Excess Early Mortality in Schizophrenia

    DEFF Research Database (Denmark)

    Laursen, Thomas Munk; Nordentoft, Merete; Mortensen, Preben Bo

    2014-01-01

    Schizophrenia is often referred to as one of the most severe mental disorders, primarily because of the very high mortality rates of those with the disorder. This article reviews the literature on excess early mortality in persons with schizophrenia and suggests reasons for the high mortality...... as well as possible ways to reduce it. Persons with schizophrenia have an exceptionally short life expectancy. High mortality is found in all age groups, resulting in a life expectancy of approximately 20 years below that of the general population. Evidence suggests that persons with schizophrenia may...

  2. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  3. Evaluation of Excess Thermodynamic Parameters in a Binary Liquid Mixture (Cyclohexane + O-Xylene) at Different Temperatures

    OpenAIRE

    K. Narendra; Narayanamurthy, P.; CH. Srinivasu

    2010-01-01

    The ultrasonic velocity, density and viscosity in the binary liquid mixture of cyclohexane with o-xylene have been determined at different temperatures from 303.15 to 318.15 K over the whole composition range. The data have been utilized to estimate the excess adiabatic compressibility (β^E), excess volumes (V^E), excess intermolecular free length (Lf^E), excess internal pressure (π^E) and excess enthalpy (H^E) at the above temperatures. The excess values have been found to be useful in estimating the st...

  4. Outflows in Sodium Excess Objects

    CERN Document Server

    Park, Jongwon; Yi, Sukyoung K

    2015-01-01

    van Dokkum and Conroy revisited the unexpectedly strong Na I lines at 8200 Å found in some giant elliptical galaxies and interpreted them as evidence for an unusually bottom-heavy initial mass function. Jeong et al. later found a large population of galaxies showing equally extraordinary Na D doublet absorption lines at 5900 Å (Na D excess objects: NEOs) and showed that their origins can be different for different types of galaxies. While a Na D excess seems to be related to the interstellar medium (ISM) in late-type galaxies, smooth-looking early-type NEOs show little or no dust extinction and hence no compelling sign of ISM contributions. To further test this finding, we measured the Doppler components in the Na D lines. We hypothesized that the ISM would have a better (albeit not definite) chance of showing a blueshifted Doppler departure from the bulk of the stellar population due to outflow caused by either star formation or AGN activities. Many of the late-type NEOs clearly show blueshift in their Na D lines, wh...

  5. Excess electron transport in cryoobjects

    CERN Document Server

    Eshchenko, D G; Brewer, J H; Cottrell, S P; Cox, S F J

    2003-01-01

    Experimental results on excess electron transport in solid and liquid phases of Ne, Ar, and a solid N₂-Ar mixture are presented and compared with those for He. The muon spin relaxation technique in frequently switching electric fields was used to study the phenomenon of delayed muonium formation: excess electrons liberated in the μ⁺ ionization track converge upon the positive muons and form Mu (μ⁺e⁻) atoms. This process is shown to be crucially dependent upon the electron's interaction with its environment (i.e., whether it occupies the conduction band or becomes localized in a bubble of tens of angstroms in radius) and upon its mobility in these states. The characteristic lengths involved are 10⁻⁶-10⁻⁴ cm, and the characteristic times range from nanoseconds to tens of microseconds. Such a microscopic length scale sometimes enables the electron to spend its entire free lifetime in a state which may not be detected by conventional macroscopic techniques. The electron transport proc...

  6. Improved manometric setup for the accurate determination of supercritical carbon dioxide sorption

    NARCIS (Netherlands)

    Van Hemert, P.; Bruining, H.; Rudolph, E.S.J.; Wolf, K.H.A.A.; Maas, J.G.

    2009-01-01

    An improved version of the manometric apparatus and its procedures for measuring excess sorption of supercritical carbon dioxide are presented in detail with a comprehensive error analysis. An improved manometric apparatus is necessary for accurate excess sorption measurements with supercritical carbon dioxide.

  7. THE IMPORTANCE OF THE STANDARD SAMPLE FOR ACCURATE ESTIMATION OF THE CONCENTRATION OF NET ENERGY FOR LACTATION IN FEEDS ON THE BASIS OF GAS PRODUCED DURING THE INCUBATION OF SAMPLES WITH RUMEN LIQUOR

    Directory of Open Access Journals (Sweden)

    T ŽNIDARŠIČ

    2003-10-01

    Full Text Available The aim of this work was to examine the necessity of using a standard sample in the Hohenheim gas test. During a three-year period, 24 runs of forage samples were incubated with rumen liquor in vitro. Besides the forage samples, the standard hay sample provided by the Hohenheim University (HFT-99) was also included in the experiment. Half of the runs were incubated with rumen liquor of cattle and half with rumen liquor of sheep. Gas produced during the 24 h incubation of the standard sample was measured and compared to the declared value of sample HFT-99. Besides HFT-99, 25 test samples with known digestibility coefficients determined in vivo were included in the experiment. Based on the gas production of HFT-99, it was found that the donor animal (cattle or sheep) did not significantly affect the activity of rumen liquor (41.4 vs. 42.2 ml of gas per 200 mg dry matter, P>0.1). Neither were the differences between years (41.9, 41.2 and 42.3 ml of gas per 200 mg dry matter, P>0.1) significant. However, a variability of about 10% (from 38.9 to 43.7 ml of gas per 200 mg dry matter) was observed between runs. In the present experiment, the gas production of HFT-99 was about 6% lower than the value obtained by the Hohenheim University (41.8 vs. 44.43 ml per 200 mg dry matter). This indicates a systematic error between the laboratories. In the case of the twenty-five test samples, correction on the basis of the standard sample reduced the average difference of the in vitro estimates of net energy for lactation (NEL) from the in vivo determined values. It was concluded that, due to variation between runs and systematic differences in rumen liquor activity between the two laboratories, the results of the Hohenheim gas test have to be corrected on the basis of a standard sample.

  8. Pathology of growth hormone excess.

    Science.gov (United States)

    Kovacs, K

    1988-09-01

    This paper briefly reviews the pathology of growth hormone excess. Prolonged oversecretion of growth hormone is associated with elevated serum growth hormone as well as somatomedin C levels and the clinical signs and symptoms of acromegaly or gigantism. Morphologic studies, including immunohistochemistry and electron microscopy, revealed that several distinct morphologic lesions can be present in the pituitary gland of patients with acromegaly or gigantism. Although substantial progress has been achieved during the last two decades, more work is required to correlate the morphologic features of adenoma cells with their biologic behavior. We feel that the future can be viewed with optimism and further exciting results can be expected from the interaction of pathologists, clinical endocrinologists and basic scientists. PMID:3070506

  9. Excess water dynamics in hydrotalcite: QENS study

    Indian Academy of Sciences (India)

    S Mitra; A Pramanik; D Chakrabarty; R Mukhopadhyay

    2004-08-01

    Results of quasi-elastic neutron scattering (QENS) measurements on the dynamics of excess water in hydrotalcite samples with varied content of excess water are reported. The translational motion of excess water is best described by a random translational jump diffusion model. The observed increase in translational diffusivity with increasing amount of excess water is attributed to a change in the binding of the water molecules to the host layer.

  10. 7 CFR 985.56 - Excess oil.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified...

  11. 10 CFR 904.10 - Excess energy.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be...

  12. A Discussion on Mean Excess Plots

    CERN Document Server

    Ghosh, Souvik

    2009-01-01

    A widely used tool in the study of risk, insurance and extreme values is the mean excess plot. One use is for validating a Generalized Pareto model for the excess distribution. This paper investigates some theoretical and practical aspects of the use of the mean excess plot.

  13. A Discussion on Mean Excess Plots

    OpenAIRE

    Ghosh, Souvik; Resnick, Sidney I.

    2009-01-01

    A widely used tool in the study of risk, insurance and extreme values is the mean excess plot. One use is for validating a generalized Pareto model for the excess distribution. This paper investigates some theoretical and practical aspects of the use of the mean excess plot.
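
    The empirical mean excess function behind such plots is straightforward to compute: for each threshold u, average the exceedances x − u over all observations x > u, and plot the result against u. A minimal, dependency-free sketch (the threshold grid is an arbitrary choice of mine):

```python
def mean_excess_points(sample, num_thresholds=50):
    """Empirical mean excess function e(u) = mean(x - u for x > u),
    evaluated on a grid of thresholds from the sample minimum up to
    just below the sample maximum. Returns a list of (u, e(u)) pairs."""
    xs = sorted(sample)
    lo, hi = xs[0], xs[-1]
    points = []
    for i in range(num_thresholds):
        u = lo + (hi - lo) * i / num_thresholds
        exceedances = [x - u for x in xs if x > u]
        if exceedances:  # skip thresholds with no exceedances
            points.append((u, sum(exceedances) / len(exceedances)))
    return points
```

    For data whose excesses over a high threshold follow a Generalized Pareto distribution, the plotted points are approximately linear in u, which is exactly what the plot is used to check.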

  14. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    Energy Technology Data Exchange (ETDEWEB)

    Dhabal, Debdas; Chakravarty, Charusita, E-mail: charus@chemistry.iitd.ac.in [Department of Chemistry, Indian Institute of Technology Delhi, New Delhi 110016 (India); Nguyen, Andrew Huy; Molinero, Valeria [Department of Chemistry, University of Utah, Salt Lake City, Utah 84112-0850 (United States); Singh, Murari [Department of Chemical Physics, Weizmann Institute of Science, Rehovot 76100 (Israel); Khatua, Prabir; Bandyopadhyay, Sanjoy [Department of Chemistry, Indian Institute of Technology Kharagpur, Kharagpur 721302 (India)

    2015-10-28

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by S_trip, is also studied.

  15. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    Science.gov (United States)

    Dhabal, Debdas; Nguyen, Andrew Huy; Singh, Murari; Khatua, Prabir; Molinero, Valeria; Bandyopadhyay, Sanjoy; Chakravarty, Charusita

    2015-10-01

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by S_trip, is also studied. S_trip is a

  16. Phytoextraction of excess soil phosphorus

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, Nilesh C. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Starnes, Daniel L. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Sahi, Shivendra V. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States)]. E-mail: shiv.sahi@wku.edu

    2007-03-15

    In the search for a suitable plant to be used in P phytoremediation, several species belonging to legume, vegetable and herb crops were grown in P-enriched soils, and screened for P accumulation potentials. A large variation in P concentrations of different plant species was observed. Some vegetable species such as cucumber (Cucumis sativus) and yellow squash (Cucurbita pepo var. melopepo) were identified as potential P accumulators with >1% (dry weight) P in their shoots. These plants also displayed a satisfactory biomass accumulation while growing on a high concentration of soil P. The elevated activities of phosphomonoesterase and phytase were observed when plants were grown in P-enriched soils, this possibly contributing to high P acquisition in these species. Sunflower plants also demonstrated an increased shoot P accumulation. This study shows that the phytoextraction of phosphorus can be effective using appropriate plant species. - Crop plants such as cucumber, squash and sunflower accumulate phosphorus and thus can be used in the phytoextraction of excess phosphorus from soils.

  17. 75 FR 27572 - Monthly Report of Excess Income and Annual Report of Uses of Excess Income

    Science.gov (United States)

    2010-05-17

    ... URBAN DEVELOPMENT Monthly Report of Excess Income and Annual Report of Uses of Excess Income AGENCY... subject proposal. Project owners are permitted to retain Excess Income for projects under terms and conditions established by HUD. Owners must request to retain some or all of their Excess Income. The...

  18. 31 CFR 205.24 - How are accurate estimates maintained?

    Science.gov (United States)

    2010-07-01

    ...) FISCAL SERVICE, DEPARTMENT OF THE TREASURY FINANCIAL MANAGEMENT SERVICE RULES AND PROCEDURES FOR EFFICIENT FEDERAL-STATE FUNDS TRANSFERS Rules Applicable to Federal Assistance Programs Included in a... justify in writing that it is not feasible to use a more efficient basis for determining the amount...

  19. Patterns of Excess Cancer Risk among the Atomic Bomb Survivors

    Science.gov (United States)

    Pierce, Donald A.

    1996-05-01

    I will indicate the major epidemiological findings regarding excess cancer among the atomic-bomb survivors, with some special attention to what can be said about low-dose risks. This will be based on the 1950-90 mortality follow-up of about 87,000 survivors having individual radiation dose estimates. Of these, about 50,000 had doses greater than 0.005 Sv, and the remainder serve largely as a comparison group. It is estimated that for this cohort there have been about 400 excess cancer deaths among a total of about 7800. Since there are about 37,000 subjects in the dose range 0.005-0.20 Sv, there is substantial low-dose information in this study. The person-year-sievert total for the dose range under 0.20 Sv is greater than for any one of the 6 study cohorts of U.S., Canadian, and U.K. nuclear workers, and is equal to about 60% of the total for the combined cohorts. It is estimated, without linear extrapolation from higher doses, that for the RERF cohort there have been about 100 excess cancer deaths in the dose range under 0.20 Sv. Both the dose-response and age-time patterns of excess risk are very different for solid cancers and leukemia. One of the most important findings has been that the solid cancer (absolute) excess risk has steadily increased over the entire follow-up to date, similarly to the age-increase of the background risk. About 25% of the excess solid cancer deaths occurred in the last 5 years of the 1950-90 follow-up. On the contrary, most of the excess leukemia risk occurred in the first few years following exposure. The observed dose response for solid cancers is very linear up to about 3 Sv, whereas for leukemia there is statistically significant upward curvature on that range. Very little has been proposed to explain this distinction.
Although there is no hint of upward curvature or a threshold for solid cancers, the inherent difficulty of precisely estimating very small risks along with radiobiological observations that many radiation effects are nonlinear

  20. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Nuclear reactor operator emergency response behavior has persisted as a training problem through a lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior under both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge-based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge-based behavior, are described in detail

  1. Estimation of excess numbers of viral diarrheal cases among children aged <5 years in Beijing with an adjusted Serfling regression model

    Institute of Scientific and Technical Information of China (English)

    贾蕾; 王小莉; 吴双胜; 马建新; 李洪军; 高志勇; 王全意

    2016-01-01

    Objective: To estimate the excess numbers of viral diarrheal cases among children aged <5 years in Beijing from 1 January 2011 to 23 May 2015. Methods: The excess numbers of diarrheal cases among children aged <5 years were estimated by using weekly outpatient visit data from two children's hospitals in Beijing and an adjusted Serfling regression model. Results: The incidence peaks of viral diarrhea were during the 8th-10th and 40th-42nd weeks of 2011, the 40th-46th weeks of 2012, the 43rd-49th weeks of 2013, and the 45th week of 2014 to the 11th week of 2015, respectively. The excess numbers of viral diarrheal cases among children aged <5 years in the two children's hospitals were 911 (95%CI: 261-1 561), 1 998 (95%CI: 1 250-2 746), 1 645 (95%CI: 891-2 397), 2 806 (95%CI: 1 938-3 674) and 1 822 (95%CI: 614-3 031), respectively, accounting for 40.38% (95%CI: 11.57%-69.19%), 44.21% (95%CI: 27.66%-60.77%), 45.08% (95%CI: 24.42%-65.69%), 60.87% (95%CI: 42.04%-79.70%) and 66.62% (95%CI: 22.45%-110.82%) of total outpatient visits due to diarrhea during 2011-2015, respectively. In total, the excess number of viral diarrheal cases among children aged <5 years in Beijing was estimated to be 18 731 (95%CI: 10 106-27 354) from 2011 to 23 May 2015. Conclusions: Winter is the season of viral diarrhea for children aged <5 years. The adjusted Serfling regression model analysis suggested that close attention should be paid to the etiologic variation of viruses causing acute gastroenteritis, especially norovirus.
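
    A Serfling-type analysis like the one above fits a periodic baseline to weekly counts and attributes the observed-minus-baseline difference to the pathogen of interest. A minimal sketch with a linear trend and one annual harmonic (the "adjusted" model in the record also excludes known epidemic weeks when fitting the baseline, which is not shown here):

```python
import numpy as np

def serfling_excess(weeks, counts, period=52.18):
    """Serfling-style excess estimation: fit a baseline with a linear
    trend plus annual harmonic terms by ordinary least squares, then
    return observed minus predicted weekly counts (can be negative)."""
    t = np.asarray(weeks, dtype=float)
    y = np.asarray(counts, dtype=float)
    # design matrix: intercept, trend, annual sine and cosine
    X = np.column_stack([
        np.ones_like(t),
        t,
        np.sin(2 * np.pi * t / period),
        np.cos(2 * np.pi * t / period),
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    baseline = X @ beta
    return y - baseline
```

    Summing the positive excess over an epidemic period gives the kind of excess-case estimate reported in the abstract.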

  2. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...

  3. High Foreign Exchange Reserves Fuel Excess Liquidity

    Institute of Scientific and Technical Information of China (English)

    唐双宁

    2008-01-01

    This article views China’s excess liquidity problem in the global context. It suggests that market mechanisms, cooperation between all parties involved, and liquidity diversion, be resorted to in order to tackle the problem of excessive liquidity. This article also points out that the top priority is to solve the major problems, such as the current account surplus, the sources of excessive liquidity, the shortage of capital in rural areas, and the cause of capital distribution imbalance.

  4. Age at menarche in schoolgirls with and without excess weight

    OpenAIRE

    Silvia D. Castilho; Luciana B. Nucci

    2015-01-01

    OBJECTIVE: To evaluate the age at menarche of girls, with or without weight excess, attending private and public schools in a city in Southeastern Brazil. METHODS: This was a cross-sectional study comparing the age at menarche of 750 girls from private schools with 921 students from public schools, aged between 7 and 18 years. The menarche was reported by the status quo method and age at menarche was estimated by logarithmic transformation. The girls were grouped according to body mass index ...

  5. Factors associated with excessive polypharmacy in older people

    OpenAIRE

    Walckiers, Denise; Van der Heyden, Johan; Tafforeau, Jean

    2015-01-01

    Background: Older people are a growing population. They live longer, but often have multiple chronic diseases. As a consequence, they take many different kinds of medicines, while their vulnerability to pharmaceutical products is increased. The objective of this study is to describe the medicine utilization pattern of people aged 65 years and older in Belgium, and to estimate the prevalence and determinants of excessive polypharmacy. Methods: Data were used from the Belgian Health Inte...

  6. Initial report on characterization of excess highly enriched uranium

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

DOE's Office of Fissile Materials Disposition assigned to this Y-12 division the task of preparing a report on the 174.4 metric tons of excess highly enriched uranium. Characterization included identification by category, gathering existing (assay) data, defining the processing steps likely needed to prepare material for transfer to a blending site, and developing a range of preliminary cost estimates for those steps. The focus is on making commercial reactor fuel the final disposition path.

  7. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.

  8. Damages and Expected Deaths Due to Excess NOx Emissions from 2009 to 2015 Volkswagen Diesel Vehicles.

    Science.gov (United States)

    Holland, Stephen P; Mansur, Erin T; Muller, Nicholas Z; Yates, Andrew J

    2016-02-01

    We estimate the damages and expected deaths in the United States due to excess emissions of NOx from 2009 to 2015 Volkswagen diesel vehicles. Using data on vehicle registrations and a model of pollution transport and valuation, we estimate excess damages of $430 million and 46 excess expected deaths. Accounting for uncertainty about emissions gives a range for damages from $350 million to $500 million, and a range for excess expected deaths from 40 to 52. Our estimates incorporate significant local heterogeneity: for example, Minneapolis has the highest damages despite having fewer noncompliant vehicles than 13 other cities. Our estimated damages greatly exceed possible benefits from reduced CO2 emissions due to increased fuel economy. PMID:26720281

  9. Damages and Expected Deaths Due to Excess NOx Emissions from 2009 to 2015 Volkswagen Diesel Vehicles.

    Science.gov (United States)

    Holland, Stephen P; Mansur, Erin T; Muller, Nicholas Z; Yates, Andrew J

    2016-02-01

    We estimate the damages and expected deaths in the United States due to excess emissions of NOx from 2009 to 2015 Volkswagen diesel vehicles. Using data on vehicle registrations and a model of pollution transport and valuation, we estimate excess damages of $430 million and 46 excess expected deaths. Accounting for uncertainty about emissions gives a range for damages from $350 million to $500 million, and a range for excess expected deaths from 40 to 52. Our estimates incorporate significant local heterogeneity: for example, Minneapolis has the highest damages despite having fewer noncompliant vehicles than 13 other cities. Our estimated damages greatly exceed possible benefits from reduced CO2 emissions due to increased fuel economy.

  10. Factors Associated With High Sodium Intake Based on Estimated 24-Hour Urinary Sodium Excretion

    OpenAIRE

    Hong, Jae Won; Noh, Jung Hyun; Kim, Dong-Jun

    2016-01-01

    Abstract Although reducing dietary salt consumption is the most cost-effective strategy for preventing progression of cardiovascular and renal disease, policy-based approaches to monitor sodium intake accurately and the understanding factors associated with excessive sodium intake for the improvement of public health are lacking. We investigated factors associated with high sodium intake based on the estimated 24-hour urinary sodium excretion, using data from the 2009 to 2011 Korea National H...

  11. Profitable capitation requires accurate costing.

    Science.gov (United States)

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799

  12. The excessively crying infant : etiology and treatment

    NARCIS (Netherlands)

    Akhnikh, S.; Engelberts, A.C.; Sleuwen, B.E. van; Hoir, M.P. L’; Benninga, M.A.

    2014-01-01

Excessive crying, often described as infantile colic, is the cause of 10% to 20% of all early pediatrician visits of infants aged 2 weeks to 3 months. Although usually benign and self-limiting, excessive crying is associated with parental exhaustion and stress. However, an underlying organic cause i

  13. 11 CFR 9012.1 - Excessive expenses.

    Science.gov (United States)

    2010-01-01

    ... FINANCING UNAUTHORIZED EXPENDITURES AND CONTRIBUTIONS § 9012.1 Excessive expenses. (a) It shall be unlawful... expenses in excess of the aggregate payments to which the eligible candidates of a major party are entitled under 11 CFR part 9004 with respect to such election. (b) It shall be unlawful for the...

  14. Excessive internet use among European children

    OpenAIRE

    Smahel, David; Helsper, Ellen; Green, Lelia; Kalmus, Veronika; Blinka, Lukas; Ólafsson, Kjartan

    2012-01-01

    This report presents new findings and further analysis of the EU Kids Online 25 country survey regarding excessive use of the internet by children. It shows that while a number of children (29%) have experienced one or more of the five components associated with excessive internet use, very few (1%) can be said to show pathological levels of use.

  15. Excessive libido in a woman with rabies.

    OpenAIRE

    Dutta, J. K.

    1996-01-01

    Rabies is endemic in India in both wildlife and humans. Human rabies kills 25,000 to 30,000 persons every year. Several types of sexual manifestations including excessive libido may develop in cases of human rabies. A laboratory proven case of rabies in an Indian woman who manifested excessive libido is presented below. She later developed hydrophobia and died.

  16. Triboson interpretations of the ATLAS diboson excess

    CERN Document Server

    Aguilar-Saavedra, J A

    2015-01-01

    The ATLAS excess in fat jet pair production is kinematically compatible with the decay of a heavy resonance into two gauge bosons plus an extra particle. This possibility would explain the absence of such a localised excess in the analogous CMS analysis of fat dijet final states, as well as the negative results of diboson resonance searches in the semi-leptonic decay modes.

  17. Thermophysical and excess properties of hydroxamic acids in DMSO

    International Nuclear Information System (INIS)

Graphical abstract: Excess molar volumes (VE) vs mole fraction (x2) of (A) N-o-tolyl-2-nitrobenzo- and (B) N-o-tolyl-4-nitrobenzo-hydroxamic acids in DMSO at different temperatures: 298.15 K, 303.15 K, 308.15 K, 313.15 K, and 318.15 K. Highlights: ► ρ and n of hydroxamic acids in DMSO are reported. ► Apparent molar volume indicates superior solute–solvent interactions. ► Limiting apparent molar expansibility and coefficient of thermal expansion are derived. ► The behaviour of these parameters suggests that hydroxamic acids act as structure makers. ► The excess properties are interpreted in terms of molecular interactions. -- Abstract: In this work, densities (ρ) and refractive indices (n) of N-o-tolyl-2-nitrobenzo- and N-o-tolyl-4-nitrobenzo-hydroxamic acids have been determined in dimethyl sulfoxide (DMSO) as a function of their concentrations at T = (298.15, 303.15, 308.15, 313.15, and 318.15) K. These measurements were carried out to evaluate some important parameters, viz. molar volume (V), apparent molar volume (Vϕ), limiting apparent molar volume (Vϕ0), slope (SV∗), molar refraction (RM), and polarizability (α). The related parameters determined are the limiting apparent molar expansivity (ϕE0), the thermal expansion coefficient (α2), and the Hepler constant (∂2Vϕ0/∂T2). Excess properties such as excess molar volume (VE), deviation from the additivity rule of refractive index (nE), and excess molar refraction (RME) have also been evaluated. The excess properties were fitted to the Redlich–Kister equation to estimate their coefficients, and standard deviations were determined. The variation of these excess parameters with composition is discussed from the viewpoint of intermolecular interactions in these solutions. The excess properties are found to be either positive or negative depending on the molecular interactions and the nature of the solutions. Further, these parameters have been interpreted in terms of solute
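
The Redlich–Kister fit mentioned in this abstract reduces to a linear least-squares problem. The sketch below uses synthetic data, not the paper's measurements; the function name and polynomial order are choices made here for illustration.

```python
import numpy as np

def redlich_kister_fit(x, VE, order):
    """Fit excess-property data VE(x) to a Redlich-Kister polynomial
    VE = x(1-x) * sum_k A_k (2x-1)^k by linear least squares.
    Returns the coefficients A_k and the standard deviation of the fit."""
    # Design matrix: column k is x(1-x)(2x-1)^k
    M = np.column_stack([x * (1 - x) * (2 * x - 1) ** k for k in range(order + 1)])
    A, *_ = np.linalg.lstsq(M, VE, rcond=None)
    resid = VE - M @ A
    dof = max(len(x) - (order + 1), 1)
    sigma = np.sqrt(np.sum(resid ** 2) / dof)
    return A, sigma

# Synthetic excess molar volumes generated from a two-term RK form
x = np.linspace(0.05, 0.95, 10)
VE = x * (1 - x) * (-1.2 + 0.4 * (2 * x - 1))
A, sigma = redlich_kister_fit(x, VE, order=1)
print(A, sigma)  # A recovers [-1.2, 0.4]; sigma ~ 0 for noiseless data
```

On real measurements the order is typically chosen by checking whether additional coefficients reduce the standard deviation meaningfully.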

  18. Parameters for accurate genome alignment

    Directory of Open Access Journals (Sweden)

    Hamada Michiaki

    2010-02-01

Full Text Available Abstract Background Genome sequence alignments form the basis of much research. Genome alignment depends on various mundane but critical choices, such as how to mask repeats and which score parameters to use. Surprisingly, there has been no large-scale assessment of these choices using real genomic data. Moreover, rigorous procedures to control the rate of spurious alignment have not been employed. Results We have assessed 495 combinations of score parameters for alignment of animal, plant, and fungal genomes. As our gold-standard of accuracy, we used genome alignments implied by multiple alignments of proteins and of structural RNAs. We found the HOXD scoring schemes underlying alignments in the UCSC genome database to be far from optimal, and suggest better parameters. Higher values of the X-drop parameter are not always better. E-values accurately indicate the rate of spurious alignment, but only if tandem repeats are masked in a non-standard way. Finally, we show that γ-centroid (probabilistic) alignment can find highly reliable subsets of aligned bases. Conclusions These results enable more accurate genome alignment, with reliability measures for local alignments and for individual aligned bases. This study was made possible by our new software, LAST, which can align vertebrate genomes in a few hours (http://last.cbrc.jp/).

  19. Excessive crying in infants with regulatory disorders.

    Science.gov (United States)

    Maldonado-Duran, M; Sauceda-Garcia, J M

    1996-01-01

    The authors point out a correlation between regulatory disorders in infants and the problem of excessive crying. The literature describes other behavioral problems involving excessive crying in very young children, but with little emphasis on this association. The recognition and diagnosis of regulatory disorders in infants who cry excessively can help practitioners design appropriate treatment interventions. Understanding these conditions can also help parents tailor their caretaking style, so that they provide appropriate soothing and stimulation to their child. In so doing, they will be better able to develop and preserve a satisfactory parent-child relationship, as well as to maintain their own sense of competence and self-esteem as parents.

  20. Fast and Accurate Construction of Confidence Intervals for Heritability.

    Science.gov (United States)

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
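
The bootstrap idea behind ALBI can be illustrated in miniature. The sketch below is an assumption-laden toy, not the ALBI algorithm or an LMM/REML implementation: it uses a balanced one-way random-effects model, whose intraclass correlation plays the role of heritability, with a method-of-moments estimator truncated at zero (mirroring the boundary problem the abstract describes), and builds a parametric-bootstrap percentile interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_components(y):
    """Method-of-moments ANOVA estimates for a balanced one-way
    random-effects layout y of shape (groups, replicates). The
    between-group variance is truncated at zero, which mirrors the
    boundary issue discussed for REML heritability estimates."""
    k, n = y.shape
    gm = y.mean(axis=1)
    msb = n * np.sum((gm - y.mean()) ** 2) / (k - 1)
    msw = np.sum((y - gm[:, None]) ** 2) / (k * (n - 1))
    return max((msb - msw) / n, 0.0), msw

def icc(y):
    """Intraclass correlation: a simple stand-in for heritability."""
    s_a, s_e = variance_components(y)
    return s_a / (s_a + s_e)

def parametric_bootstrap_ci(y, n_boot=500, alpha=0.05):
    """Percentile interval from re-estimating the ICC on data simulated
    under the fitted model -- the core idea of bootstrap CIs."""
    k, n = y.shape
    s_a, s_e = variance_components(y)
    stats = []
    for _ in range(n_boot):
        u = rng.normal(0.0, np.sqrt(s_a), size=(k, 1))
        e = rng.normal(0.0, np.sqrt(s_e), size=(k, n))
        stats.append(icc(u + e))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Simulated data: 100 groups of 5 replicates, true ICC = 0.3
y = rng.normal(0.0, np.sqrt(0.3), size=(100, 1)) + \
    rng.normal(0.0, np.sqrt(0.7), size=(100, 5))
lo, hi = parametric_bootstrap_ci(y)
print(icc(y), (lo, hi))
```

Because the estimator is truncated at zero, the bootstrap distribution piles up mass at the boundary when the true value is small, which is exactly why asymptotic normal intervals misbehave there.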

  1. Toward Accurate and Quantitative Comparative Metagenomics

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  2. Toward Accurate and Quantitative Comparative Metagenomics.

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  3. Genetics Home Reference: aromatase excess syndrome

    Science.gov (United States)

    ... males, the increased aromatase and subsequent conversion of androgens to estrogen are responsible for the gynecomastia and limited bone growth characteristic of aromatase excess syndrome . Increased estrogen in females can cause symptoms ...

  4. Controlling police (excessive) force: The American case

    Directory of Open Access Journals (Sweden)

    Zakir Gül

    2013-09-01

Full Text Available This article addresses the issue of police abuse of power, particularly police use of excessive force. Since the misuse of force by police is considered a problem, some entity must find a way to control and prevent the illegal use of coercive power. Unlike most previous studies on the use of excessive force, this study uses a path analysis. However, not all the findings are consistent with prior studies and hypotheses. In general, the findings indicate that training may be a useful tool for decreasing the use of excessive force, thereby reducing civilian injuries and citizen complaints. The results show that ethics training in the academy is significantly related to the use of excessive force. Further, community-oriented policing training in the academy was associated with citizen complaints. National (secondary) data collected from law enforcement agencies in the United States are used to explore the research questions.

  5. Accurate ab initio spin densities

    CERN Document Server

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  6. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  7. Accurate thickness measurement of graphene

    Science.gov (United States)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  8. Accurate thickness measurement of graphene.

    Science.gov (United States)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  9. Romanian welfare state between excess and failure

    Directory of Open Access Journals (Sweden)

    Cristina Ciuraru-Andrica

    2012-12-01

Full Text Available Timely or not, our topic can revive some prolific and sometimes diametrically opposed discussions. We address social assistance, where it is still uncertain whether, once excess has been unleashed, failure will inevitably follow, or whether there is a "Salvation Ark". However, the difference between the excess and the failure of the welfare state is almost intangible, the reason for its potential failure being in fact the abuses committed before the onset of the depression.

  10. Geometrical expression of excess entropy production.

    Science.gov (United States)

    Sagawa, Takahiro; Hayakawa, Hisao

    2011-11-01

    We derive a geometrical expression of the excess entropy production for quasistatic transitions between nonequilibrium steady states of Markovian jump processes, which can be exactly applied to nonlinear and nonequilibrium situations. The obtained expression is geometrical; the excess entropy production depends only on a trajectory in the parameter space, analogous to the Berry phase in quantum mechanics. Our results imply that vector potentials are needed to construct the thermodynamics of nonequilibrium steady states. PMID:22181372
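
Schematically, the geometrical statement in this abstract can be written as a line integral over the driving trajectory in parameter space; the notation below paraphrases the abstract's description and is not the authors' exact formula:

```latex
% Excess entropy production along a quasistatic protocol \lambda(t),
% expressed through a vector potential A_\mu on parameter space:
\Sigma_{\mathrm{ex}} = \int_{\lambda_i}^{\lambda_f} A_\mu(\lambda)\,\mathrm{d}\lambda^\mu
% Like a Berry phase, the value depends only on the path traced by
% \lambda in parameter space, not on how fast it is traversed.
```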

  11. Towards an accurate bioimpedance identification

    OpenAIRE

    Sánchez Terrones, Benjamín; Louarroudi, E.; Bragós Bardia, Ramon; Pintelon, Rik

    2013-01-01

This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both t...

  12. Towards an accurate bioimpedance identification

    Science.gov (United States)

    Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.

    2013-04-01

This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis technique are evaluated through the same experimental data coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σnZ, and the stochastic nonlinear distortions, σZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least square (CNLS) algorithm. A table is provided with the relative standard errors on the estimated parameters to reveal which system identification framework should be used.
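
The final step described in this abstract, complex nonlinear least squares against a Cole model, can be sketched by stacking real and imaginary residuals so a real-valued solver applies. This is a generic illustration with synthetic, noiseless data; the parameterization, bounds, and starting values are assumptions made here, not the paper's.

```python
import numpy as np
from scipy.optimize import least_squares

def cole_impedance(w, r0, rinf, tau, alpha):
    """Cole model: Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)^alpha)."""
    return rinf + (r0 - rinf) / (1.0 + (1j * w * tau) ** alpha)

def fit_cole(w, z, p0):
    """Unweighted complex nonlinear least squares: stacking the real and
    imaginary parts of the residual lets a real-valued solver fit complex data."""
    def residuals(p):
        dz = z - cole_impedance(w, *p)
        return np.concatenate([dz.real, dz.imag])
    lower = [0.0, 0.0, 1e-9, 0.0]          # R0, Rinf, tau, alpha
    upper = [np.inf, np.inf, np.inf, 1.0]  # alpha is bounded by 1
    return least_squares(residuals, p0, bounds=(lower, upper)).x

# Noiseless synthetic spectrum: R0=100, Rinf=20, tau=1e-4 s, alpha=0.8
w = np.logspace(2, 6, 50)  # angular frequencies, rad/s
z = cole_impedance(w, 100.0, 20.0, 1e-4, 0.8)
params = fit_cole(w, z, p0=[80.0, 10.0, 5e-5, 0.7])
print(params)  # should converge to [100, 20, 1e-4, 0.8] on noiseless data
```

A weighted variant, as the abstract mentions, would divide each residual by an estimate of its standard deviation before stacking.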

  13. Towards an accurate bioimpedance identification

    International Nuclear Information System (INIS)

This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis technique are evaluated through the same experimental data coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σnZ, and the stochastic nonlinear distortions, σZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least square (CNLS) algorithm. A table is provided with the relative standard errors on the estimated parameters to reveal which system identification framework should be used.

  14. ESTIMATING IRRIGATION COSTS

    Science.gov (United States)

    Having accurate estimates of the cost of irrigation is important when making irrigation decisions. Estimates of fixed costs are critical for investment decisions. Operating cost estimates can assist in decisions regarding additional irrigations. This fact sheet examines the costs associated with ...

  15. A More Accurate Fourier Transform

    CERN Document Server

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
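
The bin-center limitation this abstract compares against can be reproduced in a few lines: an off-bin cosine's FFT peak snaps to the nearest bin, while evaluating the Fourier integral explicitly on a fine frequency grid localizes the true frequency. The sample rate, duration, and grid spacing below are arbitrary choices for illustration, not the paper's data sets.

```python
import numpy as np

# A cosine at 2.3 Hz sampled so the true frequency falls between FFT bins
fs, n = 64.0, 256  # sample rate (Hz) and record length
t = np.arange(n) / fs
f_true = 2.3
x = np.cos(2 * np.pi * f_true * t)

# FFT estimate: frequency of the largest-magnitude bin (bin spacing fs/n = 0.25 Hz)
freqs = np.fft.rfftfreq(n, 1 / fs)
f_fft = freqs[np.argmax(np.abs(np.fft.rfft(x)))]

# Explicit-integral estimate: rectangle-rule Fourier integral evaluated on a
# fine frequency grid, which is not restricted to bin centers
grid = np.arange(1.5, 3.0, 0.001)
amps = [abs(np.sum(x * np.exp(-2j * np.pi * f * t))) for f in grid]
f_ei = grid[int(np.argmax(amps))]
print(f_fft, f_ei)  # FFT snaps to the 2.25 Hz bin; EI lands near 2.3 Hz
```

Interpolating between FFT bins narrows the gap in practice, but the grid-based integral makes the resolution trade-off the abstract quantifies easy to see.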

  16. New vector bosons and the diphoton excess

    Directory of Open Access Journals (Sweden)

    Jorge de Blas

    2016-08-01

Full Text Available We consider the possibility that the recently observed diphoton excess at ∼750 GeV can be explained by the decay of a scalar particle (φ) to photons. If the scalar is the remnant of a symmetry-breaking sector of some new gauge symmetry, its coupling to photons can be generated by loops of the charged massive vectors of the broken symmetry. If these new W′ vector bosons carry color, they can also generate an effective coupling to gluons. In this case the diphoton excess could be entirely explained in a simplified model containing just φ and W′. On the other hand, if W′ does not carry color, we show that, provided additional colored particles exist to generate the required φ to gluon coupling, the diphoton excess could be explained by the same W′ commonly invoked to explain the diboson excess at ∼2 TeV. We also explore possible connections between the diphoton and diboson excesses with the anomalous tt¯ forward–backward asymmetry.

  17. Antidepressant induced excessive yawning and indifference

    Directory of Open Access Journals (Sweden)

    Bruno Palazzo Nazar

    2015-03-01

    Full Text Available Introduction Antidepressant-induced excessive yawning has been described as a possible side effect of pharmacotherapy. A syndrome of indifference has also been described as another possible side effect. The frequency of these phenomena and their physiopathology are unknown. Both are considered benign and reversible after antidepressant discontinuation, but severe cases with complications such as temporomandibular lesions have been described. Methods We report two unprecedented cases in which excessive yawning and indifference occurred simultaneously as side effects of antidepressant therapy, and discuss possible physiopathological mechanisms for this co-occurrence. Case 1: A male patient presented excessive yawning (approximately 80/day) and apathy after venlafaxine XR treatment. Symptoms were reduced after a switch to escitalopram, with yawning decreasing to 50/day. Case 2: A female patient presented excessive yawning (approximately 25/day) and an inability to react to environmental stressors while on desvenlafaxine. Conclusion Induction of indifference and excessive yawning may be modulated by serotonergic and noradrenergic mechanisms. One proposal to unify these side effects is an enhancement of serotonin in the midbrain, especially the paraventricular and raphe nuclei.

  18. Phenomenology and psychopathology of excessive indoor tanning.

    Science.gov (United States)

    Petit, Aymeric; Karila, Laurent; Chalmin, Florence; Lejoyeux, Michel

    2014-06-01

    Excessive indoor tanning, defined by the presence of an impulse towards and repetition of tanning that leads to personal distress, has only recently been recognized as a psychiatric disorder. This finding is based on the observations of many dermatologists who report the presence of addictive relationships with tanning salons among their patients despite being given diagnoses of malignant melanoma. This article synthesizes the existing literature on excessive indoor tanning and addiction to investigate possible associations. This review focuses on the prevalence, clinical features, etiology, and treatment of this disorder. A literature review was conducted, using PubMed, Google Scholar, EMBASE and PsycINFO, to identify articles published in English from 1974 to 2013. Excessive indoor tanning may be related to addiction, obsessive-compulsive disorder, impulse control disorder, seasonal affective disorder, anorexia, body dysmorphic disorder, or depression. Excessive indoor tanning can be included in the spectrum of addictive behavior because it has clinical characteristics in common with those of classic addictive disorders. It is frequently associated with anxiety, eating disorders, and tobacco dependence. Further controlled studies are required, especially in clinical psychopathology and neurobiology, to improve our understanding of excessive indoor tanning. PMID:24601904

  19. Accurate paleointensities - the multi-method approach

    Science.gov (United States)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  20. The Origin of the 24-micron Excess in Red Galaxies

    CERN Document Server

    Brand, Kate; Armus, Lee; Assef, Roberto J; Brown, Michael J I; Cool, Richard R; Desai, Vandana; Dey, Arjun; Floc'h, Emeric Le; Jannuzi, Buell T; Kochanek, Christopher S; Melbourne, Jason; Papovich, Casey J; Soifer, B T

    2008-01-01

    Observations with the Spitzer Space Telescope have revealed a population of red-sequence galaxies with a significant excess in their 24-micron emission compared to what is expected from an old stellar population. We identify 900 red galaxies with 0.15 < z < 0.3 and 24-micron excess (f24 > 0.3 mJy). We determine the prevalence of AGN and star-formation activity in all the AGES galaxies using optical line diagnostics and mid-IR color-color criteria. Using the IRAC color-color diagram from the Spitzer Shallow Survey, we find that 64% of the 24-micron excess red galaxies are likely to have strong PAH emission features in the 8-micron IRAC band. This fraction is significantly larger than the 5% of red galaxies with f24<0.3 mJy that are estimated to have strong PAH emission, suggesting that the infrared emission is largely due to star-formation processes. Only 15% of the 24-micron excess red galaxies have optical line diagnostics characteristic of star-formation (64% are classified as AGN and 21% are unclassifiable). The difference between the optica...

  1. Same-Sign Dilepton Excesses and Vector-like Quarks

    CERN Document Server

    Chen, Chuan-Ren; Low, Ian

    2015-01-01

    Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b-prime quark is proposed. We also comment on the possibility to fit excesses in different analyses in a common framework.

  2. Minimal Dilaton Model and the Diphoton Excess

    CERN Document Server

    Agarwal, Bakul; Mohan, Kirtimaan A

    2016-01-01

    In light of the recent 750 GeV diphoton excesses reported by the ATLAS and CMS collaborations, we investigate the possibility of explaining this excess using the Minimal Dilaton Model. We find that this model is able to explain the observed excess with the presence of additional top partner(s), with the same charge as the top quark but with mass in the TeV region. First, we constrain model parameters using, in addition to the 750 GeV diphoton signal strength, precision electroweak tests, single top production measurements, and Higgs signal strength data collected in the earlier runs of the LHC. We then discuss interesting phenomenology that could arise in this model, relevant for future runs of the LHC.

  3. Effects of Excess Cash, Board Attributes and Insider Ownership on Firm Value: Evidence from Pakistan

    OpenAIRE

    Nadeem Ahmed Sheikh; Muhammad Imran Khan

    2016-01-01

    The purpose of this paper is to investigate whether excess cash, board attributes (i.e. board size, board independence and CEO duality) and insider ownership affect the value of the firm. Data were taken from annual reports of non-financial firms listed on the Karachi Stock Exchange (KSE) Pakistan during 2008-2012. A pooled ordinary least squares method was used to estimate the effects of excess cash and internal governance indicators on the value of the firm. Our results indicate that...

  4. Mortality attributable to excess body mass Index in Iran: Implementation of the comparative risk assessment methodology

    OpenAIRE

    Shirin Djalalinia; Sahar Saeedi Moghaddam; Niloofar Peykari; Amir Kasaeian; Ali Sheidaei; Anita Mansouri; Younes Mohammadi; Mahboubeh Parsaeian; Parinaz Mehdipour; Bagher Larijani; Farshad Farzadfar

    2015-01-01

    Background: The prevalence of obesity continues to rise worldwide, with alarming rates in most of the world's countries. Our aim was to compare the mortality from fatal diseases attributable to excess body mass index (BMI) in Iran in 2005 and 2011. Methods: Using a standard implementation of the comparative risk assessment methodology, we estimated mortality attributable to excess BMI in Iranian adults of 25-65 years old, at the national and sub-national levels for 9 attributable outcomes including; is...

  5. Median ages at stages of sexual maturity and excess weight in school children

    OpenAIRE

    Luciano, Alexandre P; Benedet, Jucemar; de Abreu, Luiz Carlos; Valenti, Vitor E; de Souza Almeida, Fernando; de Vasconcelos, Francisco AG; Adami, Fernando

    2013-01-01

    Background We aimed to estimate the median ages at specific stages of sexual maturity stratified by excess weight in boys and girls. Materials and method This was a cross-sectional study made in 2007 in Florianopolis, Brazil, with 2,339 schoolchildren between 8 to 14 years of age (1,107 boys) selected at random in two steps (by region and type of school). The schoolchildren were divided into: i) those with excess weight and ii) those without excess weight, according to the WHO 2007 cut-off po...

  6. Excess mortality in giant cell arteritis

    DEFF Research Database (Denmark)

    Bisgård, C; Sloth, H; Keiding, Niels;

    1991-01-01

    A 13-year departmental sample of 34 patients with definite (biopsy-verified) giant cell arteritis (GCA) was reviewed. The mortality of this material was compared to sex-, age- and time-specific death rates in the Danish population. The standardized mortality ratio (SMR) was 1.8 (95% confidence...... with respect to SMR, sex distribution or age. In the group of patients with department-diagnosed GCA (definite + probable = 180 patients), the 95% confidence interval for the SMR of the women included 1.0. In all other subgroups there was a significant excess mortality. Excess mortality has been found in two...

  7. New vector bosons and the diphoton excess

    OpenAIRE

    Jorge de Blas; José Santiago; Roberto Vega-Morales(Laboratoire de Physique Théorique d’Orsay, UMR8627-CNRS, Université Paris-Sud 11, F-91405 Orsay Cedex, France)

    2016-01-01

    We consider the possibility that the recently observed diphoton excess at ∼750 GeV can be explained by the decay of a scalar particle (φ) to photons. If the scalar is the remnant of a symmetry-breaking sector of some new gauge symmetry, its coupling to photons can be generated by loops of the charged massive vectors of the broken symmetry. If these new W′ vector bosons carry color, they can also generate an effective coupling to gluons. In this case the diphoton excess could be entirely expla...

  8. Diphoton Excess as a Hidden Monopole

    CERN Document Server

    Yamada, Masaki; Yonekura, Kazuya

    2016-01-01

    We provide a theory with a monopole of a strongly-interacting hidden U(1) gauge symmetry that can explain the 750-GeV diphoton excess reported by ATLAS and CMS. The excess results from the resonance of the monopole, which is produced via gluon fusion and decays into two photons. At low energy, there are only mesons and a monopole in our model, because no baryon can be gauge invariant in terms of the strongly interacting Abelian symmetry. This is an advantage of our model, because there are no unwanted relics around the BBN epoch.

  9. Software Cost Estimation Review

    OpenAIRE

    Ongere, Alphonce

    2013-01-01

    Software cost estimation is the process of predicting the effort, the time and the cost required to complete a software project successfully. It involves size measurement of the software project to be produced, estimating and allocating the effort, drawing the project schedules, and finally, estimating the overall cost of the project. Accurate estimation of software project cost is an important factor for business and the welfare of the software organization in general. If cost and effort estimat...

  10. Infrared excesses in early-type stars - Gamma Cassiopeiae

    Science.gov (United States)

    Scargle, J. D.; Erickson, E. F.; Witteborn, F. C.; Strecker, D. W.

    1978-01-01

    Spectrophotometry of the classical Be star Gamma Cas (1-4 microns, with about 2% spectral resolution) is presented. These data, together with existing broad-band observations, are accurately described by simple isothermal LTE models for the IR excess which differ from most previously published work in three ways: (1) hydrogenic bound-free emission is included; (2) the attenuation of the star by the shell is included; and (3) no assumption is made that the shell contribution is negligible in some bandpass. It is demonstrated that the bulk of the IR excess consists of hydrogenic bound-free and free-free emission from a shell of hot ionized hydrogen gas, although a small thermal component cannot be ruled out. The bound-free emission is strong, and the Balmer, Paschen, and Brackett discontinuities are correctly represented by the shell model with physical parameters as follows: a shell temperature of approximately 18,000 K, an optical depth (at 1 micron) of about 0.5, an electron density of approximately 1 trillion per cu cm, and a size of about 2 trillion cm. Phantom shells (i.e., ones which do not alter the observed spectrum of the underlying star) are discussed.

  11. Low excess air operations of oil boilers

    Energy Technology Data Exchange (ETDEWEB)

    Butcher, T.A.; Celebi, Y.; Litzke, Wai Lin [Brookhaven National Labs., Upton, NY (United States)

    1997-09-01

    To quantify the benefits that operation at very low excess air may have on heat exchanger fouling, BNL has recently started a test project. The test allows simultaneous measurement of fouling rate, flue gas filterable soot, flue gas sulfuric acid content, and flue gas sulfur dioxide.

  12. Excessive nitrogen and phosphorus in European rivers

    NARCIS (Netherlands)

    Blaas, Harry; Kroeze, Carolien

    2016-01-01

    Rivers export nutrients to coastal waters. Excess nutrient export may result in harmful algal blooms and hypoxia, affecting biodiversity, fisheries, and recreation. The purpose of this study is to quantify for European rivers (1) the extent to which N and P loads exceed levels that minimize the r

  13. 34 CFR 300.16 - Excess costs.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false Excess costs. 300.16 Section 300.16 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION AND REHABILITATIVE SERVICES, DEPARTMENT OF EDUCATION ASSISTANCE TO STATES FOR THE EDUCATION OF CHILDREN...

  14. ORIGIN OF EXCESS (176)Hf IN METEORITES

    DEFF Research Database (Denmark)

    Thrane, Kristine; Connelly, James Norman; Bizzarro, Martin;

    2010-01-01

    After considerable controversy regarding the (176)Lu decay constant (lambda(176)Lu), there is now widespread agreement that (1.867 +/- 0.008) x 10(-11) yr(-1) as confirmed by various terrestrial objects and a 4557 Myr meteorite is correct. This leaves the (176)Hf excesses that are correlated with...

  15. Excessive prices as abuse of dominance?

    DEFF Research Database (Denmark)

    la Cour, Lisbeth; Møllgaard, Peter

    2007-01-01

    firm abused its position by charging excessive prices. We also test whether tightening of the Danish competition act has altered the pricing behaviour on the market. We discuss our results in the light of a Danish competition case against the dominant cement producer that was abandoned by the authority...

  16. 34 CFR 668.166 - Excess cash.

    Science.gov (United States)

    2010-07-01

    ... funds that an institution receives from the Secretary under the just-in-time payment method. (b) Excess...; and (2) Providing funds to the institution under the reimbursement payment method or cash monitoring payment method described in § 668.163(d) and (e), respectively. (Authority: 20 U.S.C. 1094)...

  17. 43 CFR 426.12 - Excess land.

    Science.gov (United States)

    2010-10-01

    ... irrigation water by entering into a recordable contract with the United States if the landowner qualifies... irrigation water because the landowner becomes subject to the discretionary provisions as provided in § 426.3... this section; or (iii) The excess land becomes eligible to receive irrigation water as a result...

  18. 38 CFR 4.46 - Accurate measurement.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. The NANOGrav Nine-Year Data Set: Excess Noise in Millisecond Pulsar Arrival Times

    CERN Document Server

    Lam, M T; Chatterjee, S; Arzoumanian, Z; Crowter, K; Demorest, P B; Dolch, T; Ellis, J A; Ferdman, R D; Fonseca, E; Gonzalez, M E; Jones, G; Jones, M L; Levin, L; Madison, D R; McLaughlin, M A; Nice, D J; Pennucci, T T; Ransom, S M; Shannon, R M; Siemens, X; Stairs, I H; Stovall, K; Swiggum, J K; Zhu, W W

    2016-01-01

    Gravitational wave astronomy using a pulsar timing array requires high-quality millisecond pulsars, correctable interstellar propagation delays, and high-precision measurements of pulse times of arrival. Here we identify noise in timing residuals that exceeds that predicted for arrival time estimation for millisecond pulsars observed by the North American Nanohertz Observatory for Gravitational Waves. We characterize the excess noise using variance and structure function analyses. We find that 26 out of 37 pulsars show inconsistencies with a white-noise-only model based on the short timescale analysis of each pulsar and we demonstrate that the excess noise has a red power spectrum for 15 pulsars. We also decompose the excess noise into chromatic (radio-frequency-dependent) and achromatic components. Associating the achromatic red-noise component with spin noise and including additional power-spectrum-based estimates from the literature, we estimate a scaling law in terms of spin parameters (frequency and freq...

  20. Fast and Provably Accurate Bilateral Filtering.

    Science.gov (United States)

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
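
For orientation, the direct O(S)-per-pixel computation that the paper's O(1) algorithm replaces can be sketched as follows. The kernel shapes and parameter values are hypothetical; this is the naive baseline, not the proposed approximation.

```python
import numpy as np

def bilateral_direct(img, radius, sigma_s, sigma_r):
    """Direct bilateral filter: Gaussian spatial and Gaussian range kernels.
    Cost is O(S) per pixel, with S = (2*radius + 1)**2 the spatial support."""
    h, w = img.shape
    out = np.zeros((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    pad = np.pad(img, radius, mode="edge").astype(float)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: penalize intensity differences from the center pixel
            rng_w = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# Demo on a hypothetical step-edge image: smoothing preserves the edge
img = np.zeros((8, 8))
img[:, 4:] = 100.0
smoothed = bilateral_direct(img, radius=2, sigma_s=2.0, sigma_r=10.0)
```

With a range sigma much smaller than the step height, pixels across the edge get negligible weight, so each side is smoothed independently and the edge survives.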

  1. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Mean Infiltration-Excess Overland Flow, 2002

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean value for infiltration-excess overland flow as estimated by the watershed model TOPMODEL, compiled for every catchment of...

  2. Excess plutonium disposition: The deep borehole option

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, K.L.

    1994-08-09

    This report reviews the current status of technologies required for the disposition of plutonium in Very Deep Holes (VDH). It is in response to a recent National Academy of Sciences (NAS) report which addressed the management of excess weapons plutonium and recommended three approaches to the ultimate disposition of excess plutonium: (1) fabrication and use as a fuel in existing or modified reactors in a once-through cycle, (2) vitrification with high-level radioactive waste for repository disposition, (3) burial in deep boreholes. As indicated in the NAS report, substantial effort would be required to address the broad range of issues related to deep borehole emplacement. Subjects reviewed in this report include geology and hydrology, design and engineering, safety and licensing, policy decisions that can impact the viability of the concept, and applicable international programs. Key technical areas that would require attention should decisions be made to further develop the borehole emplacement option are identified.

  3. Relationship Between Thermal Tides and Radius Excess

    CERN Document Server

    Socrates, Aristotle

    2013-01-01

    Close-in extrasolar gas giants -- the hot Jupiters -- display departures in radius above the zero-temperature solution, the radius excess, that are anomalously high. The radius excess of hot Jupiters follows a relatively close relation with thermal tidal torques and holds for ~ 4-5 orders of magnitude in a characteristic thermal tidal power, in a way that is consistent with basic theoretical expectations. The relation suggests that thermal tidal torques determine the global thermodynamic and spin state of the hot Jupiters. On empirical grounds, it is shown that theories of hot Jupiter inflation that invoke a constant fraction of the stellar flux being deposited at great depth are essentially falsified.

  4. ILLUSION OF EXCESSIVE CONSUMPTION AND ITS EFFECTS

    Directory of Open Access Journals (Sweden)

    MUNGIU-PUPĂZAN MARIANA CLAUDIA

    2015-12-01

    Full Text Available The aim is to explore, explain and describe the phenomenon of excessive consumption, for a better understanding of it and of the relationship between advertising and the members of the consumer society. This paper presents an analysis of excessive and unsustainable consumption, the evolution of the phenomenon, and ways to combat it. Unfortunately, studies show that the tendency to accumulate more than we need and to consume to excess is so entrenched that almost all civilizations have enshrined it among the values that children learn early in life. This has been perpetuated since the time when goods were not obtained as easily as they are today. Anti-consumerism has emerged as a response to this economic system, which is not sustainable in the long term. Over the last two decades we have witnessed the establishment of a new phase of consumer capitalism: the society of hyperconsumption.

  5. A scalar hint from the diboson excess?

    CERN Document Server

    Cacciapaglia, Giacomo; Hashimoto, Michio

    2015-01-01

    In view of the recent diboson resonant excesses reported by both ATLAS and CMS, we suggest that a new weak singlet pseudo-scalar particle, $\\eta_{WZ}$, may decay into two weak bosons while being produced in gluon fusion at the LHC. The couplings to gauge bosons can arise as in the case of a Wess-Zumino-Witten anomaly term, thus providing an effective model in which the present observed excess in the diboson channel at the LHC can be studied in a well motivated phenomenological model. In models where the pseudo-scalar arises as a composite state, the coefficients of the anomalous couplings can be related to the fermion components of the underlying dynamics. We provide an example to test the feasibility of the idea.

  6. Excess credit and the South Korean crisis

    OpenAIRE

    Panicos O. Demetriades; Fattouh, Bassam A.

    2006-01-01

    We provide a novel empirical analysis of the South Korean credit market that reveals large volumes of excess credit since the late 1970s, indicating that a sizeable proportion of total credit was being used to refinance unprofitable projects. Our findings are consistent with theoretical literature that suggests that soft budget constraints and over-borrowing were significant factors behind the Korean financial crisis of 1997-98.

  7. IDENTIFICATION OF RIVER WATER EXCESSIVE POLLUTION SOURCES

    OpenAIRE

    K. J. KACHIASHVILI; D. I. MELIKDZHANIAN

    2006-01-01

    The program package for identification of river water excessive pollution sources located between two controlled cross-sections of the river is described in this paper. The software has been developed by the authors on the basis of mathematical models of pollutant transport in the rivers and statistical hypotheses checking methods. The identification algorithms were elaborated with the supposition that the pollution sources discharge different compositions of pollutants or (at the identical c...

  8. Pharmacotherapy of Excessive Sleepiness: Focus on Armodafinil

    OpenAIRE

    Michael Russo

    2009-01-01

    Excessive sleepiness (ES) is responsible for significant morbidity and mortality due to its association with cardiovascular disease, cognitive impairment, and occupational and transport accidents. ES is also detrimental to patients’ quality of life, as it affects work and academic performance, social interactions, and personal relationships. Armodafinil is the R-enantiomer of the established wakefulness-promoting agent modafinil, which is a racemic mixture of both the R- and S-enantiomers. R-...

  9. Search for bright stars with infrared excess

    International Nuclear Information System (INIS)

    Bright stars, stars with visual magnitude smaller than 6.5, can be studied using small telescopes. In general, if stars are assumed to be black-body radiators, their color in the infrared (IR) region is usually equal to zero. Infrared data from IRAS observations at 12 and 25μm (micron) with good flux quality are used to search for bright stars (from Bright Stars Catalogues) with infrared excess. In magnitude scale, stars with IR excess are defined as stars with IR color m12−m25>0, where m12−m25 = −2.5log(F12/F25)+1.56, and F12 and F25 are the flux densities in Jansky at 12 and 25μm, respectively. Stars of similar spectral type are expected to have similar colors. The existence of an infrared excess within the same spectral type indicates the presence of circumstellar dust, probably the remnant of pre-main-sequence evolution during star formation or of post-AGB evolution, or due to physical processes such as stellar rotation.
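
The selection criterion quoted above is straightforward to evaluate; the helper below and the flux values in its comments are hypothetical illustrations of the stated formula, not catalogue data.

```python
import numpy as np

def ir_color(f12, f25):
    """IRAS 12-25 micron color, m12 - m25 = -2.5 log10(F12/F25) + 1.56,
    with F12 and F25 the flux densities in Jansky (formula from the abstract)."""
    return -2.5 * np.log10(f12 / f25) + 1.56

def has_ir_excess(f12, f25):
    """A star is flagged as having an IR excess when m12 - m25 > 0."""
    return ir_color(f12, f25) > 0

# With this zero point, a photosphere-like (black-body) star sits at color ~ 0
# when F12/F25 = 10**(1.56/2.5) ~ 4.2; relatively strong 25-micron emission
# (smaller F12/F25 ratio) pushes the color positive, signaling circumstellar dust.
```

For example, hypothetical fluxes F12 = 1.0 Jy and F25 = 0.5 Jy give a color of about +0.81 mag, i.e. an IR excess, while F12 = 10 Jy and F25 = 1 Jy give about −0.94 mag, i.e. none.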

  10. Effects of Excess Cash, Board Attributes and Insider Ownership on Firm Value: Evidence from Pakistan

    Directory of Open Access Journals (Sweden)

    Nadeem Ahmed Sheikh

    2016-04-01

    Full Text Available The purpose of this paper is to investigate whether excess cash, board attributes (i.e. board size, board independence and CEO duality) and insider ownership affect the value of the firm. Data were taken from annual reports of non-financial firms listed on the Karachi Stock Exchange (KSE) Pakistan during 2008-2012. A pooled ordinary least squares method was used to estimate the effects of excess cash and internal governance indicators on the value of the firm. Our results indicate that excess cash is significantly negatively related to firm value. Excess cash along with board size is significant and negatively related to firm value. Excess cash along with insider ownership is significant and negatively related to firm value. Control variables, namely leverage and dividends, are positively related to firm value, while firm size is negatively related, in all regressions. In sum, the empirical results indicate that excess cash, board size and insider ownership have material effects on the value of the firm. Moreover, the findings of this study help managers understand the impact of excess cash and internal governance measures on firm value, and provide support to regulatory authorities in framing regulations that improve the level of corporate governance in the country.
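
The pooled OLS estimation step can be sketched on synthetic data; the variable names, coefficient values and sample size below are hypothetical, with the sign of the excess-cash effect chosen only to mirror the direction the paper reports.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical pooled firm-year observations

# Hypothetical regressors
excess_cash = rng.normal(size=n)
board_size = rng.integers(5, 15, size=n).astype(float)
leverage = rng.normal(size=n)

# Hypothetical firm value with a negative excess-cash effect (coefficient -0.5)
value = 1.0 - 0.5 * excess_cash + 0.1 * leverage + rng.normal(scale=0.1, size=n)

# Pooled OLS: stack all firm-years and regress with an intercept
X = np.column_stack([np.ones(n), excess_cash, board_size, leverage])
beta, *_ = np.linalg.lstsq(X, value, rcond=None)
```

The fitted coefficient on `excess_cash` (second entry of `beta`) recovers the negative effect built into the synthetic data; with real panel data one would also report robust standard errors.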

  11. The entropy excess and moment of inertia excess ratio with inclusion of statistical pairing fluctuations

    Science.gov (United States)

    Razavi, R.; Dehghani, V.

    2014-03-01

    The entropy excess of 163Dy compared to 162Dy as a function of nuclear temperature has been investigated using the mean-value Bardeen-Cooper-Schrieffer (BCS) method, based on application of the isothermal probability distribution function to take statistical fluctuations into account. The spin cut-off excess ratio (moment of inertia excess ratio) introduced by Razavi [Phys. Rev. C88 (2013) 014316] for the proton and neutron systems has then been obtained and compared with the corresponding BCS-model values. The results show that the overall agreement between the BCS model and the mean-value BCS method is satisfactory, and that the mean-value BCS method reduces fluctuations and washes out singularities. However, the expected constant value of the entropy excess is not reproduced by the mean-value BCS method.

  12. Excess Mortality Attributable to Extreme Heat in New York City, 1997-2013.

    Science.gov (United States)

    Matte, Thomas D; Lane, Kathryn; Ito, Kazuhiko

    2016-01-01

    Extreme heat event excess mortality has been estimated statistically to assess impacts, evaluate heat emergency response, and project climate change risks. We estimated annual excess non-external-cause deaths associated with extreme heat events in New York City (NYC). Extreme heat events were defined as days meeting current National Weather Service forecast criteria for issuing heat advisories in NYC based on observed maximum daily heat index values from LaGuardia Airport. Outcomes were daily non-external-cause death counts for NYC residents from May through September from 1997 to 2013 (n = 337,162). The cumulative relative risk (CRR) of death associated with extreme heat events was estimated in a Poisson time-series model for each year using an unconstrained distributed lag for days 0-3 accommodating over dispersion, and adjusting for within-season trends and day of week. Attributable death counts were computed by year based on individual year CRRs. The pooled CRR per extreme heat event day was 1.11 (95%CI 1.08-1.14). The estimated annual excess non-external-cause deaths attributable to heat waves ranged from -14 to 358, with a median of 121. Point estimates of heat wave-attributable deaths were greater than 0 in all years but one and were correlated with the number of heat wave days (r = 0.81). Average excess non-external-cause deaths associated with extreme heat events were nearly 11-fold greater than hyperthermia deaths. Estimated extreme heat event-associated excess deaths may be a useful indicator of the impact of extreme heat events, but single-year estimates are currently too imprecise to identify short-term changes in risk. PMID:27081885
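
The attributable-deaths calculation described above can be sketched as a Poisson regression on synthetic daily counts. The heat-day positions, baseline death rate and relative risk below are hypothetical, and the real model additionally uses a distributed lag over days 0-3 and adjusts for within-season trends and day of week; this is only the core recipe (fit, exponentiate the heat coefficient for the CRR, subtract the counterfactual baseline on heat days).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 153                                          # May-September days
heat = np.zeros(n)
heat[[10, 11, 40, 41, 42, 90, 120, 121]] = 1.0   # hypothetical extreme-heat days
baseline, rr_true = 60.0, 1.11                   # hypothetical daily deaths and CRR
deaths = rng.poisson(baseline * rr_true ** heat)

# Poisson regression (log link), intercept + heat indicator, fit by IRLS
X = np.column_stack([np.ones(n), heat])
beta = np.array([np.log(deaths.mean()), 0.0])
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (deaths - mu) / mu            # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

rr_hat = np.exp(beta[1])                         # estimated CRR per heat day
# attributable deaths: observed minus model baseline, summed over heat days
attrib = ((deaths - np.exp(beta[0])) * heat).sum()
```

On the synthetic series the fitted CRR lands near the generating value of 1.11, and the attributable count is the heat-day excess over the fitted baseline.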

  13. A fast and accurate method for echocardiography strain rate imaging

    Science.gov (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Strain and strain rate imaging have recently proved superior to classical motion estimation methods for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm based on a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained by combining the center of mass with endocardial tracking. The proposed method is shown to overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique across different motions and orientations. Evaluation on simulated, phantom (a contractile rubber balloon) and real sequences shows that this technique is more accurate and faster than previous methods.

  14. An accurate and robust gyroscope-based pedometer.

    Science.gov (United States)

    Lim, Yoong P; Brown, Ian T; Khoo, Joshua C T

    2008-01-01

    Pedometers are known to have step-estimation issues, mainly attributable to their acceleration-based sensing. A pedometer based on a micro-machined gyroscope (which has better immunity to acceleration) is proposed. Through syntactic recognition of a priori knowledge of the human shank's dynamics and temporally precise detection of heel strikes permitted by wavelet decomposition, an accurate and robust pedometer is obtained. PMID:19163737

  15. Identifying opportunities to reduce excess nitrogen in croplands while maintaining current crop yields

    Science.gov (United States)

    West, P. C.; Mueller, N. D.; Foley, J. A.

    2011-12-01

    Use of synthetic nitrogen fertilizer has greatly contributed to the increased crop yields brought about by the Green Revolution. Unfortunately, it has also contributed to substantial excess nitrogen in the environment. Application of excess nitrogen not only wastes the energy and other resources used to produce, transport and apply it; it also pollutes aquatic ecosystems and has led to the development of more than 200 hypoxic, or "dead", zones in coastal areas around the world. How can we decrease use of excess nitrogen without compromising crop yields? To help address this challenge, our study (1) quantified hot spots of excess nitrogen, and (2) estimated how much nitrogen reduction is possible in these areas while still maintaining yields. We estimated excess nitrogen for major crops using a mass balance approach and global spatial data sets of crop area and yield, fertilizer application rates, and nitrogen deposition. Hot spots of excess nitrogen were identified by quantifying the smallest area within large river basins that contributed 25% and 50% of the total load within each basin. Nitrogen reduction scenarios were developed using a yield response model to estimate the nitrogen application rates needed to maintain current yields. Our research indicated that excess nitrogen is concentrated in very small portions of croplands within river basins: 25% of the total nitrogen load in each basin comes from ~10% of the cropland, and 50% from ~25% of the cropland. Targeting reductions in application rates in these hot spots can allow us to maintain current crop yields while greatly reducing nitrogen loading to coastal areas, and creates the opportunity to reallocate resources to boost yields on nitrogen-limited croplands elsewhere.
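The mass-balance and hotspot calculations described in this record can be sketched as follows. This is a minimal illustration on synthetic data: the gamma-distributed per-cell input and removal rates are hypothetical stand-ins for the study's global spatial data sets, and cells are assumed equal-area.

```python
import numpy as np

# Hypothetical per-cell rates for one river basin (kg N / ha): inputs from
# fertilizer and atmospheric deposition, and N removed in harvested crops.
rng = np.random.default_rng(0)
n_cells = 1000
fertilizer = rng.gamma(2.0, 40.0, n_cells)
deposition = rng.gamma(2.0, 3.0, n_cells)
crop_removal = rng.gamma(2.0, 30.0, n_cells)

# Mass-balance excess: inputs minus crop N removal, floored at zero.
excess = np.maximum(fertilizer + deposition - crop_removal, 0.0)

# Hotspot metric: smallest fraction of cropland area contributing 50% of
# the basin's total excess-N load (sort cells by load, descending, and
# accumulate until half the basin total is reached).
order = np.argsort(excess)[::-1]
cum = np.cumsum(excess[order]) / excess.sum()
cells_for_half = int(np.searchsorted(cum, 0.5)) + 1
area_fraction = cells_for_half / n_cells
```

A heavily skewed load distribution gives a small `area_fraction`, matching the study's finding that ~25% of cropland carries 50% of the load.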

  16. Excess mortality among the unmarried: a case study of Japan.

    Science.gov (United States)

    Goldman, N; Hu, Y

    1993-02-01

    Recent research has demonstrated that mortality patterns by marital status in Japan differ from corresponding patterns in other industrialized countries. Most notably, the magnitude of the excess mortality experienced by single Japanese has been staggering. For example, estimates of life expectancy for the mid-1900s indicate that single Japanese men and women had life expectancies 15 to 20 years lower than their married counterparts. In addition, gender differences among single Japanese have been smaller than elsewhere, while those among divorced persons have been unexpectedly large; and the excess mortality of the Japanese single population has been decreasing over the past few decades, in contrast to generally increasing differentials elsewhere. In this paper, we use a variety of data sources to explore several explanations for these unique mortality patterns in Japan. Undeniably, the traditional Japanese system of arranged marriages makes the process of selecting a spouse a significant factor. Evidence from anthropological studies and attitudinal surveys indicates that marriage is likely to have been, and probably continues to be, more selective with regard to underlying health characteristics in Japan than in other industrialized countries. However, causal explanations related to the importance of marriage and the family in Japanese society may also be responsible for the relatively high mortality experienced by singles and by divorced men.

  17. Excess mid-IR emission in Cataclysmic Variables

    CERN Document Server

    Dubus, G; Kern, B; Taam, R E; Spruit, H C

    2004-01-01

    We present a search for excess mid-IR emission due to circumbinary material in the orbital plane of cataclysmic variables (CVs). Our motivation stems from the fact that the strong braking exerted by a circumbinary (CB) disc on the binary system could explain several puzzles in our current understanding of CV evolution. Since theoretical estimates predict that the emission from a CB disc can dominate the spectral energy distribution (SED) of the system at wavelengths > 5 microns, we obtained simultaneous visible to mid-IR SEDs for eight systems. We report detections of SS Cyg at 11.7 microns and AE Aqr at 17.6 microns, both in excess of the contribution from the secondary star. In AE Aqr, the IR likely originates from synchrotron-emitting clouds propelled by the white dwarf. In SS Cyg, we argue that the observed mid-IR variability is difficult to reconcile with simple models of CB discs and we consider free-free emission from a wind. In the other systems, our mid-IR upper limits place strong constraints on the...

  18. Excess mortality associated with influenza epidemics in Portugal, 1980 to 2004.

    Directory of Open Access Journals (Sweden)

    Baltazar Nunes

    Full Text Available BACKGROUND: Influenza epidemics have a substantial impact on human health, by increasing the mortality from pneumonia and influenza, respiratory and circulatory diseases, and all causes. This paper provides estimates of excess mortality rates associated with influenza virus circulation for 7 causes of death and 8 age groups in Portugal during the period of 1980-2004. METHODOLOGY/PRINCIPAL FINDINGS: We compiled monthly mortality time series data by age for all-cause mortality, cerebrovascular diseases, ischemic heart diseases, diseases of the respiratory system, chronic respiratory diseases, pneumonia and influenza. We also used a control outcome, deaths from injuries. Age- and cause-specific baseline mortality was modelled by the ARIMA approach; excess deaths attributable to influenza were calculated by subtracting expected deaths from observed deaths during influenza epidemic periods. Influenza was associated with a seasonal average of 24.7 all-cause excess deaths per 100,000 inhabitants, approximately 90% of which were among seniors over 65 years. Excess mortality was 3- to 6-fold higher during seasons dominated by the A(H3N2) subtype than seasons dominated by A(H1N1)/B. A high excess mortality impact was also seen in children under the age of four years. Seasonal excess mortality rates from all the studied causes of death were highly correlated with each other (Pearson correlation range, 0.65 to 0.95; r > 0.64, P < 0.05). By contrast, there was no correlation with excess mortality from injuries. CONCLUSIONS/SIGNIFICANCE: Our excess mortality approach is specific to influenza virus activity and produces influenza-related mortality rates for Portugal that are similar to those published for other countries. Our results indicate that all-cause excess mortality is a robust indicator of influenza burden in Portugal, and could be used to monitor the impact of influenza epidemics in this country. Additional studies are warranted to confirm these findings in other
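The excess-mortality arithmetic in this record (observed minus model-expected deaths, summed over epidemic periods) can be sketched as below. The monthly counts are hypothetical, and a precomputed baseline stands in for the ARIMA fit used in the paper.

```python
import numpy as np

def excess_deaths(observed, expected, epidemic_mask):
    """Excess deaths attributable to influenza: observed deaths minus the
    baseline (expected) deaths, summed over epidemic months only."""
    diff = np.asarray(observed, float) - np.asarray(expected, float)
    return diff[np.asarray(epidemic_mask, bool)].sum()

# Hypothetical four months of all-cause deaths; the first two are flagged
# as an influenza epidemic period. In the actual study, `expected` would
# come from an ARIMA baseline fitted to non-epidemic data.
observed = [100, 120, 90, 95]
expected = [95, 100, 92, 96]
epidemic = [True, True, False, False]
flu_excess = excess_deaths(observed, expected, epidemic)  # 5 + 20 = 25
```

Dividing such totals by population and scaling to 100,000 inhabitants gives the seasonal excess mortality rates quoted in the abstract.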

  19. Excess-electron and excess-hole states of charged alkali halide clusters

    Science.gov (United States)

    Honea, Eric C.; Homer, Margie L.; Whetten, R. L.

    1990-12-01

    Charged alkali halide clusters from a He-cooled laser vaporization source have been used to investigate two distinct cluster states corresponding to the excess-electron and excess-hole states of the crystal. The production method is UV-laser vaporization of an alkali metal rod into a halogen-containing He flow stream, resulting in variable cluster composition and cooling sufficient to stabilize weakly bound forms. Detection of charged clusters is accomplished without subsequent ionization by pulsed-field time-of-flight mass spectrometry of the skimmed cluster beam. Three types of positively charged sodium fluoride cluster are observed, each corresponding to a distinct physical situation: Na_nF_{n-1}^+ (purely ionic form), Na_{n+1}F_{n-1}^+ (excess-electron form), and Na_nF_n^+ (excess-hole form). The purely ionic clusters exhibit an abundance pattern similar to that observed in sputtering and fragmentation experiments, explained by the stability of completed cubic microlattice structures. The excess-electron clusters, in contrast, exhibit very strong abundance maxima at n = 13 and 22, corresponding to the all-odd series (2n + 1 = j × k × l; j, k, l odd). Their high relative stability is explained by the ease of Na(0) loss except when the excess electron localizes in a lattice site to complete a cuboid structure. These may correspond to the internal F-center state predicted earlier. A localized electron model incorporating structural simulation results can account for the observed pattern. The excess-hole clusters, which had been proposed as intermediates in the ionization-induced fragmentation of neutral alkali halide clusters (AHCs), exhibit a smaller variation in stability, indicating that the hole might not be well localized.

  20. Propylene Glycol Poisoning From Excess Whiskey Ingestion

    Science.gov (United States)

    Ku, Kevin; Sue, Gloria R.

    2015-01-01

    In this report, we describe a case of high anion gap metabolic acidosis with a significant osmolal gap attributed to the ingestion of liquor containing propylene glycol. Recently, several reports have characterized severe lactic acidosis occurring in the setting of iatrogenic unintentional overdosing of medications that use propylene glycol as a diluent, including lorazepam and diazepam. To date, no studies have explored potential effects of excess propylene glycol in the setting of alcohol intoxication. Our patient endorsed drinking large volumes of cinnamon flavored whiskey, which was likely Fireball Cinnamon Whisky. To our knowledge, this is the first case of propylene glycol toxicity from an intentional ingestion of liquor containing propylene glycol. PMID:26904700

  1. Diboson Excess from a New Strong Force

    OpenAIRE

    Georgi, Howard; Nakai, Yuichiro

    2016-01-01

    We explore a "partial unification" model that could explain the diphoton event excess around $750 \\, \\rm GeV$ recently reported by the LHC experiments. A new strong gauge group is combined with the ordinary color and hypercharge gauge groups. The VEV responsible for the combination is of the order of the $SU(2)\\times U(1)$ breaking scale, but the coupling of the new physics to standard model particles is suppressed by the strong interaction of the new gauge group. This simple extension of the...

  2. Di-photon excess illuminates dark matter

    OpenAIRE

    Backović, Mihailo; Mariotti, Alberto; Redigolo, Diego

    2016-01-01

    We propose a simplified model of dark matter with a scalar mediator to accommodate the di-photon excess recently observed by the ATLAS and CMS collaborations. Decays of the resonance into dark matter can easily account for a relatively large width of the scalar resonance, while the magnitude of the total width combined with the constraint on dark matter relic density leads to sharp predictions on the parameters of the Dark Sector. Under the assumption of a rather large width, the model predic...

  3. Subcorneal hematomas in excessive video game play.

    Science.gov (United States)

    Lennox, Maria; Rizzo, Jason; Lennox, Luke; Rothman, Ilene

    2016-01-01

    We report a case of subcorneal hematomas caused by excessive video game play in a 19-year-old man. The hematomas occurred in a setting of thrombocytopenia secondary to induction chemotherapy for acute myeloid leukemia. It was concluded that thrombocytopenia subsequent to prior friction from heavy use of a video game controller allowed for traumatic subcorneal hemorrhage of the hands. Using our case as a springboard, we summarize other reports with video game associated pathologies in the medical literature. Overall, cognizance of the popularity of video games and related pathologies can be an asset for dermatologists who evaluate pediatric patients.

  4. Subcorneal hematomas in excessive video game play.

    Science.gov (United States)

    Lennox, Maria; Rizzo, Jason; Lennox, Luke; Rothman, Ilene

    2016-01-01

    We report a case of subcorneal hematomas caused by excessive video game play in a 19-year-old man. The hematomas occurred in a setting of thrombocytopenia secondary to induction chemotherapy for acute myeloid leukemia. It was concluded that thrombocytopenia subsequent to prior friction from heavy use of a video game controller allowed for traumatic subcorneal hemorrhage of the hands. Using our case as a springboard, we summarize other reports with video game associated pathologies in the medical literature. Overall, cognizance of the popularity of video games and related pathologies can be an asset for dermatologists who evaluate pediatric patients. PMID:26919354

  5. Connecting the LHC diphoton excess to the Galatic center gamma-ray excess

    CERN Document Server

    Huang, Xian-Jun; Zhou, Yu-Feng

    2016-01-01

    The recent LHC Run-2 data have shown a possible excess in diphoton events, suggesting the existence of a new resonance $\\phi$ with mass $M\\sim 750$~GeV. If $\\phi$ plays the role of a portal particle connecting the Standard Model and the invisible dark sector, the diphoton excess should be correlated with another photon excess, namely, the excess in the diffuse gamma rays towards the Galactic center, which can be interpreted by the annihilation of dark matter (DM). We investigate the necessary conditions for a consistent explanation of the two photon excesses, especially the requirement on the width-to-mass ratio $\\Gamma/M$ and the $\\phi$ decay channels, in a collection of DM models where the DM particle can be scalar, fermionic or vector, and $\\phi$ can be generated through $s$-channel $gg$ fusion or $q\\bar q$ annihilation. We show that the minimally required $\\Gamma/M$ is determined by a single parameter proportional to $(m_{\\chi}/M)^{n}$, where the integer $n$ depends on the nature of the DM particle. We fi...

  6. Z-peaked excess in goldstini scenarios

    CERN Document Server

    Liew, Seng Pei; Mawatari, Kentarou; Sakurai, Kazuki; Vereecken, Matthias

    2015-01-01

    We study a possible explanation of a 3.0 $\\sigma$ excess recently reported by the ATLAS Collaboration in events with Z-peaked same-flavour opposite-sign lepton pair, jets and large missing transverse momentum in the context of gauge-mediated SUSY breaking with more than one hidden sector, the so-called goldstini scenario. In a certain parameter space, the gluino two-body decay chain $\\tilde g\\to g\\tilde\\chi^0_{1,2}\\to gZ\\tilde G'$ becomes dominant, where $\\tilde\\chi^0_{1,2}$ and $\\tilde G'$ are the Higgsino-like neutralino and the massive pseudo-goldstino, respectively, and gluino pair production can contribute to the signal. We find that a mass spectrum such as $m_{\\tilde g}\\sim 900$ GeV, $m_{\\tilde\\chi^0_{1,2}}\\sim 700$ GeV and $m_{\\tilde G'}\\sim 600$ GeV demonstrates the rate and the distributions of the excess, without conflicting with the stringent constraints from jets plus missing energy analyses and with the CMS constraint on the identical final state.

  7. Effective interpretations of a diphoton excess

    CERN Document Server

    Berthier, Laure; Shepherd, William; Trott, Michael

    2015-01-01

    We discuss some consistency tests that must be passed for a successful explanation of a diphoton excess at larger mass scales, generated by a scalar or pseudoscalar state, possibly of a composite nature, decaying to two photons. Scalar states at mass scales above the electroweak scale decaying significantly into photon final states generically lead to modifications of Standard Model Higgs phenomenology. We characterise this effect using the formalism of Effective Field Theory (EFT) and study the modification of the effective couplings to photons and gluons of the Higgs. The modification of Higgs phenomenology comes about in a variety of ways. For scalar $0^+$ states, the Higgs and the heavy boson can mix. Lower energy phenomenology gives a limit on the mixing angle, which gets generated at one loop in any such theory explaining the diphoton excess. Even if the mixing angle is set to zero, we demonstrate that a relation exists between lower energy Higgs data and a massive scalar decaying to diphoton final state...

  8. Effective interpretations of a diphoton excess

    Science.gov (United States)

    Berthier, Laure; Cline, James M.; Shepherd, William; Trott, Michael

    2016-04-01

    We discuss some consistency tests that must be passed for a successful explanation of a diphoton excess at larger mass scales, generated by a scalar or pseudoscalar state, possibly of a composite nature, decaying to two photons. Scalar states at mass scales above the electroweak scale decaying significantly into photon final states generically lead to modifications of Standard Model Higgs phenomenology. We characterise this effect using the formalism of Effective Field Theory (EFT) and study the modification of the effective couplings to photons and gluons of the Higgs. The modification of Higgs phenomenology comes about in a variety of ways. For scalar 0+ states, a component of the Higgs and the heavy boson can mix. Lower energy phenomenology gives a limit on the mixing angle, which gets generated at one loop in any theory explaining the diphoton excess. Even if the mixing angle is set to zero, we demonstrate that a relation exists between lower energy Higgs data and a massive scalar decaying to diphoton final states. If the new boson is a pseudoscalar, we note that if it is composite, it is generic to have an excited scalar partner that can mix with a component of the Higgs, which has a stronger coupling to photons. In the case of a pseudoscalar, we also characterize how lower energy Higgs phenomenology is directly modified using EFT, even without assuming a scalar partner of the pseudoscalar state. We find that naturalness concerns can be accommodated, and that pseudoscalar models are more protected from lower energy constraints.

  9. Extragalactic Gamma Ray Excess from Coma Supercluster Direction

    Indian Academy of Sciences (India)

    Pantea Davoudifar; S. Jalil Fatemi

    2011-09-01

    The origin of the extragalactic diffuse gamma rays is not accurately known, especially because our estimates depend on the many models that must be considered, either to compute the galactic diffuse gamma ray intensity or to account for the contribution of other extragalactic structures while surveying a specific portion of the sky. A more precise analysis of EGRET data, however, makes it possible to estimate the diffuse gamma rays in the Coma supercluster (i.e., the Coma/A1367 supercluster) direction with a value of (> 30 MeV) ≃ 1.9 × 10^-6 cm^-2 s^-1, which is considered to be an upper limit for the diffuse gamma rays due to the Coma supercluster. The related total intensity (on average) is calculated to be ∼ 5% of the actual diffuse extragalactic background. The calculated intensity makes it possible to estimate the origin of the extragalactic diffuse gamma rays.

  10. Real-time total system error estimation: Modeling and application in required navigation performance

    Institute of Scientific and Technical Information of China (English)

    Fu Li; Zhang Jun; Li Rui

    2014-01-01

    In required navigation performance (RNP), total system error (TSE) is estimated to provide a timely warning in the presence of an excessive error. In this paper, by analyzing the underlying formation mechanism, the TSE estimation is modeled as the estimation fusion of a fixed bias and a Gaussian random variable. To address the challenge of high computational load induced by the accurate numerical method, two efficient methods are proposed for real-time application, which are called the circle tangent ellipse method (CTEM) and the line tangent ellipse method (LTEM), respectively. Compared with the accurate numerical method and the traditional scalar quantity summation method (SQSM), the computational load and accuracy of these four methods are extensively analyzed. The theoretical and experimental results both show that the computing time of the LTEM is approximately equal to that of the SQSM, while it is only about 1/30 and 1/6 of that of the numerical method and the CTEM. Moreover, the estimation result of the LTEM is parallel with that of the numerical method, but is more accurate than those of the SQSM and the CTEM. It is illustrated that the LTEM is quite appropriate for real-time TSE estimation in RNP application.
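The modeling idea in this record, TSE as a fixed bias plus a Gaussian random error, can be illustrated with a containment-probability calculation. This is a simplified one-dimensional sketch (the paper works with two-dimensional error ellipses, and the CTEM/LTEM constructions are not reproduced here); the bias, sigma and limit values below are hypothetical.

```python
import math

def tse_containment_prob(bias, sigma, limit):
    """P(|b + X| <= L) for X ~ N(0, sigma^2): the probability that total
    system error (fixed bias plus Gaussian random error) stays inside the
    containment limit L. Uses the Gaussian CDF expressed via erf."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return phi((limit - bias) / sigma) - phi((-limit - bias) / sigma)

# Hypothetical RNP-1-style numbers: limit 1 NM, bias 0.2 NM, sigma 0.3 NM.
p = tse_containment_prob(0.2, 0.3, 1.0)
```

An RNP monitor would raise a warning when the complement `1 - p` exceeds the allowed integrity risk for the current bias and noise estimates.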

  11. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    Science.gov (United States)

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. 
Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
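The core linear-algebra step described in this abstract, using a previous-step angular velocity to linearize the centripetal term so that six single-axis accelerometer readings determine the linear and angular acceleration, can be sketched as follows. The sensor geometry and motion values are hypothetical, and for clarity the check passes the true angular velocity directly rather than iterating the finite-difference update over time.

```python
import numpy as np

# Accelerometer i sits at position r_i with sensing axis n_i and measures
#   s_i = n_i . (a0 + alpha x r_i + omega x (omega x r_i)).
# With omega taken from the previous time step, the unknowns a0 (linear
# acceleration) and alpha (angular acceleration) satisfy the linear system
#   s_i - n_i . (omega x (omega x r_i)) = n_i . a0 + (r_i x n_i) . alpha.

def solve_accels(positions, axes, readings, omega):
    A = np.hstack([axes, np.cross(positions, axes)])   # 6x6 design matrix
    b = readings - np.einsum('ij,ij->i', axes,
                             np.cross(omega, np.cross(omega, positions)))
    x = np.linalg.solve(A, b)
    return x[:3], x[3:]                                # a0, alpha

# Synthetic check: generate readings from a known motion, then recover it.
rng = np.random.default_rng(1)
r = rng.normal(size=(6, 3))                            # sensor positions
n = rng.normal(size=(6, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)          # unit sensing axes
a0_true = np.array([1.0, -2.0, 0.5])
alpha_true = np.array([0.3, 0.0, -1.0])
omega = np.array([2.0, 1.0, 0.0])
s = np.einsum('ij,ij->i', n, a0_true + np.cross(alpha_true, r)
              + np.cross(omega, np.cross(omega, r)))
a0, alpha = solve_accels(r, n, s, omega)
```

In a time-stepping implementation, `omega` would then be advanced by `omega += alpha * dt`, which is the finite-difference approximation the abstract describes.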

  12. Propylene Glycol Poisoning From Excess Whiskey Ingestion

    Directory of Open Access Journals (Sweden)

    Courtney A. Cunningham MD

    2015-09-01

    Full Text Available In this report, we describe a case of high anion gap metabolic acidosis with a significant osmolal gap attributed to the ingestion of liquor containing propylene glycol. Recently, several reports have characterized severe lactic acidosis occurring in the setting of iatrogenic unintentional overdosing of medications that use propylene glycol as a diluent, including lorazepam and diazepam. To date, no studies have explored potential effects of excess propylene glycol in the setting of alcohol intoxication. Our patient endorsed drinking large volumes of cinnamon flavored whiskey, which was likely Fireball Cinnamon Whisky. To our knowledge, this is the first case of propylene glycol toxicity from an intentional ingestion of liquor containing propylene glycol.

  13. Faint Infrared-Excess Field Galaxies FROGs

    CERN Document Server

    Moustakas, L A; Zepf, S E; Bunker, A J

    1997-01-01

    Deep near-infrared and optical imaging surveys in the field reveal a curious population of galaxies that are infrared-bright (I-K>4), yet with relatively blue optical colors (V-I20, is high enough that if placed at z>1 as our models suggest, their space densities are about one-tenth of phi-*. The colors of these ``faint red outlier galaxies'' (fROGs) may derive from exceedingly old underlying stellar populations, a dust-embedded starburst or AGN, or a combination thereof. Determining the nature of these fROGs, and their relation with the I-K>6 ``extremely red objects,'' has implications for our understanding of the processes that give rise to infrared-excess galaxies in general. We report on an ongoing study of several targets with HST & Keck imaging and Keck/LRIS multislit spectroscopy.

  14. Cool WISPs for stellar cooling excesses

    CERN Document Server

    Giannotti, Maurizio; Redondo, Javier; Ringwald, Andreas

    2015-01-01

    Several stellar systems (white dwarfs, red giants, horizontal branch stars and possibly the neutron star in the supernova remnant Cassiopeia A) show a preference for a mild non-standard cooling mechanism when compared with theoretical models. This exotic cooling could be provided by Weakly Interacting Slim Particles (WISPs), produced in the hot cores and abandoning the star unimpeded, contributing directly to the energy loss. Taken individually, these excesses do not show a strong statistical weight. However, if one mechanism could consistently explain several of them, the hint could be significant. We analyze the hints in terms of neutrino anomalous magnetic moments, minicharged particles, hidden photons and axion-like particles (ALPs). Among them, the ALP represents the best solution. Interestingly, the hinted ALP parameter space is accessible to the next generation proposed ALP searches, such as ALPS II and IAXO.

  15. A case of postmenopausal androgen excess.

    Science.gov (United States)

    Lambrinoudaki, Irene; Dafnios, Nikos; Kondi-Pafiti, Agathi; Triantafyllou, Nikos; Karopoulou, Evangelia; Papageorgiou, Anastasia; Augoulea, Areti; Armeni, Eleni; Creatsa, Maria; Vlahos, Nikolaos

    2015-10-01

    Ovarian steroid cell tumors are very rare but potentially life-threatening neoplasms. They represent less than 0.1% of all ovarian tumors, typically present in premenopausal women and frequently manifest with virilization. Signs of hyperandrogenism may appear in postmenopausal women due to tumorous and non-tumorous adrenal and ovarian causes, as well as due to the normal aging process. In any case, a steroid cell tumor should be suspected in postmenopausal women who present with rapidly progressive androgen excess symptoms. This report describes a case of a 67-year-old postmenopausal woman with signs of hyperandrogenism, in whom an ovarian steroid cell tumor was diagnosed and treated by laparoscopic bilateral salpingo-oophorectomy and synchronous hysterectomy. PMID:26287476

  16. On dilatons and the LHC diphoton excess

    Science.gov (United States)

    Megías, Eugenio; Pujolàs, Oriol; Quirós, Mariano

    2016-05-01

    We study soft wall models that can embed the Standard Model and a naturally light dilaton. Exploiting the full capabilities of these models, we identify the parameter space that allows Electroweak Precision Tests to be passed with a moderate Kaluza-Klein scale, around 2 TeV. We analyze the coupling of the dilaton with Standard Model (SM) fields in the bulk, and discuss two applications: i) models with a light dilaton as the first particle beyond the SM pass all observational tests quite easily, even with a dilaton lighter than the Higgs; however, the possibility of a 125 GeV dilaton as a Higgs impostor is essentially disfavored; ii) we show how to extend the soft wall models to realize a 750 GeV dilaton that could explain the recently reported diphoton excess at the LHC.

  17. Cool WISPs for stellar cooling excesses

    Energy Technology Data Exchange (ETDEWEB)

    Giannotti, Maurizio [Barry Univ., Miami Shores, FL (United States). Physical Sciences; Irastorza, Igor [Zaragoza Univ. (Spain). Dept. de Fisica Teorica; Redondo, Javier [Zaragoza Univ. (Spain). Dept. de Fisica Teorica; Max-Planck-Institut fuer Physik, Muenchen (Germany); Ringwald, Andreas [DESY Hamburg (Germany). Theory Group

    2015-12-15

    Several stellar systems (white dwarfs, red giants, horizontal branch stars and possibly the neutron star in the supernova remnant Cassiopeia A) show a preference for a mild non-standard cooling mechanism when compared with theoretical models. This exotic cooling could be provided by Weakly Interacting Slim Particles (WISPs), produced in the hot cores and abandoning the star unimpeded, contributing directly to the energy loss. Taken individually, these excesses do not show a strong statistical weight. However, if one mechanism could consistently explain several of them, the hint could be significant. We analyze the hints in terms of neutrino anomalous magnetic moments, minicharged particles, hidden photons and axion-like particles (ALPs). Among them, the ALP represents the best solution. Interestingly, the hinted ALP parameter space is accessible to the next generation proposed ALP searches, such as ALPS II and IAXO.

  18. The Diphoton Excess Inspired Electroweak Baryogenesis

    CERN Document Server

    Chao, Wei

    2016-01-01

    A resonance in the diphoton channel with invariant mass $m_{\\gamma \\gamma} = 750$ GeV was claimed by the ATLAS and CMS collaborations at the run-2 LHC with $\\sqrt{s}=13$ TeV. In this paper, we explain this diphoton excess as a pseudo-scalar singlet, which is produced at the LHC through gluon fusion with exotic scalar quarks running in the loop. We point out that the scalar singlet might trigger a two-step electroweak phase transition, which can be strongly first order. Assuming there are CP violations associated with the interactions of the scalar quarks, the model is found to be able to generate an adequate baryon asymmetry of the Universe through the electroweak baryogenesis mechanism. Constraints on the model are studied.

  19. Excess plutonium disposition using ALWR technology

    International Nuclear Information System (INIS)

    The Office of Nuclear Energy of the Department of Energy chartered the Plutonium Disposition Task Force in August 1992. The Task Force was created to assess the range of practicable means of disposition of excess weapons-grade plutonium. Within the Task Force, working groups were formed to consider: (1) storage, (2) disposal, and (3) fission options for this disposition, and a separate group to evaluate nonproliferation concerns of each of the alternatives. As a member of the Fission Working Group, the Savannah River Technology Center acted as a sponsor for light water reactor (LWR) technology. The information contained in this report details the submittal that was made to the Fission Working Group of the technical assessment of LWR technology for plutonium disposition. The following aspects were considered: (1) proliferation issues, (2) technical feasibility, (3) technical availability, (4) economics, (5) regulatory issues, and (6) political acceptance

  20. Short inter-pregnancy intervals, parity, excessive pregnancy weight gain and risk of maternal obesity.

    Science.gov (United States)

    Davis, Esa M; Babineau, Denise C; Wang, Xuelei; Zyzanski, Stephen; Abrams, Barbara; Bodnar, Lisa M; Horwitz, Ralph I

    2014-04-01

    To investigate the relationship among parity, the length of inter-pregnancy intervals, excessive pregnancy weight gain in the first pregnancy, and the risk of obesity. Using a prospective cohort study of 3,422 non-obese, non-pregnant US women aged 14-22 years at baseline, adjusted Cox models were used to estimate the association among parity, inter-pregnancy intervals, and excessive pregnancy weight gain in the first pregnancy and the relative hazard rate (HR) of obesity. Compared to nulliparous women, primiparous women with excessive pregnancy weight gain in the first pregnancy had an HR of obesity of 1.79 (95% CI 1.40, 2.29); no significant difference was seen between primiparous women without excessive pregnancy weight gain in the first pregnancy and nulliparous women. Among women with the same pregnancy weight gain in the first pregnancy and the same inter-pregnancy interval pattern (12-18 months or ≥18 months), the HR of obesity increased 2.43-fold (95% CI 1.21, 4.89; p = 0.01) for every additional short inter-pregnancy interval. Among women with the same parity and inter-pregnancy interval pattern, women with excessive pregnancy weight gain in the first pregnancy had an HR of obesity 2.41 times higher (95% CI 1.81, 3.21). Parity did not increase obesity risk unless the primiparous women had excessive pregnancy weight gain in the first pregnancy, in which case their risk of obesity was greater. Multiparous women with the same excessive pregnancy weight gain in the first pregnancy and at least one additional short inter-pregnancy interval had a significant risk of obesity after childbirth. Perinatal interventions that prevent excessive pregnancy weight gain in the first pregnancy or lengthen the inter-pregnancy interval are necessary for reducing maternal obesity.
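
    As an illustration of how hazard ratios and confidence intervals such as those above relate on the log scale, the following minimal Python sketch converts a Cox log-hazard coefficient and its standard error into an HR with a 95% CI. The numerical inputs are back-calculated from the reported 1.79 (95% CI 1.40, 2.29) interval for illustration; they are not taken from the study's data.

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Convert a Cox log-hazard coefficient and its standard error
    into a hazard ratio with a 95% confidence interval."""
    hr = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return hr, lo, hi

# Illustrative inputs chosen to reproduce the reported HR of 1.79 (95% CI 1.40, 2.29)
hr, lo, hi = hazard_ratio_ci(beta=0.582, se=0.1255)
print(round(hr, 2), round(lo, 2), round(hi, 2))
```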

  1. Laboratory Building for Accurate Determination of Plutonium

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also key to chemical measurement transfer and the basis of the nuclear material balance.

  2. Excessive anticoagulation with warfarin or phenprocoumon may have multiple causes

    DEFF Research Database (Denmark)

    Meegaard, Peter Martin; Holck, Line H V; Pottegård, Anton;

    2012-01-01

    Excessive anticoagulation with vitamin K antagonists is a serious condition with a substantial risk of an adverse outcome. We thus found it of interest to review a large case series to characterize the underlying causes of excessive anticoagulation....

  3. Can blind persons accurately assess body size from the voice?

    Science.gov (United States)

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  4. Excess Liquidity Control Requires a Multi-Pronged Approach

    Institute of Scientific and Technical Information of China (English)

    李晓西; 和晋予

    2007-01-01

    As China opens wider to the outside world, excess liquidity has become an outstanding issue in economic operations. This paper holds that China’s liquidity is relatively excessive at the present time, and such excess liquidity is a reflection of domestic and foreign economic contradictions on the monetary level. Excess liquidity will pose a potential hazard to macroeconomic operation, and a multi-pronged approach should be taken to curb and control its sources at home and abroad.

  5. Childhood maltreatment and the risk of pre-pregnancy obesity and excessive gestational weight gain.

    Science.gov (United States)

    Diesel, Jill C; Bodnar, Lisa M; Day, Nancy L; Larkby, Cynthia A

    2016-07-01

    The objective of this study was to estimate whether maternal history of childhood maltreatment was associated with pre-pregnancy obesity or excessive gestational weight gain. Pregnant women (n = 472) reported pre-pregnancy weight and height and gestational weight gain and were followed up to 16 years post-partum, when they reported maltreatment on the Childhood Trauma Questionnaire (CTQ). CTQ score ranged from no maltreatment (25) to severe maltreatment (125). Prenatal mental health modified the association between CTQ score and maternal weight. These findings suggest that maltreatment in childhood may contribute to pre-pregnancy obesity and excessive gestational weight gain. PMID:25138565

  6. Modelling and Identification of Oxygen Excess Ratio of Self-Humidified PEM Fuel Cell System

    Directory of Open Access Journals (Sweden)

    Edi Leksono

    2012-07-01

    Full Text Available One essential parameter in fuel cell operation is the oxygen excess ratio, which describes the ratio between the oxygen supplied to and consumed in the cathode. The oxygen excess ratio relates to fuel cell safety and lifetime. This paper describes the development of an air feed model and the calculation of the oxygen excess ratio in a commercial self-humidified PEM fuel cell system with 1 kW output power. The model was developed from measured data, which were limited to open-loop operation, in order to relate the oxygen excess ratio to the stack output current and the fan motor voltage. The identification yielded a fourth-order linear polynomial ARX model with 56.26% best fit (loss function = 0.0159, FPE = 0.0159) and a second-order nonlinear ARX model with 75 wavenet estimator units with 84.95% best fit (loss function = 0.0139). Linearization of the second-order ARX model yielded 78.18% best fit (loss function = 0.0009, FPE = 0.0009).
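
    A minimal sketch of the kind of linear ARX identification described above, using plain least squares and the MATLAB-style "best fit" (normalized RMSE) percentage. The simulated second-order system and its coefficients are illustrative assumptions, not the fuel-cell data or the authors' wavenet model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a second-order ARX system: y[k] = 1.2*y[k-1] - 0.5*y[k-2] + 0.8*u[k-1] + noise
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 1.2 * y[k-1] - 0.5 * y[k-2] + 0.8 * u[k-1] + 0.05 * rng.standard_normal()

# Build the regression matrix and estimate the parameters by least squares
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)

# One-step-ahead prediction and MATLAB-style "best fit" (NRMSE) percentage
y_hat = Phi @ theta
fit = 100.0 * (1.0 - np.linalg.norm(y[2:] - y_hat) / np.linalg.norm(y[2:] - y[2:].mean()))
print(theta.round(2), round(fit, 1))
```

The recovered coefficients approach (1.2, -0.5, 0.8) and the fit percentage is high because the simulated noise is small; real identification data, as in the paper, yield lower best-fit values.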

  7. A Systematic Search for Low-mass Field Stars with Large Infrared Excesses

    Science.gov (United States)

    Theissen, Christopher; West, Andrew A.

    2016-06-01

    We present a systematic search for low-mass field stars exhibiting extreme infrared (IR) excesses. One potential cause of the IR excess is the collision of terrestrial worlds. Our input stars are from the Motion Verified Red Stars (MoVeRS) catalog. Candidate stars are then selected based on large deviations (3σ) between their measured Wide-field Infrared Survey Explorer (WISE) 12 μm flux and their expected flux (as estimated from stellar models). We investigate the stellar mass and time dependence for stars showing extreme IR excesses, using photometric colors from the Sloan Digital Sky Survey (SDSS) and Galactic height as proxies for mass and time, respectively. Using a Galactic kinematic model, we estimate the completeness for our sample as a function of line-of-sight through the Galaxy, estimating the number of low-mass stars that should exhibit extreme IR excesses within a local volume. The potential for planetary collisions to occur over a large range of stellar masses and ages has serious implications for the habitability of planetary systems around low-mass stars.
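
    The 3σ selection described above can be sketched as a significance of excess, with observational and model uncertainties added in quadrature. The fluxes below are hypothetical, and the actual WISE selection criteria may differ in detail.

```python
import numpy as np

def excess_significance(f_obs, sigma_obs, f_model, sigma_model):
    """Significance (in sigma) of a measured flux above the model prediction,
    with the two uncertainties added in quadrature."""
    return (f_obs - f_model) / np.hypot(sigma_obs, sigma_model)

# Hypothetical 12-micron fluxes (mJy): observed vs. stellar photosphere model
f_obs = np.array([12.0, 5.1, 30.0])
s_obs = np.array([0.5, 0.4, 1.0])
f_mod = np.array([10.0, 5.0, 20.0])
s_mod = np.array([0.3, 0.3, 1.5])

sig = excess_significance(f_obs, s_obs, f_mod, s_mod)
candidates = sig > 3.0   # keep stars deviating by more than 3 sigma
print(sig.round(1), candidates)
```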

  8. 30 CFR 75.401-1 - Excessive amounts of dust.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Excessive amounts of dust. 75.401-1 Section 75... § 75.401-1 Excessive amounts of dust. The term “excessive amounts of dust” means coal and float coal dust in the air in such amounts as to create the potential of an explosion hazard....

  9. 46 CFR 154.546 - Excess flow valve: Closing flow.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Excess flow valve: Closing flow. 154.546 Section 154.546... and Process Piping Systems § 154.546 Excess flow valve: Closing flow. (a) The rated closing flow of vapor or liquid cargo for an excess flow valve must be specially approved by the Commandant (CG-522)....

  10. The LHC diphoton excess as a W-ball

    CERN Document Server

    Arbuzov, B A

    2016-01-01

    We consider a possibility of the 750 GeV diphoton excess at the LHC to correspond to heavy $WW$ zero spin resonance. The resonance appears due to the would-be anomalous triple interaction of the weak bosons, which is defined by well-known coupling constant $\\lambda$. The $\\gamma\\gamma\\,\\,750\\, GeV$ anomaly may correspond to weak isotopic spin 0 pseudoscalar state. We obtain estimates for the effect, which qualitatively agree with ATLAS data. Effects are predicted in a production of $W^+ W^-, (Z,\\gamma) (Z,\\gamma)$ via resonance $X_{PS}$ with $M_{PS} \\simeq 750\\,GeV$, which could be reliably checked at the upgraded LHC at $\\sqrt{s}\\,=\\,13\\, TeV$. In the framework of an approach to the spontaneous generation of the triple anomalous interaction its coupling constant is estimated to be $\\lambda = -\\,0.020\\pm 0.005$ in an agreement with existing restrictions. Specific prediction of the hypothesis is the significant effect in decay channel $X_{PS} \\to \\gamma\\,l^+\\,l^-\\,(l = e,\\,\\mu)$, which branching ratio occurs t...

  11. Estimating Cosmological Parameter Covariance

    CERN Document Server

    Taylor, Andy

    2014-01-01

    We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
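
    The bias discussed above can be illustrated numerically for sampled data covariances: for Gaussian data, the inverse of the (unbiased) sample covariance overestimates the true inverse by a factor (n-1)/(n-p-2), so multiplying by the reciprocal factor (often called the Hartlap factor) debiases it. This is a generic illustration of the Wishart result, not necessarily the specific estimator the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n, trials = 2, 10, 5000
true_cov = np.array([[2.0, 0.6], [0.6, 1.0]])
true_inv = np.linalg.inv(true_cov)
L = np.linalg.cholesky(true_cov)

acc = np.zeros((p, p))
for _ in range(trials):
    x = (L @ rng.standard_normal((p, n))).T   # n Gaussian p-dimensional samples
    s = np.cov(x, rowvar=False)               # unbiased sample covariance
    acc += np.linalg.inv(s)
mean_inv = acc / trials

# E[S^-1] = (n-1)/(n-p-2) * Sigma^-1, so this factor debiases the inverse
hartlap = (n - p - 2) / (n - 1)
corrected = hartlap * mean_inv
print(np.round(corrected, 2))
print(np.round(true_inv, 2))
```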

  12. A Comprehensive Census of Nearby Infrared Excess Stars

    Science.gov (United States)

    Cotten, Tara H.; Song, Inseok

    2016-07-01

    The conclusion of the Wide-Field Infrared Survey Explorer (WISE) mission presents an opportune time to summarize the history of using excess emission in the infrared as a tracer of circumstellar material and exploit all available data for future missions such as the James Webb Space Telescope. We have compiled a catalog of infrared excess stars from peer-reviewed articles and perform an extensive search for new infrared excess stars by cross-correlating the Tycho-2 and all-sky WISE (AllWISE) catalogs. We define a significance of excess in four spectral type divisions and select stars showing greater than either 3σ or 5σ significance of excess in the mid- and far-infrared. Through procedures including spectral energy distribution fitting and various image analyses, each potential excess source was rigorously vetted to eliminate false positives. The infrared excess stars from the literature and the new stars found through the Tycho-2 and AllWISE cross-correlation produced nearly 500 “Prime” infrared excess stars, of which 74 are new sources of excess, and >1200 “Reserved” stars, of which 950 are new sources of excess. The stars in the main catalog are nearby, bright, and either demonstrate excess in more than one passband or have infrared spectroscopy confirming the infrared excess. This study identifies stars that display a spectral energy distribution suggestive of a secondary or post-protoplanetary generation of dust, making them ideal targets for future optical and infrared imaging observations. The final catalogs summarize the past work using infrared excess to detect dust disks, and with the most extensive compilation of infrared excess stars (˜1750) to date, we investigate various relationships among stellar and disk parameters.

  13. Excessive daytime sleepiness (Sonolência excessiva)

    Directory of Open Access Journals (Sweden)

    Lia Rita Azeredo Bittencourt

    2005-05-01

    Full Text Available Sleepiness is a biological function, defined as an increased probability of falling asleep. Excessive sleepiness (ES), or hypersomnia, by contrast, refers to an abnormally increased propensity to sleep, with a subjective compulsion to fall asleep, involuntary napping, and sleep attacks when sleep is inappropriate. The main causes of excessive sleepiness are chronic sleep deprivation (insufficient sleep), obstructive sleep apnoea-hypopnoea syndrome (OSAHS), narcolepsy, restless legs syndrome/periodic limb movements (RLS/PLM), circadian rhythm disorders, use of drugs and medications, and idiopathic hypersomnia. The main consequences are impaired performance at school and work, strained family and social relationships, neuropsychological and cognitive alterations, and an increased risk of accidents. Treatment of excessive sleepiness should be directed at its specific causes: increased sleep time and sleep hygiene for voluntary sleep deprivation, CPAP (Continuous Positive Airway Pressure) for obstructive sleep apnoea-hypopnoea syndrome, exercise and dopaminergic agents for restless legs syndrome/periodic limb movements, phototherapy and melatonin for circadian rhythm disorders, withdrawal of drugs that cause excessive sleepiness, and wakefulness-promoting stimulants.

  14. Mechanisms for Reduced Excess Sludge Production in the Cannibal Process.

    Science.gov (United States)

    Labelle, Marc-André; Dold, Peter L; Comeau, Yves

    2015-08-01

    Reducing excess sludge production is increasingly attractive as a result of rising costs and constraints with respect to sludge treatment and disposal. A technology in which the mechanisms remain not well understood is the Cannibal process, for which very low sludge yields have been reported. The objective of this work was to use modeling as a means to characterize excess sludge production at a full-scale Cannibal facility by providing a long sludge retention time and removing trash and grit by physical processes. The facility was characterized by using its historical data, from discussion with the staff and by conducting a sampling campaign to prepare a solids inventory and an overall mass balance. At the evaluated sludge retention time of 400 days, the sum of the daily loss of suspended solids to the effluent and of the waste activated sludge solids contributed approximately equally to the sum of solids that are wasted daily as trash and grit from the solids separation module. The overall sludge production was estimated to be 0.14 g total suspended solids produced/g chemical oxygen demand removed. The essential functions of the Cannibal process for the reduction of sludge production appear to be to remove trash and grit from the sludge by physical processes of microscreening and hydrocycloning, respectively, and to provide a long sludge retention time, which allows the slow degradation of the "unbiodegradable" influent particulate organics (XU,Inf) and the endogenous residue (XE). The high energy demand of 1.6 kWh/m³ of treated wastewater at the studied facility limits the niche of the Cannibal process to small- to medium-sized facilities in which sludge disposal costs are high but electricity costs are low.

  16. STUDY OF MOLECULAR INTERACTIONS IN BINARY MIXTURES USING EXCESS PARAMETERS

    OpenAIRE

    Narendra Kolla

    2014-01-01

    Speeds of sound, densities and viscosities of the binary mixture of anisaldehyde with nonanol were measured over the entire mole fraction range at (303.15, 308.15, 313.15 and 318.15) K and normal atmospheric pressure. The excess molar volume, $V_m^E$, excess internal pressure, $\pi^E$, excess enthalpy, $H^E$, excess Gibbs free energy of activation for viscous flow, $G^{*E}$, and excess viscosity, $\eta^E$, have been calculated using the experimental data. The $V_m^E$ values are positive whereas ...
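
    The excess molar volume quoted above is the deviation of the mixture's molar volume from the ideal mole-fraction average, computed from measured densities. The densities and equimolar composition below are hypothetical values for illustration, not the paper's measurements.

```python
def excess_molar_volume(x1, M1, M2, rho_mix, rho1, rho2):
    """V_m^E (cm^3/mol) of a binary mixture from densities (g/cm^3):
    V_m^E = (x1*M1 + x2*M2)/rho_mix - x1*M1/rho1 - x2*M2/rho2."""
    x2 = 1.0 - x1
    return (x1 * M1 + x2 * M2) / rho_mix - x1 * M1 / rho1 - x2 * M2 / rho2

# Hypothetical equimolar anisaldehyde (M = 136.15 g/mol) + 1-nonanol (M = 144.25 g/mol);
# the mixture density 0.940 g/cm^3 is an assumed value giving a positive V_m^E
v_e = excess_molar_volume(0.5, 136.15, 144.25, rho_mix=0.940, rho1=1.119, rho2=0.828)
print(round(v_e, 2))
```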

  17. ATLAS Z Excess in Minimal Supersymmetric Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Xiaochuan [California Univ., Berkeley, CA (United States). Berkeley Center for Theoretical Physics; California Univ., Berkeley, CA (United States). Lawrence Berkeley National Laboratory; Shirai, Satoshi [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Terada, Takahiro [Tokyo Univ. (Japan). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2015-06-15

    Recently the ATLAS collaboration reported a 3 sigma excess in the search for the events containing a dilepton pair from a Z boson and large missing transverse energy. Although the excess is not sufficiently significant yet, it is quite tempting to explain this excess by a well-motivated model beyond the standard model. In this paper we study a possibility of the minimal supersymmetric standard model (MSSM) for this excess. Especially, we focus on the MSSM spectrum where the sfermions are heavier than the gauginos and Higgsinos. We show that the excess can be explained by the reasonable MSSM mass spectrum.

  18. 75 FR 30846 - Monthly Report of Excess Income and Annual Report of Uses of Excess Income (Correction)

    Science.gov (United States)

    2010-06-02

    ... URBAN DEVELOPMENT Monthly Report of Excess Income and Annual Report of Uses of Excess Income (Correction... comments on the subject proposal. Project owners are permitted to retain Excess Income for projects under... Income. The request must be submitted at least 90 days before the beginning of each fiscal year, or...

  19. Invariant Image Watermarking Using Accurate Zernike Moments

    Directory of Open Access Journals (Sweden)

    Ismail A. Ismail

    2010-01-01

    Full Text Available Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used directly as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometric and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.

  20. Misperceived pre-pregnancy body weight status predicts excessive gestational weight gain: findings from a US cohort study

    Directory of Open Access Journals (Sweden)

    Rifas-Shiman Sheryl L

    2008-12-01

    Full Text Available Abstract Background Excessive gestational weight gain promotes poor maternal and child health outcomes. Weight misperception is associated with weight gain in non-pregnant women, but no data exist during pregnancy. The purpose of this study was to examine the association of misperceived pre-pregnancy body weight status with excessive gestational weight gain. Methods At study enrollment, participants in Project Viva reported weight, height, and perceived body weight status by questionnaire. Our study sample comprised 1537 women who had either normal or overweight/obese pre-pregnancy BMI. We created 2 categories of pre-pregnancy body weight status misperception: normal weight women who identified themselves as overweight ('overassessors') and overweight/obese women who identified themselves as average or underweight ('underassessors'). Women who correctly perceived their body weight status were classified as either normal weight or overweight/obese accurate assessors. We performed multivariable logistic regression to determine the odds of excessive gestational weight gain according to 1990 Institute of Medicine guidelines. Results Of the 1029 women with normal pre-pregnancy BMI, 898 (87% accurately perceived and 131 (13% overassessed their weight status. 508 women were overweight/obese, of whom 438 (86% accurately perceived and 70 (14% underassessed their pre-pregnancy weight status. By the end of pregnancy, 823 women (54% gained excessively. Compared with normal weight accurate assessors, the adjusted odds ratio of excessive gestational weight gain was 2.0 (95% confidence interval [CI]: 1.3, 3.0 in normal weight overassessors, 2.9 (95% CI: 2.2, 3.9 in overweight/obese accurate assessors, and 7.6 (95% CI: 3.4, 17.0 in overweight/obese underassessors. Conclusion Misperceived pre-pregnancy body weight status was associated with excessive gestational weight gain among both normal weight and overweight/obese women, with the greatest likelihood of excessive

  1. On the Fluctuation Induced Excess Conductivity in Stainless Steel Sheathed MgB2 Tapes

    Directory of Open Access Journals (Sweden)

    Suchitra Rajput

    2013-01-01

    Full Text Available We report analyses of the fluctuation-induced excess conductivity in the resistivity-temperature (ρ-T) behavior of in situ prepared MgB2 tapes. Scaling functions for critical fluctuations are employed to investigate the excess conductivity of these tapes around the transition. Two scaling models for excess conductivity in the absence of a magnetic field, namely the Aslamazov-Larkin model and the Lawrence-Doniach model, have been employed for the study. Fitting the experimental ρ-T data with these models indicates the three-dimensional nature of conduction of the carriers, as opposed to the 2D character exhibited by the HTSCs. The amplitude of the coherence length estimated from the fitted model is ~21 Å.
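
    In Aslamazov-Larkin-type scaling the excess conductivity diverges as Δσ ∝ ε^(-λ) with ε = (T - Tc)/Tc, where λ = 1/2 signals 3D and λ = 1 signals 2D fluctuations. The sketch below recovers the exponent from a log-log slope on synthetic, noiseless data; the Tc and amplitude are illustrative values, not the tapes' measurements.

```python
import numpy as np

Tc = 39.0                       # K, near MgB2's transition (illustrative)
T = np.linspace(39.5, 43.0, 50)
eps = (T - Tc) / Tc             # reduced temperature

# Synthetic 3D Aslamazov-Larkin excess conductivity: delta_sigma ~ eps^(-1/2)
delta_sigma = 2.0e4 * eps ** -0.5

# Recover the exponent from the slope of log(delta_sigma) vs log(eps)
lam, _ = np.polyfit(np.log(eps), np.log(delta_sigma), 1)
dim = "3D" if abs(lam + 0.5) < abs(lam + 1.0) else "2D"
print(round(-lam, 2), dim)
```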

  2. Zirconia ceramics for excess weapons plutonium waste

    Science.gov (United States)

    Gong, W. L.; Lutze, W.; Ewing, R. C.

    2000-01-01

    We synthesized a zirconia (ZrO2)-based single-phase ceramic containing simulated excess weapons plutonium waste. ZrO2 has a large solubility for other metallic oxides. More than 20 binary systems AxOy-ZrO2 have been reported in the literature, including PuO2, rare-earth oxides, and oxides of metals contained in weapons plutonium wastes. We show that significant amounts of gadolinium (a neutron absorber) and yttrium (an additional stabilizer of the cubic modification) can be dissolved in ZrO2, together with plutonium (simulated by Ce4+, U4+ or Th4+) and impurities (e.g., Ca, Mg, Fe, Si). Sol-gel and powder methods were applied to make homogeneous, single-phase zirconia solid solutions. Pu waste impurities were completely dissolved in the solid solutions. In contrast to other phases, e.g., zirconolite and pyrochlore, zirconia is extremely radiation resistant and does not undergo amorphization. Baddeleyite (ZrO2) is suggested as the natural analogue to study the long-term radiation resistance and chemical durability of zirconia-based waste forms.

  3. Diboson Excess from a New Strong Force

    CERN Document Server

    Georgi, Howard

    2016-01-01

    We explore a "partial unification" model that could explain the diphoton event excess around $750 \\, \\rm GeV$ recently reported by the LHC experiments. A new strong gauge group is combined with the ordinary color and hypercharge gauge groups. The VEV responsible for the combination is of the order of the $SU(2)\\times U(1)$ breaking scale, but the coupling of the new physics to standard model particles is suppressed by the strong interaction of the new gauge group. This simple extension of the standard model has a rich phenomenology, including composite particles of the new confining gauge interaction, a coloron and a $Z'$ which are rather weakly coupled to standard model particles, and massive vector bosons charged under both the ordinary color and hypercharge gauge groups and the new strong gauge group. The new scalar glueball could have mass of around $750 \\, \\rm GeV$, be produced by gluon fusion and decay into two photons, both through loops of the new massive vector bosons. The simplest version of the mod...

  4. What controls deuterium excess in global precipitation?

    Directory of Open Access Journals (Sweden)

    S. Pfahl

    2014-04-01

    Full Text Available The deuterium excess (d) of precipitation is widely used in the reconstruction of past climatic changes from ice cores. However, its most common interpretation as moisture source temperature cannot directly be inferred from present-day water isotope observations. Here, we use a new empirical relation between d and near-surface relative humidity (RH) together with reanalysis data to globally predict d of surface evaporation from the ocean. The very good quantitative agreement of the predicted hemispherically averaged seasonal cycle with observed d in precipitation indicates that moisture source relative humidity, and not sea surface temperature, is the main driver of d variability on seasonal timescales. Furthermore, we review arguments for an interpretation of long-term palaeoclimatic d changes in terms of moisture source temperature, and we conclude that there remains no sufficient evidence that would justify neglecting the influence of RH on such palaeoclimatic d variations. Hence, we suggest that either the interpretation of d variations in palaeorecords should be adapted to reflect climatic influences on RH during evaporation, in particular atmospheric circulation changes, or new arguments for an interpretation in terms of moisture source temperature will have to be provided based on future research.
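
    The deuterium excess used throughout is the standard Dansgaard definition, d = δD - 8·δ¹⁸O (in per mil), so that samples on the Global Meteoric Water Line (δD = 8·δ¹⁸O + 10) have d = 10:

```python
def deuterium_excess(delta_D, delta_18O):
    """Deuterium excess (per mil): d = deltaD - 8 * delta18O."""
    return delta_D - 8.0 * delta_18O

# A sample on the Global Meteoric Water Line, e.g. deltaD = -70, delta18O = -10
print(deuterium_excess(-70.0, -10.0))
```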

  5. An assessment of planting flexibility options to reduce the excessive application of nitrogen fertilizer in the United States of America

    OpenAIRE

    W-Y Huang; N D Uri

    1992-01-01

    The analysis in this paper is directed at estimating the marginal value of a base acre, the nitrogen rate of fertilizer use, the corn yield, and the excess nitrogen fertilizer application rate under alternative policy options designed to encourage planting flexibility in response to changing relative agricultural commodity prices. Encouragement of planting flexibility via an option of detaching deficiency payments from the base acreage is an effective way to reduce the excessive application r...

  6. Nathan Field's theatre of excess: youth culture and bodily excess on the early modern stage

    OpenAIRE

    Orman, S.

    2014-01-01

    This dissertation argues for the reappraisal of Jacobean boy actors by acknowledging their status as youths. Focussing on the repertory of The Children of the Queen’s Revels and using the acting and playwriting career of Nathan Field as an extensive case-study, it argues, via an investigation into cultural and theatrical bodily excess, that the theatre was a profoundly significant space in which youth culture was shaped and problematised. In defining youth culture as a space for the assertion...

  7. Accurate characterisation of post moulding shrinkage of polymer parts

    DEFF Research Database (Denmark)

    Neves, L. C.; De Chiffre, L.; González-Madruga, D.;

    2015-01-01

    The work deals with experimental determination of the shrinkage of polymer parts after injection moulding. A fixture for length measurements on 8 parts at the same time was designed and manufactured in Invar, mounted with 8 electronic gauges, and provided with 3 temperature sensors. The fixture...... were compensated with respect to the effect from temperature variations during the measurements. Prediction of the length after stabilisation was carried out by fitting data at different stages of shrinkage. Uncertainty estimations were carried out and a procedure for the accurate characterisation...... of post moulding shrinkage of polymer parts was developed. Expanded uncertainties (k=2) of 3 μm were obtained....
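
    Temperature compensation of length measurements of the kind described above is conventionally done with the linear thermal-expansion model; the coefficient of thermal expansion below is a nominal illustrative value for a polymer, not the paper's calibration data.

```python
def length_at_20C(measured_length_mm, temp_C, alpha_per_K):
    """Correct a length reading to the 20 degC reference temperature using
    the linear thermal-expansion model: L20 = L / (1 + alpha * (T - 20))."""
    return measured_length_mm / (1.0 + alpha_per_K * (temp_C - 20.0))

# Illustrative: a nominally 100 mm polymer part (alpha ~ 7e-5 /K, assumed) read at 23 degC
alpha_polymer = 7e-5
L20 = length_at_20C(100.021, 23.0, alpha_polymer)
print(round(L20, 3))
```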

  8. Imaging tests for accurate diagnosis of acute biliary pancreatitis

    DEFF Research Database (Denmark)

    Surlin, Valeriu; Săftoiu, Adrian; Dumitrescu, Daniela

    2014-01-01

    Gallstones represent the most frequent aetiology of acute pancreatitis in many statistics all over the world, estimated between 40%-60%. Accurate diagnosis of acute biliary pancreatitis (ABP) is of outmost importance because clearance of lithiasis [gallbladder and common bile duct (CBD)] rules out...... for the intraoperative diagnosis of choledocholithiasis. Routine exploration of the CBD in cases of patients scheduled for cholecystectomy after an attack of ABP was not proven useful. A significant rate of the so-called idiopathic pancreatitis is actually caused by microlithiasis and/or biliary sludge. In conclusion...

  9. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  10. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacturing (LOM) applications. An improvement in contour accuracy is achieved by introducing a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. The ZPETC (Zero Phase Error Tracking Controller) is used to eliminate single-axis following error and thus reduce the contour error. A simulation was developed in a Matlab model based on a retrofitted LOM machine, and satisfactory results were obtained.

  11. Estimating Cloud Cover

    Science.gov (United States)

    Moseley, Christine

    2007-01-01

    The purpose of this activity was to help students understand the percentage of cloud cover and make more accurate cloud cover observations. Students estimated the percentage of cloud cover represented by simulated clouds and assigned a cloud cover classification to those simulations. (Contains 2 notes and 3 tables.)

  12. Excess molar volumes for CO{sub 2}-CH{sub 4}-N{sub 2} mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, J.C. (Oak Ridge National Lab., TN (United States) Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States). Dept. of Geological Sciences); Blencoe, J.G.; Joyce, D.B. (Oak Ridge National Lab., TN (United States)); Bodnar, R.J. (Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States). Dept. of Geological Sciences)

    1992-01-01

    Vibrating-tube densimetry experiments are being performed to determine the excess molar volumes of single-phase CO{sub 2}-CH{sub 4}-N{sub 2} gas mixtures at pressures as high as 3500 bars and temperatures up to 500{degrees}C. In our initial experiments, we determined the P-V-T properties of: (1) CO{sub 2}-CH{sub 4}, CO{sub 2}-N{sub 2}, CH{sub 4}-N{sub 2}, and CO{sub 2}-CH{sub 4}-N{sub 2} mixtures at 1000 bars and 50{degrees}C; and (2) CO{sub 2}-CH{sub 4} mixtures from 100 to 1000 bars at 100{degrees}C. Excess molar volumes in the binary subsystems are very accurately represented by two-parameter Margules equations. Experimentally determined excess molar volumes are in fair to poor agreement with predictions from published equations of state. Geometric projection techniques based on binary system data yield calculated excess molar volumes for CO{sub 2}-CH{sub 4}-N{sub 2} mixtures that are in good agreement with our experimental data. 7 refs., 8 figs.
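
    The abstract states that binary excess molar volumes are well represented by two-parameter Margules equations. A minimal sketch of that functional form follows; the interaction parameters used here are hypothetical placeholders, not the paper's fitted values:

    ```python
    def excess_molar_volume(x1, A12, A21):
        """Two-parameter Margules form for a binary mixture:
        V_E = x1 * x2 * (A21 * x1 + A12 * x2), with x2 = 1 - x1.
        A12, A21 are fitted interaction parameters (e.g. cm^3/mol)."""
        x2 = 1.0 - x1
        return x1 * x2 * (A21 * x1 + A12 * x2)

    # Hypothetical parameters for illustration only; V_E vanishes at the
    # pure end-members (x1 = 0 or 1) as the form requires.
    print(excess_molar_volume(0.5, 2.0, 1.0))  # → 0.375
    ```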

  13. Observation-based global biospheric excess radiocarbon inventory 1963-2005

    Science.gov (United States)

    Naegler, Tobias; Levin, Ingeborg

    2009-09-01

    For the very first time, we present an observation-based estimate of the temporal development of the biospheric excess radiocarbon (14C) inventory IB14,E, i.e., the change in the biospheric 14C inventory relative to prebomb times (1940s). IB14,E was calculated for the period 1963-2005 with a simple budget approach as the difference between the accumulated excess 14C production by atmospheric nuclear bomb tests and the nuclear industry and observation-based reconstructions of the excess 14C inventories in the atmosphere and the ocean. IB14,E increased from the late 1950s onward to maximum values between 126 and 177 × 1026 atoms 14C between 1981 and 1985. In the early 1980s, the biosphere turned from a sink to a source of excess 14C. Consequently, IB14,E decreased to values of 108-167 × 1026 atoms 14C in 2005. The uncertainty of IB14,E is dominated by uncertainties in the total bomb 14C production and the oceanic excess 14C inventory. Unfortunately, atmospheric Δ14CO2 observations from the early 1980s lack the necessary precision to reveal the expected small change in the amplitude and phase of the atmospheric Δ14C seasonal cycle due to the sign flip in the biospheric net 14C flux during that time.
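
    The budget approach described above is a one-line balance: biospheric excess inventory equals cumulative production minus the observed atmospheric and oceanic excess inventories. A sketch with made-up illustrative numbers (units of 10^26 atoms 14C), not the paper's data:

    ```python
    def biospheric_excess_inventory(produced_cum, atmos_excess, ocean_excess):
        """Budget approach from the abstract: the biospheric excess 14C
        inventory is the cumulative bomb/industry 14C production minus
        the reconstructed atmospheric and oceanic excess inventories."""
        return produced_cum - atmos_excess - ocean_excess

    # Illustrative values only (1e26 atoms 14C)
    print(biospheric_excess_inventory(600.0, 200.0, 250.0))  # → 150.0
    ```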

  14. Child mortality estimation: consistency of under-five mortality rate estimates using full birth histories and summary birth histories.

    Directory of Open Access Journals (Sweden)

    Romesh Silva

    Full Text Available BACKGROUND: Given the lack of complete vital registration data in most developing countries, for many countries it is not possible to accurately estimate under-five mortality rates from vital registration systems. Heavy reliance is often placed on direct and indirect methods for analyzing data collected from birth histories to estimate under-five mortality rates. Yet few systematic comparisons of these methods have been undertaken. This paper investigates whether analysts should use both direct and indirect estimates from full birth histories, and under what circumstances indirect estimates derived from summary birth histories should be used. METHODS AND FINDINGS: Using Demographic and Health Surveys data from West Africa, East Africa, Latin America, and South/Southeast Asia, I quantify the differences between direct and indirect estimates of under-five mortality rates, analyze data quality issues, note the relative effects of these issues, and test whether these issues explain the observed differences. I find that indirect estimates are generally consistent with direct estimates, after adjustment for fertility change and birth transference, but do not add substantial insight beyond direct estimates. However, choice of direct or indirect method was found to be important in terms of both the adjustment for data errors and the assumptions made about fertility. CONCLUSIONS: Although adjusted indirect estimates are generally consistent with adjusted direct estimates, some notable inconsistencies were observed for countries that had experienced either a political or economic crisis or stalled health transition in their recent past. This result suggests that when a population has experienced a smooth mortality decline or only short periods of excess mortality, both adjusted methods perform equally well.
However, the observed inconsistencies identified suggest that the indirect method is particularly prone to bias resulting from violations

  15. Kinetic model of excess activated sludge thermohydrolysis.

    Science.gov (United States)

    Imbierowicz, Mirosław; Chacuk, Andrzej

    2012-11-01

    Thermal hydrolysis of excess activated sludge suspensions was carried out at temperatures ranging from 423 K to 523 K and under pressures of 0.2-4.0 MPa. Changes of total organic carbon (TOC) concentration in the solid and liquid phases were measured during these studies. At the temperature 423 K, after 2 h of the process, TOC concentration in the reaction mixture decreased by 15-18% of the initial value. At 473 K total organic carbon removal from the activated sludge suspension increased to 30%. It was also found that the solubilisation of particulate organic matter strongly depended on the process temperature. At 423 K the transfer of TOC from solid particles into the liquid phase after 1 h of the process reached 25% of the initial value; however, at the temperature of 523 K the conversion degree of 'solid' TOC attained 50% just after 15 min of the process. In the article a lumped kinetic model of the process of activated sludge thermohydrolysis has been proposed. It was assumed that during heating of the activated sludge suspension to a temperature in the range of 423-523 K two parallel reactions occurred. One, connected with thermal destruction of activated sludge particles, caused solubilisation of organic carbon and an increase of dissolved organic carbon concentration in the liquid phase (hydrolysate). The parallel reaction led to a new kind of insoluble solid phase, which was further decomposed into gaseous products (CO(2)). The collected experimental data were used to identify unknown parameters of the model, i.e. activation energies and pre-exponential factors of elementary reactions. The mathematical model of activated sludge thermohydrolysis appropriately describes the kinetics of reactions occurring in the studied system. PMID:22951329
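
    The lumped model described above (two parallel first-order pathways whose rate constants follow Arrhenius temperature dependence) can be sketched as follows; the rate parameters here are placeholders, not the paper's fitted activation energies or pre-exponential factors:

    ```python
    import math

    def arrhenius(k0, Ea, T, R=8.314):
        """Arrhenius rate constant: k = k0 * exp(-Ea / (R * T)).
        k0: pre-exponential factor, Ea: activation energy (J/mol),
        T: temperature (K)."""
        return k0 * math.exp(-Ea / (R * T))

    def solid_toc_fraction(t, k1, k2):
        """Remaining fraction of 'solid' TOC when two parallel
        first-order reactions consume it: solubilisation (rate k1)
        and conversion to a secondary solid phase (rate k2)."""
        return math.exp(-(k1 + k2) * t)

    # With placeholder rates k1 + k2 = 1.0 h^-1, half the solid TOC
    # remains after t = ln(2) hours.
    print(round(solid_toc_fraction(math.log(2), 0.6, 0.4), 3))  # → 0.5
    ```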

  16. Factors influencing excessive daytime sleepiness in adolescents

    Directory of Open Access Journals (Sweden)

    Thiago de Souza Vilela

    2016-04-01

    Full Text Available Abstract Objective: Sleep deprivation in adolescents has lately become a health issue that tends to increase with higher stress prevalence, extenuating routines, and new technological devices that impair adolescents' bedtime. Therefore, this study aimed to assess the excessive sleepiness frequency and the factors that might be associated with it in this population. Methods: The cross-sectional study analyzed 531 adolescents aged 10–18 years old from two private schools and one public school. Five questionnaires were applied: the Cleveland Adolescent Sleepiness Questionnaire; the Sleep Disturbance Scale for Children; the Brazilian Economic Classification Criteria; the General Health and Sexual Maturation Questionnaire; and the Physical Activity Questionnaire. The statistical analyses were based on comparisons between schools and sleepiness and non-sleepiness groups, using linear correlation and logistic regression. Results: Sleep deprivation was present in 39% of the adolescents; sleep deficit was higher in private school adolescents (p < 0.001), and there was a positive correlation between age and sleep deficit (p < 0.001; r = 0.337). Logistic regression showed that older age (p = 0.002; PR: 1.21 [CI: 1.07–1.36]) and higher score level for sleep hyperhidrosis in the sleep disturbance scale (p = 0.02; PR: 1.16 [CI: 1.02–1.32]) were risk factors for worse degree of sleepiness. Conclusions: Sleep deficit appears to be a reality among adolescents; the results suggest a higher prevalence in students from private schools. Sleep deprivation is associated with older age in adolescents and possible presence of sleep disorders, such as sleep hyperhidrosis.

  17. Implication of zinc excess on soil health.

    Science.gov (United States)

    Wyszkowska, Jadwiga; Boros-Lajszner, Edyta; Borowik, Agata; Baćmaga, Małgorzata; Kucharski, Jan; Tomkiel, Monika

    2016-01-01

    This study was undertaken to evaluate zinc's influence on the resistance of organotrophic bacteria, actinomyces, fungi, dehydrogenases, catalase and urease. The experiment was conducted in a greenhouse of the University of Warmia and Mazury (UWM) in Olsztyn, Poland. Plastic pots were filled with 3 kg of sandy loam with pHKCl - 7.0 each. The experimental variables were: zinc applied to soil at six doses: 100, 300, 600, 1,200, 2,400 and 4,800 mg of Zn(2+) kg(-1) in the form of ZnCl2 (zinc chloride), and species of plant: oat (Avena sativa L.) cv. Chwat and white mustard (Sinapis alba) cv. Rota. Soil without the addition of zinc served as the control. During the growing season, soil samples were subjected to microbiological analyses on experimental days 25 and 50 to determine the abundance of organotrophic bacteria, actinomyces and fungi, and the activity of dehydrogenases, catalase and urease, which provided a basis for determining the soil resistance index (RS). The physicochemical properties of soil were determined after harvest. The results of this study indicate that excessive concentrations of zinc have an adverse impact on microbial growth and the activity of soil enzymes. The resistance of organotrophic bacteria, actinomyces, fungi, dehydrogenases, catalase and urease decreased with an increase in the degree of soil contamination with zinc. Dehydrogenases were most sensitive and urease was least sensitive to soil contamination with zinc. Zinc also exerted an adverse influence on the physicochemical properties of soil and plant development. The growth of oat and white mustard plants was almost completely inhibited in response to the highest zinc doses of 2,400 and 4,800 mg Zn(2+) kg(-1).

  18. Accurate atomic data for industrial plasma applications

    Energy Technology Data Exchange (ETDEWEB)

    Griesmann, U.; Bridges, J.M.; Roberts, J.R.; Wiese, W.L.; Fuhr, J.R. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)

    1997-12-31

    Reliable branching fraction, transition probability and transition wavelength data for radiative dipole transitions of atoms and ions in plasma are important in many industrial applications. Optical plasma diagnostics and modeling of the radiation transport in electrical discharge plasmas (e.g. in electrical lighting) depend on accurate basic atomic data. NIST has an ongoing experimental research program to provide accurate atomic data for radiative transitions. The new NIST UV-vis-IR high resolution Fourier transform spectrometer has become an excellent tool for accurate and efficient measurements of numerous transition wavelengths and branching fractions in a wide wavelength range. Recently, the authors have also begun to employ photon counting techniques for very accurate measurements of branching fractions of weaker spectral lines with the intent to improve the overall accuracy for experimental branching fractions to better than 5%. They have now completed their studies of transition probabilities of Ne I and Ne II. The results agree well with recent calculations and for the first time provide reliable transition probabilities for many weak intercombination lines.

  19. Water coordination structures and the excess free energy of the liquid

    CERN Document Server

    Merchant, Safir; Asthagiri, D

    2011-01-01

    For a distinguished water molecule, the solute water, we assess the contribution of each coordination state to its excess chemical potential, using a molecular aufbau approach. In this approach, we define a coordination sphere, the inner-shell, and separate the excess chemical potential into packing, outer-shell, and local chemical contributions; the coordination state is defined by the number of solvent water molecules within the coordination sphere. The packing term accounts for the free energy of creating a solute-free coordination sphere in the liquid. The outer-shell term accounts for the interaction of the solute with the fluid outside the coordination sphere and it is accurately described by a Gaussian model of hydration for coordination radii greater than the minimum of the oxygen-oxygen pair correlation function. Consistent with the conventional radial cut-off used for defining hydrogen-bonds in liquid water, theory helps identify a chemically meaningful coordination radius. The local chemical contri...

  20. Radiation. A buzz word for excessive fears

    International Nuclear Information System (INIS)

    The necessity of accepting that risk is an inherent part of daily life and also of acquiring a sense of perspective with respect to such risks, especially with respect to radiation, is discussed. Estimations of radiation risks are examined and compared to other risk factors such as overweight and cigarette smoking. It is stated that public perception of radiation has a direct bearing on the use of nuclear power, that balancing risks and benefits must become a standard approach to evaluating environmental matters and that the present crisis in confidence over energy requires this approach. (UK)

  1. Excess Weapons Plutonium Immobilization in Russia

    Energy Technology Data Exchange (ETDEWEB)

    Jardine, L.; Borisov, G.B.

    2000-04-15

    The joint goal of the Russian work is to establish a full-scale plutonium immobilization facility at a Russian industrial site by 2005. To achieve this requires that the necessary engineering and technical basis be developed in these Russian projects and the needed Russian approvals be obtained to conduct industrial-scale immobilization of plutonium-containing materials at a Russian industrial site by the 2005 date. This meeting and future work will provide the basis for joint decisions. Supporting R&D projects are being carried out at Russian Institutes that directly support the technical needs of Russian industrial sites to immobilize plutonium-containing materials. Special R&D on plutonium materials is also being carried out to support excess weapons disposition in Russia and the US, including nonproliferation studies of plutonium recovery from immobilization forms and accelerated radiation damage studies of the US-specified plutonium ceramic for immobilizing plutonium. This intriguing and extraordinary cooperation on certain aspects of the weapons plutonium problem is now progressing well and much work with plutonium has been completed in the past two years. Because much excellent and unique scientific and engineering technical work has now been completed in Russia in many aspects of plutonium immobilization, this meeting in St. Petersburg was both timely and necessary to summarize, review, and discuss these efforts among those who performed the actual work. The results of this meeting will help the US and Russia jointly define the future direction of the Russian plutonium immobilization program, and make it an even stronger and more integrated Russian program. The two objectives for the meeting were to: (1) Bring together the Russian organizations, experts, and managers performing the work into one place for four days to review and discuss their work with each other; and (2) Publish a meeting summary and a proceedings to compile reports of all the excellent

  2. Accurate 3D quantification of the bronchial parameters in MDCT

    Science.gov (United States)

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

    The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically-modeled phantom of a pair bronchus-vessel which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error less than 5.1%.

  3. An Analytic Method for Measuring Accurate Fundamental Frequency Components

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Soon Ryul; Park Jong Keun [Seoul National University, Seoul(Korea); Kang, Sang Hee [Myongji University, Seoul (Korea)

    2002-04-01

    This paper proposes an analytic method for measuring the accurate fundamental frequency component of a fault current signal distorted with a DC-offset, a characteristic frequency component, and harmonics. The proposed algorithm is composed of four stages: sine filter, linear filter, Prony's method, and measurement. The sine filter and the linear filter eliminate harmonics and the fundamental frequency component, respectively. Then Prony's method is used to estimate the parameters of the DC-offset and the characteristic frequency component. Finally, the fundamental frequency component is measured by compensating the sine-filtered signal with the estimated parameters. The performance evaluation of the proposed method is presented for a-phase to ground faults on a 345 kV 200 km overhead transmission line. The EMTP is used to generate fault current signals under different fault locations and fault inception angles. It is shown that the analytic method accurately measures the fundamental frequency component regardless of the characteristic frequency component as well as the DC-offset.(author). 19 refs., 4 figs., 4 tabs.
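
    The pipeline above ultimately extracts a fundamental-frequency phasor from a sampled waveform. As a generic illustration of that final measurement step only (a plain one-cycle DFT, not the paper's sine-filter/Prony chain), the fundamental amplitude of a clean signal can be estimated as:

    ```python
    import cmath
    import math

    def fundamental_amplitude(samples, samples_per_cycle):
        """One-cycle DFT estimate of the fundamental component's peak
        amplitude. Generic phasor estimation; the paper's method adds
        filtering and Prony-based DC-offset compensation on top."""
        N = samples_per_cycle
        acc = sum(samples[n] * cmath.exp(-2j * math.pi * n / N)
                  for n in range(N))
        return abs(2.0 * acc / N)

    # A pure cosine of amplitude 1.0 sampled 32 times per cycle
    sig = [math.cos(2 * math.pi * n / 32) for n in range(32)]
    print(round(fundamental_amplitude(sig, 32), 6))  # → 1.0
    ```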

  4. Fluctuation theorems for excess and housekeeping heats for underdamped systems

    OpenAIRE

    Lahiri, Sourabh; Jayannavar, A. M.

    2013-01-01

    We present a simple derivation of the integral fluctuation theorems for excess and housekeeping heats for an underdamped Langevin system, without using the concept of dual dynamics. In conformity with the earlier results, we find that the fluctuation theorem for housekeeping heat holds when the steady state distributions are symmetric in velocity, whereas there is no such requirement for the excess heat. We first prove the integral fluctuation theorem for the excess heat, and then show that it nat...

  5. Futures trading and the excess comovement of commodity prices

    OpenAIRE

    Le Pen, Yannick; Sévi, Benoît

    2013-01-01

    We empirically reinvestigate the issue of excess comovement of commodity prices initially raised in Pindyck and Rotemberg (1990) and show that excess comovement, when it exists, can be related to hedging and speculative pressure in commodity futures markets. Excess comovement appears when commodity prices remain correlated even after adjusting for the impact of common factors. While Pindyck and Rotemberg and following contributions examine this issue using a relevant but arbitrary set of con...

  6. Increasing Trends in the Excess Comovement of Commodity Prices

    OpenAIRE

    Ohashi, Kazuhiko; OKIMOTO, Tatsuyoshi

    2013-01-01

    In this paper, we investigate whether excess correlations among seemingly unrelated commodity returns have increased recently, and if so, how they were achieved. To this end, we generalize the model of excess comovement, originated by Pindyck and Rotemberg (1990) and extended by Deb, Trivedi, and Varangis (1996), to develop the smooth-transition dynamic conditional correlation (STDCC) model that can capture long-run trends and short-run dynamics in excess comovements. Using commodity returns ...

  7. The Excessive Profits of Defense Contractors: Evidence and Determinants

    OpenAIRE

    Wang, Chong; Miguel, Joseph San

    2012-01-01

    A long controversial issue that divides academics, government officials, elected representatives, and the U.S. defense industry is whether defense contractors earn abnormal or excessive profits at the expense of taxpayers. Using an innovative industry-year-size matched measure of excessive profit, we demonstrate three findings. First, when compared with their industry peers, defense contractors earn excessive profits. This result is evident when profit is measured by Return ...

  8. Excess Demand and Cost Relationships Among Kentucky Nursing Homes

    OpenAIRE

    Mark A Davis; Freeman, James W.

    1994-01-01

    This article examines the influence of excess demand on nursing home costs. Previous work indicates that excess demand, reflected in a pervasive shortage of nursing home beds, constrains market competition and patient care expenditures. According to this view, nursing homes located in under-bedded markets can reduce costs and quality with impunity because there is no pressure to compete for residents. Predictions based on the excess demand argument were tested using 1989 data from a sample of...

  9. 46 CFR 154.550 - Excess flow valve: Bypass.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Excess flow valve: Bypass. 154.550 Section 154.550... and Process Piping Systems § 154.550 Excess flow valve: Bypass. If the excess flow valve allowed under § 154.532(b) has a bypass, the bypass must be of 1.0 mm (0.0394 in.) or less in diameter. Cargo Hose...

  10. Are exchange rates excessively volatile? And what does "excessively volatile" mean, anyway?

    OpenAIRE

    Gordon M. Bodnar; Leonardo Bartolini

    1996-01-01

    Using data for the major currencies from 1973 to 1994, we apply recent tests of asset price volatility to re-examine whether exchange rates have been excessively volatile with respect to the predictions of the monetary model of the exchange rate and of standard extensions that allow for sticky prices, sluggish money adjustment, and time-varying risk premia. Consistent with previous evidence from regression-based tests, most of the models that we examine are rejected by our volatility-based te...

  11. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    Science.gov (United States)

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  12. Excessive erythrocytosis, chronic mountain sickness, and serum cobalt levels.

    Science.gov (United States)

    Jefferson, J Ashley; Escudero, Elizabeth; Hurtado, Maria-Elena; Pando, Jacqueline; Tapia, Rosario; Swenson, Erik R; Prchal, Josef; Schreiner, George F; Schoene, Robert B; Hurtado, Abdias; Johnson, Richard J

    2002-02-01

    In a subset of high-altitude dwellers, the appropriate erythrocytotic response becomes excessive and can result in chronic mountain sickness. We studied men with (study group) and without excessive erythrocytosis (packed-cell volume >65%) living in Cerro de Pasco, Peru (altitude 4300 m), and compared them with controls living in Lima, Peru (at sea-level). Toxic serum cobalt concentrations were detected in 11 of 21 (52%) study participants with excessive erythrocytosis, but were undetectable in high altitude or sea-level controls. In the mining community of Cerro de Pasco, cobalt toxicity might be an important contributor to excessive erythrocytosis.

  13. A Comprehensive Census of Nearby Infrared Excess Stars

    CERN Document Server

    Cotten, Tara H

    2016-01-01

    The conclusion of the WISE mission presents an opportune time to summarize the history of using excess emission in the infrared as a tracer of circumstellar material and exploit all available data for future missions such as JWST. We have compiled a catalog of infrared excess stars from peer-reviewed articles and perform an extensive search for new infrared excess stars by cross-correlating the Tycho-2 and AllWISE catalogs. We define a significance of excess in four spectral type divisions and select stars showing greater than either 3$\\sigma$ or 5$\\sigma$ significance of excess in the mid- and far-infrared. Through procedures including SED fitting and various image analyses, each potential excess source was rigorously vetted to eliminate false-positives. The infrared excess stars from the literature and the new stars found through the Tycho-2 and AllWISE cross-correlation produced nearly 500 `Prime' infrared excess stars and $\\geq$1200 `Reserved' stars. The main catalog of infrared excess stars are nearby, b...
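
    The significance-of-excess selection described above can be illustrated with the generic definition (observed flux minus the predicted photospheric flux, in units of the combined uncertainty); the paper's exact statistic may differ in detail:

    ```python
    def excess_significance(f_obs, f_model, sigma):
        """Significance of an infrared excess: how many combined-error
        standard deviations the observed flux sits above the stellar
        photosphere model. Candidates are kept above a 3- or 5-sigma cut."""
        return (f_obs - f_model) / sigma

    # Hypothetical fluxes (arbitrary units): a 3-sigma excess passes the
    # looser cut used in the catalog selection.
    s = excess_significance(12.0, 9.0, 1.0)
    print(s, s >= 3.0)  # → 3.0 True
    ```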

  14. Premium subsidies for health insurance: excessive coverage vs. adverse selection.

    Science.gov (United States)

    Selden, T M

    1999-12-01

    The tax subsidy for employment-related health insurance can lead to excessive coverage and excessive spending on medical care. Yet, the potential also exists for adverse selection to result in the opposite problem-insufficient coverage and underconsumption of medical care. This paper uses the model of Rothschild and Stiglitz (R-S) to show that a simple linear premium subsidy can correct market failure due to adverse selection. The optimal linear subsidy balances welfare losses from excessive coverage against welfare gains from reduced adverse selection. Indeed, a capped premium subsidy may mitigate adverse selection without creating incentives for excessive coverage.

  15. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Full Text Available Modern cochlear implant (CI users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
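
    The beat-listening strategy described above rests on a temporal cue: two tones of nearly equal frequency produce amplitude beats at the difference frequency, so tuning by minimizing the beat rate does not require fine pitch discrimination. A trivial sketch:

    ```python
    def beat_frequency(f1, f2):
        """Two simultaneous tones at f1 and f2 Hz beat at |f1 - f2| Hz;
        a tuner adjusts one tone until the beating slows to a stop."""
        return abs(f1 - f2)

    # A string 0.4 Hz sharp of a 440 Hz reference beats roughly once
    # every 2.5 seconds -- an easily audible temporal cue.
    print(round(beat_frequency(440.0, 440.4), 6))  # → 0.4
    ```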

  16. Accurate Finite Difference Methods for Option Pricing

    OpenAIRE

    Persson, Jonas

    2006-01-01

    Stock options are priced numerically using space- and time-adaptive finite difference methods. European options on one and several underlying assets are considered. These are priced with adaptive numerical algorithms including a second order method and a more accurate method. For American options we use the adaptive technique to price options on one stock with and without stochastic volatility. In all these methods emphasis is put on the control of errors to fulfill predefined tolerance level...

  17. Accurate variational forms for multiskyrmion configurations

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, A.D.; Weiss, C.; Wirzba, A.; Lande, A.

    1989-04-17

    Simple variational forms are suggested for the fields of a single skyrmion on a hypersphere, S_3(L), and of a face-centered cubic array of skyrmions in flat space, R_3. The resulting energies are accurate at the level of 0.2%. These approximate field configurations provide a useful alternative to brute-force solutions of the corresponding Euler equations.

  18. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. The proposed method can thereby detect more attacks while retaining good performance.

  19. Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Mean Infiltration-Excess Overland Flow, 2002

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean value for infiltration-excess overland flow as estimated by the watershed model TOPMODEL, compiled for every MRB_E2RF1...

  20. Clinical Impact of Antimicrobial Resistance in European Hospitals : Excess Mortality and Length of Hospital Stay Related to Methicillin-Resistant Staphylococcus aureus Bloodstream Infections

    NARCIS (Netherlands)

    de Kraker, Marlieke E. A.; Wolkewitz, Martin; Davey, Peter G.; Grundmann, Hajo

    2011-01-01

    Antimicrobial resistance is threatening the successful management of nosocomial infections worldwide. Despite the therapeutic limitations imposed by methicillin-resistant Staphylococcus aureus (MRSA), its clinical impact is still debated. The objective of this study was to estimate the excess mortality ...

  1. Accurate phase-shift velocimetry in rock

    Science.gov (United States)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  2. Excess relative risk of solid cancer mortality after prolonged exposure to naturally occurring high background radiation in Yangjiang, China

    Energy Technology Data Exchange (ETDEWEB)

    Sun Quanfu; Tao Zufan [Ministry of Health, Beijing (China), Lab. of Industrial Hygiene]; Akiba, Suminori (and others)

    2000-10-01

    A study was made on cancer mortality in the high-background radiation areas of Yangjiang, China. Based on hamlet-specific environmental doses and sex- and age-specific occupancy factors, cumulative doses were calculated for each subject. In this article, we describe how the indirect estimation was made on individual dose and the methodology used to estimate radiation risk. Then, assuming a linear dose response relationship and using cancer mortality data for the period 1979-1995, we estimate the excess relative risk per Sievert for solid cancer to be -0.11 (95% CI, -0.67, 0.69). Also, we estimate the excess relative risks of four leading cancers in the study areas, i.e., cancers of the liver, nasopharynx, lung and stomach. In addition, we evaluate the effects of possible bias on our risk estimation. (author)
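
Under the linear dose-response model the record assumes (RR = 1 + β·D), a crude excess-relative-risk estimate can be illustrated in a few lines. The numbers below are hypothetical, chosen only to show the arithmetic; the study's actual estimate comes from a full regression on individual cumulative doses and person-years:

```python
# Illustrative sketch (not the study's fitting code): under RR = 1 + beta * D,
# a crude estimate of the excess relative risk per sievert compares observed
# deaths with the number expected at zero dose.

def excess_relative_risk_per_sv(observed, expected, mean_dose_sv):
    """Crude ERR/Sv estimate: (O/E - 1) / mean cumulative dose."""
    relative_risk = observed / expected
    return (relative_risk - 1.0) / mean_dose_sv

# Hypothetical numbers: 1000 observed vs. 1010 expected solid-cancer deaths
# at a mean cumulative dose of 0.09 Sv gives a slightly negative ERR/Sv.
err = excess_relative_risk_per_sv(1000, 1010, 0.09)
print(round(err, 2))  # -> -0.11
```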

  3. Excess enthalpy, density, and speed of sound determination for the ternary mixture (methyl tert-butyl ether + 1-butanol + n-hexane)

    Energy Technology Data Exchange (ETDEWEB)

    Mascato, Eva [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain); Mariano, Alejandra [Laboratorio de Fisicoquimica, Departamento de Quimica, Facultad de Ingenieria, Universidad Nacional del Comahue, 8300 Neuquen (Argentina); Pineiro, Manuel M. [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain)], E-mail: mmpineiro@uvigo.es; Legido, Jose Luis [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain); Paz Andrade, M.I. [Departamento de Fisica Aplicada, Facultade de Fisica, Universidade de Santiago de Compostela, E-15706 Santiago de Compostela (Spain)

    2007-09-15

    Density (ρ) and speed of sound (u), from T = 288.15 K to T = 308.15 K, and excess molar enthalpies (h^E) at T = 298.15 K, have been measured over the entire composition range for (methyl tert-butyl ether + 1-butanol + n-hexane). In addition, excess molar volumes, V^E, and excess isentropic compressibilities, κ_s^E, were calculated from the experimental data. Finally, the experimental excess enthalpy results are compared with estimations obtained by applying the group-contribution models UNIFAC (in the versions of Dang and Tassios, Larsen et al., and Gmehling et al.) and DISQUAC.
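
The excess molar volume V^E mentioned above follows from measured densities via V^E = V_mix − Σᵢ xᵢ·Mᵢ/ρᵢ, with V_mix = (Σᵢ xᵢ·Mᵢ)/ρ_mix. A sketch for a binary sub-system (the input densities below are illustrative, not the paper's measured values; the ternary case simply extends the sums):

```python
# Minimal sketch of deriving excess molar volume V^E from density data,
# shown for a binary mixture with illustrative (hypothetical) inputs.

def excess_molar_volume(x, M, rho, rho_mix):
    """V^E = V_mix - sum_i x_i * V_i, with molar volumes V = M / rho.

    x: mole fractions, M: molar masses (g/mol), rho: pure-component densities
    (g/cm^3), rho_mix: measured mixture density (g/cm^3). Returns cm^3/mol.
    """
    v_mix = sum(xi * Mi for xi, Mi in zip(x, M)) / rho_mix
    v_ideal = sum(xi * Mi / ri for xi, Mi, ri in zip(x, M, rho))
    return v_mix - v_ideal

# Hypothetical equimolar MTBE (88.15 g/mol) + n-hexane (86.18 g/mol) mixture.
vE = excess_molar_volume(x=[0.5, 0.5], M=[88.15, 86.18],
                         rho=[0.7404, 0.6548], rho_mix=0.6975)
print(round(vE, 3))  # small negative value: the mixture is denser than ideal
```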

  4. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    Science.gov (United States)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
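
The sensitivity, specificity, and accuracy figures quoted at the score cut-off can be reproduced mechanically from scores and labels. A sketch with hypothetical RAZ scores (not the study's data), where a positive test means score ≥ cutoff and label 1 means cardiomyopathy:

```python
# Sketch of evaluating a diagnostic cutoff, using hypothetical RAZ scores.

def diagnostic_metrics(scores, labels, cutoff):
    """Sensitivity, specificity, accuracy for test 'score >= cutoff'."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy

scores = [180, 150, 120, 90, 200, 60, 145, 130]   # hypothetical RAZ scores
labels = [1,   1,   1,   0,  1,   0,  0,   0]     # 1 = cardiomyopathy
sens, spec, acc = diagnostic_metrics(scores, labels, cutoff=140)
print(sens, spec, acc)  # -> 0.75 0.75 0.75
```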

  5. Estimation of physical parameters in induction motors

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik;

    1994-01-01

    Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors...

  6. How dusty is alpha Centauri? Excess or non-excess over the infrared photospheres of main-sequence stars

    CERN Document Server

    Wiegert, J; Thébault, P; Olofsson, G; Mora, A; Bryden, G; Marshall, J P; Eiroa, C; Montesinos, B; Ardila, D; Augereau, J C; Aran, A Bayo; Danchi, W C; del Burgo, C; Ertel, S; Fridlund, M C W; Hajigholi, M; Krivov, A V; Pilbratt, G L; Roberge, A; White, G J

    2014-01-01

    [Abridged] Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby binary α Centauri have higher than solar metallicities, which is thought to promote giant planet formation. We aim to determine the level of emission from debris in the α Cen system. Having already detected the temperature minimum, Tmin, of α Cen A, we here attempt to do so also for the companion α Cen B. Using the α Cen stars as templates, we study possible effects Tmin may have on the detectability of unresolved dust discs around other stars. We use Herschel and APEX photometry to determine the stellar spectral energy distributions. In addition, we use APEX for spectral line mapping to study the complex background around α Cen seen in the photometric images. Models of stellar atmospheres and discs are used to estimate the amount of debris around these stars. For solar-type stars, a fractional dust luminosity f_d ≈ 2e-7 could account for SEDs that do not exhibit the Tmin-effect. Slight excesses ...

  7. A Unified Theory of Rainfall Extremes, Rainfall Excesses, and IDF Curves

    Science.gov (United States)

    Veneziano, D.; Yoon, S.

    2012-04-01

    Extreme rainfall events are a key component of hydrologic risk management and design. Yet, a consistent mathematical theory of such extremes remains elusive. This study aims at laying new statistical foundations for such a theory. The quantities of interest are the distribution of the annual maximum, the distribution of the excess above a high threshold z, and the intensity-duration-frequency (IDF) curves. Traditionally, the modeling of annual maxima and excesses is based on extreme value (EV) and extreme excess (EE) theories. These theories establish that the maximum of n iid variables is attracted as n → ∞ to a generalized extreme value (GEV) distribution with a certain index k and that the distribution of the excess is attracted as z → ∞ to a generalized Pareto distribution with the same index. The empirical value of k tends to decrease as the averaging duration d increases. To a first approximation, the IDF intensities scale with d and the return period T. Explanations for this approximate scaling behavior and theoretical predictions of the scaling exponents have emerged over the past few years. This theoretical work has been largely independent of that on the annual maxima and the excesses. Deviations from exact scaling include a tendency of the IDF curves to converge as d and T increase. To bring conceptual clarity and explain the above observations, we analyze the extremes of stationary multifractal measures, which provide good representations of rainfall within storms. These extremes follow from large deviation theory rather than EV/EE theory. A unified framework emerges that (a) encompasses annual maxima, excesses and IDF values without relying on EV or EE asymptotics, (b) predicts the index k and the IDF scaling exponents, (c) explains the dependence of k on d and the deviations from exact scaling of the IDF curves, and (d) explains why the empirical estimates of k tend to be positive (in the Fréchet range) while, based on frequently assumed marginal

  8. Analysis of Empirical Software Effort Estimation Models

    CERN Document Server

    Basha, Saleem

    2010-01-01

    Reliable effort estimation remains an ongoing challenge for software engineers. Accurate effort estimation is the state of the art of software engineering; it is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software, and the client's confidence in the enterprise increases with accurate estimates. Effort estimation often requires generalizing from a small number of historical projects, which is an inherently under-constrained problem. Accurate estimation is a complex process because it amounts to software effort prediction, and, as the term indicates, a prediction never becomes an actual value. This work follows the basics of the empirical software effort estimation models. The goal of this paper is to study empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...

  9. Excess mortality monitoring in England and Wales during the influenza A(H1N1) 2009 pandemic.

    Science.gov (United States)

    Hardelid, P; Andrews, N; Pebody, R

    2011-09-01

    We present the results from a novel surveillance system for detecting excess all-cause mortality by age group in England and Wales developed during the pandemic influenza A(H1N1) 2009 period from April 2009 to March 2010. A Poisson regression model was fitted to age-specific mortality data from 1999 to 2008 and used to predict the expected number of weekly deaths in the absence of extreme health events. The system included adjustment for reporting delays. During the pandemic, excess all-cause mortality was seen in the 5-14 years age group, where mortality was flagged as being in excess for 1 week after the second peak in pandemic influenza activity; and in age groups >45 years during a period of very cold weather. This new system has utility for rapidly estimating excess mortality for other acute public health events such as extreme heat or cold weather. PMID:21439100
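
The core of such a surveillance system is comparing observed weekly deaths with a model-based expectation. A simplified sketch, assuming the expected counts are already available (in the paper they come from a Poisson regression fitted to 1999-2008 age-specific data, with adjustment for reporting delays):

```python
import math

# Simplified sketch of threshold-based excess mortality flagging. The expected
# weekly counts here are hypothetical placeholders for the regression baseline.

def is_excess(observed, expected, z=2.0):
    """Flag a week when observed deaths exceed expected + z * sqrt(expected).

    For a Poisson count the standard deviation equals sqrt(mean), so this is
    an approximate upper prediction bound on the weekly death count.
    """
    return observed > expected + z * math.sqrt(expected)

def excess_deaths(observed, expected):
    """Point estimate of excess deaths (may be negative in quiet weeks)."""
    return observed - expected

# Hypothetical (observed, expected) weekly counts for one age group.
weeks = [(9500, 9400), (9900, 9450), (9300, 9420)]
flags = [is_excess(o, e) for o, e in weeks]
print(flags)  # -> [False, True, False]: only the middle week is flagged
```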

  10. Aerophagia : Excessive Air Swallowing Demonstrated by Esophageal Impedance Monitoring

    NARCIS (Netherlands)

    Hemmink, Gerrit J. M.; Weusten, Bas L. A. M.; Bredenoord, Albert J.; Timmer, Robin; Smout, Andre J. P. M.

    2009-01-01

    BACKGROUND & AIMS: Patients with aerophagia suffer from the presence of an excessive volume of intestinal gas, which is thought to result from excessive air ingestion. However, this has not been shown thus far. The aim of this study was therefore to assess swallowing and air swallowing frequencies i

  11. 12 CFR 740.3 - Advertising of excess insurance.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Advertising of excess insurance. 740.3 Section... ACCURACY OF ADVERTISING AND NOTICE OF INSURED STATUS § 740.3 Advertising of excess insurance. Any advertising that mentions share or savings account insurance provided by a party other than the NCUA...

  12. A behavioral intervention to reduce excessive gestational weight gain

    Science.gov (United States)

    Excessive gestational weight gain (GWG) is a key modifiable risk factor for negative maternal and child health outcomes. We examined the efficacy of a behavioral intervention in preventing excessive GWG. 230 participants (87.8% Caucasian, mean age = 29.1 years; second parity) completed the 36 week gestational...

  13. 34 CFR Appendix A to Part 300 - Excess Costs Calculation

    Science.gov (United States)

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false Excess Costs Calculation A Appendix A to Part 300 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION... CHILDREN WITH DISABILITIES Pt. 300, App. A Appendix A to Part 300—Excess Costs Calculation Except...

  14. Business cycle fluctuations and excess sensitivity of private consumption

    OpenAIRE

    Gert Peersman; Lorenzo Pozzi

    2007-01-01

    We investigate whether business cycle fluctuations affect the degree of excess sensitivity of private consumption growth to disposable income growth. Using multivariate state space methods and quarterly US data for the period 1965-2000 we find that excess sensitivity is significantly higher during recessions.

  15. "excess Heat" Induced by Deuterium Flux in Palladium Film

    Science.gov (United States)

    Liu, Bin; Li, Xing Z.; Wei, Qing M.; Mueller, N.; Schoch, P.; Oehre, H.

    Early work at NASA (USA) was repeated at INFICON Balzers, Liechtenstein, in 2005, confirming the correlation between excess heat and deuterium flux permeating through the Pd film. The maximum excess power density is of the order of 100 W/cm3 (Pd).

  16. 14 CFR 158.39 - Use of excess PFC revenue.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Use of excess PFC revenue. 158.39 Section...) AIRPORTS PASSENGER FACILITY CHARGES (PFC'S) Application and Approval § 158.39 Use of excess PFC revenue. (a) If the PFC revenue remitted to the public agency, plus interest earned thereon, exceeds the...

  17. 19 CFR 10.625 - Refunds of excess customs duties.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Refunds of excess customs duties. 10.625 Section 10.625 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... and Apparel Goods § 10.625 Refunds of excess customs duties. (a) Applicability. Section 205 of...

  18. A Practical Approach For Excess Bandwidth Distribution for EPONs

    KAUST Repository

    Elrasad, Amr

    2014-03-09

    This paper introduces a novel approach called Delayed Excess Scheduling (DES), which practically reuses the excess bandwidth in EPON systems. DES is suitable for industrial deployment as it requires no timing constraint and achieves better performance compared to previously reported schemes.

  19. Criminal Liability of Managers for Excessive Risk-Taking?

    NARCIS (Netherlands)

    Tosza, S.T.

    2016-01-01

    The aim of the thesis was to analyse and evaluate the criminalisation of excessively risky decisions taken by managers of limited liability companies. The potentially disastrous consequences of excessive risk-taking were powerfully highlighted by the most recent financial crunch, although its danger

  20. The Role of Alcohol Advertising in Excessive and Hazardous Drinking.

    Science.gov (United States)

    Atkin, Charles K.; And Others

    1983-01-01

    Examined the influence of advertising on excessive and dangerous drinking in a survey of 1,200 adolescents and young adults who were shown advertisements depicting excessive consumption themes. Results indicated that advertising stimulates consumption levels, which leads to heavy drinking and drinking in dangerous situations. (JAC)

  1. 26 CFR 1.162-8 - Treatment of excessive compensation.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Treatment of excessive compensation. 1.162-8...-8 Treatment of excessive compensation. The income tax liability of the recipient in respect of an amount ostensibly paid to him as compensation, but not allowed to be deducted as such by the payor,...

  2. "Excess Demand Functions with Incomplete Markets — A Global Result"

    OpenAIRE

    Momi, Takeshi

    2002-01-01

    "The purpose of this paper is to give a global characterization of excess demand functions in a two period exchange economy with incomplete real asset markets. We show that continuity, homogeneity and Walras' law characterize the aggregate excess demand functions on any compact price set which maintains the dimension of the budget set."

  3. Excess demand functions with incomplete markets, a global result

    OpenAIRE

    Momi, Takeshi

    2002-01-01

    The purpose of this paper is to give a global characterization of excess demand functions in a two period exchange economy with incomplete real asset markets. We show that continuity, homogeneity and Walras' law characterize the aggregate excess demand functions on any compact price set which maintains the dimension of the budget set.

  4. Excess supply and low demand; Ueberangebot und Unterkonsum

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2010-04-15

    There is currently an excess supply of natural gas in the market. Is this a lasting development or a temporary phenomenon? Marc Hall, managing director of Bayerngas GmbH (Munich, Federal Republic of Germany), attributes the situation not to the excess supply itself but to the low industrial demand, caused by the economic crisis, for the plentiful quantities of natural gas available.

  5. Mechanisms linking excess adiposity and carcinogenesis promotion

    Directory of Open Access Journals (Sweden)

    Ana I. Pérez-Hernández

    2014-05-01

    Full Text Available Obesity constitutes one of the most important metabolic diseases, being associated with the development of insulin resistance and increased cardiovascular risk. The association between obesity and cancer has also been well established for several tumor types, such as breast cancer in postmenopausal women, colorectal cancer and prostate cancer. Cancer is the leading cause of death in developed countries and the second in developing countries, with high incidence rates around the world. Furthermore, it has been estimated that 15-20% of all cancer deaths may be attributable to obesity. Tumor growth is regulated by interactions between tumor cells and their tissue microenvironment. In this sense, obesity may lead to cancer development through dysfunctional adipose tissue and altered signaling pathways. In this review, three main pathways relating obesity and cancer development are examined: (i) inflammatory changes leading to macrophage polarization and an altered adipokine profile; (ii) insulin resistance development; and (iii) adipose tissue hypoxia. Since obesity and cancer both present a high prevalence, the association between these conditions is of great public health significance, and studies showing the mechanisms by which obesity leads to cancer development and progression are needed to improve the prevention and management of these diseases.

  6. ON INFRARED EXCESSES ASSOCIATED WITH Li-RICH K GIANTS

    Energy Technology Data Exchange (ETDEWEB)

    Rebull, Luisa M. [Spitzer Science Center (SSC) and Infrared Science Archive (IRSA), Infrared Processing and Analysis Center - IPAC, 1200 E. California Blvd., California Institute of Technology, Pasadena, CA 91125 (United States); Carlberg, Joleen K. [NASA Goddard Space Flight Center, Code 667, Greenbelt, MD 20771 (United States); Gibbs, John C.; Cashen, Sarah; Datta, Ashwin; Hodgson, Emily; Lince, Megan [Glencoe High School, 2700 NW Glencoe Rd., Hillsboro, OR 97124 (United States); Deeb, J. Elin [Bear Creek High School, 9800 W. Dartmouth Pl., Lakewood, CO 80227 (United States); Larsen, Estefania; Altepeter, Shailyn; Bucksbee, Ethan; Clarke, Matthew [Millard South High School, 14905 Q St., Omaha, NE 68137 (United States); Black, David V., E-mail: rebull@ipac.caltech.edu [Walden School of Liberal Arts, 4230 N. University Ave., Provo, UT 84604 (United States)

    2015-10-15

    Infrared (IR) excesses around K-type red giants (RGs) have previously been discovered using Infrared Astronomy Satellite (IRAS) data, and past studies have suggested a link between RGs with overabundant Li and IR excesses, implying the ejection of circumstellar shells or disks. We revisit the question of IR excesses around RGs using higher spatial resolution IR data, primarily from the Wide-field Infrared Survey Explorer. Our goal was to elucidate the link between three unusual RG properties: fast rotation, enriched Li, and IR excess. Our sample of RGs includes those with previous IR detections, a sample with well-defined rotation and Li abundance measurements with no previous IR measurements, and a large sample of RGs asserted to be Li-rich in the literature; we have 316 targets thought to be K giants, about 40% of which we take to be Li-rich. In 24 cases with previous detections of IR excess at low spatial resolution, we believe that source confusion is playing a role, in that either (a) the source that is bright in the optical is not responsible for the IR flux, or (b) there is more than one source responsible for the IR flux as measured in IRAS. We looked for IR excesses in the remaining sources, identifying 28 that have significant IR excesses by ∼20 μm (with possible excesses for 2 additional sources). There appears to be an intriguing correlation in that the largest IR excesses are all in Li-rich K giants, though very few Li-rich K giants have IR excesses (large or small). These largest IR excesses also tend to be found in the fastest rotators. There is no correlation of IR excess with the carbon isotopic ratio, ¹²C/¹³C. IR excesses by 20 μm, though relatively rare, are at least twice as common among our sample of Li-rich K giants. If dust shell production is a common by-product of Li enrichment mechanisms, these observations suggest that the IR excess stage is very short-lived, which is supported by theoretical calculations. Conversely, the

  7. Accurate measurement of unsteady state fluid temperature

    Science.gov (United States)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located at its center. The temperature of the fluid was determined from measurements taken on the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of fast-changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
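
The first- or second-order inertia correction mentioned in this record can be illustrated for the first-order case: a thermometer with time constant τ obeys τ·dT/dt + T = T_fluid, so the fluid temperature is recovered as T + τ·dT/dt. A sketch with hypothetical values, not the paper's measured data:

```python
import math

# Sketch of the first-order inertia correction: a thermometer with time
# constant tau obeys tau * dT/dt + T = T_fluid, so the fluid temperature can
# be recovered as T + tau * dT/dt. All values below are hypothetical.

def correct_first_order(readings, dt, tau):
    """Recover fluid temperature as T + tau * dT/dt (central differences)."""
    corrected = []
    for i in range(1, len(readings) - 1):
        dTdt = (readings[i + 1] - readings[i - 1]) / (2.0 * dt)
        corrected.append(readings[i] + tau * dTdt)
    return corrected

# Simulated step response of a tau = 2 s thermometer plunged into 100 C water:
# the indicated temperature follows T(t) = 100 * (1 - exp(-t / tau)).
tau, dt = 2.0, 0.1
readings = [100.0 * (1.0 - math.exp(-i * dt / tau)) for i in range(50)]
corrected = correct_first_order(readings, dt, tau)
print(round(corrected[0], 1))  # -> 100.0: the sluggish reading is corrected
```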

  8. Deuterium excess in precipitation and its climatological significance

    International Nuclear Information System (INIS)

    The climatological significance of the deuterium excess parameter for tracing precipitation processes is discussed with reference to data collected within the IAEA/WMO Global Network for Isotopes in Precipitation (GNIP) programme. Annual and monthly variations in deuterium excess, and their primary relationships with δ18O, temperature, vapour pressure and relative humidity are used to demonstrate fundamental controls on deuterium excess for selected climate stations and transects. The importance of deuterium excess signals arising from ocean sources versus signals arising from air mass modification during transport over the continents is reviewed and relevant theoretical development is presented. While deuterium excess shows considerable promise as a quantitative index of precipitation processes, the effectiveness of current applications using GNIP is largely dependent on analytical uncertainty (∼2.1 per mille), which could be improved to better than 1 per mille through basic upgrades in routine measurement procedures for deuterium analysis. (author)
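
The deuterium excess parameter discussed here has a one-line definition, d = δ²H − 8·δ¹⁸O (Dansgaard's formulation, with both deltas in per mille relative to VSMOW), which can be written directly:

```python
# Minimal helper for the deuterium excess parameter, defined as
# d = delta2H - 8 * delta18O (both in per mille vs. VSMOW).

def deuterium_excess(delta_2h, delta_18o):
    """d-excess (per mille) from measured delta-2H and delta-18O."""
    return delta_2h - 8.0 * delta_18o

# A sample on the Global Meteoric Water Line (delta2H = 8 * delta18O + 10)
# has, by construction, a deuterium excess of 10 per mille:
print(deuterium_excess(-70.0, -10.0))  # -> 10.0
```

Note that the ∼2.1 per mille analytical uncertainty quoted above propagates directly into d, which is why the record stresses improving routine δ²H measurement precision.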

  9. Empirical formula for mass excess of heavy and superheavy nuclei

    Science.gov (United States)

    Manjunatha, H. C.; Chandrika, B. M.; Seenappa, L.

    2016-08-01

    A new empirical formula is proposed for the mass excess of heavy and superheavy nuclei in the region Z = 96-129. The parameters of the formula are obtained by making a polynomial fit to the available theoretical and experimental data. The calculated mass excess values are compared with the experimental values and with the results of earlier proposed models such as the finite range droplet model (FRDM) and the Hartree-Fock-Bogoliubov (HFB) method. The standard deviation of the calculated mass excess values for each atomic number is tabulated. The good agreement of the present formula with experiment and with other models suggests that it can be used to evaluate the mass excess of heavy and superheavy nuclei in the region 96 ≤ Z ≤ 129. The formula is model-independent and is the first of its kind to produce mass excess values from the simple inputs Z and A alone.
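
The record's method, a polynomial least-squares fit to tabulated mass-excess data, can be sketched generically. The data below are synthetic and the degree is arbitrary; the paper's actual functional form and coefficients are not reproduced here:

```python
# Generic sketch of a polynomial least-squares fit (normal equations solved by
# Gaussian elimination). The sample data are synthetic, not the paper's tables.

def polyfit(xs, ys, degree):
    """Least-squares fit of a degree-`degree` polynomial; returns [c0, c1, ...]."""
    n = degree + 1
    # Normal equations A c = b with A[j][k] = sum(x^(j+k)), b[j] = sum(y * x^j).
    A = [[sum(x ** (j + k) for x in xs) for k in range(n)] for j in range(n)]
    b = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    c = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = (b[j] - sum(A[j][k] * c[k] for k in range(j + 1, n))) / A[j][j]
    return c  # polynomial is c[0] + c[1]*x + c[2]*x^2 + ...

# Sanity check: recover a known quadratic from noiseless samples. Shifting Z
# by 96 keeps the normal equations well conditioned over the fit region.
ts = [z - 96 for z in [96, 100, 110, 120, 129]]
ys = [1.5 + 0.3 * t + 0.02 * t * t for t in ts]   # synthetic "mass excess" data
coef = polyfit(ts, ys, 2)
print([round(v, 3) for v in coef])  # -> [1.5, 0.3, 0.02]
```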

  10. New law requires 'medically accurate' lesson plans.

    Science.gov (United States)

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  11. Niche Genetic Algorithm with Accurate Optimization Performance

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-hua; YAN De-kun

    2005-01-01

Based on a crowding mechanism, a novel niche genetic algorithm was proposed which records the evolutionary direction dynamically during evolution. After evolution, the precision of the solutions can be greatly improved by local searching along the recorded direction. Simulation shows that this algorithm not only maintains population diversity but also finds accurate solutions. Although the method takes more time than the standard GA, it is worth applying in cases that demand high solution precision.

  12. Investigations on Accurate Analysis of Microstrip Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sørensen, S. B.; Kim, Oleksiy S.;

    2011-01-01

    An investigation on accurate analysis of microstrip reflectarrays is presented. Sources of error in reflectarray analysis are examined and solutions to these issues are proposed. The focus is on two sources of error, namely the determination of the equivalent currents to calculate the radiation...... pattern, and the inaccurate mutual coupling between array elements due to the lack of periodicity. To serve as reference, two offset reflectarray antennas have been designed, manufactured and measured at the DTUESA Spherical Near-Field Antenna Test Facility. Comparisons of simulated and measured data are...

  13. Accurate diagnosis is essential for amebiasis

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is one of the most widely distributed parasites in the world. In particular, Entamoeba histolytica infection is a significant health problem in amebiasis-endemic areas of developing countries, with a significant impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. At the same time, improvements in the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge of the disease.

  14. Modeling Insights into Deuterium Excess as an Indicator of Water Vapor Source Conditions

    Science.gov (United States)

    Lewis, Sophie C.; Legrande, Allegra Nicole; Kelley, Maxwell; Schmidt, Gavin A.

    2013-01-01

    Deuterium excess (d) is interpreted in conventional paleoclimate reconstructions as a tracer of oceanic source region conditions, such as temperature, where precipitation originates. Previous studies have adopted co-isotopic approaches to estimate past changes in both site and oceanic source temperatures for ice core sites using empirical relationships derived from conceptual distillation models, particularly Mixed Cloud Isotopic Models (MCIMs). However, the relationship between d and oceanic surface conditions remains unclear in past contexts. We investigate this climate-isotope relationship for sites in Greenland and Antarctica using multiple simulations of the water isotope-enabled Goddard Institute for Space Studies (GISS) ModelE-R general circulation model and apply a novel suite of model vapor source distribution (VSD) tracers to assess d as a proxy for source temperature variability under a range of climatic conditions. Simulated average source temperatures determined by the VSDs are compared to synthetic source temperature estimates calculated using MCIM equations linking d to source region conditions. We show that although deuterium excess is generally a faithful tracer of source temperatures as estimated by the MCIM approach, large discrepancies in the isotope-climate relationship occur around Greenland during the Last Glacial Maximum simulation, when precipitation seasonality and moisture source regions were notably different from present. This identified sensitivity in d as a source temperature proxy suggests that quantitative climate reconstructions from deuterium excess should be treated with caution for some sites when boundary conditions are significantly different from the present day. Also, the exclusion of the influence of humidity and other evaporative source changes in MCIM regressions may be a limitation of quantifying source temperature fluctuations from deuterium excess in some instances.

  15. Warm Dust around Cool Stars: Field M Dwarfs with WISE 12 or 22 Micron Excess Emission

    CERN Document Server

    Theissen, Christopher A

    2014-01-01

Using the SDSS DR7 spectroscopic catalog, we searched the WISE AllWISE catalog to investigate the occurrence of warm dust, as inferred from IR excesses, around field M dwarfs (dMs). We developed SDSS/WISE color selection criteria to identify 175 dMs (from 70,841) that show IR flux greater than typical dM photosphere levels at 12 and/or 22 μm, including seven new stars within the Orion OB1 footprint. We characterize the dust populations inferred from each IR excess, and investigate the possibility that these excesses could arise from ultracool binary companions by modeling combined SEDs. Our observed IR fluxes are greater than the levels expected from ultracool companions (>3σ). We also estimate that the probability that the observed IR excesses are due to chance alignments with extragalactic sources is < 0.1%. Using SDSS spectra we measure surface-gravity-dependent features (K, Na, and CaH 3), and find < 15% of our sample indicate low surface gravities. Examining tracers of youth (Hα, UV fl...

  16. Accurate radiative transfer calculations for layered media.

    Science.gov (United States)

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  17. How Accurately can we Calculate Thermal Systems?

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K{sub eff}, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  18. Accurate basis set truncation for wavefunction embedding

    Science.gov (United States)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  19. Accurate pattern registration for integrated circuit tomography

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.

    2001-07-15

    As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41{+-}0.17 {mu}m and a measurement of 1.58{+-}0.08 {mu}m from a scanning electron microscope image of a cross section.

  20. Accurate determination of characteristic relative permeability curves

    Science.gov (United States)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  1. Excess science accommodation capabilities and excess performance capabilities assessment for Mars Geoscience and Climatology Orbiter: Extended study

    Science.gov (United States)

    Clark, K.; Flacco, A.; Kaskiewicz, P.; Lebsock, K.

    1983-01-01

The excess science accommodation and excess performance capabilities of a candidate spacecraft bus for the Mars Geoscience and Climatology Orbiter (MGCO) mission are assessed. The appendices are included to support the conclusions obtained during this contract extension. They address the mission analysis, the attitude determination and control, the propulsion subsystem, and the spacecraft configuration.

  2. Molar excess enthalpies at T = 298.15 K for (1-alkanol + dibutylether) systems

    Energy Technology Data Exchange (ETDEWEB)

    Mozo, Ismael; Garcia De La Fuente, Isaias [G.E.T.E.F., Departamento de Fisica Aplicada, Facultad de Ciencias, Universidad de Valladolid, 47071 Valladolid (Spain); Gonzalez, Juan Antonio, E-mail: jagl@termo.uva.e [G.E.T.E.F., Departamento de Fisica Aplicada, Facultad de Ciencias, Universidad de Valladolid, 47071 Valladolid (Spain); Cobos, Jose Carlos [G.E.T.E.F., Departamento de Fisica Aplicada, Facultad de Ciencias, Universidad de Valladolid, 47071 Valladolid (Spain)

    2010-01-15

Molar excess enthalpies, H{sub m}{sup E}, at T = 298.15 K and atmospheric pressure have been measured using a Tian-Calvet microcalorimeter for the (methanol, 1-propanol, 1-butanol, 1-pentanol, 1-hexanol, 1-octanol, or 1-decanol + dibutylether) systems. Experimental results have been compared with those obtained from the ERAS, DISQUAC, and Dortmund UNIFAC models. DISQUAC and ERAS yield similar H{sub m}{sup E} results. Larger differences between experimental and calculated H{sub m}{sup E} values are obtained from UNIFAC. ERAS represents quite accurately the excess molar volumes, V{sub m}{sup E}, of these systems. The excess molar internal energy at constant volume, U{sub V,m}{sup E}, is nearly constant for the solutions with the longer 1-alkanols. This indicates that the different interactional contributions to this magnitude are counterbalanced. Interactions between unlike molecules are stronger in methanol systems. The same behaviour is observed in mixtures with dipropylether.

  3. Diagnostic accuracy of the defining characteristics of the excessive fluid volume diagnosis in hemodialysis patients

    Directory of Open Access Journals (Sweden)

    Maria Isabel da Conceição Dias Fernandes

    2015-12-01

Objective: to evaluate the accuracy of the defining characteristics of the excess fluid volume nursing diagnosis of NANDA International in patients undergoing hemodialysis. Method: this was a cross-sectional diagnostic accuracy study, performed in two stages. The first, involving 100 patients from a dialysis clinic and a university hospital in northeastern Brazil, investigated the presence and absence of the defining characteristics of excess fluid volume. In the second stage, these characteristics were evaluated by diagnostic nurses, who judged the presence or absence of the diagnosis. To analyze the measures of accuracy, sensitivity, specificity, and positive and negative predictive values were calculated. Approval was given by the Research Ethics Committee under authorization No. 148.428. Results: the most sensitive indicator was edema, and the most specific were pulmonary congestion, adventitious breath sounds and restlessness. Conclusion: the most accurate defining characteristics, considered valid for the diagnostic inference of excess fluid volume in patients undergoing hemodialysis, were edema, pulmonary congestion, adventitious breath sounds and restlessness. Thus, in their presence, the nurse may safely assume the presence of the diagnosis studied.

  4. The e-index, complementing the h-index for excess citations.

    Directory of Open Access Journals (Sweden)

    Chun-Ting Zhang

BACKGROUND: The h-index has already been used by major citation databases to evaluate the academic performance of individual scientists. Although effective and simple, the h-index suffers from some drawbacks that limit its use in accurately and fairly comparing the scientific output of different researchers. These drawbacks include information loss and low resolution: the former refers to the fact that, in addition to the h² citations for papers in the h-core, excess citations are completely ignored, whereas the latter means that it is common for a group of researchers to have an identical h-index. METHODOLOGY/PRINCIPAL FINDINGS: To solve these problems, I here propose the e-index, where e² represents the ignored excess citations, in addition to the h² citations for h-core papers. Citation information can be completely depicted by using the h-index together with the e-index, which are independent of each other. Some other h-type indices, such as a and R, are h-dependent, have information redundancy with h, and therefore, when used together with h, mask the real differences in excess citations of different researchers. CONCLUSIONS/SIGNIFICANCE: Although simple, the e-index is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index.
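As a minimal sketch of the definitions in this abstract, the two indices can be computed from a list of per-paper citation counts (the citation list below is hypothetical):

```python
import math

def h_index(citations):
    # h is the largest h such that h papers have at least h citations each
    cits = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cits, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def e_index(citations):
    # Zhang's definition: e^2 = (total citations of the h-core papers) - h^2,
    # i.e. the excess citations ignored by the h-index
    cits = sorted(citations, reverse=True)
    h = h_index(citations)
    core = sum(cits[:h])
    return math.sqrt(core - h * h)

citations = [10, 8, 5, 4, 3]
print(h_index(citations))            # 4
print(round(e_index(citations), 3))  # sqrt(27 - 16) = sqrt(11) ≈ 3.317
```

Note that two researchers with the same h can have very different e, which is exactly the extra resolution the index is meant to provide.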

  5. 3D maps of the local ISM from inversion of individual color excess measurements

    CERN Document Server

    Lallement, Rosine; Valette, Bernard; Puspitarini, Lucky; Eyer, Laurent; Casagrande, Luca

    2013-01-01

Three-dimensional (3D) maps of the Galactic interstellar matter (ISM) are a potential tool of wide use; however, accurate and detailed maps are still lacking. One way to construct such maps is to invert individual distance-limited ISM measurements, a method we have applied here to measurements of stellar color excess in the optical. We have assembled color excess data together with the associated parallax or photometric distances to constitute a catalog of ~23,000 sightlines for stars within 2.5 kpc. The photometric data are taken from Stromgren catalogs, the Geneva photometric database, and the Geneva-Copenhagen survey. We also included extinctions derived towards open clusters. We applied to this color excess dataset an inversion method based on a regularized Bayesian approach, previously used for mapping at closer distances. We show the dust spatial distribution resulting from the inversion by means of planar cuts through the differential opacity 3D distribution, and by means of 2D maps of the int...

  6. Land application for disposal of excess water: an overview

    International Nuclear Information System (INIS)

Water management is an important factor in the operation of uranium mines in the Alligator Rivers Region, located in the Wet-Dry tropics. For many project designs, especially open cut operations, sole reliance on evaporative disposal of waste water is ill-advised in years when the Wet season is above average. Instead, spray irrigation, or the application of excess water to suitable areas of land, has been practised at both Nabarlek and Ranger. The method depends on water losses by evaporation from spray droplets, from vegetation surfaces and from the ground surface; residual water is carried to the groundwater system by percolation. The solutes are largely transferred to the soils, where heavy metals and metallic radionuclides attach to particles in the soil profile with varying efficiency depending on soil type. Major solutes that can occur in waste water from uranium mines are less successfully immobilised in soil: sulphate is essentially conservative and not bound within the soil profile, while ammonia is affected by soil reactions leading to its decomposition. In retrospect, this was the application of a technology inadequately researched for local conditions. The consequences at Nabarlek have been the death of trees on one application area and the creation of contaminated groundwater which has moved into the biosphere down gradient and affected the ecology of a local stream. At Ranger, the outcome of land application has been less severe in the short term, but the effective adsorption of radionuclides in surface soils has led to dose estimates which will necessitate restrictions on future public access unless extensive rehabilitation is carried out. 2 refs., 1 tab

  7. Excess of ²³⁶U in the northwest Mediterranean Sea.

    Science.gov (United States)

    Chamizo, E; López-Lora, M; Bressac, M; Levy, I; Pham, M K

    2016-09-15

In this work, we present the first ²³⁶U results in the northwestern Mediterranean. ²³⁶U is studied in a seawater column sampled at the DYFAMED (Dynamics of Atmospheric Fluxes in the Mediterranean Sea) station (Ligurian Sea, 43°25'N, 07°52'E). The obtained ²³⁶U/²³⁸U atom ratios in the dissolved phase, ranging from about 2×10⁻⁹ at 100 m depth to about 1.5×10⁻⁹ at 2350 m depth, indicate that anthropogenic ²³⁶U dominates the whole seawater column. The corresponding deep-water column inventory (12.6 ng/m² or 32.1×10¹² atoms/m²) exceeds by a factor of 2.5 the one expected from global fallout at similar latitudes (5 ng/m² or 13×10¹² atoms/m²), evidencing the influence of local or regional ²³⁶U sources in the western Mediterranean basin. On the other hand, the input of ²³⁶U associated with Saharan dust outbreaks is evaluated. An additional ²³⁶U annual deposition of about 0.2 pg/m², based on the study of atmospheric particles collected in Monaco during different Saharan dust intrusions, is estimated. The results obtained in the corresponding suspended solids collected at the DYFAMED station indicate that about 64% of that ²³⁶U stays in solution in seawater. Overall, this source accounts for about 0.1% of the ²³⁶U inventory excess observed at the DYFAMED station. The influence of the so-called Chernobyl fallout and the radioactive effluents produced by the different nuclear installations in the Mediterranean basin might explain the inventory gap; however, further studies are necessary to come to a conclusion about its origin. PMID:27262827

  8. Prevalence of excessive screen time and associated factors in adolescents

    Directory of Open Access Journals (Sweden)

    Joana Marcela Sales de Lucena

    2015-12-01

Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study of 2874 high school adolescents aged 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyzed were: sociodemographic factors (gender, age, economic class, and skin color), physical activity and nutritional status of the adolescents. Results: The prevalence of excessive screen time was 79.5% (95%CI 78.1-81.1), and it was higher in males (84.3%) compared to females (76.1%; p<0.001). In multivariate analysis, male adolescents, those aged 14-15 years and those in the highest economic class had higher odds of exposure to excessive screen time. The level of physical activity and nutritional status of the adolescents were not associated with excessive screen time. Conclusions: The prevalence of excessive screen time was high and varied according to sociodemographic characteristics of the adolescents. It is necessary to develop interventions to reduce excessive screen time among adolescents, particularly in subgroups with higher exposure.

  9. Accurate LAI retrieval method based on PROBA/CHRIS data

    Directory of Open Access Journals (Sweden)

    W. Fan

    2009-11-01

Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI accurately at regional scales. Variations of the background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative (DSD) method is proposed in this paper. This method can estimate LAI accurately by analyzing the canopy anisotropy, and the effect of the background can also be effectively removed, so the inversion precision and the dynamic range can be improved remarkably, as numerical simulations have shown. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach by which the data can be de-noised in the spectral and spatial dimensions synchronously. It is shown that the filtering method removes random noise effectively, so the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; the hyperspectral and multi-angular imagery of the study region was acquired from the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against ground truth at 11 sites. The results show that, combined with the innovative filtering method, the new LAI inversion method is accurate and effective.

  10. A Bayesian Framework for Combining Valuation Estimates

    CERN Document Server

    Yee, Kenton K

    2007-01-01

    Obtaining more accurate equity value estimates is the starting point for stock selection, value-based indexing in a noisy market, and beating benchmark indices through tactical style rotation. Unfortunately, discounted cash flow, method of comparables, and fundamental analysis typically yield discrepant valuation estimates. Moreover, the valuation estimates typically disagree with market price. Can one form a superior valuation estimate by averaging over the individual estimates, including market price? This article suggests a Bayesian framework for combining two or more estimates into a superior valuation estimate. The framework justifies the common practice of averaging over several estimates to arrive at a final point estimate.
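The paper's exact framework is not reproduced in this abstract; as a hedged sketch, one standard Bayesian way to fuse several noisy estimates is the precision-weighted (inverse-variance) average, which is the posterior mean under independent Gaussian errors. The valuation figures below are hypothetical:

```python
def combine_estimates(estimates, variances):
    # Precision-weighted (inverse-variance) average: under independent
    # Gaussian errors this is the posterior mean, and the combined
    # variance is smaller than any single input variance.
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, estimates)) / total
    var = 1.0 / total
    return mean, var

# Hypothetical example: DCF, comparables, and market price for one stock,
# each with its own error variance (market price assumed most precise)
mean, var = combine_estimates([52.0, 48.0, 50.0], [16.0, 9.0, 4.0])
print(round(mean, 2), round(var, 3))  # ≈ 49.77 2.361
```

The more precise an estimate (here, market price), the more it pulls the combined value, which matches the intuition that averaging should not treat all valuation methods equally.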

  11. Fast and accurate determination of modularity and its effect size

    CERN Document Server

    Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E

    2014-01-01

We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well as or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős-Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
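The spectral algorithm itself is not given in the abstract; as a minimal sketch of the quantity being maximized, Newman's modularity Q of a partition can be computed directly from its definition on a toy graph (the graph and community labels below are illustrative only):

```python
def modularity(adj, communities):
    # adj: dict node -> set of neighbours (undirected simple graph)
    # communities: dict node -> community label
    # Newman's Q = (1/2m) * sum_{u,v} [A_uv - k_u*k_v/(2m)] * delta(c_u, c_v)
    degrees = {u: len(nbrs) for u, nbrs in adj.items()}
    two_m = sum(degrees.values())  # 2m = sum of degrees = twice the edge count
    q = 0.0
    for u in adj:
        for v in adj:
            if communities[u] != communities[v]:
                continue
            a_uv = 1.0 if v in adj[u] else 0.0
            q += a_uv - degrees[u] * degrees[v] / two_m
    return q / two_m

# Two triangles joined by a single bridge edge (2-3); each triangle
# is its own community -- the natural partition of this graph
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
comm = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(adj, comm), 4))  # 5/14 ≈ 0.3571
```

The z-score the authors propose compares such a Q value against the modularity distribution of random (Erdős-Rényi) graphs with the same size, which is what turns a raw Q into an effect size.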

  12. Plant diversity accurately predicts insect diversity in two tropical landscapes.

    Science.gov (United States)

    Zhang, Kai; Lin, Siliang; Ji, Yinqiu; Yang, Chenxue; Wang, Xiaoyang; Yang, Chunyan; Wang, Hesheng; Jiang, Haisheng; Harrison, Rhett D; Yu, Douglas W

    2016-09-01

Plant diversity surely determines arthropod diversity, but only moderate correlations between arthropod and plant species richness had been observed until Basset et al. (Science, 338, 1481, 2012) finally undertook an unprecedentedly comprehensive sampling of a tropical forest and demonstrated that plant species richness could indeed accurately predict arthropod species richness. We now require a high-throughput pipeline to operationalize this result so that we can (i) test competing explanations for tropical arthropod megadiversity, (ii) improve estimates of global eukaryotic species diversity, and (iii) use plant and arthropod communities as efficient proxies for each other, thus improving the efficiency of conservation planning and of detecting forest degradation and recovery. We therefore applied metabarcoding to Malaise-trap samples across two tropical landscapes in China. We demonstrate that plant species richness can accurately predict arthropod (mostly insect) species richness and that plant and insect community compositions are highly correlated, even in landscapes that are large, heterogeneous and anthropogenically modified. Finally, we review how metabarcoding makes feasible highly replicated tests of the major competing explanations for tropical megadiversity. PMID:27474399

  13. Spitzer 24 um Excesses for Bright Galactic Stars in Bootes and First Look Survey Fields

    CERN Document Server

    Hovhannisyan, L R; Weedman, D W; Le Floc'h, E; Houck, J R; Soifer, B T; Brand, K; Dey, A; Jannuzi, B T

    2009-01-01

Optically bright Galactic stars (V ...) with 24 um flux densities > 1 mJy are identified in Spitzer mid-infrared surveys within 8.2 square degrees of the Bootes field of the NOAO Deep Wide-Field Survey and within 5.5 square degrees of the First Look Survey (FLS). 128 stars are identified in Bootes and 140 in the FLS, and their photometry is given. (K-[24]) colors are determined using K magnitudes from the 2MASS survey for all stars in order to search for excess 24 um luminosity compared to that arising from the stellar photosphere. Of the combined sample of 268 stars, 141 are of spectral types F, G, or K, and 17 of these 141 stars have 24 um excesses with (K-[24]) > 0.2 mag. Using limits on absolute magnitude derived from proper motions, at least 8 of the FGK stars with excesses are main sequence stars, and estimates derived from the distribution of apparent magnitudes indicate that all 17 are main sequence stars. These estimates lead to the conclusion that between 9% and 17% of the main sequence FGK field stars in these samples have 24 u...

  14. Excess relative risk as an effect measure in case-control studies of rare diseases.

    Directory of Open Access Journals (Sweden)

    Wen-Chung Lee

Epidemiologists often use ratio-type indices (rate ratio, risk ratio and odds ratio) to quantify the association between exposure and disease. By comparison, less attention has been paid to effect measures on a difference scale (excess rate or excess risk). The excess relative risk (ERR) used primarily by radiation epidemiologists is of peculiar interest here, in that it involves both difference and ratio operations. The ERR index (but not the difference-type indices) is estimable in case-control studies. Using the theory of sufficient component cause model, the author shows that when there is no mechanistic interaction (no synergism in the sufficient cause sense) between the exposure under study and the stratifying variable, the ERR index (but not the ratio-type indices) in a rare-disease case-control setting should remain constant across strata and can therefore be regarded as a common effect parameter. By exploiting this homogeneity property, the related attributable fraction indices can also be estimated with greater precision. The author demonstrates the methodology (SAS codes provided) using a case-control dataset, and shows that ERR preserves the logical properties of the ratio-type indices. In light of the many desirable properties of the ERR index, the author advocates its use as an effect measure in case-control studies of rare diseases.

  15. Excess relative risk as an effect measure in case-control studies of rare diseases.

    Science.gov (United States)

    Lee, Wen-Chung

    2014-01-01

    Epidemiologists often use ratio-type indices (rate ratio, risk ratio and odds ratio) to quantify the association between exposure and disease. By comparison, less attention has been paid to effect measures on a difference scale (excess rate or excess risk). The excess relative risk (ERR) used primarily by radiation epidemiologists is of peculiar interest here, in that it involves both difference and ratio operations. The ERR index (but not the difference-type indices) is estimable in case-control studies. Using the theory of sufficient component cause model, the author shows that when there is no mechanistic interaction (no synergism in the sufficient cause sense) between the exposure under study and the stratifying variable, the ERR index (but not the ratio-type indices) in a rare-disease case-control setting should remain constant across strata and can therefore be regarded as a common effect parameter. By exploiting this homogeneity property, the related attributable fraction indices can also be estimated with greater precision. The author demonstrates the methodology (SAS codes provided) using a case-control dataset, and shows that ERR preserves the logical properties of the ratio-type indices. In light of the many desirable properties of the ERR index, the author advocates its use as an effect measure in case-control studies of rare diseases.
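
    The rare-disease relationship described above (OR ≈ RR, so ERR ≈ OR − 1 per stratum) can be sketched with hypothetical stratified case-control counts; the numbers below are invented for illustration only.

```python
# Hedged sketch: ERR = RR - 1 estimated via the odds ratio in a
# rare-disease case-control setting (OR ~ RR). All counts are
# hypothetical; under no mechanistic interaction the ERR would be
# expected to be roughly constant across strata.

def odds_ratio(a, b, c, d):
    """2x2 table: a=exposed cases, b=unexposed cases,
    c=exposed controls, d=unexposed controls."""
    return (a * d) / (b * c)

# Two hypothetical strata of a case-control study.
strata = [
    # (exposed cases, unexposed cases, exposed controls, unexposed controls)
    (30, 70, 100, 400),   # stratum 1
    (12, 44, 50, 275),    # stratum 2
]

for i, (a, b, c, d) in enumerate(strata, 1):
    or_hat = odds_ratio(a, b, c, d)
    err_hat = or_hat - 1.0          # excess relative risk
    print(f"stratum {i}: OR = {or_hat:.2f}, ERR = {err_hat:.2f}")
```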

  16. Mortality Attributable to Excess Body Mass Index in Iran: Implementation of the Comparative Risk Assessment Methodology

    Science.gov (United States)

    Djalalinia, Shirin; Moghaddam, Sahar Saeedi; Peykari, Niloofar; Kasaeian, Amir; Sheidaei, Ali; Mansouri, Anita; Mohammadi, Younes; Parsaeian, Mahboubeh; Mehdipour, Parinaz; Larijani, Bagher; Farzadfar, Farshad

    2015-01-01

    Background: The prevalence of obesity continues to rise worldwide, with alarming rates in most countries. Our aim was to compare the mortality from fatal diseases attributable to excess body mass index (BMI) in Iran in 2005 and 2011. Methods: Using the standard comparative risk assessment methodology, we estimated mortality attributable to excess BMI in Iranian adults of 25–65 years old, at the national and sub-national levels, for 9 attributable outcomes: ischemic heart diseases (IHDs), stroke, hypertensive heart diseases, diabetes mellitus (DM), colon cancer, cancer of the body of the uterus, breast cancer, kidney cancer, and pancreatic cancer. Results: In 2011, in adults of 25–65 years old, at the national level, excess BMI was responsible for 39.5% of the total deaths attributed to the 9 BMI-paired outcomes; 55.0% of these deaths occurred in males. The highest mortality was attributed to IHD (55.7%), followed by stroke (19.3%) and DM (12.0%). Based on the population attributable fraction estimates of 2011, except for colon cancer, the remaining common outcomes were higher for women than men. Conclusions: Despite the priority of the problem, there is currently no comprehensive program to prevent or control obesity in Iran. The present results show a growing need for comprehensive national and sub-national health policies and interventional programs in Iran. PMID:26644906

  17. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields. Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th

  18. Distribution system state estimation

    Science.gov (United States)

    Wang, Haibin

    With the development of automation in distribution systems, distribution SCADA and many other automated meters have been installed on distribution systems. Distribution Management Systems (DMS) have also been further developed and become more sophisticated, making it both possible and useful to apply state estimation techniques to distribution systems. However, distribution systems have many features that differ from transmission systems, so the state estimation technology used in transmission systems cannot be applied directly. This project's goal was to develop a state estimation algorithm suitable for distribution systems. Because of the limited number of real-time measurements in distribution systems, the state estimator cannot acquire enough real-time measurements for convergence, so pseudo-measurements are necessary. A load estimation procedure is proposed that provides estimates of real-time customer load profiles, which can be treated as pseudo-measurements for the state estimator. The algorithm utilizes a newly installed AMR system to calculate more accurate load estimates. A branch-current-based three-phase state estimation algorithm is developed and tested. This method chooses the magnitude and phase angle of the branch current as the state variables, and thus makes the formulation of the Jacobian matrix less complicated. The algorithm decouples the three phases, which is computationally efficient. Additionally, the algorithm is less sensitive to the line parameters than node-voltage-based algorithms. The algorithm has been tested on three IEEE radial test feeders for both accuracy and convergence speed. Due to economic constraints, the number of real-time measurements that can be installed on distribution systems is limited, so it is important to decide what kinds of measurement devices to install and where to install them. Some rules of meter placement based
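
    The way low-weight pseudo-measurements combine with a few accurate real-time measurements can be illustrated with a generic weighted-least-squares estimator (a linear stand-in, not the branch-current formulation of this thesis); the measurement model, weights, and numbers below are invented.

```python
import numpy as np

# Hedged sketch: weighted least squares z = H x + e, combining two
# accurate real-time measurements with two noisy, low-weight
# pseudo-measurements (load estimates). H and all values are
# illustrative, not a real feeder model.

H = np.array([[1.0, 0.0],    # pseudo-measurement of x1
              [0.0, 1.0],    # pseudo-measurement of x2
              [1.0, 1.0],    # real-time measurement of the sum
              [1.0, -1.0]])  # real-time measurement of the difference
x_true = np.array([2.0, 3.0])

z = H @ x_true
z[:2] += np.array([0.3, -0.2])   # pseudo-measurements carry large errors

# Pseudo-measurements get much smaller weights (larger assumed variance).
W = np.diag([1.0, 1.0, 100.0, 100.0])

x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(x_hat)  # close to x_true because the real-time data dominate
```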

  19. Accurate guitar tuning by cochlear implant musicians.

    Science.gov (United States)

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones, with one achieving more accurate tuning with his CI than with his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that the tuning error was larger, at ∼30 Hz, for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal-hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  20. How accurate are SuperCOSMOS positions?

    CERN Document Server

    Schaefer, Adam; Johnston, Helen

    2014-01-01

    Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than $B_J=20$ and Galactic latitudes $|b|>10^{\\circ}$. Position differences showed an rms scatter of $0.16''$ in right ascension and declination. While overall systematic offsets were $<0.1''$ in each hemisphere, both the systematics and scatter were greater in the north.

  1. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider decoupling effects of the top quark and Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects arise at the standard model scale and are independent of high-energy physics, our method can in principle be applied to any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.

  2. Accurate Telescope Mount Positioning with MEMS Accelerometers

    CERN Document Server

    Mészáros, László; Pál, András; Csépány, Gergely

    2014-01-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a method completely independent from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented to form part of a telescope control system.

  3. Accurate Weather Forecasting for Radio Astronomy

    Science.gov (United States)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/~rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MPM model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
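
    The layer-by-layer step from absorption to total opacity and sky brightness is plain radiative transfer, and can be sketched as follows; the layer temperatures and opacities below are made up for illustration, not NWS forecast output.

```python
import math

# Hedged sketch: given forecast layers with temperature T_i (K) and
# zenith opacity tau_i (nepers), compute the total opacity and the
# atmosphere's radio brightness temperature by radiative transfer.
# Each layer's emission T_i*(1 - exp(-tau_i)) is attenuated by the
# opacity between it and the ground-based observer.

layers = [  # (T_i in K, tau_i in nepers), ordered top of atmosphere -> ground
    (220.0, 0.002),
    (250.0, 0.005),
    (270.0, 0.010),
    (285.0, 0.020),
]

tau_total = sum(tau for _, tau in layers)

t_sky = 0.0
tau_below = 0.0
for temp, tau in reversed(layers):          # walk up from the ground
    t_sky += temp * (1.0 - math.exp(-tau)) * math.exp(-tau_below)
    tau_below += tau

print(f"total zenith opacity: {tau_total:.3f} Np")
print(f"sky brightness:       {t_sky:.1f} K")
```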

  4. A Novel Algorithm for the Estimation of the Surfactant Surface Excess at Emulsion Interfaces

    OpenAIRE

    Urbina-Villalba, German; Di Scipio, Sabrina; Garcia-Valera, Neyda

    2014-01-01

    When the theoretical values of the interfacial tension -resulting from the homogeneous distribution of ionic surfactant molecules amongst the interface of emulsion drops- are plotted against the total surfactant concentration, they produce a curve comparable to the Gibbs adsorption isotherm. However, the actual isotherm takes into account the solubility of the surfactant in the aqueous bulk phase. Hence, assuming that the total surfactant population is only distributed among the available oil...

  5. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case. Travelers prefer the route with the best reported condition, but delayed information reflects past rather than current traffic conditions, so travelers make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful to improve efficiency in terms of capacity, oscillation, and the gap deviating from the system equilibrium.
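
    The threshold rule can be sketched in a toy day-to-day two-route model; the linear congestion function and all parameter values below are invented for illustration, not taken from the paper.

```python
import random

# Hedged sketch of the bounded-rationality rule: if the reported
# travel times of the two routes differ by less than the threshold
# BR, a traveler is indifferent and chooses at random; otherwise the
# faster route is taken. Travel time grows linearly with route flow.

random.seed(1)
N = 1000                  # travelers
BR = 5.0                  # boundedly rational threshold (time units)
t0, alpha = 20.0, 0.05    # free-flow time and congestion slope

flows = [N // 2, N - N // 2]
for day in range(50):
    times = [t0 + alpha * f for f in flows]
    flows = [0, 0]
    for _ in range(N):
        if abs(times[0] - times[1]) < BR:
            choice = random.randrange(2)          # indifferent: coin flip
        else:
            choice = 0 if times[0] < times[1] else 1
        flows[choice] += 1

times = [t0 + alpha * f for f in flows]
print(flows, [round(t, 1) for t in times])  # flows hover near 500/500
```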

  6. Suzaku observations of X-ray excess emission in the cluster of galaxies A 3112

    Science.gov (United States)

    Lehto, T.; Nevalainen, J.; Bonamente, M.; Ota, N.; Kaastra, J.

    2010-12-01

    Aims: We analysed the Suzaku XIS1 data of the A 3112 cluster of galaxies in order to examine the X-ray excess emission in this cluster reported earlier with the XMM-Newton and Chandra satellites. Methods: We performed X-ray spectroscopy on the data of a single large region. We carried out simulations to estimate the systematic uncertainties affecting the X-ray excess signal. Results: The best-fit temperature of the intracluster gas depends strongly on the choice of the energy band used for the spectral analysis. This proves the existence of an excess emission component in addition to the single-temperature MEKAL in A 3112. We showed that this effect is not an artifact due to uncertainties of the background modeling, instrument calibration, or the amount of Galactic absorption. Neither the PSF scatter of the emission from the cool core nor the projection of cool gas in the cluster outskirts produces the effect. Finally, we modeled the excess emission using either an additional MEKAL or a power-law component. Due to the small differences between the thermal and non-thermal models, we cannot rule out a non-thermal origin of the excess emission based on the goodness of the fit. Assuming that it has a thermal origin, we further examined differential emission measure (DEM) models. We utilised two different DEM models: a Gaussian differential emission measure distribution (GDEM) and the WDEM model, in which the emission measure of a number of thermal components is distributed as a truncated power law. The best-fit XIS1 MEKAL temperature for the 0.4-7.0 keV band is 4.7 ± 0.1 keV, consistent with that obtained using the GDEM and WDEM models.

  7. Protective effect of D-ribose against inhibition of rats testes function at excessive exercise

    Directory of Open Access Journals (Sweden)

    Chigrinskiy E.A.

    2011-09-01

    Full Text Available An increasing number of research studies point to participation in endurance exercise training as having significant detrimental effects upon reproductive hormonal profiles in men. The means used for prevention and correction of fatigue are ineffective for sexual function recovery and have contraindications and numerous side effects. The search for substances that effectively restore body functions after overtraining while sparing the reproductive function, and that have no contraindications precluding their long and frequent use, is an important line of study. One candidate substance is ribose, used for correction of fatigue in athletes in some sports. We studied the role of ribose deficit in the metabolism of the testes under conditions of excessive exercise and the potential of ribose for restoration of the endocrine function of these organs. 45 male Wistar rats weighing 240±20 g were used in this study. Animals were divided into 3 groups (n=15): control; excessive exercise; and excessive exercise with D-ribose treatment. Plasma concentrations of lactic, β-hydroxybutyric, and uric acids, luteinizing hormone, and total and free testosterone were measured by biochemical and ELISA methods. The superoxide dismutase, catalase, glutathione peroxidase, glutathione reductase, and glucose-6-phosphate dehydrogenase activities, as well as uric acid, malondialdehyde, glutathione, ascorbic acid, and testosterone levels, were estimated in the testis samples. Acute disorders of purine metabolism develop in rat testes under conditions of excessive exercise. These disorders are characterized by enhanced catabolism and reduced reutilization of purine mononucleotides and activation of oxidative stress against the background of reduced activities of the pentose phosphate pathway and antioxidant system. Administration of D-ribose to rats subjected to excessive exercise improves purine reutilization and stimulates the pentose phosphate pathway work

  8. Characteristics of adolescent excessive drinkers compared with consumers and abstainers

    NARCIS (Netherlands)

    Tomcikova, Zuzana; Geckova, Andrea Madarasova; van Dijk, Jitse P.; Reijneveld, Sijmen A.

    2011-01-01

    Introduction and Aims. This study aimed at comparing adolescent abstainers, consumers and excessive drinkers in terms of family characteristics (structure of family, socioeconomic factors), perceived social support, personality characteristics (extraversion, self-esteem, aggression) and well-being.

  9. Excess Molar Volume of Binary Systems Containing Mesitylene

    Directory of Open Access Journals (Sweden)

    Morávková, L.

    2013-05-01

    Full Text Available This paper presents a review of density measurements for binary systems containing 1,3,5-trimethylbenzene (mesitylene) with a variety of organic compounds at atmospheric pressure. Literature data for the binary systems were divided into nine basic groups by the type of organic compound combined with mesitylene. The excess molar volumes calculated from the experimental density values have been compared with literature data. Densities were measured by a few experimental methods, namely using a pycnometer, a dilatometer, or a commercial apparatus. An overview of the experimental data and the shape of the excess molar volume curve versus mole fraction is presented in this paper. The excess molar volumes were correlated by the Redlich–Kister equation. The standard deviations for the fits of excess molar volume versus mole fraction are compared. The literature data found cover a wide temperature range, from 288.15 to 343.15 K.
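
    The Redlich–Kister correlation mentioned above, V^E = x1·x2·Σ_k A_k (x1 − x2)^k, is linear in the coefficients A_k and can be fitted by ordinary least squares; the "data" below are synthetic, generated from assumed coefficients, not values from the review.

```python
import numpy as np

# Hedged sketch: fit V_E = x1*x2 * sum_k A_k * (x1 - x2)**k to
# excess-molar-volume data by linear least squares. The coefficients
# and noise-free "measurements" are hypothetical.

A_true = [-1.20, 0.35, -0.10]          # cm^3/mol, assumed for the demo

x1 = np.linspace(0.05, 0.95, 19)
x2 = 1.0 - x1
VE = x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(A_true))

# Design matrix: column k is x1*x2*(x1-x2)**k.
M = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(3)])
A_fit, *_ = np.linalg.lstsq(M, VE, rcond=None)

print(np.round(A_fit, 6))   # recovers A_true on noise-free data
```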

  10. Iodine deficiency and iodine excess in Jiangsu Province, China

    NARCIS (Netherlands)

    Zhao, J.

    2001-01-01

    Keywords: iodine deficiency, iodine excess, endemic goiter, drinking water, iodine intake, thyroid function, thyroid size, iodized salt, iodized oil, IQ, physical development, hearing capacity, epidemiology, meta-analysis, IDD, randomized trial, intervention, USA, Bangladesh, China. Endemic goiter can

  11. Gene Linked to Excess Male Hormones in Female Infertility Disorder

    Science.gov (United States)

    News Release, Tuesday, April 15, 2014: Gene linked to excess male hormones in female infertility disorder. ... form cyst-like structures. A variant in a gene active in cells of the ovary may lead ...

  12. 7 CFR 929.104 - Outlets for excess cranberries.

    Science.gov (United States)

    2010-01-01

    ... nonhuman food use. (4) Research and development projects approved by the committee dealing with the... drying, or freezing of cranberries. (b) Excess cranberries may not be converted into canned, frozen,...

  13. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
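
    A minimal instance of the kind of rate-constant estimation described here, assuming a hypothetical first-order reaction C(t) = C0·exp(−k·t) and synthetic data, is a linear least-squares fit on the log-transformed model:

```python
import math

# Hedged sketch: estimate a first-order rate constant k from
# concentration-vs-time data via linear least squares on
# ln C = ln C0 - k*t. The data are synthetic and noise-free.

k_true, C0 = 0.30, 2.0
t = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
C = [C0 * math.exp(-k_true * ti) for ti in t]

# Ordinary least-squares slope for y = ln C versus t.
y = [math.log(ci) for ci in C]
n = len(t)
t_bar = sum(t) / n
y_bar = sum(y) / n
slope = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y)) \
        / sum((ti - t_bar) ** 2 for ti in t)

k_hat = -slope
print(f"estimated k = {k_hat:.4f}")   # matches k_true on noise-free data
```

    With noisy data the same fit still works, but nonlinear least squares on the original exponential model (as in the chapter's dynamic-optimisation approach) weights the measurements more appropriately.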

  14. Variables of excessive computer internet use in childhood and adolescence

    OpenAIRE

    Thalemann, Ralf

    2010-01-01

    The aim of this doctoral thesis is the characterization of excessive computer and video gaming in terms of a behavioral addiction. Therefore, the development of a diagnostic psychometric instrument was central to differentiate between normal and pathological computer gaming in adolescence. In study 1, 323 children were asked about their video game playing behavior to assess the prevalence of pathological computer gaming. Data suggest that excessive computer and video game players use thei...

  15. Asymmetric Dark Matter Models and the LHC Diphoton Excess

    DEFF Research Database (Denmark)

    Frandsen, Mads T.; Shoemaker, Ian M.

    2016-01-01

    The existence of dark matter (DM) and the origin of the baryon asymmetry are persistent indications that the SM is incomplete. More recently, the ATLAS and CMS experiments have observed an excess of diphoton events with invariant mass of about 750 GeV. One interpretation of this excess is decays of a new resonance; we examine the implications this may have for models of asymmetric DM that attempt to account for the similarity of the dark and visible matter abundances.

  16. ATLAS on-Z excess through vector-like quarks

    Science.gov (United States)

    Endo, Motoi; Takaesu, Yoshitaro

    2016-07-01

    We investigate the possibility that the excess observed in the leptonic-Z + jets + E̸T ATLAS SUSY search is due to production of vector-like quarks U, which decay to first-generation quarks and Z bosons. We find that the excess can be explained at the 2σ (up to 1.4σ) level while satisfying the constraints from the other LHC searches. The favored mass and branching ratio are around 610 GeV and 0.3-0.45, respectively.

  17. When does the mean excess plot look linear?

    CERN Document Server

    Ghosh, Souvik

    2010-01-01

    In risk analysis, the mean excess plot is a commonly used exploratory plotting technique for checking whether iid data are consistent with a generalized Pareto assumption for the underlying distribution, since under such a distribution thresholded data have a mean excess plot that is roughly linear. Does any other class of distributions share this linearity of the plot? Under some extra assumptions, we are able to conclude that only the generalized Pareto family has this property.
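
    The linearity in question is easy to see empirically: for a generalized Pareto distribution with shape ξ and scale σ, the mean excess function is e(u) = (σ + ξu)/(1 − ξ), linear in the threshold u. The sampler and parameter values below are illustrative.

```python
import random

# Hedged sketch: empirical mean excess function for a generalized
# Pareto sample (shape xi, scale sigma), drawn by inverse transform:
# X = sigma/xi * ((1-U)**(-xi) - 1). For the GPD the mean excess plot
# is linear: e(u) = (sigma + xi*u) / (1 - xi).

random.seed(42)
xi, sigma, n = 0.25, 1.0, 200_000
sample = [sigma / xi * ((1.0 - random.random()) ** -xi - 1.0)
          for _ in range(n)]

def mean_excess(data, u):
    """Empirical e(u) = average of (X - u) over observations X > u."""
    exceed = [x - u for x in data if x > u]
    return sum(exceed) / len(exceed)

for u in (0.0, 1.0, 2.0, 3.0):
    print(f"u = {u:.1f}: empirical e(u) = {mean_excess(sample, u):.3f}, "
          f"theory = {(sigma + xi * u) / (1.0 - xi):.3f}")
```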

  18. Fetal Programming of Obesity: Maternal Obesity and Excessive Weight Gain

    OpenAIRE

    Seray Kabaran

    2014-01-01

    The prevalence of obesity is an increasing health problem throughout the world. Maternal pre-pregnancy weight, maternal nutrition and maternal weight gain are among the factors that can cause childhood obesity. Both maternal obesity and excessive weight gain increase the risks of excessive fetal weight gain and high birth weight. Rapid weight gain during fetal period leads to changes in the newborn body composition. Specifically, the increase in body fat ratio in the early periods is associat...

  19. Excess turnover and employment growth: firm and match heterogeneity

    OpenAIRE

    Centeno, Mário; Machado, Carla; Novo, Álvaro A.

    2009-01-01

    Portuguese firms engage in intense reallocation: most employers simultaneously hire and separate from workers, resulting in a large heterogeneity of flows and excess turnover. Large and older firms have lower flows but high excess turnover rates. In small firms, hires and separations move symmetrically during expansion and contraction periods, whereas large firms adjust their employment levels by reducing entry rather than by increasing separations. Most hires and separations are on fix...

  20. Excess pore water pressure induced in the foundation of a tailings dyke at Muskeg River Mine, Fort McMurray

    Energy Technology Data Exchange (ETDEWEB)

    Eshraghian, A.; Martens, S. [Klohn Crippen Berger Ltd., Calgary, AB (Canada)

    2010-07-01

    This paper discussed the effect of staged construction on the generation and dissipation of excess pore water pressure within the foundation clayey units of the External Tailings Facility dyke. Data were compiled from piezometers installed within the dyke foundation and used to estimate the dissipation parameters for the clayey units for a selected area of the foundation. Spatial and temporal variations in the pore water pressure generation parameters were explained. Understanding the process by which excess pore water pressure is generated and dissipates is critical to optimizing dyke design and performance. Piezometric data was shown to be useful in improving estimates of the construction-induced pore water pressure and dissipation rates within the clay layers in the foundation during dyke construction. In staged construction, a controlled rate of load application is used to increase foundation stability. Excess pore water pressure dissipates after each application, so the most critical stability condition happens after each load. Slow loading allows dissipation, whereas previous load pressure remains during fast loading. The dyke design must account for the rate of loading and the rate of pore pressure dissipation. Controlling the rate of loading and the rate of stress-induced excess pore water pressure generation is important to dyke stability during construction. Effective stress-strength parameters for the foundation require predictions of the pore water pressure induced during staged construction. It was found that both direct and indirect loading generates excess pore water pressure in the foundation clays. 2 refs., 2 tabs., 11 figs.

  1. EEG-derived estimators of present and future cognitive performance

    Directory of Open Access Journals (Sweden)

    Maja eStikic

    2011-08-01

    Full Text Available Previous EEG-based fatigue-related research primarily focused on the association between concurrent cognitive performance and time-locked physiology. The goal of this study was to investigate the capability of EEG to assess the impact of fatigue on both present and future cognitive performance during a 20-min sustained attention task, the 3-Choice Active Vigilance Task (3CVT), which requires subjects to discriminate one primary target from two secondary non-target geometric shapes. The current study demonstrated the ability of EEG to estimate not only present but also future cognitive performance, utilizing a single, combined reaction-time and accuracy performance metric. The correlations between observed and estimated performance, for both present and future performance, were strong (up to 0.89 and 0.79, respectively). The models were able to consistently estimate unacceptable performance throughout the entire 3CVT, i.e., excessively missed responses and/or slow reaction times, while acceptable performance was recognized less accurately later in the task. The developed models were trained on a relatively large dataset (n=50 subjects) to increase stability. Cross-validation results suggested the models were not over-fitted. This study indicates that EEG can be used to predict gross performance degradations 5 to 15 min in advance.

  2. Influenza excess mortality from 1950-2000 in tropical Singapore.

    Directory of Open Access Journals (Sweden)

    Vernon J Lee

    Full Text Available INTRODUCTION: Tropical regions have been shown to exhibit different influenza seasonal patterns compared to their temperate counterparts. However, there is little information about the burden of annual tropical influenza epidemics across time, and the relationship between tropical influenza epidemics and those of other regions. METHODS: Data on monthly national mortality and population were obtained from 1947 to 2003 in Singapore. To determine excess mortality for each month, we used a moving average analysis for each month from 1950 to 2000. From 1972, influenza viral surveillance data were available. Before 1972, information was obtained from serial annual government reports, peer-reviewed journal articles and press articles. RESULTS: The influenza pandemics of 1957 and 1968 resulted in substantial mortality. In addition, there were 20 other time points with significant excess mortality. Of the 12 periods with significant excess mortality post-1972, only one point (1988) did not correspond to recorded influenza activity. Of the 8 periods with significant excess mortality before 1972, excluding the pandemic years, 2 years (1951 and 1953) had newspaper reports of increased pneumonia deaths. Excess mortality could be observed in almost all periods with recorded influenza outbreaks but did not always exceed the 95% confidence limits of the baseline mortality rate. CONCLUSION: Influenza epidemics were the likely cause of most excess mortality periods in post-war tropical Singapore, although not every epidemic resulted in high mortality. It is therefore important to have good influenza surveillance systems in place to detect influenza activity.
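
    A stripped-down version of the moving-average excess-mortality screen can be sketched on an invented monthly series; the window choice and 2-standard-deviation limit are simplifying assumptions, not the study's exact procedure.

```python
import statistics

# Hedged sketch: flag months whose mortality exceeds a moving-average
# baseline by more than ~2 standard deviations (an approximate 95%
# limit). The monthly counts are invented; index 6 carries a spike.

deaths = [100, 102, 98, 101, 99, 103, 150, 101, 100, 97, 102, 99]
window = 5  # months on each side, excluding the month itself

flagged = []
for i, d in enumerate(deaths):
    baseline = [deaths[j]
                for j in range(max(0, i - window),
                               min(len(deaths), i + window + 1))
                if j != i]
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    if d > mu + 2 * sd:
        flagged.append(i)

print("months with significant excess:", flagged)  # the spike at index 6
```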

  3. Accurate geolocation of rfi sources in smos imagery based on superresolution algorithms

    OpenAIRE

    Hyuk, Park; Camps Carmona, Adriano José; González Gambau, Veronica

    2014-01-01

    Accurate geolocation of SMOS RFI sources is very important for effectively switching off illegal emitters in the protected L-band. We present a novel approach for the geolocation of SMOS RFI sources, based on the Direction of Arrival (DOA) estimation techniques normally used in sensor arrays. The MUSIC DOA estimation algorithm is tailored for SMOS RFI source detection. In the test results, the proposed MUSIC method shows improved performance in terms of angular/spatial resolution.
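
    The noise-subspace idea behind MUSIC can be sketched for the simplest case, a uniform linear array with one narrowband source; the array geometry, SNR, and source angle below are invented and unrelated to SMOS's Y-shaped array or the authors' tailored variant.

```python
import numpy as np

# Hedged sketch: 1-D MUSIC on a uniform linear array with
# half-wavelength spacing and one narrowband source. The
# pseudospectrum peaks where the steering vector is (nearly)
# orthogonal to the noise subspace.

rng = np.random.default_rng(0)
m, snapshots = 8, 500          # sensors, time samples
true_deg = 20.0

def steering(theta_deg):
    k = np.pi * np.sin(np.radians(theta_deg))   # d = lambda/2
    return np.exp(1j * k * np.arange(m))

# Simulated snapshots: one complex-Gaussian source plus white noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering(true_deg), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / snapshots            # sample covariance
eigval, eigvec = np.linalg.eigh(R)        # eigenvalues ascending
En = eigvec[:, :-1]                       # noise subspace (1 source)

grid = np.arange(-90.0, 90.0, 0.1)
p = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
est = grid[int(np.argmax(p))]
print(f"estimated DOA: {est:.1f} deg")    # near the true 20 deg
```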

  4. How Dusty Is Alpha Centauri? Excess or Non-excess over the Infrared Photospheres of Main-sequence Stars

    Science.gov (United States)

    Wiegert, J.; Liseau, R.; Thebault, P.; Olofsson, G.; Mora, A.; Bryden, G.; Marshall, J. P.; Eiroa, C.; Montesinos, B.; Ardila, D.; Augereau, J. C.; Aran, A. Bayo; Danchi, W. C.; del Burgo, C.; Ertel, S.; Fridlund, M. C. W.; Hajigholi, M.; Krivov, A. V.; Pilbratt, G. L.; Roberge, A.; White, G. J.; Wolf, S.

    2014-01-01

    Context. Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby, solar-type binary α Centauri have metallicities that are higher than solar, which is thought to promote giant planet formation. Aims. We aim to determine the level of emission from debris around the stars in the α Cen system. This requires knowledge of their photospheres. Having already detected the temperature minimum, Tmin, of α Cen A at far-infrared wavelengths, we here attempt to do the same for the more active companion α Cen B. Using the α Cen stars as templates, we study the possible effects that Tmin may have on the detectability of unresolved dust discs around other stars. Methods. We used Herschel-PACS, Herschel-SPIRE, and APEX-LABOCA photometry to determine the stellar spectral energy distributions in the far infrared and submillimetre. In addition, we used APEX-SHeFI observations for spectral line mapping to study the complex background around α Cen seen in the photometric images. Models of stellar atmospheres and of particulate discs, based on particle simulations and in conjunction with radiative transfer calculations, were used to estimate the amount of debris around these stars. Results. For solar-type stars more distant than α Cen, a fractional dust luminosity f_d = L_dust/L_star of about 2 × 10^-7 could account for SEDs that do not exhibit the Tmin effect. This is comparable to estimates of f_d for the Edgeworth-Kuiper belt of the solar system. In contrast to the far infrared, slight excesses at the 2.5σ level are observed at 24 μm for both α Cen A and B, which, if interpreted as due to zodiacal-type dust emission, would correspond to f_d of about (1-3) × 10^-5, i.e. some 10^2 times that of the local zodiacal cloud. Assuming simple power-law size distributions of the dust grains, dynamical disc modelling leads to rough mass estimates of the putative Zodi belts around the α Cen stars, viz. about 4 × 10^-6 Earth masses of 4 to 1000 μm size grains, distributed according to n(a) ∝ a^-3.5. Similarly, for filled-in Tmin

  5. Accurate free energy calculation along optimized paths.

    Science.gov (United States)

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
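    As background to the path-based methods named above, thermodynamic integration can be sketched on a toy harmonic system where the λ-average is known in closed form, so the numerical λ-integral can be checked against the exact answer. All parameters below are illustrative; this is not the authors' peptide procedure:

    ```python
    # Sketch: thermodynamic integration (TI) on a toy harmonic system.
    # U(x; lam) = 0.5*k(lam)*x^2 with k(lam) = (1-lam)*k0 + lam*k1.
    # For this model <dU/dlam>_lam = (k1 - k0)*kT / (2*k(lam)) analytically,
    # and the exact free energy difference is dF = 0.5*kT*ln(k1/k0).
    import math

    def ti_free_energy(k0, k1, kT=1.0, n=1000):
        """Trapezoidal TI estimate of dF along the linear k(lambda) path."""
        def dU_dlam(lam):
            k = (1.0 - lam) * k0 + lam * k1
            return (k1 - k0) * kT / (2.0 * k)
        h = 1.0 / n
        s = 0.5 * (dU_dlam(0.0) + dU_dlam(1.0))
        s += sum(dU_dlam(i * h) for i in range(1, n))
        return s * h

    exact = 0.5 * math.log(4.0)  # k0 = 1, k1 = 4, kT = 1
    print(ti_free_energy(1.0, 4.0), exact)
    ```

    In a real molecular system the ensemble average must be sampled by simulation at each λ, which is precisely where smooth, short paths pay off.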

  6. Accurate fission data for nuclear safety

    CERN Document Server

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high-precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high-current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron-induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability, a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  7. Association between inaccurate estimation of body size and obesity in schoolchildren

    Directory of Open Access Journals (Sweden)

    Larissa da Cunha Feio Costa

    2015-12-01

    Full Text Available Objectives: To investigate the prevalence of inaccurate estimation of own body size among Brazilian schoolchildren of both sexes aged 7-10 years, and to test whether overweight/obesity; excess body fat and central obesity are associated with inaccuracy. Methods: Accuracy of body size estimation was assessed using the Figure Rating Scale for Brazilian Children. Multinomial logistic regression was used to analyze associations. Results: The overall prevalence of inaccurate body size estimation was 76%, with 34% of the children underestimating their body size and 42% overestimating their body size. Obesity measured by body mass index was associated with underestimation of body size in both sexes, while central obesity was only associated with overestimation of body size among girls. Conclusions: The results of this study suggest there is a high prevalence of inaccurate body size estimation and that inaccurate estimation is associated with obesity. Accurate estimation of own body size is important among obese schoolchildren because it may be the first step towards adopting healthy lifestyle behaviors.

  8. Excess Barium as a Paleoproductivity Proxy: A Reevaluation

    Science.gov (United States)

    Eagle, M.; Paytan, A.

    2001-12-01

    Marine barite may serve as a proxy to reconstruct past export production (Dymond, 1992). In most studies sedimentary barite accumulation is not measured directly, instead a parameter termed excess barium (Baexs), also referred to as biogenic barium, is used to estimate the barite content. Baexs is defined as the total Ba concentration in the sediment minus the Ba associated with terrigenous material. Baexs is calculated by normalization to a constant Ba/Al ratio, typically the average shale ratio. This application assumes that (1) all the Ba besides the fraction associated with terrigenous Al is in the form of barite (the phase related to productivity) (2) the Ba/Alshale is constant in space and time (3) all of the Al is associated with terrigenous matter. If these assumptions are invalidated however, this approach lead to significant errors in calculating export production rates. To test the validity of the use of Baexs as a proxy for barite we compared the Baexs in a wide range of core top sediments from different oceanic settings to the barite content in the same cores. We found that Baexs frequently overestimated the Ba fraction associated with barite and in several cases significant Baexs was measured in the cores where no barite was observed. We have also used a sequential leaching protocol (Collier and Edmond 1984) to determine Ba association with organic matter, carbonates, Fe-Mn hydroxides and silicates. While terrigenous Ba remains an important fraction, in our samples 25-95% of non-barite Ba was derived from other fractions, with Fe-Mn oxides contributing the most Ba. In addition we found that the Ba/Al ratio in the silicate fraction of our samples varied considerably from site to site. The above results suggest that at least two of the underlying assumptions for employing Baexs to reconstruct paleoproductivity are not always valid and previously published data from (Murray and Leinen 1993) indicate that the third assumption may also not hold in every
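    The normalization defined above (Baexs = total Ba minus the terrigenous Ba inferred from Al) is a one-line computation. A sketch, with an assumed shale-type Ba/Al ratio and invented concentrations:

    ```python
    # Sketch of the excess-barium (Baexs) normalization described above.
    # The default Ba/Al ratio is an assumed, literature-style "average shale"
    # constant; the concentrations in the example are invented, in ppm of
    # dry sediment.
    def ba_excess(ba_total_ppm, al_ppm, ba_al_shale=0.0075):
        """Baexs = total Ba minus the terrigenous Ba inferred from Al."""
        return ba_total_ppm - al_ppm * ba_al_shale

    # A sample with 1200 ppm Ba and 80000 ppm Al:
    print(ba_excess(1200.0, 80000.0))  # 1200 - 600 = 600.0 ppm
    ```

    The abstract's point is that this single subtraction silently bundles the three assumptions listed above, and any Ba hosted by Fe-Mn oxides or a non-shale Ba/Al ratio lands directly in the "biogenic" term.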

  9. Excess mortality in the Soviet Union: a reconsideration of the demographic consequences of forced industrialization 1929-1949.

    Science.gov (United States)

    Rosefielde, S

    1983-01-01

    A reconsideration of the extent of excess mortality resulting from the policy of forced industrialization in the USSR between 1929 and 1949 is presented. The study is based on recently published, adjusted serial data on natality in the 1930s and on data from the suppressed census of 1937. These data suggest that excess mortality due to Stalin's policies, including the forced labor camp system, may have involved a minimum of 12.6 million and a maximum of more than 23.5 million deaths. Various alternative estimates using different methods and data sources are compared.

  10. Accurate mass error correction in liquid chromatography time-of-flight mass spectrometry based metabolomics

    NARCIS (Netherlands)

    Mihaleva, V.V.; Vorst, O.F.J.; Maliepaard, C.A.; Verhoeven, H.A.; Vos, de C.H.; Hall, R.D.; Ham, van R.C.H.J.

    2008-01-01

    Compound identification and annotation in (untargeted) metabolomics experiments based on accurate mass require the highest possible accuracy of the mass determination. Experimental LC/TOF-MS platforms equipped with a time-to-digital converter (TDC) give the best mass estimate for those mass signals

  11. Fast and Accurate Computation Tools for Gravitational Waveforms from Binary Systems with any Orbital Eccentricity

    CERN Document Server

    Pierro, V; Spallicci, A D; Laserra, E; Recano, F

    2001-01-01

    The relevance of orbital eccentricity in the detection of gravitational radiation from (steady state) binary stars is emphasized. Computationally effective (fast and accurate) tools for constructing gravitational wave templates from binary stars with any orbital eccentricity are introduced, including tight estimation criteria for the pertinent truncation and approximation errors.

  12. Battery Management Systems: Accurate State-of-Charge Indication for Battery-Powered Applications

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Danilov, D.; Regtien, P.P.L.; Notten, P.H.L.

    2008-01-01

    Battery Management Systems – Universal State-of-Charge indication for portable applications describes the field of State-of-Charge (SoC) indication for rechargeable batteries. With the emergence of battery-powered devices with an increasing number of power-hungry features, accurately estimating the

  13. Are National HFC Inventory Reports Accurate?

    Science.gov (United States)

    Lunt, M. F.; Rigby, M. L.; Ganesan, A.; Manning, A.; O'Doherty, S.; Prinn, R. G.; Saito, T.; Harth, C. M.; Muhle, J.; Weiss, R. F.; Salameh, P.; Arnold, T.; Yokouchi, Y.; Krummel, P. B.; Steele, P.; Fraser, P. J.; Li, S.; Park, S.; Kim, J.; Reimann, S.; Vollmer, M. K.; Lunder, C. R.; Hermansen, O.; Schmidbauer, N.; Young, D.; Simmonds, P. G.

    2014-12-01

    Hydrofluorocarbons (HFCs) were introduced as replacements for ozone depleting chlorinated gases due to their negligible ozone depletion potential. As a result, these potent greenhouse gases are now rapidly increasing in atmospheric mole fraction. However, at present, less than 50% of HFC emissions, as inferred from models combined with atmospheric measurements (top-down methods), can be accounted for by the annual national reports to the United Nations Framework Convention on Climate Change (UNFCCC). There are at least two possible reasons for the discrepancy. Firstly, significant emissions could be originating from countries not required to report to the UNFCCC ("non-Annex 1" countries). Secondly, emissions reports themselves may be subject to inaccuracies. For example the HFC emission factors used in the 'bottom-up' calculation of emissions tend to be technology-specific (refrigeration, air conditioning etc.), but not tuned to the properties of individual HFCs. To provide a new top-down perspective, we inferred emissions using high frequency HFC measurements from the Advanced Global Atmospheric Gases Experiment (AGAGE) and the National Institute for Environmental Studies (NIES) networks. Global and regional emissions information was inferred from these measurements using a coupled Eulerian and Lagrangian system, based on NCAR's MOZART model and the UK Met Office NAME model. Uncertainties in this measurement and modelling framework were investigated using a hierarchical Bayesian inverse method. Global and regional emissions estimates for five of the major HFCs (HFC-134a, HFC-125, HFC-143a, HFC-32, HFC-152a) from 2004-2012 are presented. It was found that, when aggregated, the top-down estimates from Annex 1 countries agreed remarkably well with the reported emissions, suggesting the non-Annex 1 emissions make up the difference with the top-down global estimate. However, when these HFC species are viewed individually we find that emissions of HFC-134a are over

  14. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  15. How flatbed scanners upset accurate film dosimetry.

    Science.gov (United States)

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a change in scanner readout along the lateral scan axis. Although anisotropic light scattering has been presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) with Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range between 0.25 and 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and dose delivered to the film.

  16. Accurate particle position measurement from images

    CERN Document Server

    Feng, Yan; Liu, Bin; 10.1063/1.2735920

    2011-01-01

    The moment method is an image analysis technique for sub-pixel estimation of particle positions. The total error in the calculated particle position includes effects of pixel locking and random noise in each pixel. Pixel locking, also known as peak locking, is an artifact where calculated particle positions are concentrated at certain locations relative to pixel edges. We report simulations to gain an understanding of the sources of error and their dependence on parameters the experimenter can control. We suggest an algorithm, and we find optimal parameters an experimenter can use to minimize total error and pixel locking. Simulating a dusty plasma experiment, we find that a sub-pixel accuracy of 0.017 pixel or better can be attained. These results are also useful for improving particle position measurement and particle tracking velocimetry (PTV) using video microscopy, in fields including colloids, biology, and fluid mechanics.
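    The moment method discussed above is essentially an intensity-weighted centroid. A minimal sketch follows; the background-threshold step and the toy image are illustrative assumptions, not the paper's algorithm or data:

    ```python
    # Sketch: the moment (intensity-weighted centroid) method for sub-pixel
    # particle position estimation. Subtracting a background threshold before
    # weighting is a common noise-suppression step assumed here.
    def centroid(image, threshold=0.0):
        """Return the sub-pixel (x, y) centroid of thresholded intensities."""
        m = mx = my = 0.0
        for y, row in enumerate(image):
            for x, v in enumerate(row):
                w = max(v - threshold, 0.0)
                m += w
                mx += w * x
                my += w * y
        return mx / m, my / m

    # A bright spot peaked symmetrically between pixel columns 1 and 2:
    img = [[0, 1, 1, 0],
           [0, 4, 4, 0],
           [0, 1, 1, 0]]
    print(centroid(img))  # (1.5, 1.0) by symmetry
    ```

    Pixel locking shows up when calculated positions like these cluster near pixel centers or edges; the paper's point is that the threshold and window choices the experimenter controls shift that bias.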

  17. Occurrence of invasive pneumococcal disease and number of excess cases due to influenza

    Directory of Open Access Journals (Sweden)

    Penttinen Pasi

    2006-03-01

    Full Text Available Abstract Background Influenza is characterized by seasonal outbreaks, often with a high rate of morbidity and mortality. It is also known to be a cause of a significant number of secondary bacterial infections. Streptococcus pneumoniae is the main pathogen causing secondary bacterial pneumonia after influenza and subsequently, influenza could participate in acquiring Invasive Pneumococcal Disease (IPD). Methods In this study, we aim to investigate the relation between influenza and IPD by estimating the yearly excess of IPD cases due to influenza. For this purpose, we use influenza periods as an indicator for influenza activity as a risk factor in subsequent analysis. The statistical modeling has been done in two modes. First, we constructed two negative binomial regression models. For each model, we estimated the contribution of influenza in the models, and calculated the number of excess IPD cases. Also, for each model, we investigated several lag time periods between influenza and IPD. Secondly, we constructed an "influenza free" baseline, and calculated differences between the IPD data (observed cases) and the baseline (expected cases), in order to estimate a yearly additional number of IPD cases due to influenza. Both modes were calculated using zero to four weeks lag time. Results The analysis shows a yearly increase of 72–118 IPD cases due to influenza, which corresponds to 6–10% per year or 12–20% per influenza season. Also, a lag time of one to three weeks appears to be of significant importance in the relation between IPD and influenza. Conclusion This epidemiological study confirms the association between influenza and IPD. Furthermore, negative binomial regression models can be used to calculate the number of excess cases of IPD related to influenza.
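    The "observed minus expected" part of the second mode above can be sketched directly. The case counts, baseline, and lag below are invented, and this simple difference does not reproduce the study's negative binomial regression models:

    ```python
    # Sketch: excess IPD cases as observed-minus-baseline counts in weeks
    # that trail influenza activity by a fixed lag. All numbers are invented.
    def excess_cases(observed, baseline, flu_weeks, lag=2):
        """Sum observed-minus-expected cases in weeks a fixed lag after
        a week with recorded influenza activity."""
        lagged = {w + lag for w in flu_weeks}
        return sum(o - b
                   for week, (o, b) in enumerate(zip(observed, baseline))
                   if week in lagged and o > b)

    obs  = [10, 12, 11, 25, 30, 12, 11, 10]   # weekly IPD counts
    base = [11.0] * 8                          # "influenza free" baseline
    print(excess_cases(obs, base, flu_weeks={1, 2}, lag=2))  # 14 + 19 = 33.0
    ```

    Scanning lags of zero to four weeks, as the study did, amounts to repeating this calculation for each lag and comparing the attributable excess.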

  18. The Excess Liquidity of the Open Economy and its Management

    Directory of Open Access Journals (Sweden)

    Yonghong TU

    2011-01-01

    Full Text Available The excess liquidity of the open economy has become a main factor influencing monetary markets, financial markets and even the whole macroeconomy. In the post-crisis era, many countries have implemented loose monetary policies, especially the quantitative easing policy in the U.S., which has worsened the excess-liquidity situation. Against this background, studying the excess liquidity of the open economy and its management is all the more meaningful for the developing countries' economic recovery and development, inflation control, economic structural adjustment and optimization, and social-economic stability. This paper starts with a close study of the related theories of excess liquidity and its transmission mechanisms, and then analyzes the current situation and causes of excess liquidity in the BRICs, taken as representative of the developing countries. It argues that the main cause of excess liquidity in the developing countries lies in the financial system, including loose monetary policies, financial innovation, petrodollars, East Asian dollars, US dollar hegemony, overcapacity, trade supply, savings supply, the surge of foreign exchange reserves, etc. Using impulse response analysis within a VAR model, the paper examines the impact of the global liquidity surge on America, the euro zone, Japan, China, India, Russia, Brazil, etc., and reaches the following conclusions: 1. Global excess liquidity keeps increasing, quickly in the developing countries and slowly in the developed countries. 2. The spillover effect of global excess liquidity spreads mainly through GDP and prices, and for most countries the international factor has more influence on rising prices than the domestic factor; GDP has also been driven to grow fast, which in turn becomes the main driving force of the increase in the quantity of money. 3. The openness and development

  19. Breaking worse: The emergence of krokodil and excessive injuries among people who inject drugs in Eurasia

    OpenAIRE

    Grund, Jean-Paul; Latypov, Alisher; Harris, Magdalena

    2013-01-01

    Background: Krokodil, a homemade injectable opioid, gained its moniker from the excessive harms associated with its use, such as ulcerations, amputations and discolored scale-like skin. While a relatively new phenomenon, krokodil use is prevalent in Russia and the Ukraine, with at least 100,000 and around 20,000 people respectively estimated to have injected the drug in 2011. In this paper we review the existing information on the production and use of krokodil, within the con...

  20. Prenatal programming: adverse cardiac programming by gestational testosterone excess.

    Science.gov (United States)

    Vyas, Arpita K; Hoang, Vanessa; Padmanabhan, Vasantha; Gilbreath, Ebony; Mietelka, Kristy A

    2016-01-01

    Adverse events during the prenatal and early postnatal period of life are associated with development of cardiovascular disease in adulthood. Prenatal exposure to excess testosterone (T) in sheep induces adverse reproductive and metabolic programming leading to polycystic ovarian syndrome, insulin resistance and hypertension in the female offspring. We hypothesized that prenatal T excess disrupts insulin signaling in the cardiac left ventricle leading to adverse cardiac programming. Left ventricular tissues were obtained from 2-year-old female sheep treated prenatally with T or oil (control) from days 30-90 of gestation. Molecular markers of insulin signaling and cardiac hypertrophy were analyzed. Prenatal T excess increased the gene expression of molecular markers involved in insulin signaling and those associated with cardiac hypertrophy and stress including insulin receptor substrate-1 (IRS-1), phosphatidyl inositol-3 kinase (PI3K), Mammalian target of rapamycin complex 1 (mTORC1), nuclear factor of activated T cells -c3 (NFATc3), and brain natriuretic peptide (BNP) compared to controls. Furthermore, prenatal T excess increased the phosphorylation of PI3K, AKT and mTOR. Myocardial disarray (multifocal) and increase in cardiomyocyte diameter was evident on histological investigation in T-treated females. These findings support adverse left ventricular remodeling by prenatal T excess.

  2. On the excess of power in high resolution CMB experiments

    CERN Document Server

    Diego-Rodriguez, J M; Martinez-Gonzalez, E; Silk, J

    2004-01-01

    We revisit the possibility that an excess in the CMB power spectrum at small angular scales (CBI, ACBAR) can be due to galaxy clusters (or compact sources in general). We perform a Gaussian analysis of ACBAR-like simulated data based on wavelets. We show how models with a significant excess should show a clear non-Gaussian signal in the wavelet space. In particular, a value of the normalization sigma_8 = 1 would imply a highly significant skewness and kurtosis in the wavelet coefficients at scales around 3 arcmin. Models with a more moderate excess also show a non-Gaussian signal in the simulated data. We conclude that current data (ACBAR) should show this signature if the excess is to be due to the SZ effect. Otherwise, the reason for that excess should be explained by some systematic effect. The significance of the non-Gaussian signal depends on the cluster model but it grows with the surveyed area. Non-Gaussianity test performed on incoming data sets should reveal the presence of a cluster population even ...

  3. Investigation of excess thyroid cancer incidence in Los Alamos County

    International Nuclear Information System (INIS)

    Los Alamos County (LAC) is home to the Los Alamos National Laboratory, a U.S. Department of Energy (DOE) nuclear research and design facility. In 1991, the DOE funded the New Mexico Department of Health to conduct a review of cancer incidence rates in LAC in response to citizen concerns over what was perceived as a large excess of brain tumors and a possible relationship to radiological contaminants from the Laboratory. The study found no unusual or alarming pattern in the incidence of brain cancer; however, a fourfold excess of thyroid cancer was observed during the late 1980s. A rapid review of the medical records for cases diagnosed between 1986 and 1990 failed to demonstrate that the thyroid cancer excess had resulted from enhanced detection. Surveillance activities subsequently undertaken to monitor the trend revealed that the excess persisted into 1993. A feasibility assessment of further studies was made, and ultimately, an investigation was conducted to document the epidemiologic characteristics of the excess in detail and to explore possible causes through a case-series records review. Findings from the investigation are the subject of this report

  4. Treating both wastewater and excess sludge with an innovative process

    Institute of Scientific and Technical Information of China (English)

    HE Sheng-bing; WANG Bao-zhen; WANG Lin; JIANG Yi-feng

    2003-01-01

    The innovative process consists of a biological unit for wastewater treatment and an ozonation unit for excess sludge treatment. An aerobic membrane bioreactor (MBR) was used to remove organics and nitrogen, and an anaerobic reactor was added to the biological unit for the release of the phosphorus contained in the aerobic sludge, to enhance the removal of phosphorus. The excess sludge produced in the MBR was fed to an ozone contact column, where it reacted with ozone; the ozonated sludge was then returned to the MBR for further biological treatment. Experimental results showed that this process could remove organics, nitrogen and phosphorus efficiently, with removals for COD, NH3-N, TN and TP of 93.17%, 97.57%, 82.77% and 79.5%, respectively. Batch tests were used to determine the specific nitrification rate and specific denitrification rate. Under the test conditions, the sludge concentration in the MBR was kept at 5000-6000 mg/L, and the wasted sludge was ozonated at an ozone dosage of 0.10 kgO3/kgSS. During the experimental period of two months, no excess sludge was wasted, and zero withdrawal of excess sludge was implemented. Through economic analysis, it was found that the additional ozonation operating cost for treatment of both wastewater and excess sludge was only 0.045 RMB Yuan (USD 0.0054)/m3 wastewater.

  5. Waterlike structural and excess entropy anomalies in liquid beryllium fluoride.

    Science.gov (United States)

    Agarwal, Manish; Chakravarty, Charusita

    2007-11-22

    The relationship between structural order metrics and the excess entropy is studied using the transferable rigid ion model (TRIM) of beryllium fluoride melt, which is known to display waterlike thermodynamic anomalies. The order map for liquid BeF2, plotted between translational and tetrahedral order metrics, shows a structurally anomalous regime, similar to that seen in water and silica melt, corresponding to a band of state points for which average tetrahedral (q(tet)) and translational (tau) order are strongly correlated. The tetrahedral order parameter distributions further substantiate the analogous structural properties of BeF2, SiO2, and H2O. A region of excess entropy anomaly can be defined within which the pair correlation contribution to the excess entropy (S2) shows an anomalous rise with isothermal compression. Within this region of anomalous entropy behavior, q(tet) and S2 display a strong negative correlation, indicating the connection between the thermodynamic and the structural anomalies. The existence of this region of excess entropy anomaly must play an important role in determining the existence of diffusional and mobility anomalies, given the excess entropy scaling of transport properties observed in many liquids. PMID:17963376
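    The pair-correlation contribution S2 referred to above has a standard integral form, S2/(N k_B) = -2πρ ∫ [g(r) ln g(r) - g(r) + 1] r² dr. A sketch of its numerical evaluation follows; the g(r) used in the example is a trivial illustrative model, not BeF2 data:

    ```python
    # Sketch: the two-body excess entropy term S2 computed from a radial
    # distribution function g(r) by simple quadrature. The cutoff r_max and
    # the example g(r) are illustrative assumptions.
    import math

    def s2_excess(g_of_r, rho, r_max=10.0, n=10000):
        """Evaluate S2/(N*k_B) = -2*pi*rho * Int [g ln g - g + 1] r^2 dr."""
        h = r_max / n
        total = 0.0
        for i in range(1, n + 1):
            r = i * h
            g = g_of_r(r)
            # g*ln(g) -> 0 as g -> 0, so the integrand tends to 1 there.
            integrand = (g * math.log(g) - g + 1.0) if g > 0 else 1.0
            total += integrand * r * r * h
        return -2.0 * math.pi * rho * total

    # Ideal gas: g(r) = 1 everywhere, so S2 must vanish.
    print(abs(s2_excess(lambda r: 1.0, rho=0.8)))  # 0.0
    ```

    Any structuring of g(r) away from 1 makes the integrand positive, driving S2 negative, which is why an anomalous *rise* of S2 on compression signals the loss of structure discussed above.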

  6. Investigation of excess thyroid cancer incidence in Los Alamos County

    Energy Technology Data Exchange (ETDEWEB)

    Athas, W.F.

    1996-04-01

    Los Alamos County (LAC) is home to the Los Alamos National Laboratory, a U.S. Department of Energy (DOE) nuclear research and design facility. In 1991, the DOE funded the New Mexico Department of Health to conduct a review of cancer incidence rates in LAC in response to citizen concerns over what was perceived as a large excess of brain tumors and a possible relationship to radiological contaminants from the Laboratory. The study found no unusual or alarming pattern in the incidence of brain cancer; however, a fourfold excess of thyroid cancer was observed during the late 1980s. A rapid review of the medical records for cases diagnosed between 1986 and 1990 failed to demonstrate that the thyroid cancer excess had resulted from enhanced detection. Surveillance activities subsequently undertaken to monitor the trend revealed that the excess persisted into 1993. A feasibility assessment of further studies was made, and ultimately, an investigation was conducted to document the epidemiologic characteristics of the excess in detail and to explore possible causes through a case-series records review. Findings from the investigation are the subject of this report.

  7. Distributed Formation State Estimation Algorithms Under Resource and Multi-Tasking Constraints Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Recent work has developed a number of architectures and algorithms for accurately estimating spacecraft and formation states. The estimation accuracy achievable...

  8. Associations Between Excessive Sodium Intake and Smoking and Alcohol Intake Among Korean Men: KNHANES V.

    Science.gov (United States)

    Choi, Kyung-Hwa; Park, Myung-Sook; Kim, Jung Ae; Lim, Ji-Ae

    2015-12-08

    In this study, we evaluated the associations of smoking and alcohol intake, both independently and collectively, with sodium intake in Korean men. Subjects (6340 men) were from the fifth Korean National Health Examination Survey (2010-2012). Smoking-related factors included smoking status, urinary cotinine level, and pack-years of smoking. Food intake was assessed using a 24-h recall. The odds of excessive sodium intake were estimated using survey logistic regression analysis. The smoking rate was 44.1%. The geometric mean of the urinary cotinine level was 0.05 µg/mL, and the median (min-max) pack-years of smoking was 13.2 (0-180). When adjusted for related factors, the odds (95% confidence interval) of excessive sodium intake were 1.54 (1.00, 2.37), 1.55 (1.23, 1.94), 1.44 (1.07, 1.95), and 1.37 (1.11, 1.68) times higher in the group exposed to both smoking and drinking than in the group that never smoked nor drank, the group that never smoked but drank, the group that smoked but never drank, and the group that did not currently smoke or drink, respectively, with an interaction between smoking and alcohol intake (p-interaction = 0.02). The results suggest that simultaneous exposure to smoking and alcohol intake is associated with increased odds of excessive sodium intake.
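
As a minimal sketch of the arithmetic behind such odds ratios (a crude, unadjusted 2x2-table calculation, not the survey-weighted, covariate-adjusted logistic regression the authors used; all counts below are invented for illustration):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio from a 2x2 table with a Wald 95% CI.

    a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases."""
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return orr, lo, hi

# Invented counts: excessive sodium intake among smoker-drinkers vs. never-either
orr, lo, hi = odds_ratio_ci(150, 350, 80, 320)   # OR ~1.71, CI ~(1.26, 2.34)
```

A confidence interval whose lower bound exceeds 1 (as for three of the four groups quoted above) indicates a statistically significant elevation in odds.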

  9. Searching for IR excesses in Sun-like stars observed by WISE

    CERN Document Server

    de Miera, Fernando Cruz-Saenz; Bertone, Emanuele; Vega, Olga

    2013-01-01

    We present the results of a search for infrared excess candidates in a comprehensive (29,000 stars) magnitude-limited sample of dwarf stars, spanning the spectral range F2-K0 and brighter than V = 15 mag. We searched the sample within the WISE all-sky survey database for objects within 1 arcsecond of the coordinates provided by the SIMBAD database and found over 9,000 sources detected in all WISE bands. This latter sample excludes objects that are flagged as extended sources and those images which are affected by various optical artifacts. For each detected object, we compared the observed W4/W2 (22 µm/4.6 µm) flux ratio with the expected photospheric value and identified 197 excess candidates at 3σ. For the vast majority of candidates, the results of this analysis represent the first reported evidence of an IR excess. Through the comparison with a simple black-body emission model, we derive estimates of the dust temperature, as well as of the dust fractional luminosities. For more than...

  10. BMI predicts emotion-driven impulsivity and cognitive inflexibility in adolescents with excess weight.

    Science.gov (United States)

    Delgado-Rico, Elena; Río-Valle, Jacqueline S; González-Jiménez, Emilio; Campoy, Cristina; Verdejo-García, Antonio

    2012-08-01

    Adolescent obesity is increasingly viewed as a brain-related dysfunction, whereby reward-driven urges for pleasurable foods "hijack" response selection systems, such that behavioral control progressively shifts from impulsivity to compulsivity. In this study, we aimed to examine the link between personality factors (sensitivity to reward (SR) and punishment (SP)), BMI, and outcome measures of impulsivity vs. flexibility in otherwise healthy adolescents with excess weight. Sixty-three adolescents (aged 12-17) classified as obese (n = 26), overweight (n = 16), or normal weight (n = 21) participated in the study. We used psychometric assessments of the SR and SP motivational systems, impulsivity (using the UPPS-P scale), and neurocognitive measures with discriminant validity to dissociate inhibition vs. flexibility deficits (using the process-approach version of the Stroop test). We tested the relative contribution of age, SR/SP, and BMI on estimates of impulsivity and inhibition vs. switching performance using multistep hierarchical regression models. BMI significantly predicted elevations in emotion-driven impulsivity (positive and negative urgency) and inferior flexibility performance in adolescents with excess weight, exceeding the predictive capacity of SR and SP. SR was the main predictor of elevations in sensation seeking and lack of premeditation. These findings demonstrate that increases in BMI are specifically associated with elevations in emotion-driven impulsivity and cognitive inflexibility, supporting a dimensional path in which adolescents with excess weight become more prone to overindulge when under strong affective states, and have greater difficulty switching or reversing habitual behavioral patterns. PMID:22421897

  11. Permanent demand excess as business strategy: an analysis of the Brazilian higher-education market

    Directory of Open Access Journals (Sweden)

    Rodrigo Menon Simões Moita

    2015-03-01

    Full Text Available Many Higher Education Institutions (HEIs) establish tuition below the equilibrium price to generate permanent excess demand. This paper first adapts Becker's (1991) theory to understand why HEIs price in this way. The fact that students are both consumers and inputs in the education production process gives rise to a market equilibrium where some firms have excess demand and charge high prices, while others charge low prices and have empty seats. Second, the paper analyzes this equilibrium empirically. We estimated the demand for undergraduate courses in Business Administration in the State of São Paulo. The results show that tuition, quality of incoming students, and the percentage of lecturers holding doctorate degrees are the determining factors in students' choice. Since student quality determines the demand for an HEI, we calculate the value to an HEI of attracting better students; that is, the total revenue each HEI gives up to guarantee excess demand. Regarding this "investment" in selectivity, 39 HEIs in São Paulo give up a combined R$ 5 million (or US$ 3.14 million) in revenue per year per freshman class, which corresponds to 7.6% of the revenue coming from a freshman class.

  12. Determination of ethyl glucuronide in hair to assess excessive alcohol consumption in a student population.

    Science.gov (United States)

    Oppolzer, David; Barroso, Mário; Gallardo, Eugenia

    2016-03-01

    Hair analysis for ethyl glucuronide (EtG) was used to evaluate the pattern of alcohol consumption amongst the Portuguese university student population. A total of 975 samples were analysed. For data interpretation, the 2014 guidelines from the Society of Hair Testing (SoHT) for the use of alcohol markers in hair for the assessment of both abstinence and chronic excessive alcohol consumption were considered. EtG concentrations were significantly higher in the male population. The effect of hair products and cosmetics was evaluated by analysis of variance (ANOVA), and significantly lower concentrations were obtained when conditioner or hair mask was used or when hair was dyed. Based on the analytical data and information obtained in the questionnaires from the participants, receiver operating characteristic (ROC) curves were constructed in order to determine the ideal cut-offs for our study population. Optimal cut-off values were estimated at 7.3 pg/mg for abstinence or rare occasional drinking control and 29.8 pg/mg for excessive consumption. These values are very close to the values suggested by the SoHT, proving their adequacy for the studied population. Overall, the obtained EtG concentrations demonstrate that participants are usually well aware of their consumption pattern, correlating with the self-reported consumed alcohol quantity, consumption habits and excessive consumption close to the time of hair sampling. PMID:26537927
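
A common way to derive such ROC cut-offs is to maximize Youden's J statistic (sensitivity + specificity − 1) over candidate thresholds. The sketch below illustrates the idea on invented EtG values; the authors' exact ROC procedure is not specified in the abstract:

```python
import numpy as np

def youden_cutoff(values, positive):
    """Pick the cut-off maximising Youden's J = sensitivity + specificity - 1.

    values: marker concentrations; positive: True where the reference
    standard (here, self-reported excessive drinking) is positive."""
    values = np.asarray(values, dtype=float)
    positive = np.asarray(positive, dtype=bool)
    best_j, best_cut = -1.0, None
    for cut in np.unique(values):          # every observed value is a candidate
        pred = values >= cut
        sens = (pred & positive).sum() / positive.sum()
        spec = (~pred & ~positive).sum() / (~positive).sum()
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Made-up EtG values (pg/mg); excessive drinkers tend to sit higher
etg = [2, 5, 8, 12, 25, 31, 40, 55]
drinker = [False, False, False, False, True, True, True, True]
cut, j = youden_cutoff(etg, drinker)   # perfectly separable toy data: cut = 25.0
```

On real, overlapping distributions J stays below 1 and the chosen cut-off trades sensitivity against specificity.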

  13. Financial Instability - a Result of Excess Liquidity or Credit Cycles?

    DEFF Research Database (Denmark)

    Heebøll-Christensen, Christian

    This paper compares the financial destabilizing effects of excess liquidity versus credit growth, in relation to house price bubbles and real economic booms. The analysis uses a cointegrated VAR model based on US data from 1987 to 2010, with a particular focus on the period preceding the global...... financial crisis. Consistent with monetarist theory, the results suggest a stable money supply-demand relation in the period in question. However, the implied excess liquidity only resulted in financially destabilizing effects after year 2000. Meanwhile, the results also point to persistent cycles of real...... house prices and leverage, which appear to have been driven by real credit shocks, in accordance with post-Keynesian theories on financial instability. Importantly, however, these mechanisms of credit growth and excess liquidity are found to be closely related. In regards to the global financial crisis...

  14. Excess body weight during pregnancy and offspring obesity: potential mechanisms.

    Science.gov (United States)

    Paliy, Oleg; Piyathilake, Chandrika J; Kozyrskyj, Anita; Celep, Gulcin; Marotta, Francesco; Rastmanesh, Reza

    2014-03-01

    The rates of child and adult obesity have increased in most developed countries over the past several decades. The health consequences of obesity affect both physical and mental health, and the excess body weight can be linked to an elevated risk for developing type 2 diabetes, cardiovascular problems, and depression. Among the factors that can influence the development of obesity are higher infant weights and increased weight gain, which are associated with higher risk for excess body weight later in life. In turn, mother's excess body weight during and after pregnancy can be linked to the risk for offspring overweight and obesity through dietary habits, mode of delivery and feeding, breast milk composition, and through the influence on infant gut microbiota. This review considers current knowledge of these potential mechanisms that threaten to create an intergenerational cycle of obesity.

  15. Internet Addiction and Excessive Social Networks Use: What About Facebook?

    Science.gov (United States)

    Guedes, Eduardo; Sancassiani, Federica; Carta, Mauro Giovani; Campos, Carlos; Machado, Sergio; King, Anna Lucia Spear; Nardi, Antonio Egidio

    2016-01-01

    Facebook is notably the most widely known and used social network worldwide. It has been described as a valuable tool for leisure and communication between people all over the world. However, healthy and conscious Facebook use is contrasted by excessive use and lack of control, creating an addiction that severely impacts the everyday life of many users, mainly youths. While Facebook use seems to be related to the need to belong, to affiliate with others, and to self-presentation, the onset of excessive Facebook use and addiction could be associated with reward and gratification mechanisms as well as some personality traits. Studies from several countries indicate different Facebook addiction prevalence rates, mainly due to the use of a wide range of evaluation instruments and to the lack of a clear and valid definition of this construct. Further investigations are needed to establish whether excessive Facebook use can be considered a specific online addiction disorder or an Internet addiction subtype. PMID:27418940

  16. Spitzer Surveys of IR Excesses of White Dwarfs

    CERN Document Server

    Chu, Y -H; Bilíkovà, J; Riddle, A; Su, K Y -L

    2010-01-01

    IR excesses of white dwarfs (WDs) can be used to diagnose the presence of low-mass companions, planets, and circumstellar dust. Using different combinations of wavelengths and WD temperatures, circumstellar dust at different radial distances can be surveyed. The Spitzer Space Telescope has been used to search for IR excesses of white dwarfs. Two types of circumstellar dust disks have been found: (1) small disks around cool WDs, and (2) large disks around hot WDs with Teff of order 100,000 K. The small dust disks are within the Roche limit, and are commonly accepted to have originated from tidally crushed asteroids. The large dust disks, at tens of AU from the central WDs, have been suggested to be produced by increased collisions among Kuiper Belt-like objects. In this paper, we discuss Spitzer IRAC surveys of small dust disks around cool WDs, a MIPS survey of large dust disks around hot WDs, and an archival Spitzer survey of IR excesses of WDs.

  18. The 750 GeV diphoton excess and SUSY

    CERN Document Server

    Heinemeyer, S

    2016-01-01

    The LHC experiments ATLAS and CMS have reported an excess in the diphoton spectrum at ~750 GeV. At the same time the motivation for Supersymmetry (SUSY) remains unbowed. Consequently, we review briefly the proposals to explain this excess in SUSY, focusing on "pure" (N)MSSM solutions. We then review in more detail a proposal to realize this excess within the NMSSM. In this particular scenario a Higgs boson with mass around 750 GeV decays to two light pseudo-scalar Higgs bosons. Via mixing with the pion these pseudo-scalars decay into a pair of highly collimated photons, which are identified as one photon, thus resulting in the observed signal.

  19. Classification of excessive domestic water consumption using Fuzzy Clustering Method

    Science.gov (United States)

    Zairi Zaidi, A.; Rasmani, Khairul A.

    2016-08-01

    Demand for clean and treated water is increasing all over the world. Therefore it is crucial to conserve water for better use and to avoid unnecessary, excessive consumption or wastage of this natural resource. Classification of excessive domestic water consumption is a difficult task due to the complexity in determining the amount of water usage per activity, especially as the data is known to vary between individuals. In this study, classification of excessive domestic water consumption is carried out using a well-known Fuzzy C-Means (FCM) clustering algorithm. Consumer data containing information on daily, weekly and monthly domestic water usage was employed for the purpose of classification. Using the same dataset, the result produced by the FCM clustering algorithm is compared with the result obtained from a statistical control chart. The finding of this study demonstrates the potential use of the FCM clustering algorithm for the classification of domestic consumer water consumption data.
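
A minimal version of the Fuzzy C-Means algorithm used for this kind of classification might look as follows. This is a generic NumPy sketch with invented daily-usage figures, not the authors' implementation:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain Fuzzy C-Means: returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m                              # fuzzified memberships
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U

# Hypothetical daily water use (litres): a low-use and a high-use group
usage = np.array([[120.0], [130.0], [125.0], [410.0], [430.0], [395.0]])
centres, U = fuzzy_c_means(usage, c=2)
labels = U.argmax(axis=1)   # hard assignment; U itself gives graded membership
```

Unlike a control chart, the membership matrix U lets a consumer be, say, 0.7 "excessive" and 0.3 "normal", which suits the gradual nature of consumption data.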

  1. Cause-specific excess mortality in siblings of patients co-infected with HIV and hepatitis C virus

    DEFF Research Database (Denmark)

    Hansen, Ann-Brit Eg; Lohse, Nicolai; Gerstoft, Jan;

    2007-01-01

    BACKGROUND: Co-infection with hepatitis C in HIV-infected individuals is associated with 3- to 4-fold higher mortality among these patients' siblings, compared with siblings of mono-infected HIV-patients or population controls. This indicates that risk factors shared by family members partially...... account for the excess mortality of HIV/HCV-co-infected patients. We aimed to explore the causes of death contributing to the excess sibling mortality. METHODOLOGY AND PRINCIPAL FINDINGS: We retrieved causes of death from the Danish National Registry of Deaths and estimated cause-specific excess mortality...... rates (EMR) for siblings of HIV/HCV-co-infected individuals (n = 436) and siblings of HIV mono-infected individuals (n = 1837) compared with siblings of population controls (n = 281,221). Siblings of HIV/HCV-co-infected individuals had an all-cause EMR of 3.03 (95% CI, 1.56-4.50) per 1,000 person...

  2. A Multi-Scale Approach to Directional Field Estimation

    OpenAIRE

    Bazen, Asker M.; Bouman, Niek J.; Veldhuis, Raymond N. J.

    2004-01-01

    This paper proposes a robust method for directional field estimation from fingerprint images that combines estimates at multiple scales. The method is able to provide accurate estimates in scratchy regions, while at the same time maintaining correct estimates around singular points. Compared to other methods, the penalty for detecting false singular points is much smaller, because this does not deteriorate the directional field estimate.
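
The core of gradient-based directional field estimation, which multi-scale schemes such as this build on, is averaging squared (doubled-angle) gradients per block so that opposite gradient vectors reinforce rather than cancel. A generic single-scale sketch (not the paper's exact multi-scale combination):

```python
import numpy as np

def directional_field(image, block=8):
    """Estimate local ridge orientation by averaging squared gradients per block."""
    gy, gx = np.gradient(image.astype(float))    # derivatives along rows, columns
    # doubled-angle representation: squaring makes opposite gradients add up
    gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
    h, w = image.shape
    angles = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            s = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            num = 2.0 * gxy[s].sum()
            den = (gxx[s] - gyy[s]).sum()
            angles[i, j] = 0.5 * np.arctan2(num, den)   # dominant gradient angle
    return angles + np.pi / 2    # ridge orientation is orthogonal to the gradient

# Demo: horizontal stripes (rows vary, columns constant) -> orientation ~0 mod pi
stripes = np.tile(np.sin(0.8 * np.arange(32))[:, None], (1, 32))
orientations = directional_field(stripes)
```

A multi-scale method would repeat this at several block sizes and blend the results, keeping the large-scale smoothness in scratchy regions and the small-scale detail near singular points.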

  3. Origin of the Lyman excess in early-type stars

    Science.gov (United States)

    Cesaroni, R.; Sánchez-Monge, Á.; Beltrán, M. T.; Molinari, S.; Olmi, L.; Treviño-Morales, S. P.

    2016-04-01

    Context. Ionized regions around early-type stars are believed to be well-known objects, but until recently, our knowledge of the relation between the free-free radio emission and the IR emission has been observationally hindered by the limited angular resolution in the far-IR. The advent of Herschel has now made it possible to obtain a more precise comparison between the two regimes, and it has been found that about a third of the young H ii regions emit more Lyman continuum photons than expected, thus presenting a Lyman excess. Aims: With the present study we wish to distinguish between two scenarios that have been proposed to explain the existence of the Lyman excess: (i) underestimation of the bolometric luminosity, or (ii) additional emission of Lyman-continuum photons from an accretion shock. Methods: We observed an outflow (SiO) and an infall (HCO+) tracer toward a complete sample of 200 H ii regions, 67 of which present the Lyman excess. Our goal was to search for any systematic difference between sources with Lyman excess and those without. Results: While the outflow tracer does not reveal any significant difference between the two subsamples of H ii regions, the infall tracer indicates that the Lyman-excess sources are more associated with infall signposts than the other objects. Conclusions: Our findings indicate that the most plausible explanation for the Lyman excess is that in addition to the Lyman continuum emission from the early-type star, UV photons are emitted from accretion shocks in the stellar neighborhood. This result suggests that high-mass stars and/or stellar clusters containing young massive stars may continue to accrete for a long time, even after the development of a compact H ii region. Based on observations carried out with the IRAM 30 m Telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).

  4. Excessive recreational computer use and food consumption behaviour among adolescents

    Directory of Open Access Journals (Sweden)

    Mao Yuping

    2010-08-01

    Full Text Available Abstract Introduction: Using the 2005 California Health Interview Survey (CHIS) data, we explore the association between excessive recreational computer use and specific food consumption behavior among California's adolescents aged 12-17. Method: The adolescent component of CHIS 2005 measured the respondents' average number of hours spent viewing TV on a weekday, viewing TV on a weekend day, playing with a computer on a weekday, and playing with a computer on a weekend day. We recode these four continuous variables into four variables of "excessive media use," defining more than three hours of using a medium per day as "excessive." These four variables are then used in logistic regressions to predict different food consumption behaviors on the previous day: having fast food, eating sugary food more than once, drinking sugary drinks more than once, and eating more than five servings of fruits and vegetables. We use the following variables as covariates in the logistic regressions: age, gender, race/ethnicity, parental education, household poverty status, whether born in the U.S., and whether living with two parents. Results: Having fast food on the previous day is associated with excessive weekday TV viewing (O.R. = 1.38). Conclusion: Excessive recreational computer use independently predicts undesirable eating behaviors that could lead to overweight and obesity. Preventive measures ranging from parental/youth counseling to content regulations might address the potential undesirable influence of excessive computer use on eating behaviors among children and adolescents.

  5. Modeling regional and gender impacts of the 2003 summer heatwave in excessive mortality in Portugal

    Science.gov (United States)

    Ramos, Alexandre M.; Trigo, Ricardo M.; Nogueira, Paulo J.; Santos, Filipe D.; Garcia-Herrera, Ricardo; Gouveia, Célia; Santo, Fátima E.

    2010-05-01

    This work evaluates the impact of the 2003 European heatwave on excess human mortality in Portugal, a country with a relatively high level of exposure to heatwave events. To estimate the fortnight expected mortality per district between 30 July and 15 August, we used five distinct baseline periods of mortality. We opted for the period spanning 2000 to 2004, as it corresponds to a good compromise between a relatively long period (to guarantee some stability) and a sufficiently short period (to guarantee the similarity of the underlying population structure). Our findings show a total of 2399 excess deaths in continental Portugal, implying an increase of 58% over the expected deaths for those two weeks. When these values are split by gender, the increase recorded for women (79%) was considerably higher than that for men (41%). The increment of mortality due to this heatwave was detected in all 18 districts of the country, but its magnitude was significantly higher in the inner districts close to the Spanish border. When we split the regional impact by gender, all districts reveal significant mortality increments for women, while the impact on men's excess deaths is not significant in 3 districts. Several temperature-derived indices were evaluated for their capacity to explain, at the regional level, the excess mortality (ratio between observed and expected deaths) by gender. The best relationship was found for the total exceedance of extreme days, an index combining the length of the heatwave and its intensity. Both variables hold a linear relationship, with r = 0.79 for women and a poorer adjustment (r = 0.50) for men.
Additionally, availability of mortality data split by age also allowed obtaining detailed information on the structure of the population in risk, namely by showing that statistically significant increments are concentrated in the last three age classes (45-64, 65-74 and
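
The observed-to-expected mortality ratio used in this kind of study is simple arithmetic; the headline figures quoted in the abstract can be reconstructed directly:

```python
# Reconstructing the arithmetic behind the abstract's headline figures
excess_deaths = 2399.0    # estimated excess deaths (from the abstract)
increase = 0.58           # 58% above expectation (from the abstract)

expected_deaths = excess_deaths / increase    # implied fortnight baseline
observed_deaths = expected_deaths + excess_deaths
ratio = observed_deaths / expected_deaths     # observed/expected mortality ratio
# ratio = 1.58, i.e. the 58% increase quoted in the text
```

The same ratio computed per district and per gender is what the temperature indices are regressed against.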

  6. Estimating Influenza Hospitalizations among Children

    OpenAIRE

    Grijalva, Carlos G.; Craig, Allen S.; DUPONT, William D.; Bridges, Carolyn B.; Schrag, Stephanie J.; Iwane, Marika K.; Schaffner, William; Edwards, Kathryn M.; Griffin, Marie R.

    2006-01-01

    Although influenza causes more hospitalizations and deaths among American children than any other vaccine-preventable disease, deriving accurate population-based estimates of disease impact is challenging. Using 2 independent surveillance systems, we performed a capture-recapture analysis to estimate influenza-associated hospitalizations in children in Davidson County, Tennessee, during the 2003–2004 influenza season. The New Vaccine Surveillance Network (NVSN) enrolled children hospitalized ...
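
Two-source capture-recapture analyses of this kind are often summarized by Chapman's nearly unbiased estimator; a sketch with invented counts (not the Davidson County figures):

```python
def chapman_estimate(n1, n2, m):
    """Chapman's two-source capture-recapture estimator of total case count.

    n1, n2: cases found by each surveillance system; m: cases found by both.
    Assumes the two systems capture cases independently."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Illustrative counts only: system A finds 40 hospitalizations,
# system B finds 30, and 20 children appear in both lists.
total = chapman_estimate(40, 30, 20)
```

The gap between the estimated total and the union of the two lists (40 + 30 − 20 = 50 here) is the number of hospitalizations both systems are presumed to have missed.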

  7. Interpreting 750 GeV diphoton excess in plain NMSSM

    Science.gov (United States)

    Badziak, Marcin; Olechowski, Marek; Pokorski, Stefan; Sakurai, Kazuki

    2016-09-01

    NMSSM has enough ingredients to explain the diphoton excess at 750 GeV: a singlet-like (pseudo)scalar a, and higgsinos as heavy vector-like fermions. We consider the production of the 750 GeV singlet-like pseudoscalar a from a decay of the doublet-like pseudoscalar A, and the subsequent decay of a into two photons via a higgsino loop. We demonstrate that this cascade decay of the NMSSM Higgs bosons can explain the diphoton excess at 750 GeV.

  8. Enhancing excess sludge aerobic digestion with low intensity ultrasound

    Institute of Scientific and Technical Information of China (English)

    DING Wen-chuan; LI Dong-xue; ZENG Xiao-lan; LONG Teng-rui

    2006-01-01

    In order to enhance the efficiency of aerobic digestion, the excess sludge was irradiated by low intensity ultrasound at a frequency of 28 kHz and an acoustic intensity of 0.53 W/cm2. The results show that sludge stabilization without ultrasonic treatment is achieved after 17 d of digestion, whereas the digestion time of the ultrasonic groups can be cut by 3-7 d. For the same digestion time, the total volatile suspended solid removal rate in the ultrasonic groups is higher than that in the control group. The kinetics of aerobic digestion of excess sludge with ultrasound can also be described by a first-order reaction.
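
First-order kinetics means the volatile suspended solids decay exponentially, VSS(t) = VSS0·exp(−kt), with ultrasound effectively raising the rate constant k. A sketch with hypothetical values (the abstract reports no rate constants):

```python
import math

def vss_remaining(vss0, k, t):
    """First-order aerobic digestion: VSS(t) = vss0 * exp(-k * t)."""
    return vss0 * math.exp(-k * t)

# Hypothetical rate constants (1/day); ultrasound pre-treatment assumed to raise k
k_control, k_ultrasound = 0.05, 0.08
vss0 = 10.0    # g/L initial volatile suspended solids (also hypothetical)

removal_control = 1 - vss_remaining(vss0, k_control, 17) / vss0
removal_ultrasound = 1 - vss_remaining(vss0, k_ultrasound, 17) / vss0
# a larger k reaches a given removal target in fewer days, which is how a
# 3-7 d reduction in digestion time would show up in this model
```

Fitting k is a matter of regressing ln(VSS(t)/VSS0) against t for each treatment group.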

  9. Prevalence of excessive screen time and associated factors in adolescents

    OpenAIRE

    Joana Marcela Sales de Lucena; Luanna Alexandra Cheng; Thaísa Leite Mafaldo Cavalcante; Vanessa Araújo da Silva; José Cazuza de Farias Júnior

    2015-01-01

    Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study of 2874 high school adolescents aged 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyze...

  10. Solving Excess Water Production Problems in Productive Formation

    Directory of Open Access Journals (Sweden)

    Kozyrev Ilya

    2016-01-01

    Full Text Available Petroleum is one of the important resources of the Russian Federation's national economy. Water shut-off techniques are used in oilfields to avoid massive water production. We describe a technology for solving excess water production problems, focusing on a new gel-based fluid that can be effectively applied for water shutoff. We study the effect of the gel-based fluid experimentally to show the feasibility of its treatment in the near-wellbore region to solve the excess water production problem.

  11. ATLAS on-Z Excess Through Vector-Like Quarks

    CERN Document Server

    Endo, Motoi

    2016-01-01

    We investigate the possibility that the excess observed in the leptonic-Z + jets + missing transverse energy ATLAS SUSY search is due to pair production of a vector-like quark U decaying to first-generation quarks and a Z boson. We find that the excess can be explained within the 2σ (up to 1.4σ) level while evading the constraints from the other LHC searches. The preferred ranges of the mass and branching ratio are around 610 GeV and 0.3-0.45, respectively.

  12. Cholesterol homeostasis: How do cells sense sterol excess?

    Science.gov (United States)

    Howe, Vicky; Sharpe, Laura J; Alexopoulos, Stephanie J; Kunze, Sarah V; Chua, Ngee Kiat; Li, Dianfan; Brown, Andrew J

    2016-09-01

    Cholesterol is vital in mammals, but toxic in excess. Consequently, elaborate molecular mechanisms have evolved to maintain this sterol within narrow limits. How cells sense excess cholesterol is an intriguing area of research. Cells sense cholesterol, and other related sterols such as oxysterols or cholesterol synthesis intermediates, and respond to changing levels through several elegant mechanisms of feedback regulation. Cholesterol sensing involves both direct binding of sterols to the homeostatic machinery located in the endoplasmic reticulum (ER), and indirect effects elicited by sterol-dependent alteration of the physical properties of membranes. Here, we examine the mechanisms employed by cells to maintain cholesterol homeostasis. PMID:26993747

  13. Development of star tracker system for accurate estimation of spacecraft attitude

    OpenAIRE

    Tappe, Jack A.

    2009-01-01

    Approved for public release, distribution unlimited This thesis researches different star pattern recognition and attitude determination algorithms for a three-axis rotational spacecraft. A simulated star field will be suspended above the experimental Three-Axis Spacecraft simulator to provide a reference for the star-pattern recognition algorithms. A star field inertial reference frame database of stars will be developed with the simulator at zero attitude. The angle, planar triangle an...

  14. Does consideration of larger study areas yield more accurate estimates of air pollution health effects?

    DEFF Research Database (Denmark)

    Pedersen, Marie; Siroux, Valérie; Pin, Isabelle;

    2013-01-01

    were performed to identify the best strategy to limit confounding by unmeasured factors varying with area type. We examined the relation between modeled concentrations and respiratory health in infants using regression models with and without adjustment or interaction terms with area type. RESULTS......: Simulations indicated that adjustment for area limited the bias due to unmeasured confounders varying with area at the costs of a slight decrease in statistical power. In our cohort, rural and urban areas differed for air pollution levels and for many factors associated with respiratory health and exposure...

  15. How to accurately estimate BH masses of AGN with double-peaked emission lines

    Directory of Open Access Journals (Sweden)

    Xue Guang Zhang

    2008-01-01

    Full Text Available We present a new relation for determining the virial mass of the central black hole in Active Galactic Nuclei with double-peaked profiles in the low-ionization broad emission lines. We discuss which parameter is appropriate for estimating the local velocity of the emitting regions, and the relation for estimating the distance of these regions from the ionizing source. We selected 17 objects with double-peaked profiles from the SDSS that have measurable absorption lines, determined their black hole masses via the velocity-dispersion method, and compared these with our virial mass determinations. We confirm a previous result (Zhang, Dultzin-Hacyan, & Wang 2007): the relations calibrated for "normal" BLRs are not appropriate for determining virial masses in the case of double-peaked broad lines.
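    The virial method referenced in this record rests on the standard textbook estimator (not the paper's specific calibration):

    ```latex
    M_{\rm BH} \;=\; f\,\frac{R_{\rm BLR}\,v^{2}}{G}
    ```

    where $R_{\rm BLR}$ is the distance of the line-emitting region from the ionizing source, $v$ is a measure of the local gas velocity (typically taken from the line width), and $f$ is an order-unity factor encoding the unknown geometry and kinematics of the region. The abstract's point is that the usual choices of $v$ and $R_{\rm BLR}$, calibrated on "normal" broad-line regions, fail for double-peaked emitters.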

  16. Eddy covariance observations of methane and nitrous oxide emissions: Towards more accurate estimates from ecosystems

    NARCIS (Netherlands)

    Kroon-van Loon, P.S.

    2010-01-01

    About 30% of the increased greenhouse gas (GHG) emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are related to land use changes and agricultural activities. In order to select effective measures, knowledge is required about GHG emissions from these ecosystems and how these e

  17. Interbank Market Structure and Accurate Estimation of an Aggregate Liquidity Shock

    OpenAIRE

    Isakov, A.

    2013-01-01

    It is customary among money market analysts to blame interest rate deviations from the Bank of Russia's target band on market structure imperfections or segmentation. We isolate one form of such market imperfection and illustrate its potential impact on the efficiency of the central bank's open market operations under the current monetary policy framework. We then hypothesize that naive, (market) structure-agnostic liquidity gap aggregation will lead to market demand underestimation in so...

  18. Estimating tropical forest biomass more accurately by integrating ALOS PALSAR and Landsat - 7 ETM+ data

    NARCIS (Netherlands)

    Basuki, T.M.; Skidmore, A.K.; Hussin, Y.A.; Duren, van I.C.

    2013-01-01

    Integration of multisensor data provides the opportunity to explore benefits emanating from different data sources. A fusion between fraction images derived from spectral mixture analysis of Landsat-7 ETM+ and phased array L-band synthetic aperture radar (PALSAR) is introduced. The aim of this fusio

  19. SCREENING TO IDENTIFY AND PREVENT URBAN STORM WATER PROBLEMS: ESTIMATING IMPERVIOUS AREA ACCURATELY AND INEXPENSIVELY

    Science.gov (United States)

    Complete identification and eventual prevention of urban water quality problems pose significant monitoring, "smart growth" and water quality management challenges. Uncontrolled increase of impervious surface area (roads, buildings, and parking lots) causes detrimental hydrologi...

  20. Excess cardiovascular mortality associated with cold spells in the Czech Republic

    Directory of Open Access Journals (Sweden)

    Kyncl Jan

    2009-01-01

    Full Text Available Abstract Background The association between cardiovascular mortality and winter cold spells was evaluated in the population of the Czech Republic over the 21-yr period 1986–2006. No comprehensive study on cold-related mortality in central Europe has been carried out, despite the fact that cold air invasions are more frequent and severe in this region than in western and southern Europe. Methods Cold spells were defined as periods of days on which air temperature does not exceed -3.5°C. Days on which mortality was affected by epidemics of influenza/acute respiratory infections were identified and omitted from the analysis. Excess cardiovascular mortality was determined after the long-term changes and the seasonal cycle in mortality had been removed. Excess mortality during and after cold spells was examined in individual age groups and genders. Results Cold spells were associated with positive mean excess cardiovascular mortality in all age groups (25–59, 60–69, 70–79 and 80+ years and in both men and women. The relative mortality effects were most pronounced and most direct in middle-aged men (25–59 years, which contrasts with the majority of studies on cold-related mortality in other regions. The estimated excess mortality during the severe cold spells in January 1987 (+274 cardiovascular deaths is comparable to that attributed to the most severe heat wave in this region, in 1994. Conclusion The results show that cold stress has a considerable impact on mortality in central Europe, representing a public health threat of an importance similar to that of heat waves. The elevated mortality risks in men aged 25–59 years may be related to occupational exposure of the large numbers of men working outdoors in winter. Early warnings and preventive measures based on weather forecasts and targeted at the susceptible parts of the population may help mitigate the effects of cold spells and save lives.
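    The cold-spell definition and excess-mortality calculation described in the Methods can be sketched as follows (the temperature series and death counts are illustrative; in the actual study the baseline is obtained after removing long-term trend, seasonal cycle, and influenza epidemic days):

    ```python
    import numpy as np

    def cold_spell_mask(temps, threshold=-3.5, min_days=1):
        """Flag days belonging to cold spells: runs of consecutive days
        whose air temperature does not exceed the threshold (deg C)."""
        below = np.asarray(temps) <= threshold
        mask = np.zeros_like(below)
        i, n = 0, len(below)
        while i < n:
            if below[i]:
                j = i
                while j < n and below[j]:
                    j += 1
                if j - i >= min_days:   # keep only sufficiently long runs
                    mask[i:j] = True
                i = j
            else:
                i += 1
        return mask

    def excess_mortality(deaths, baseline):
        """Excess deaths = observed counts minus the expected baseline."""
        return np.asarray(deaths) - np.asarray(baseline)

    # one week of daily mean temperatures (deg C), illustrative only
    temps = [-5.0, -4.2, -1.0, -6.0, -6.5, -4.0, 2.0]
    spell_days = cold_spell_mask(temps, min_days=2)
    ```

    Summing `excess_mortality` over the days flagged by `cold_spell_mask` (and a lagged window after them) gives the kind of cold-spell excess figure quoted in the abstract.
    
    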