WorldWideScience

Sample records for accurately estimate excess

  1. Accurate pose estimation for forensic identification

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  2. Accurate estimation of indoor travel times

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    We present the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes and travel times, together with their respective likelihoods, both for routes traveled and for sub-routes thereof. InTraTime allows the user to specify temporal and other query parameters, such as time-of-day, day-of-week, or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include a minimal-effort setup and self-improving operation due to unsupervised learning, as it is able to adapt implicitly to factors influencing indoor travel times such as elevators, revolving doors, or changes in building layout. We evaluate and compare the proposed InTraTime method to indoor adaptions...

  3. Excess functions and estimation of the extreme-value index

    Beirlant, Jan; Vynckier, Petra; Teugels, Josef L.

    1996-01-01

    A general class of estimators of the extreme-value index is generated using estimates of mean, median and trimmed excess functions. Special cases yield earlier proposals in the literature, such as Pickands' (1975) estimator. A particular restatement of the mean excess function yields an estimator which can be derived from the slope at the right upper tail from a generalized quantile plot. From this viewpoint algorithms can be constructed to search for the number of extremes needed to minimize...
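
    As a concrete illustration of the classical special case mentioned above, the following sketch (a toy under stated assumptions, not the authors' generalized class) computes an empirical mean excess function and the Pickands (1975) estimator from descending order statistics:

```python
import numpy as np

def mean_excess(x, u):
    """Empirical mean excess function e(u) = mean(x - u | x > u)."""
    exceed = x[x > u]
    return np.mean(exceed - u)

def pickands(x, k):
    """Classical Pickands (1975) estimator of the extreme-value index,
    built from the descending order statistics X_(k), X_(2k), X_(4k)."""
    xs = np.sort(x)[::-1]                 # descending order statistics
    num = xs[k - 1] - xs[2 * k - 1]
    den = xs[2 * k - 1] - xs[4 * k - 1]
    return np.log(num / den) / np.log(2.0)

x = np.arange(1.0, 101.0)                 # evenly spaced (uniform-type tail)
print(mean_excess(x, 50.0))               # 25.5
print(pickands(x, 10))                    # -1.0, the index of a uniform-type tail
```

    In practice one would plot the estimate over a range of k and look for a stable region, which is the number-of-extremes selection problem the abstract refers to.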

  4. Accurate estimator of correlations between asynchronous signals

    Toth, Bence; Kertesz, Janos

    2008-01-01

    The estimation of the correlation between time series is often hampered by the asynchronicity of the signals. Cumulating data within a time window suppresses this source of noise but weakens the statistics. We present a method to estimate correlations without applying long time windows. We decompose the correlations of data cumulated over a long window using decay of lagged correlations as calculated from short window data. This increases the accuracy of the estimated correlation significantly...

  5. Accurate hydrocarbon estimates attained with radioactive isotope

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  6. Star Position Estimation Improvements for Accurate Star Tracker Attitude Estimation

    Delabie, Tjorven

    2015-01-01

    This paper presents several methods to improve the estimation of the star positions in a star tracker, using a Kalman Filter. The accuracy with which the star positions can be estimated greatly influences the accuracy of the star tracker attitude estimate. In this paper, a Kalman Filter with low computational complexity that can be used to estimate the star positions based on star tracker centroiding data and gyroscope data is discussed. The performance of this Kalman Filter can be increased...
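
    A minimal scalar sketch of the underlying idea, fusing gyroscope-propagated star motion with noisy centroiding measurements, might look as follows; the state model, noise variances, and drift rate are illustrative assumptions, not the paper's filter:

```python
import numpy as np

def kalman_star_track(z, rate, dt, q=1e-4, r=1.0):
    """Scalar Kalman filter: state = star centroid position (pixels).
    Predict with the gyro-derived drift rate, then update with each
    noisy centroid measurement z[k]; q, r are process/measurement variances."""
    x, p = z[0], r                      # initialize from the first measurement
    out = []
    for zk in z:
        # predict: propagate the star position using the gyro rate
        x += rate * dt
        p += q
        # update: blend the prediction with the centroid measurement
        k_gain = p / (p + r)
        x += k_gain * (zk - x)
        p *= (1.0 - k_gain)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
n, dt, rate = 400, 0.1, 0.5                  # 0.5 px/s drift from the gyro
true = 10.0 + rate * dt * np.arange(n)
meas = true + rng.normal(0.0, 1.0, n)        # 1 px centroiding noise
est = kalman_star_track(meas, rate, dt)
print(np.std(meas - true), np.std(est - true))  # the filtered error is smaller
```
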

  7. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponents is presented. Finally, we give some examples to verify the feasibility of our result.

  8. Accurate Parameter Estimation for Unbalanced Three-Phase System

    Yuan Chen; Hing Cheung So

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newt...
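
    The αβ-transformation step can be sketched as follows (the amplitude-invariant Clarke transformation; the NLS estimator itself is not shown, and the sample values are illustrative):

```python
import math

def clarke_alpha_beta(a, b, c):
    """Amplitude-invariant Clarke (alpha-beta) transformation of three
    instantaneous phase values into an orthogonal signal pair."""
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (b - c)
    return alpha, beta

# A balanced three-phase sample with amplitude 2.0 and phase 0.3 rad
A, theta = 2.0, 0.3
a = A * math.cos(theta)
b = A * math.cos(theta - 2.0 * math.pi / 3.0)
c = A * math.cos(theta + 2.0 * math.pi / 3.0)
alpha, beta = clarke_alpha_beta(a, b, c)
print(math.hypot(alpha, beta))   # amplitude: 2.0
print(math.atan2(beta, alpha))   # phase: 0.3
```

    For a balanced system the pair (alpha, beta) traces a circle, so amplitude and phase fall out directly; frequency can then be estimated from the phase progression across samples.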

  9. Accurate quantum state estimation via "Keeping the experimentalist honest"

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  10. Efficient and Accurate Robustness Estimation for Large Complex Networks

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computation efforts, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
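
    For contrast with the paper's sub-quadratic algorithm (not reproduced here), a baseline metric-induced robustness estimate, removing nodes in degree order and averaging the largest-component fraction after each removal, can be sketched as:

```python
from collections import defaultdict

def lcc_size(nodes, adj):
    """Size of the largest connected component, by depth-first search."""
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        comp, stack = 0, [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def robustness(edges):
    """Degree-attack robustness: remove nodes by initial degree and
    average the largest-component fraction after each removal."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    n = len(nodes)
    order = sorted(nodes, key=lambda u: (-len(adj[u]), u))  # hubs first
    total = 0.0
    for u in order:
        nodes.discard(u)
        total += lcc_size(nodes, adj) / n
    return total / n

star = [(0, i) for i in range(1, 5)]                  # hub 0 with 4 leaves
k5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]
print(robustness(star), robustness(k5))               # the star is more fragile
```

    This brute-force baseline is quadratic in the network size once component sizes are recomputed per removal, which is exactly the cost the paper's algorithm avoids.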

  11. Accurate pose estimation using single marker single camera calibration system

    Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

    Visual marker based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and pre-evaluation of pose-estimation errors, making the method offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose a method to accurately model the error in the estimated pose and translation of a camera using a single marker via an online method based on the Scaled Unscented Transform (SUT). Thus, the pose of each marker can be estimated with highly accurate calibration results, independent of the order of image sequences, compared to cases when this knowledge is not used. This removes the need for having multiple markers and an offline estimation system to calculate camera pose in an AR application.

  12. Evaluation of accurate eye corner detection methods for gaze estimation

    Bengoechea, Jose Javier; Cerrolaza, Juan J.; Villanueva, Arantxa; Cabeza, Rafael

    2014-01-01

    Accurate detection of iris center and eye corners appears to be a promising approach for low cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance and feature based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We have demonstrated that a method based on a neural network presents the best performance even in light changin...

  13. Fast and accurate estimation for astrophysical problems in large databases

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems

  14. Towards SI-traceable radio occultation excess phase processing with integrated uncertainty estimation for climate applications

    Innerkofler, Josef; Pock, Christian; Kirchengast, Gottfried; Schwaerz, Marc; Jaeggi, Adrian; Schwarz, Jakob

    2016-04-01

    The GNSS Radio Occultation (RO) measurement technique is highly valuable for climate monitoring of the atmosphere as it provides accurate and precise measurements in the troposphere and stratosphere regions with global coverage, long-term stability, and virtually all-weather capability. The novel Reference Occultation Processing System (rOPS), currently under development at the WEGC at University of Graz aims to process raw RO measurements into essential climate variables, such as temperature, pressure, and tropospheric water vapor, in a way which is SI-traceable to the universal time standard and which includes rigorous uncertainty propagation. As part of this rOPS climate-quality processing system, accurate atmospheric excess phase profiles with new approaches integrating uncertainty propagation are derived from the raw occultation tracking data and orbit data. Regarding the latter, highly accurate orbit positions and velocities of the GNSS transmitter satellites and the RO receiver satellites in low Earth orbit (LEO) need to be determined, in order to enable high accuracy of the excess phase profiles. Using several representative test days of GPS orbit data from the CODE and IGS archives, which are available at accuracies of about 3 cm (position) / 0.03 mm/s (velocity), and employing Bernese 5.2 and Napeos 3.3.1 software packages for the LEO orbit determination of the CHAMP, GRACE, and MetOp RO satellites, we achieved robust SI-traced LEO orbit uncertainty estimates of about 5 cm (position) / 0.05 mm/s (velocity) for the daily orbits, including estimates of systematic uncertainty bounds and of propagated random uncertainties. For COSMIC RO satellites, we found decreased accuracy estimates near 10-15 cm (position) / 0.1-0.15 mm/s (velocity), since the characteristics of the small COSMIC satellite platforms and antennas provide somewhat less favorable orbit determination conditions. 
We present the setup of how we (I) used the Bernese and Napeos packages in mutual...

  15. Accurate location estimation of moving object In Wireless Sensor network

    Vinay Bhaskar Semwal

    2011-12-01

    One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of saving data; an accurate estimation of the target location must be achieved under an energy constraint. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive algorithm, which reduces the number of sensor nodes used through an accurate estimation of the target location. We have shown that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent will be initiated to track the roaming path of the object.

  16. How utilities can achieve more accurate decommissioning cost estimates

    The number of commercial nuclear power plants that are undergoing decommissioning coupled with the economic pressure of deregulation has increased the focus on adequate funding for decommissioning. The introduction of spent-fuel storage and disposal of low-level radioactive waste into the cost analysis places even greater concern as to the accuracy of the fund calculation basis. The size and adequacy of the decommissioning fund have also played a major part in the negotiations for transfer of plant ownership. For all of these reasons, it is important that the operating plant owner reduce the margin of error in the preparation of decommissioning cost estimates. To date, all of these estimates have been prepared via the building block method. That is, numerous individual calculations defining the planning, engineering, removal, and disposal of plant systems and structures are performed. These activity costs are supplemented by the period-dependent costs reflecting the administration, control, licensing, and permitting of the program. This method will continue to be used in the foreseeable future until adequate performance data are available. The accuracy of the activity cost calculation is directly related to the accuracy of the inventory of plant system components, piping and equipment, and plant structural composition. Typically, it is left up to the cost-estimating contractor to develop this plant inventory. The data are generated by searching and analyzing property asset records, plant databases, piping and instrumentation drawings, piping system isometric drawings, and component assembly drawings. However, experience has shown that these sources may not be up to date, discrepancies may exist, there may be missing data, and the level of detail may not be sufficient. Again, typically, the time constraints associated with the development of the cost estimate preclude perfect resolution of the inventory questions. Another problem area in achieving accurate cost...

  17. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Gagandeep Singh

    2013-05-01

    This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting-based approach, in which every pixel votes, is adopted for potential eye center candidates. The votes are distributed over a subset of pixels which lie in the direction opposite to the gradient direction, and the weightage of votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using the Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses, and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
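
    As a rough illustration of the voting idea (not the authors' full pipeline, which adds Canny edge maps, weighted votes, morphological filtering, and a classifier), the following sketch casts unweighted votes from strong-gradient pixels along the direction opposite to the gradient and takes the accumulator peak as the eye center; the synthetic image and all parameter values are illustrative assumptions:

```python
import numpy as np

def eye_center_votes(img, steps=15, grad_thresh=0.1):
    """Cast votes from strong-gradient (edge) pixels along the direction
    opposite to the image gradient; the accumulator peak is taken as the
    eye center (a dark disc makes gradients point outward, votes inward)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(mag)
    h, w = img.shape
    ys, xs = np.nonzero(mag > grad_thresh)
    for y, x in zip(ys, xs):
        dy, dx = -gy[y, x] / mag[y, x], -gx[y, x] / mag[y, x]
        for r in range(1, steps + 1):
            vy, vx = int(round(y + r * dy)), int(round(x + r * dx))
            if 0 <= vy < h and 0 <= vx < w:
                acc[vy, vx] += 1.0
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic eye: a dark iris disc on a bright background
yy, xx = np.mgrid[0:60, 0:60]
img = np.ones((60, 60))
img[(yy - 30) ** 2 + (xx - 25) ** 2 <= 10 ** 2] = 0.0
print(eye_center_votes(img))  # near (30, 25)
```
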

  18. Using inpainting to construct accurate cut-sky CMB estimators

    Gruetjen, H F; Liguori, M; Shellard, E P S

    2015-01-01

    The direct evaluation of manifestly optimal, cut-sky CMB power spectrum and bispectrum estimators is numerically very costly, due to the presence of inverse-covariance filtering operations. This justifies the investigation of alternative approaches. In this work, we mostly focus on an inpainting algorithm that was introduced in recent CMB analyses to cure cut-sky suboptimalities of bispectrum estimators. First, we show that inpainting can equally be applied to the problem of unbiased estimation of power spectra. We then compare the performance of a novel inpainted CMB temperature power spectrum estimator to the popular apodised pseudo-$C_l$ (PCL) method and demonstrate, both numerically and with analytic arguments, that inpainted power spectrum estimates significantly outperform PCL estimates. Finally, we study the case of cut-sky bispectrum estimators, comparing the performance of three different approaches: inpainting, apodisation and a novel low-l leaning scheme. Providing an analytic argument why the loca...

  19. Analytical estimation of control rod shadowing effect for excess reactivity measurement of HTTR

    The fuel addition method is generally used for the excess reactivity measurement of the initial core. The control rod shadowing effect for the excess reactivity measurement has been estimated analytically for the High Temperature Engineering Test Reactor (HTTR). Three-dimensional whole-core analyses were carried out, and the movements of control rods in the measurements were simulated in the calculation. It was made clear that the value of excess reactivity depends strongly on the combination of measuring control rods and compensating control rods. The differences in excess reactivity between combinations come from the control rod shadowing effect. The shadowing effect is reduced by using multiple measuring and compensating control rods, which prevents their deep insertion into the core. The measured excess reactivity in the experiments is, however, smaller than the estimated value with the shadowing effect. (author)

  20. Efficient and Accurate Path Cost Estimation Using Trajectory Data

    Dai, Jian; Yang, Bin; Guo, Chenjuan; Jensen, Christian S.

    2015-01-01

    Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more ...

  1. Accurate walking and running speed estimation using wrist inertial data.

    Bertschi, M; Celka, P; Delgado-Gonzalo, R; Lemay, M; Calvo, E M; Grossenbacher, O; Renevey, Ph

    2015-08-01

    In this work, we present an accelerometry-based device for robust running speed estimation integrated into a watch-like device. The estimation is based on inertial data processing, which consists of applying a leg-and-arm dynamic motion model to 3D accelerometer signals. This motion model requires a calibration procedure that can be done either over a known distance or during a constant-speed period. The protocol includes walking and running speeds between 1.8 km/h and 19.8 km/h. Preliminary results based on eleven subjects are characterized by unbiased estimations with 2nd and 3rd quartiles of the relative error dispersion in the interval ±5%. These results are comparable to accuracies obtained with classical foot pod devices. PMID:26738169

  2. Accurate determination of phase arrival times using autoregressive likelihood estimation

    G. Kvaerna

    1994-01-01

    We have investigated the potential automatic use of an onset picker based on autoregressive likelihood estimation. Both a single component version and a three component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P-phases ...

  3. Techniques of HRV accurate estimation using a photoplethysmographic sensor

    Álvarez Gómez, Laura

    2015-01-01

    The student will obtain a database by measuring, in at least 20 volunteers, 50 minutes of two-channel ECG, distal pulse at the finger, and breathing while listening to music. After the measurements, the student will develop algorithms to ascertain the proper processing of the pulse signal for estimating heart rate variability as compared to that obtained with the ECG. The effect of breathing on the errors will be assessed. In order to facilitate the study of the heart rate variability (HRV)...

  4. Accurate tempo estimation based on harmonic + noise decomposition

    Bertrand David

    2007-01-01

    We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as tempo. Our proposal is validated using a manually annotated test base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
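
    The periodicity estimation step can be approximated with plain autocorrelation of an accent envelope; this sketch omits the paper's harmonic + noise decomposition and multipath dynamic programming, and the envelope is synthetic:

```python
import numpy as np

def tempo_from_envelope(env, fs, bpm_range=(40, 240)):
    """Pick the tempo as the strongest periodicity (autocorrelation peak)
    of a musical-accent envelope within a plausible BPM range."""
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]  # lags >= 0
    lo = int(round(60.0 * fs / bpm_range[1]))   # shortest beat period
    hi = int(round(60.0 * fs / bpm_range[0]))   # longest beat period
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return 60.0 * fs / lag

fs = 100.0                           # envelope sample rate (Hz)
env = np.zeros(1000)
env[::50] = 1.0                      # accents every 0.5 s -> 120 BPM
print(tempo_from_envelope(env, fs))  # 120.0
```

    Real accent envelopes are noisy and quasi-periodic, which is why the paper tracks consistent period candidates over time instead of trusting a single global peak.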

  5. Accurate determination of phase arrival times using autoregressive likelihood estimation

    G. Kvaerna

    1994-06-01

    We have investigated the potential automatic use of an onset picker based on autoregressive likelihood estimation. Both a single component version and a three component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P-phases and 0.15-0.20 s for S-phases. These accuracies are as good as for analyst picks, and are considerably better than the accuracies of the current onset procedure used for processing of regional array data at NORSAR. In another application, we have developed a generic procedure to reestimate the onsets of all types of first arriving P phases. By again applying the autoregressive likelihood technique, we have obtained automatic onset times of a quality such that 70% of the automatic picks are within 0.1 s of the best manual pick. For the onset time procedure currently used at NORSAR, the corresponding number is 28%. Clearly, automatic reestimation of first arriving P onsets using the autoregressive likelihood technique has the potential of significantly reducing the retiming efforts of the analyst.
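
    A common simplification of such likelihood-based onset picking (the autoregressive model reduced to plain Gaussian variance, so this is illustrative rather than Kvaerna's exact method) picks the split point that minimizes a two-segment AIC:

```python
import numpy as np

def aic_onset(x):
    """Maximum-likelihood changepoint pick: for each split k, model the
    two segments as Gaussian noise of different variance and minimize
    AIC(k) = k*log(var(x[:k])) + (n-k)*log(var(x[k:]))."""
    n = len(x)
    ks = np.arange(20, n - 20)            # keep both segments non-trivial
    aic = np.array([k * np.log(np.var(x[:k])) +
                    (n - k) * np.log(np.var(x[k:])) for k in ks])
    return ks[np.argmin(aic)]

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 0.1, 200)         # pre-onset background noise
signal = rng.normal(0.0, 1.0, 200)        # post-onset signal coda
pick = aic_onset(np.concatenate([noise, signal]))
print(pick)                               # close to the true onset at 200
```
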

  6. Reconciliation of excess 14C-constrained global CO2 piston velocity estimates

    Naegler, Tobias

    2011-01-01

    Oceanic excess radiocarbon data is widely used as a constraint for air–sea gas exchange. However, recent estimates of the global mean piston velocity 〈k〉 from Naegler et al., Krakauer et al., Sweeney et al. and Müller et al. differ substantially despite the fact that they all are based on excess radiocarbon data from the GLODAP data base. Here I show that these estimates of 〈k〉 can be reconciled if first, the changing oceanic radiocarbon inventory due to net uptake of CO2 is taken into account...

  7. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  8. Accurate location estimation of moving object with energy constraint & adaptive update algorithms to save data

    Semwal, Vijay Bhaskar; Bhaskar, Vinay S; Sati, Meenakshi

    2011-01-01

    In the research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data", one of the central issues in sensor networks is tracking the location of a moving object, which carries the overhead of saving data; an accurate estimation of the target location must be achieved under an energy constraint. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive algorithm, which reduces the number of sensor nodes used through an accurate estimation of the target location. We use a minimum of three sensor nodes to obtain an accurate position, and can extend this to four or five to find a more accurate location ...

  9. Reconciliation of excess 14C-constrained global CO2 piston velocity estimates

    Naegler, Tobias

    2009-04-01

    Oceanic excess radiocarbon data is widely used as a constraint for air-sea gas exchange. However, recent estimates of the global mean piston velocity 〈k〉 from Naegler et al., Krakauer et al., Sweeney et al. and Müller et al. differ substantially despite the fact that they all are based on excess radiocarbon data from the GLODAP data base. Here I show that these estimates of 〈k〉 can be reconciled if first, the changing oceanic radiocarbon inventory due to net uptake of CO2 is taken into account; second, if realistic reconstructions of sea surface Δ14C are used; and third, if 〈k〉 is consistently reported with or without normalization to a Schmidt number of 660. These corrections applied, unnormalized estimates of 〈k〉 from these studies range between 15.1 and 18.2 cm h-1. However, none of these estimates can be regarded as the only correct value for 〈k〉. I thus propose to use the 'average' of the corrected values of 〈k〉 presented here (16.5 +/- 3.2 cm h-1) as the best available estimate of the global mean unnormalized piston velocity, resulting in a gross ocean-to-atmosphere CO2 flux of 76 +/- 15 Pg C yr-1 for the mid-1990s.
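
    The Schmidt-number normalization mentioned above can be made concrete with the standard k ~ Sc^(-1/2) scaling (the convention is assumed here, and the numbers are illustrative rather than taken from the paper):

```python
import math

def k_to_660(k, sc):
    """Normalize a piston velocity measured at Schmidt number `sc` to the
    reference Sc = 660, assuming the standard k ~ Sc^(-1/2) scaling."""
    return k * math.sqrt(sc / 660.0)

print(k_to_660(16.5, 660.0))    # 16.5 (already at the reference)
print(k_to_660(10.0, 2640.0))   # 20.0 (Sc four times larger doubles k660)
```

    Comparing studies that report 〈k〉 with and without this normalization on a common footing is exactly the third correction the abstract describes.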

  10. 49 CFR Appendix G to Part 222 - Excess Risk Estimates for Public Highway-Rail Grade Crossings

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Excess Risk Estimates for Public Highway-Rail Grade Crossings G Appendix G to Part 222 Transportation Other Regulations Relating to Transportation... HIGHWAY-RAIL GRADE CROSSINGS Pt. 222, App. G Appendix G to Part 222—Excess Risk Estimates for...

  11. Revised estimates of influenza-associated excess mortality, United States, 1995 through 2005

    Hossain Md Monir

    2008-12-01

    Full Text Available Abstract Background Excess mortality due to seasonal influenza is thought to be substantial. However, influenza may often not be recognized as the cause of death, so imputation methods are required to assess the public health impact of influenza. The purpose of this study was to obtain estimates of monthly excess mortality due to influenza that are based on an epidemiologically meaningful model. Methods and Results U.S. monthly all-cause mortality, 1995 through 2005, was hierarchically modeled as a Poisson variable with a mean that depends linearly both on seasonal covariates and on influenza-certified mortality. The model also allowed for overdispersion to account for extra variation that is not captured by the Poisson error. The coefficient associated with influenza-certified mortality was interpreted as the ratio of total influenza mortality to influenza-certified mortality. Separate models were fitted for four age categories (…). Conclusion Annual estimates of influenza mortality were highly variable from year to year, but they were systematically lower than previously published estimates. The excellent fit of our model with the data suggests the validity of our estimates.

  12. Extended Kalman Filter with a Fuzzy Method for Accurate Battery Pack State of Charge Estimation

    Saeed Sepasi; Leon R. Roose; Marc M. Matsuura

    2015-01-01

    As the world moves toward greenhouse gas reduction, there is increasingly active work around Li-ion chemistry-based batteries as an energy source for electric vehicles (EVs), hybrid electric vehicles (HEVs) and smart grids. In these applications, the battery management system (BMS) requires an accurate online estimation of the state of charge (SOC) in a battery pack. This estimation is difficult, especially after substantial battery aging. In order to address this problem, this paper utilizes...

  13. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Wicke, Jason; Dumas, Geneviève A

    2014-06-01

    Segment estimates of mass, center of mass and moment of inertia are required input parameters for analyzing the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the ellipses' axes were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
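The stacked-ellipse construction in this record lends itself to a short sketch. The following is illustrative only (hypothetical semi-axes and densities, plain full ellipses without the sectioning the authors use at adjoining segments):

```python
import math

def segment_mass_and_com(half_widths, half_depths, densities, dh):
    """Mass and center-of-mass height of a limb segment modeled as a stack of
    elliptical slices: the frontal photo gives semi-axis a_i, the sagittal
    photo gives semi-axis b_i, and each slice carries its own density to
    mimic a non-uniform density profile."""
    mass = 0.0
    moment = 0.0
    for i, (a, b, rho) in enumerate(zip(half_widths, half_depths, densities)):
        v = math.pi * a * b * dh      # elliptical slice volume
        m = rho * v
        z = (i + 0.5) * dh            # slice mid-height above the segment end
        mass += m
        moment += m * z
    return mass, moment / mass
```

For equal semi-axes and uniform density the stack reduces to a cylinder, which makes the output easy to check by hand.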

  14. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy. PMID:26605696

  15. Eddy covariance observations of methane and nitrous oxide emissions. Towards more accurate estimates from ecosystems

    About 30% of the increased greenhouse gas (GHG) emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are related to land use changes and agricultural activities. In order to select effective measures, knowledge is required about GHG emissions from these ecosystems and how these emissions are influenced by management and meteorological conditions. Accurate emission values are therefore needed for all three GHGs to compile the full GHG balance. However, the current annual estimates of CH4 and N2O emissions from ecosystems have significant uncertainties, even larger than 50%. The present study showed that an advanced technique, the micrometeorological eddy covariance flux technique, can obtain more accurate estimates, with uncertainties even smaller than 10%. The current regional and global trace gas flux estimates of CH4 and N2O are possibly seriously underestimated due to incorrect measurement procedures. Accurate measurements of both gases are particularly important since together they can contribute more than two-thirds of the total GHG emission. For example, the total GHG emission of a dairy farm site was estimated at 16 x 10^3 kg ha-1 yr-1 in CO2-equivalents, of which 25% and 45% were contributed by CH4 and N2O, respectively. About 60% of the CH4 emission was emitted by ditches and their bordering edges. These emissions are not yet included in the national inventory reports. We recommend including these emissions in coming reports.

  16. Simple, Fast and Accurate Photometric Estimation of Specific Star Formation Rate

    Stensbo-Smidt, Kristoffer; Igel, Christian; Zirm, Andrew; Pedersen, Kim Steenstrup

    2015-01-01

    Large-scale surveys make huge amounts of photometric data available. Because of the sheer number of objects, spectral data cannot be obtained for all of them. It is therefore important to devise techniques for reliably estimating physical properties of objects from photometric information alone. These estimates are needed to automatically identify interesting objects worth a follow-up investigation as well as to produce the required data for a statistical analysis of the space covered by a survey. We argue that machine learning techniques are suitable to compute these estimates accurately and efficiently. This study considers the task of estimating the specific star formation rate (sSFR) of galaxies. It is shown that a nearest neighbours algorithm can produce better sSFR estimates than traditional SED fitting. We show that we can obtain accurate estimates of the sSFR even at high redshifts using only broad-band photometry based on the u, g, r, i and z filters from the Sloan Digital Sky Survey (SDSS). We additionally ...

  17. ACCURATE LOCATION ESTIMATION OF MOVING OBJECT WITH ENERGY CONSTRAINT & ADAPTIVE UPDATE ALGORITHMS TO SAVE DATA

    Vijay Bhaskar Semwal

    2011-08-01

    Full Text Available In the research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data", one of the central issues in sensor networks is tracking the location of a moving object, which carries the overhead of saving data, while estimating the target location accurately under an energy constraint. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive algorithm, which reduces the number of sensor nodes needed through an accurate estimation of the target location. We use a minimum of three sensor nodes to obtain an accurate position; this could be extended to four or five nodes to find a more accurate location, but under the energy constraint we use three together with an accurate estimation of location, which helps us reduce the number of sensor nodes. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object. The agent is mobile since it chooses the sensor closest to the object to stay on. The agent may invite some nearby slave sensors to cooperatively position the object and inhibit other irrelevant (i.e., farther) sensors from tracking the object. As a result, the communication and sensing overheads are greatly reduced.
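The three-node positioning step described above is, at its core, trilateration. A minimal noise-free sketch (my own helper, not the authors' protocol; real, noisy range measurements would call for a least-squares treatment):

```python
def trilaterate(p1, p2, p3, r1, r2, r3):
    """Planar position of a target from three sensors' (x, y) locations and
    their measured ranges. Subtracting the circle equations pairwise removes
    the quadratic terms and leaves a 2x2 linear system, solved by Cramer's
    rule. Assumes the sensors are not collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

This also illustrates why three nodes is the minimum the paper cites: two ranges leave a two-point ambiguity, and the third range resolves it.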

  18. Accurate DOA Estimations Using Microstrip Adaptive Arrays in the Presence of Mutual Coupling Effect

    Qiulin Huang

    2013-01-01

    Full Text Available A new mutual coupling calibration method is proposed for adaptive antenna arrays and is employed in DOA estimations to calibrate the received signals. The new method is developed via the transformation between the embedded element patterns and the isolated element patterns. It is characterized by its wide adaptability to element structures such as dipole arrays and microstrip arrays. Additionally, the new method is suitable not only for linear polarization but also for circular polarization. It is shown that accurate calibration of the mutual coupling can be obtained for incident signals within the 3 dB beam width and over a wider angle range, and, consequently, accurate 1D and 2D DOA estimations can be obtained. The effectiveness of the new calibration method is verified by a linearly polarized microstrip ULA, a circularly polarized microstrip ULA, and a circularly polarized microstrip UCA.

  19. Accurate state and parameter estimation in nonlinear systems with sparse observations

    Rey, Daniel; Eldridge, Michael; Kostuk, Mark [Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0374 (United States); Abarbanel, Henry D.I., E-mail: habarbanel@ucsd.edu [Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0374 (United States); Marine Physical Laboratory, Scripps Institution of Oceanography, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0374 (United States); Schumann-Bischoff, Jan [Max Planck Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077 Göttingen (Germany); Institute for Nonlinear Dynamics, Georg-August-Universität Göttingen, Am Faßberg 17, 37077 Göttingen (Germany); Parlitz, Ulrich, E-mail: ulrich.parlitz@ds.mpg.de [Max Planck Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077 Göttingen (Germany); Institute for Nonlinear Dynamics, Georg-August-Universität Göttingen, Am Faßberg 17, 37077 Göttingen (Germany)

    2014-02-01

    Transferring information from observations to models of complex systems may meet impediments when the number of observations at any observation time is not sufficient. This is especially so when chaotic behavior is expressed. We show how to use time-delay embedding, familiar from nonlinear dynamics, to provide the information required to obtain accurate state and parameter estimates. Good estimates of parameters and unobserved states are necessary for good predictions of the future state of a model system. This method may be critical in allowing the understanding of prediction in complex systems as varied as nervous systems and weather prediction where insufficient measurements are typical.

  20. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Beomsoo Hwang

    2015-04-01

    Full Text Available In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated with 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions.

  1. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. The proposed method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. It first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and dc component are obtained by minimizing the least square (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or a number of iterations, an accurate frequency estimate can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated against different methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimate follows the tendency of the CRLB as the SNR increases, even for a small number of samples. The average RMSE of the frequency estimate is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
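The two ingredients of the DGSSA idea (a three-parameter least-squares sine fit at a fixed trial frequency, and a golden-section search that refines the frequency within a bracket) can be sketched as below. The bracket is assumed given here; in the paper it comes from the three FFT samples, which this sketch omits:

```python
import math

def solve3(m, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    aug = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(3):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [aug[r][3] / aug[r][r] for r in range(3)]

def sine_fit_residual(t, x, freq):
    """Three-parameter LS sine fit A*cos + B*sin + C at a fixed trial
    frequency, via the normal equations; returns (params, sum sq. residual)."""
    cols = [[math.cos(2 * math.pi * freq * ti) for ti in t],
            [math.sin(2 * math.pi * freq * ti) for ti in t],
            [1.0] * len(t)]
    g = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    h = [sum(c * xi for c, xi in zip(cols[i], x)) for i in range(3)]
    p = solve3(g, h)
    resid = sum((xi - sum(p[k] * cols[k][n] for k in range(3))) ** 2
                for n, xi in enumerate(x))
    return p, resid

def golden_search(t, x, lo, hi, iters=60):
    """Golden-section search for the frequency minimizing the fit residual
    inside [lo, hi]; the bracket must contain a single residual minimum."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc = sine_fit_residual(t, x, c)[1]
    fd = sine_fit_residual(t, x, d)[1]
    for _ in range(iters):
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = sine_fit_residual(t, x, c)[1]
        else:
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = sine_fit_residual(t, x, d)[1]
    return (a + b) / 2
```

With a clean sinusoid the search collapses onto the true frequency; with noise it converges to the LS-optimal frequency inside the bracket.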

  2. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators that are aimed at solving this problem. These estimators are based on the principles of nonlinear least-squares, harmonic fitting, optimal filtering, subspace orthogonality, and shift-invariance, and they all reduce to already published methods for a high number of observations. In experiments, the...

  3. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volume of interests (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  4. An Accurate Method for the BDS Receiver DCB Estimation in a Regional Network

    LI Xin

    2016-08-01

    Full Text Available An accurate approach for receiver differential code bias (DCB) estimation is proposed using BDS data obtained from a regional tracking network. In contrast to conventional methods for BDS receiver DCB estimation, the proposed method does not require a complicated ionosphere model, as long as the receiver DCB of one reference station is known. The main idea of the method is that the ionosphere delay normally depends strongly on the geometric range between the BDS satellite and the receiver. Therefore, the DCBs of the non-reference station receivers in the regional area can be estimated using single differences (SD) with the reference stations. The numerical results show that the RMS of the estimated BDS receiver DCB errors over 30 days is about 0.3 ns. Additionally, after deducting these estimated receiver DCBs and the known satellite DCBs, the extracted diurnal VTEC shows good agreement with the diurnal VTEC obtained from GIM interpolation, indicating the reliability of the estimated receiver DCBs.

  5. Extended Kalman Filter with a Fuzzy Method for Accurate Battery Pack State of Charge Estimation

    Saeed Sepasi

    2015-06-01

    Full Text Available As the world moves toward greenhouse gas reduction, there is increasingly active work around Li-ion chemistry-based batteries as an energy source for electric vehicles (EVs), hybrid electric vehicles (HEVs) and smart grids. In these applications, the battery management system (BMS) requires an accurate online estimation of the state of charge (SOC) in a battery pack. This estimation is difficult, especially after substantial battery aging. In order to address this problem, this paper performs SOC estimation of Li-ion battery packs using a fuzzy-improved extended Kalman filter (fuzzy-IEKF) for Li-ion cells, regardless of their age. The proposed approach introduces a fuzzy method with a new class and associated membership function that determines an approximate initial value applied to the SOC estimation. Subsequently, the EKF method is used, considering a single-unit model for the battery pack, to estimate the SOC for the following periods of battery use. This approach uses an adaptive model algorithm to update the model for each single cell in the battery pack. To verify the accuracy of the estimation method, tests are done on an aged LiFePO4 battery pack consisting of 120 cells connected in series with a nominal voltage of 432 V.
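For orientation, a one-state EKF for SOC (coulomb-counting prediction corrected by a voltage measurement through a linearized open-circuit-voltage curve) looks like the sketch below. This is a generic textbook reduction with illustrative parameter names, not the paper's fuzzy-IEKF or its adaptive pack model:

```python
def ekf_soc_step(soc, p, current_a, volt_meas, dt, q_as, r_ohm,
                 ocv, docv_dsoc, q_proc=1e-7, r_meas=1e-3):
    """One predict/update cycle of a 1-state EKF for battery SOC.
    Predict: coulomb counting (discharge current positive, capacity in A*s).
    Update: linearize the OCV curve around the predicted SOC and correct
    with the terminal-voltage innovation. All parameters are illustrative."""
    # predict: SOC decreases with discharged charge; variance grows
    soc_pred = soc - current_a * dt / q_as
    p_pred = p + q_proc
    # update: measurement Jacobian is the OCV slope at the prediction
    h = docv_dsoc(soc_pred)
    v_pred = ocv(soc_pred) - r_ohm * current_a
    k = p_pred * h / (h * h * p_pred + r_meas)
    soc_new = soc_pred + k * (volt_meas - v_pred)
    p_new = (1 - k * h) * p_pred
    return soc_new, p_new
```

Even with a deliberately wrong initial SOC, repeated voltage updates pull the estimate onto the value consistent with the measured terminal voltage, which is the behavior the paper relies on after aging invalidates pure coulomb counting.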

  6. Estimate of lifetime excess lung cancer risk due to indoor exposure to natural radon-222 daughters in Korea

    The lifetime excess lung cancer risk due to indoor 222Rn daughter exposure in Korea was quantitatively estimated using a modified relative risk projection model proposed by the U.S. National Academy of Sciences and recent Korean life table data. The lifetime excess risk of lung cancer death attributable to annual constant exposure to Korean indoor radon daughters was estimated to be about 230/10^6 per WLM, which is close to the median of the range of 150-450/10^6 per WLM reported by UNSCEAR in 1988. (1 fig., 2 tabs.)

  7. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may produce significant postoperative refractive error, which can cause poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L equation [1] or using preoperative data (data before LASIK) to estimate the K value [2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens power for a patient with keratoconus and for another post-LASIK patient matched their visual capacity after cataract surgery very well.

  8. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator that takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of anterior and

  9. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  10. Shear-wave elastography contributes to accurate tumour size estimation when assessing small breast cancers

    Aim: To assess whether the size of peritumoural stiffness (PTS) on shear-wave elastography (SWE) for small primary breast cancers (≤15 mm) was associated with size discrepancies between grey-scale ultrasound (GSUS) and final histological size and whether the addition of PTS size to GSUS size might result in more accurate tumour size estimation when compared to final histological size. Materials and methods: A retrospective analysis of 86 consecutive patients between August 2011 and February 2013 who underwent breast-conserving surgery for tumours of size ≤15 mm at ultrasound was carried out. The size of PTS stiffness was compared to mean GSUS size, mean histological size, and the extent of size discrepancy between GSUS and histology. PTS size and GSUS were combined and compared to the final histological size. Results: PTS of >3 mm was associated with a larger mean final histological size (16 versus 11.3 mm, p < 0.001). PTS size of >3 mm was associated with a higher frequency of underestimation of final histological size by GSUS of >5 mm (63% versus 18%, p < 0.001). The combination of PTS and GSUS size led to accurate estimation of the final histological size (p = 0.03). The size of PTS was not associated with margin involvement (p = 0.27). Conclusion: PTS extending beyond 3 mm from the grey-scale abnormality is significantly associated with underestimation of tumour size of >5 mm for small invasive breast cancers. Taking into account the size of PTS also led to accurate estimation of the final histological size. Further studies are required to assess the relationship of the extent of SWE stiffness and margin status. - Highlights: • Peritumoural stiffness of greater than 3 mm was associated with larger tumour size. • Underestimation of tumour size by ultrasound was associated with peri-tumoural stiffness size. • Combining peri-tumoural stiffness size to ultrasound produced accurate tumour size estimation

  11. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  12. Accurate satellite-derived estimates of the tropospheric ozone impact on the global radiation budget

    J. Joiner

    2009-07-01

    Full Text Available Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the radiative effect of tropospheric O3 for January and July 2005. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our derived radiative effect reflects the unadjusted (instantaneous) effect of the total tropospheric O3 rather than the anthropogenic component. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative budget. The estimates presented here can be used to evaluate various aspects of model-generated radiative forcing. For example, our derived cloud effect reduces the radiative effect of tropospheric ozone by ~16%, which is centered within the published range of model-produced cloud effects on unadjusted ozone radiative forcing.

  13. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm) were observed, originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%), and 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can thus be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments. PMID:26851474
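The correction described here is straightforward to apply: subtract the thermal component from the working HR, then map the corrected HR through the worker's individual HR-VO2 calibration (e.g., from the morning step test). A hypothetical sketch assuming a linear calibration; all names and the linear form are illustrative, not the study's exact procedure:

```python
def estimate_vo2(hr_work, delta_hr_thermal, hr_rest, vo2_rest,
                 hr_step, vo2_step):
    """VO2 (L/min) from working heart rate after removing the thermal
    component, via a linear HR-VO2 calibration anchored at a rest point
    and a submaximal step-test point."""
    slope = (vo2_step - vo2_rest) / (hr_step - hr_rest)
    hr_corrected = hr_work - delta_hr_thermal   # strip the thermal pulses
    return vo2_rest + slope * (hr_corrected - hr_rest)
```

Skipping the subtraction (delta_hr_thermal = 0) reproduces the ~30% overestimation mechanism the abstract reports: thermally elevated HR is misread as metabolic load.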

  14. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets.

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant "collective" variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset, and we develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around its maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments, and we discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
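
The geodesic-approximation step described here, shortest paths on a neighborhood graph standing in for manifold geodesics, can be illustrated in a few lines (the ID-fitting step itself is not reproduced). The circle dataset and parameters below are invented:

```python
import numpy as np

# Sketch of the geodesic-approximation step only: pairwise graph distances
# via all-pairs shortest paths on a k-nearest-neighbor graph. The dataset
# (a 1-D manifold, a circle, embedded in 2-D) and k are illustrative.

t = np.linspace(0, 2 * np.pi, 80, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)]

d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # Euclidean distances
k = 5
graph = np.full_like(d, np.inf)
np.fill_diagonal(graph, 0.0)
for i in range(len(pts)):                 # connect each point to its k NNs
    nn = np.argsort(d[i])[1:k + 1]
    graph[i, nn] = d[i, nn]
    graph[nn, i] = d[i, nn]

for m in range(len(pts)):                 # Floyd-Warshall shortest paths
    graph = np.minimum(graph, graph[:, [m]] + graph[[m], :])

# Graph distances approximate arc length, so for far-apart points they
# exceed the straight-line (chord) distance through the ambient space.
far = np.unravel_index(np.argmax(graph), graph.shape)
```

For near-antipodal points the graph distance approaches the arc length (~pi) while the chord stays at most 2, which is exactly why graph distances are the right input for distance-distribution analysis.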

  15. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs with j > i. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal and ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
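
The pairing-and-trimming idea can be sketched compactly. This is an illustrative Theil-Sen-style estimator in the spirit of MIDAS, not the published implementation; the synthetic series, pairing tolerance, and trimming threshold are our own choices:

```python
import numpy as np

# Simplified MIDAS-like sketch (illustrative, not the published code):
# median of slopes over data pairs separated by ~1 year, then outlier
# trimming and a second median.

def midas_like_trend(times, values, tol=0.01):
    """times in years; uses pairs separated by 1 year (within tol)."""
    slopes = []
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            if abs((times[j] - times[i]) - 1.0) < tol:
                slopes.append((values[j] - values[i]) / (times[j] - times[i]))
    slopes = np.asarray(slopes)
    med = np.median(slopes)
    mad = 1.4826 * np.median(np.abs(slopes - med))       # robust scale
    kept = slopes[np.abs(slopes - med) < 2 * mad] if mad > 0 else slopes
    return np.median(kept)                               # second, trimmed median

# Weekly series: 5 mm/yr trend + annual cycle + an undetected 30 mm step.
t = np.arange(0, 4, 1 / 52.0)
x = 5.0 * t + 3.0 * np.sin(2 * np.pi * t)
x[len(t) // 2:] += 30.0
trend = midas_like_trend(t, x)
```

One-year pairs cancel the annual cycle exactly, and the step-spanning pairs form the one-sided outlier population that the second median suppresses, so the recovered trend stays near 5 mm/yr.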

  16. Infiltration-excess overland flow estimated by TOPMODEL for the conterminous United States

    U.S. Geological Survey, Department of the Interior — This 5-kilometer resolution raster (grid) dataset for the conterminous United States represents the average percentage of infiltration-excess overland flow in total...

  17. Accurate estimation of motion blur parameters in noisy remote sensing image

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and its objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated before image restoration, and identifying the motion blur direction and length accurately is crucial for determining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters are difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.

  18. Accurate estimation of electromagnetic parameters using FEA for Indus-2 RF cavity

    The 2.5 GeV INDUS-2 SRS has four normal conducting bell-shaped RF cavities operating at a 505.8122 MHz fundamental frequency and a peak cavity voltage of 650 kV. The RF frequency and other parameters of the cavity need to be estimated accurately for beam dynamics and cavity electromagnetic studies at the fundamental as well as higher-order-mode (HOM) frequencies. The 2D axisymmetric result for the fundamental frequency with fine discretization using the SUPERFISH code has shown a difference of ∼2.5 MHz from the designed frequency, which leads to inaccurate estimation of electromagnetic parameters. Therefore, for accuracy, a complete 3D model comprising all ports needs to be considered in the RF domain with correct boundary conditions. All ports in the cavity were modeled using the FEA tool ANSYS. Eigenmode simulation of the complete cavity model, considering the various ports, is used for estimation of the parameters. A mesh convergence test for these models is performed. The methodologies adopted for calculating different electromagnetic parameters using small macros are described. Various parameters that affect the cavity frequency are discussed in this paper. A comparison of the FEA (ANSYS) results with available experimental measurements is presented. (author)

  19. Accurate estimation of the RMS emittance from single current amplifier data

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates of the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore, the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data-reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H- ion source.

  20. Validation of a wrist monitor for accurate estimation of RR intervals during sleep.

    Renevey, Ph; Sola, J; Theurillat, P; Bertschi, M; Krauss, J; Andries, D; Sartori, C

    2013-01-01

    While the incidence of sleep disorders is continuously increasing in western societies, there is a clear demand for technologies to assess sleep-related parameters in ambulatory scenarios. The present study introduces a novel, accurate sensor concept for measuring RR intervals via the analysis of photo-plethysmographic signals recorded at the wrist. In a cohort of 26 subjects undergoing full-night polysomnography, the wrist device provided RR interval estimates in agreement with RR intervals as measured from standard electrocardiographic time series. The study showed an overall agreement between both approaches of 0.05 ± 18 ms. The novel wrist sensor opens the door towards a new generation of comfortable and easy-to-use sleep monitors. PMID:24110980

  1. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
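
The sphere-cylinder relationship invoked above can be illustrated numerically. The working formula V = (2/3) * A * w * k, with A the measured cross-sectional area, w the width perpendicular to the rotational axis, and k the 'unellipticity' coefficient, is our reading of the approach rather than the paper's exact recipe:

```python
import math

# Illustration of the Archimedean sphere-in-cylinder relationship: a
# sphere occupies 2/3 of its circumscribing cylinder. The formula below
# is our reading of the approach, not the paper's exact recipe; the
# unellipticity coefficient k defaults to 1 (perfectly elliptic shapes).

def biovolume(area, width, unellipticity=1.0):
    """Volume from 2D projected area and width along the symmetry axis."""
    return (2.0 / 3.0) * area * width * unellipticity

# Sphere of radius r: projected area pi*r^2, width 2r -> (4/3)*pi*r^3.
r = 1.0
v_sphere = biovolume(math.pi * r**2, 2 * r)

# Prolate spheroid, semi-axes a=3 (rotational) and b=1: projected
# ellipse area pi*a*b, width 2b -> (4/3)*pi*a*b^2.
a_ax, b_ax = 3.0, 1.0
v_spheroid = biovolume(math.pi * a_ax * b_ax, 2 * b_ax)
```

Both test shapes recover their exact analytic volumes with k = 1, which is why only the departure from elliptic cross-sections needs the extra coefficient.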

  2. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    The beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula, established from Monte Carlo simulations, that links the depolarization ratio of sea water to the ratio of the diffuse attenuation Kd and the beam attenuation c (i.e., a multiple-scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter were measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  3. Fast and Accurate Video PQoS Estimation over Wireless Networks

    Emanuele Viterbo

    2008-06-01

    This paper proposes a curve-fitting technique for fast and accurate estimation of the perceived quality of streaming media content delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS) is based on the well-known VQM objective metric, a powerful technique that is highly correlated with the more expensive and time-consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate the video PQoS in real time and rapidly adapt the content transmission through scalable video coding and bit-rate selection in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.

  4. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method accounts for the excluded volume effect. It is general and can be applied to calculating the entropy of loops of greater length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and with experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study of the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length and is only marginally affected by the asymmetric sizes of the two loops. This finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
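
The Jacobson-Stockmayer extrapolation compared against above has a simple logarithmic form. The sketch below uses the textbook expression with an exponent of 1.75 (1.5 for ideal Gaussian chains; larger values account for excluded volume); the reference length and reference entropy are invented placeholders, not values from the paper:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

# Jacobson-Stockmayer-style extrapolation: beyond a reference loop length
# n_ref, loop entropy falls off logarithmically with length. dS_ref and
# n_ref below are invented placeholders for illustration.

def loop_entropy(n, n_ref=9, dS_ref=-0.012, c=1.75):
    """Extrapolated loop entropy (kcal/(mol*K)) for loop length n >= n_ref."""
    return dS_ref - c * R * math.log(n / n_ref)

dS30 = loop_entropy(30)
dS50 = loop_entropy(50)
```

Longer loops are monotonically more entropically costly under this form, which matches hairpin behavior but, per the abstract, misses the behavior of bulge, internal, and multibranch loops.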

  5. Estimation of energy conservation benefits in excess air controlled gas-fired systems

    Bahadori, Alireza; Vuthaluru, Hari B. [School of Chemical and Petroleum Engineering, Curtin University of Technology, GPO Box 1987, Perth, Western Australia 6845 (Australia)

    2010-10-15

    The most significant energy consumers in energy-related industries are boilers and other gas-fired systems. Combustion efficiency is the term commonly used for boilers and other fired systems, and it can be determined from information on either the carbon dioxide (CO2) or the oxygen (O2) in the exhaust gas. The aim of this study is to develop a simple-to-use predictive tool, less complicated than existing approaches and requiring fewer computations, suitable for combustion engineers to predict natural gas combustion efficiency as a function of the excess air fraction and the stack temperature rise (the difference between the flue gas temperature and the combustion air inlet temperature). The results of the proposed predictive tool can be used in follow-up calculations to determine relative operating efficiency and to establish energy conservation benefits for an excess air control program. Results show that the proposed predictive tool is in very good agreement with the reported data, with an average absolute deviation of 0.1%. It should be noted that these calculations assume complete natural gas combustion at atmospheric pressure and a negligible level of unburned combustibles. The proposed method is superior owing to its accuracy and clear numerical background, wherein the relevant coefficients can be retuned quickly for various cases. This simple-to-use approach can be of immense practical value for engineers and scientists needing a quick check on natural gas combustion efficiencies over a wide range of operating conditions without the need for any pilot plant setup or experimental trials. In particular, process and combustion engineers would find the approach user friendly, involving transparent calculations with no complex expressions, for application to the design and operation of natural gas-fired systems such as furnaces and boilers. (author)

  6. Effectiveness of prediction equations in estimating energy expenditure in a sample of Brazilian and Spanish women with excess body weight

    Eliane Lopes Rosado

    2014-03-01

    Objective: To assess the adequacy of predictive equations for the estimation of energy expenditure (EE), compared with EE measured using indirect calorimetry, in a sample of Brazilian and Spanish women with excess body weight. Methods: This is a cross-sectional study with 92 obese adult women aged 20-50 [26 Brazilian (G1) and 66 Spanish (G2)]. Weight and height were evaluated during fasting for the calculation of body mass index and the predictive equations. EE was evaluated using open-circuit indirect calorimetry with a respiratory hood. Results: In G1 and G2, the estimates obtained by Harris-Benedict, Shofield, FAO/WHO/ONU, and Henry & Rees did not differ from EE measured using indirect calorimetry, which presented higher values than the equations proposed by Owen, Mifflin-St Jeor, and Oxford. For G1 and G2, the predictive equation closest to the value obtained by indirect calorimetry was FAO/WHO/ONU (7.9% and 0.46% underestimation, respectively), followed by Harris-Benedict (8.6% and 1.5% underestimation, respectively). Conclusion: The equations proposed by FAO/WHO/ONU, Harris-Benedict, Shofield, and Henry & Rees were adequate for estimating EE in a sample of Brazilian and Spanish women with excess body weight. The other equations underestimated EE.
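
For reference, the Harris-Benedict equation for women, one of the predictors found adequate here, has the commonly cited form below; the subject values in the example are invented, and the other equations in the study follow the same linear pattern with different coefficients:

```python
# Harris-Benedict resting energy expenditure for women (commonly cited
# 1919 coefficients). The subject values below are invented examples.

def harris_benedict_women(weight_kg, height_cm, age_yr):
    """Resting energy expenditure in kcal/day."""
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

ree = harris_benedict_women(80.0, 165.0, 35.0)  # ~1562 kcal/day
```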

  7. Rapid estimation of excess mortality: nowcasting during the heatwave alert in England and Wales in June 2011

    Helen K. Green; Andrews, Nick J; Bickler, Graham; Pebody, Richard G.

    2012-01-01

    Background A Heat-Health Watch system has been established in England and Wales since 2004 as part of the national heatwave plan following the 2003 European-wide heatwave. One important element of this plan has been the development of a timely mortality surveillance system. This article reports the findings and timeliness of a daily mortality model used to ‘nowcast’ excess mortality (utilising incomplete surveillance data to estimate the number of deaths in near-real time) during a heatwave a...

  8. Self-estimation of Body Fat is More Accurate in College-age Males Compared to Females

    HANCOCK, HALLEY L.; Jung, Alan P.; Petrella, John K.

    2012-01-01

    The objective was to determine the effect of gender on the ability to accurately estimate one’s own body fat percentage. Fifty-five college-age males and 99 college-age females participated. Participants estimated their own body fat percent before having their body composition measured using a BOD POD. Participants also completed a modified Social Physique Anxiety Scale (SPAS). Estimated body fat was significantly lower compared to measured body fat percent in females (26.8±5.6% vs. 30.2±7.0%...

  9. Technical note: tree truthing: how accurate are substrate estimates in primate field studies?

    Bezanson, Michelle; Watts, Sean M; Jobin, Matthew J

    2012-04-01

    Field studies of primate positional behavior typically rely on ground-level estimates of substrate size, angle, and canopy location. These estimates potentially influence the identification of positional modes by the observer recording behaviors. In this study we aim to test ground-level estimates against direct measurements of support angles, diameters, and canopy heights in trees at La Suerte Biological Research Station in Costa Rica. After reviewing methods that have been used by past researchers, we provide data collected within trees that are compared to estimates obtained from the ground. We climbed five trees and measured 20 supports. Four observers collected measurements of each support from different locations on the ground. Diameter estimates varied from the direct tree measures by 0-28 cm (mean: 5.44 ± 4.55). Substrate angles varied by 1-55° (mean: 14.76 ± 14.02). Height in the tree was best estimated using a clinometer, as estimates with a two-meter reference placed by the tree varied by 3-11 m (mean: 5.31 ± 2.44). We determined that the best support size estimates were those generated relative to the size of the focal animal and divided into broader categories. Support angles were best estimated in 5° increments and then checked using a Haglöf clinometer in combination with a laser pointer. We conclude that three major factors should be addressed when estimating support features: observer error (e.g., experience and distance from the target), support deformity, and how support size and angle influence the positional mode selected by a primate individual. PMID:22371099

  10. Accurate single-observer passive coherent location estimation based on TDOA and DOA

    Li Jing

    2014-08-01

    This paper investigates the problem of target position estimation with a single-observer passive coherent location (PCL) system. An approach that combines angle with time difference of arrival (ATDOA) is used to estimate the location of a target. Compared with the TDOA-only method, which needs two steps, the proposed method estimates the target position more directly. The constrained total least squares (CTLS) technique is applied in this approach. It achieves the Cramer–Rao lower bound (CRLB) when the parameter measurements are subject to small Gaussian-distributed errors. Performance analysis and the CRLB of this approach are also studied. Theory verifies that the ATDOA method attains a lower CRLB than the TDOA-only method with the same TDOA measurement error. It can also be seen that the position of the target affects the estimation precision. At the same time, the locations of the transmitters affect the precision and its gradient direction. Compared with TDOA, the ATDOA method can obtain a more precise target position estimate. Furthermore, the proposed method accomplishes target position estimation with a single transmitter, while the TDOA-only method needs at least four transmitters. Finally, the transmitters' position errors also systematically affect the estimation precision.
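
The geometry underlying the combined measurement can be sketched in closed form: the DOA fixes a ray from the receiver, the bistatic range difference fixes an ellipse with receiver and transmitter at the foci, and intersecting the two yields the range. This is a noise-free geometric illustration of why one transmitter suffices, not the paper's CTLS estimator:

```python
import math

# Noise-free DOA+TDOA geometry sketch (not the paper's CTLS estimator).
# Receiver at the origin, transmitter at known position tx. The bistatic
# measurement is dR = |target - tx| + |target| - |tx|. Writing the target
# as r*u along the DOA unit vector u and squaring |r*u - tx| = c - r with
# c = dR + |tx| gives a closed form for the range r.

def locate(theta, dR, tx):
    ux, uy = math.cos(theta), math.sin(theta)     # unit vector along DOA
    px, py = tx
    p_norm = math.hypot(px, py)
    c = dR + p_norm
    r = (c * c - p_norm * p_norm) / (2.0 * (c - (ux * px + uy * py)))
    return r * ux, r * uy

# Synthetic check: target at (3, 4), transmitter at (10, 0).
tgt = (3.0, 4.0)
tx = (10.0, 0.0)
theta = math.atan2(tgt[1], tgt[0])
dR = math.hypot(tgt[0] - tx[0], tgt[1] - tx[1]) + math.hypot(*tgt) - math.hypot(*tx)
x, y = locate(theta, dR, tx)
```

With noisy measurements this closed form no longer applies exactly, which is where the CTLS formulation of the paper comes in.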

  11. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer;

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the...... experimental ultrasound scanner SARUS and a flow rig before being tested in vivo. An 8-MHz linear array transducer is used with defocused beam emissions. In the simulations of a spinning disk phantom, a 360° uniform behavior on the angle estimation is observed with a median angle bias of 1.01° and a median...

  12. Accurate performance estimators for information retrieval based on span bound of support vector machines

    2006-01-01

    Support vector machines have met with significant success in the information retrieval field, especially in handling text classification tasks. Although various performance estimators for SVMs have been proposed, these focus only on accuracy, which is based on the leave-one-out cross-validation procedure. Information-retrieval-related performance measures are always neglected in a kernel learning methodology. In this paper, we propose a set of information-retrieval-oriented performance estimators for SVMs, which are based on the span bound of the leave-one-out procedure. Experiments have proven that our proposed estimators are both effective and stable.

  13. Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter

    Strano, Salvatore; Terzo, Mario

    2016-06-01

    State estimation in hydraulic actuators is a fundamental tool for the detection of faults and a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear/linearization-based techniques for state estimation is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent Riccati Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth-order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted, and comparisons with the widely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE-based technique for applications characterized by non-negligible nonlinearities such as dead zones and friction.

  14. Palirria: Accurate on-line parallelism estimation for adaptive work-stealing

    Varisteas, Georgios; Brorsson, Mats

    2014-01-01

    We present Palirria, a self-adapting work-stealing scheduling method for nested fork/join parallelism that can be used to estimate the number of utilizable workers and self-adapt accordingly. The estimation mechanism is optimized for accuracy, minimizing the requested resources without degrading performance. We implemented Palirria for both the Linux and Barrelfish operating systems and evaluated it on two platforms: a 48-core NUMA multiprocessor and a simulated 32-core system. Compared to st...

  15. A Simple but Accurate Estimation of Residual Energy for Reliable WSN Applications

    Jae Ung Kim; Min Jae Kang; Jun Min Yi; Dong Kun Noh

    2015-01-01

    A number of studies have been actively conducted to address limited energy resources in sensor systems over wireless sensor networks. Most of these studies are based on energy-aware schemes, which take advantage of the residual energy from the sensor system’s own or neighboring nodes. However, existing sensor systems estimate residual energy based solely on voltage and current consumption, leading to inaccurate estimations because the residual energy in real batteries is affected by temperatu...

  16. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta)

    Rahmann Sven

    2004-06-01

    Background: In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results: Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can then be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) thus inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions: Phylogenetic analysis of Chlorophyceae with traditional methods (MP, NJ, ML, and MrBayes) reveals seven well-supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover, the accuracy is significantly improved, as shown by parametric bootstrap.

  17. A best-estimate plus uncertainty type analysis for computing accurate critical channel power uncertainties

    This paper provides a Critical Channel Power (CCP) uncertainty analysis methodology based on a Monte Carlo approach. This Monte Carlo method includes the identification of the sources of uncertainty and the development of error models for the characterization of epistemic and aleatory uncertainties associated with the CCP parameter. Furthermore, the proposed method provides a means to use actual operational data, leading to improvements over traditional methods (e.g., sensitivity analysis), which assume parametric models that may not accurately capture the possibly complex statistical structures in the system inputs and responses. (author)
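
A generic Monte Carlo propagation sketch in the spirit of this methodology can be written in a few lines. The response model and error magnitudes below are invented placeholders, not the actual CCP model or its error models:

```python
import random

# Generic Monte-Carlo uncertainty propagation: sample input uncertainties,
# propagate them through a response model, and report a conservative
# percentile. Model and error magnitudes are invented placeholders.

random.seed(1)

def ccp_model(flow, inlet_temp, pressure):
    """Placeholder linear response surface for critical channel power (MW)."""
    return 7.0 + 0.8 * flow - 0.02 * inlet_temp + 0.05 * pressure

samples = []
for _ in range(20000):
    flow = random.gauss(1.0, 0.03)    # normalized flow with lumped errors
    temp = random.gauss(265.0, 2.0)   # inlet temperature, deg C
    pres = random.gauss(10.0, 0.1)    # pressure, MPa
    samples.append(ccp_model(flow, temp, pres))

samples.sort()
lower95 = samples[int(0.05 * len(samples))]   # conservative 5th percentile
```

Distinguishing epistemic from aleatory contributions, as the paper does, would mean nesting the sampling loops rather than lumping all errors into one draw as done here.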

  18. Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform

    Chuho Yi; Jungwon Cho

    2015-01-01

    With the coming of “Internet of things” (IoT) technology, many studies have sought to apply IoT to mobile platforms, such as smartphones, robots, and moving vehicles. Estimating the ego-motion of a moving platform is an essential method for building a map and understanding the surrounding environment. In this paper, we describe an ego-motion estimation method using a vision sensor that is widely used in IoT systems. Then, we propose a new fusion method to improve the accuracy of m...

  19. Accurate estimation of dose distributions inside an eye irradiated with 106Ru plaques

    Background: Irradiation of intraocular tumors requires dedicated techniques, such as brachytherapy with 106Ru plaques. The currently available treatment planning system relies on the assumption that the eye is a homogeneous water sphere and on simplified radiation transport physics. However, accurate dose distributions and their assessment demand better models for both the eye and the physics. Methods: The Monte Carlo code PENELOPE, conveniently adapted to simulate the beta decay of 106Ru through 106Rh into 106Pd, was used to simulate radiation transport based on a computerized tomography scan of a patient's eye. A detailed geometrical description of two plaques (models CCA and CCB) from the manufacturer BEBIG was embedded in the computerized tomography scan. Results: The simulations were first validated by comparison with experimental results in a water phantom. Dose maps were computed for three plaque locations on the eyeball. From these maps, isodose curves and cumulative dose-volume histograms for the eye and the structures at risk were assessed. For example, it was observed that a 4-mm anterior displacement, with respect to a posterior placement of a CCA plaque for treating a posterior tumor, would reduce from 40% to 0% the volume of the optic disc receiving more than 80 Gy. Such a small difference in anatomical position leads to a change in dose that is crucial for side effects, especially with respect to visual acuity. The radiation oncologist has to bring these large changes in absorbed dose in the structures at risk to the attention of the surgeon, especially when the plaque has to be positioned close to relevant tissues. Conclusion: The detailed geometry of an eye plaque in computerized and segmented tomography of a realistic patient phantom was simulated accurately. Dose-volume histograms for relevant anatomical structures of the eye and the orbit were obtained with unprecedented accuracy. This represents an important step toward an optimized

  20. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive, in situ tool for meat sample testing that could provide an accurate indication of storage time would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained, and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified into categories corresponding to age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old) using linear discriminant analysis and cross-validation. Contrary to other studies, where samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology yields far more accurate predictions of sample age than any previous report in the literature.
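
    As a rough illustration of the classification step, the sketch below replaces the study's linear discriminant analysis with a nearest-centroid classifier under leave-one-out cross-validation; the data layout, features, and classifier choice are all hypothetical simplifications, not the paper's method:

```python
import numpy as np

def nearest_centroid_cv(spectra, ages):
    """Leave-one-out cross-validated classification of spectra into
    age classes by nearest class centroid -- a simplified stand-in for
    linear discriminant analysis with cross-validation."""
    correct = 0
    for i in range(len(spectra)):
        train = np.delete(spectra, i, axis=0)
        labels = np.delete(ages, i)
        # Mean spectrum of each age class, excluding the held-out sample.
        centroids = {a: train[labels == a].mean(axis=0) for a in np.unique(labels)}
        pred = min(centroids, key=lambda a: np.linalg.norm(spectra[i] - centroids[a]))
        correct += int(pred == ages[i])
    return correct / len(spectra)
```

With well-separated class means this reproduces the 100% leave-one-out accuracy regime the abstract describes.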

  1. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
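
    The idea of bounding reliability without estimating the shape parameter can be sketched numerically. Assuming the classical binomial zero-failure bound R(T) >= alpha**(1/n) for n units surviving a test of length T, the Weibull relation R(t) = R(T)**((t/T)**beta) implies a bound for each candidate beta; taking the global minimum over a beta range yields a bound valid without fixing beta. This is a minimal illustrative sketch, not the paper's derivation:

```python
import numpy as np

def reliability_lower_bound(n, T, t, alpha=0.05, betas=None):
    """Conservative lower bound on reliability at mission time t when
    n units survive a test of length T with zero failures, minimized
    over a range of Weibull shape parameters beta (so no single beta
    has to be assumed or estimated)."""
    if betas is None:
        betas = np.linspace(0.5, 5.0, 91)
    r_T = alpha ** (1.0 / n)              # zero-failure bound on R(T)
    bounds = r_T ** ((t / T) ** betas)    # implied bound for each beta
    return bounds.min()                   # global minimum is still a valid bound
```

For t < T the minimum falls at the smallest beta considered and for t > T at the largest, so the chosen beta range matters.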

  2. Plant DNA barcodes can accurately estimate species richness in poorly known floras.

    Craig Costion

    Full Text Available BACKGROUND: Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and the low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. METHODOLOGY/PRINCIPAL FINDINGS: Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies, but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation that, in some angiosperm families, occurs as an inversion obscuring the monophyly of species. CONCLUSIONS/SIGNIFICANCE: We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.

  3. Application of apriori error estimates for Navier-Stokes equations to accurate finite element solution

    Burda, P.; Novotný, Jaroslav; Šístek, J.

    Tenerife : WSEAS, 2005 - (Thome, N.; Gonzalez, C.; Altin, A.), s. 31-36 ISBN 960-8457-39-4. [WSEAS International Conference on Applied Mathematics /8./. Tenerife (ES), 16.12.2005-18.12.2005] R&D Projects: GA AV ČR(CZ) IAA2120201 Institutional research plan: CEZ:AV0Z20760514 Keywords : apriori estimates * Navier-Stokes equations * finite elements Subject RIV: BA - General Mathematics

  4. An application of apriori and aposteriori error estimates to accurate FEM solution of incompressible flows

    Burda, P.; Novotný, Jaroslav; Šístek, J.

    Plzeň : Západočeská univerzita v Plzni, 2006 - (Marek, I.), s. 7-30 ISBN 80-7043-426-0. [Software and algorithms of numerical mathematics. Srní, Šumava (CZ), 12.09.2006-16.09.2006] R&D Projects: GA AV ČR 1ET400760509 Institutional research plan: CEZ:AV0Z20760514 Keywords : apriori and aposteriori error estimates * finite element method * incompressible flows Subject RIV: BK - Fluid Dynamics

  5. A novel method based on two cameras for accurate estimation of arterial oxygen saturation

    Liu, He; Ivanov, Kamen; Wang, Yadong; Wang, Lei

    2015-01-01

    Background Camera-based photoplethysmographic imaging (PPGi) allows acquisition of the photoplethysmogram and measurement of physiological parameters such as pulse rate, respiration rate and perfusion level. It has also shown potential for estimation of arterial oxygen saturation (SaO2). However, there are some technical limitations such as optical shunting, different camera sensitivity to different light spectra, different AC-to-DC ratios (the peak-to-peak amplitude to baseline ratio) of the PP...

  6. New Method for Accurate Parameter Estimation of Induction Motors Based on Artificial Bee Colony Algorithm

    Jamadi, Mohammad; Merrikh-Bayat, Farshad

    2014-01-01

    This paper proposes an effective method for estimating the parameters of double-cage induction motors by using the Artificial Bee Colony (ABC) algorithm. For this purpose, the unknown parameters in the electrical model of the asynchronous machine are calculated such that the sum of squared differences between the full-load torques, starting torques, maximum torques, starting currents, full-load currents, and nominal power factors obtained from the model and those provided by the manufacturer is minimized. In orde...

  7. Accurate covariance estimation of galaxy-galaxy weak lensing: limitations of jackknife covariance

    Shirasaki, Masato; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma

    2016-01-01

    We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations. We populate a real catalog of source galaxies into a light-cone simulation realization, simulate the lensing effect on each galaxy, and then identify lensing halos that are considered to host galaxies or clusters of interest. We use the mock catalog to study the error covariance matrix of galaxy-galaxy weak lensing and find that the super-sample covariance (SSC), which arises from density fluctuations with length scales comparable with or greater than the size of the survey area, is the dominant source of the sample variance. We then compare the full covariance with the jackknife (JK) covariance, a method that estimates the covariance from resamples of the data itself. We show that, although the JK method gives an unbiased estimator of the covariance in the shot-noise or Gaussian regime, it always over-estimates the true covariance in the sample-variance regime, because the JK covariance turns out to be a...
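
    The delete-one jackknife covariance discussed above can be sketched in a few lines; here the resampled estimator is simply the sample mean of each column, and the data layout is hypothetical:

```python
import numpy as np

def jackknife_covariance(samples):
    """Delete-one jackknife covariance of an estimator over data
    resamples; here the estimator is the per-column sample mean.

    samples: (n, p) array of n measurements of a p-dimensional statistic."""
    n = samples.shape[0]
    # Estimator evaluated on each leave-one-out resample.
    loo = np.array([np.delete(samples, i, axis=0).mean(axis=0) for i in range(n)])
    dev = loo - loo.mean(axis=0)
    # Jackknife scaling: (n - 1)/n times the sum of outer products.
    return (n - 1) / n * dev.T @ dev
```

For the mean estimator this reduces exactly to the usual sample covariance of the data divided by n, which is a convenient sanity check.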

  8. What is legally defensible and scientifically most accurate approach for estimating the intake of radionuclides in workers?

    The merits and demerits of each technique utilized for determining the intake of radioactive materials in workers are described, with particular emphasis on the intake of thorium, uranium, and plutonium. Air monitoring at workplaces has certain flaws, which may give erroneous estimates of radionuclide intake. Bioassay techniques involve radiochemical determination of radionuclides in biological samples such as urine and feces, and employ biokinetic models to estimate the intake from such measurements. Although highly sensitive and accurate procedures are available for the determination of these radionuclides, the biokinetic models employed produce large errors in the estimate. In vivo measurements suffer from the fundamental problem of poor sensitivity. Also, owing to the non-availability of such facilities at most nuclear sites, transporting workers between facilities may consume considerable financial resources. It seems difficult to defend in a court of law that the determination of the intake of radioactive material in workers by any individual procedure is accurate; at best, these techniques may be employed to obtain only an estimate of the intake. (author)
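
    The bioassay route described above amounts to dividing a measured activity by a model-predicted excretion fraction; with several samples, a least-squares fit over the intake is common. The sketch below is purely illustrative, and the retention/excretion values used are hypothetical, not taken from any specific biokinetic model:

```python
def estimate_intake(measured_bq, m_t):
    """Point estimate of intake (Bq) from one bioassay sample, where
    m_t is the biokinetic model's predicted fraction of a unit intake
    appearing in that sample at time t after intake."""
    return measured_bq / m_t

def estimate_intake_ls(measurements, m_values):
    """Least-squares intake over several samples: minimizes
    sum_i (M_i - I * m_i)^2 with respect to the intake I."""
    num = sum(M * m for M, m in zip(measurements, m_values))
    den = sum(m * m for m in m_values)
    return num / den
```

The large uncertainties mentioned in the record enter through m_t itself, which can vary by factors of several between biokinetic models.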

  9. A Prototype-Based Gate-Level Cycle-Accurate Methodology for SoC Performance Exploration and Estimation

    Ching-Lung Su

    2013-01-01

    Full Text Available A prototype-based SoC performance estimation methodology is proposed for consumer electronics design. Traditionally, prototypes are used for system verification before SoC tapeout, without accurate SoC performance exploration and estimation. This paper carefully models the SoC prototype as a performance estimator and explores the SoC performance-estimation environment. The prototype meets the gate-level cycle-accurate requirement, covering the effects of the embedded processor, on-chip bus structure, IP design, embedded OS, GUI systems, and application programs. The prototype configuration, chip post-layout simulation results, and the measured parameters of SoC prototypes were merged to model a target SoC design. The system performance was examined according to the proposed estimation models, the profiling results of the application programs ported on prototypes, and the timing parameters from the post-layout simulation of the target SoC. The experimental results showed that the proposed method incurred an average error of only 2.08% for an MPEG-4 decoder SoC at simple profile level 2 specifications.

  10. Highly Accurate Estimation of Axial and Bending Stiffnesses of Plates Clamped by Bolts

    Naruse, Tomohiro; Shibutani, Yoji

    Equivalent stiffnesses of clamped plates should be prescribed not only to evaluate the strength of bolted joints by the "joint diagram" scheme, but also to enable structural analyses of practical structures with many bolted joints. We estimated the axial and bending stiffnesses of clamped plates using Finite Element (FE) analyses, taking into account the contact conditions on the bearing surfaces and between the plates. FE models were constructed for bolted joints tightened with M8, 10, 12 and 16 bolts and plate thicknesses of 3.2, 4.5, 6.0 and 9.0 mm, and the axial and bending compliances were precisely evaluated. These compliances of clamped plates were compared with those from the VDI 2230 (2003) code, which assumes an equivalent conical compressive stress field in the plate. The code gives axial stiffness larger by 11% and bending stiffness larger by 22%, and it cannot be applied to clamped plates of differing thicknesses; the code therefore yields a lower bolt stress (an unsafe estimate). We modified the vertical angle tangent, tanφ, of the equivalent cone by adding a term in the logarithm of the thickness ratio t1/t2 and fitting to the analysis results. The modified tanφ estimates the axial compliance with an error from -1.5% to 6.8% and the bending compliance with an error from -6.5% to 10%. Furthermore, the modified tanφ can take the thickness difference into consideration.

  11. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

    Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle θ. We consider the localization of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
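
    The epipolar triangulation step can be illustrated with the standard linear (DLT) two-view method; the camera matrices and point coordinates below are hypothetical, not from the paper's C-arm geometry:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    coordinates x1, x2 in two views with 3x4 camera matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # null-space direction, homogeneous point
    return X[:3] / X[3]
```

With exact correspondences the smallest singular vector of A gives the point exactly; with noisy detections it is the linear least-squares solution.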

  12. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited by several error sources, of which the ionosphere is the largest. By augmenting GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits positional accuracy is instrumental bias. Calibration of these biases is particularly important for achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station over a one-month period, and the results confirm its validity. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias errors are of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31, respectively. The results are found to be consistent over the period.
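
    A one-dimensional sketch of the bias-estimation idea: model the slant TEC as a mapping-function-scaled polynomial plus a constant satellite-plus-receiver bias, and solve the whole system by linear least squares. In the paper the polynomial is 4th order in latitude and local time; here the coordinate, mapping function and data are synthetic, and the varying slant factor is what separates the constant bias from the TEC polynomial:

```python
import numpy as np

def fit_vtec_and_bias(t, m, stec, order=4):
    """Least-squares fit of slant-TEC observations to
        STEC_i = m_i * sum_k c_k t_i**k + b
    t    : model coordinate (e.g. local time), shape (n,)
    m    : ionospheric mapping-function (slant factor) values, shape (n,)
    stec : measured slant TEC, shape (n,)
    Returns the polynomial coefficients c_k and the instrumental bias b."""
    A = np.column_stack([m[:, None] * t[:, None] ** np.arange(order + 1),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, stec, rcond=None)
    return coef[:-1], coef[-1]
```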

  13. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points along the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67+/-34 μm and 108 μm, and angular misfits of 0.15+/-0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
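
    The final rigid registration optimizes a voxel-similarity measure; as a simplified point-based analogue, the closed-form SVD (Kabsch) solution below recovers a rotation and translation between corresponding point sets. This is a stand-in sketch, not the paper's voxel-based optimization:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    via the SVD-based Kabsch method on corresponding (n, 3) point sets."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```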

  14. ProViDE: A software tool for accurate estimation of viral diversity in metagenomic samples

    Ghosh, Tarini Shankar; Mohammed, Monzoorul Haque; Komanduri, Dinakar; Mande, Sharmila Shekhar

    2011-01-01

    Given the absence of universal marker genes in the viral kingdom, researchers typically use BLAST (with stringent E-values) for taxonomic classification of viral metagenomic sequences. Since the majority of metagenomic sequences originate from hitherto unknown viral groups, using stringent E-values results in most sequences remaining unclassified. Conversely, using less stringent E-values results in a high number of incorrect taxonomic assignments. The SOrt-ITEMS algorithm provides an approach to address the above issues. Based on alignment parameters, SOrt-ITEMS follows an elaborate work-flow for assigning reads originating from hitherto unknown archaeal/bacterial genomes. In SOrt-ITEMS, alignment parameter thresholds were generated by observing patterns of sequence divergence within and across various taxonomic groups belonging to the bacterial and archaeal kingdoms. However, many taxonomic groups within the viral kingdom lack a typical Linnean-like taxonomic hierarchy. In this paper, we present ProViDE (Program for Viral Diversity Estimation), an algorithm that uses a customized set of alignment parameter thresholds, specifically suited for viral metagenomic sequences. These thresholds capture the pattern of sequence divergence and the non-uniform taxonomic hierarchy observed within/across various taxonomic groups of the viral kingdom. Validation results indicate that the percentage of 'correct' assignments by ProViDE is around 1.7 to 3 times higher than that of the widely used similarity-based method MEGAN. The misclassification rate of ProViDE is around 3 to 19% (compared to 5 to 42% for MEGAN), indicating significantly better assignment accuracy. The ProViDE software and a supplementary file (containing the supplementary figures and tables referred to in this article) are available for download from http://metagenomics.atc.tcs.com/binning/ProViDE/ PMID:21544173

  15. Evaluation of the goodness of fit of new statistical size distributions with consideration of accurate income inequality estimation

    Masato Okamoto

    2012-01-01

    This paper compares the goodness-of-fit of two new types of parametric income distribution models (PIDMs), the kappa-generalized (kG) and double-Pareto lognormal (dPLN) distributions, with that of beta-type PIDMs using US and Italian data for the 2000s. The three-parameter kG model tends to estimate the Lorenz curve and income inequality indices more accurately when its likelihood value is similar to that of the beta-type PIDMs. For the first half of the 2000s in the USA, the kG outperforms the...

  16. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules

  17. Toward an Accurate and Inexpensive Estimation of CCSD(T)/CBS Binding Energies of Large Water Clusters.

    Sahu, Nityananda; Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R

    2016-07-21

    Owing to their steep scaling behavior, highly accurate CCSD(T) calculations, the contemporary gold standard of quantum chemistry, are prohibitively difficult for moderate- and large-sized water clusters even with high-end hardware. The molecular tailoring approach (MTA), a fragmentation-based technique, is found to be useful for enabling such high-level ab initio calculations. The present work reports the CCSD(T)-level binding energies of many low-lying isomers of large (H2O)n (n = 16, 17, and 25) clusters employing aug-cc-pVDZ and aug-cc-pVTZ basis sets within the MTA framework. Accurate estimation of the CCSD(T)-level binding energies [within 0.3 kcal/mol of the respective full-calculation (FC) results] is achieved after applying the grafting procedure, a protocol for minimizing the errors in the MTA-derived energies arising from the approximate nature of MTA. The CCSD(T)-level grafting procedure presented here hinges upon the well-known fact that the MP2 method, which scales as O(N^5), can be a suitable starting point for approximating the highly accurate CCSD(T) energies [which scale as O(N^7)]. On account of requiring only an MP2-level FC on the entire cluster, the current methodology ultimately provides a cost-effective route to CCSD(T)-level accurate binding energies of large water clusters, even at the complete basis set limit, utilizing off-the-shelf hardware. PMID:27351269
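
    As described above, the grafting correction can be read as adding, to the fragment-based (MTA) CCSD(T) energy, the difference between the full-calculation MP2 energy and the MTA MP2 energy. A minimal sketch of that arithmetic; the energy values in the test are illustrative numbers, not from the paper:

```python
def grafted_ccsdt(e_mta_ccsdt, e_mta_mp2, e_fc_mp2):
    """MTA 'grafting' sketch:
        E_graft = E_MTA^CCSD(T) + (E_FC^MP2 - E_MTA^MP2)
    The MP2 fragmentation error is used as a proxy for the CCSD(T)
    fragmentation error, so only an MP2 full calculation is needed
    on the entire cluster (energies in hartree)."""
    return e_mta_ccsdt + (e_fc_mp2 - e_mta_mp2)
```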

  18. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
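
    The mark-re-sight (transect) coverage estimate evaluated above reduces to the proportion of re-sighted dogs wearing the collar handed out at vaccination. A minimal sketch with a normal-approximation standard error; the counts are hypothetical, and the paper's actual analysis may apply finite-sample corrections:

```python
import math

def transect_coverage(marked_seen, total_seen):
    """Mark-re-sight estimate of post-vaccination coverage: fraction of
    dogs observed on transects that carry the vaccination collar, with
    a binomial normal-approximation standard error."""
    p = marked_seen / total_seen
    se = math.sqrt(p * (1.0 - p) / total_seen)
    return p, se
```

For example, 140 collared dogs out of 200 sighted gives a coverage estimate of 0.70, which would sit at the 70% elimination threshold discussed in the abstract.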

  1. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites, which can be significant business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied: estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of the actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
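
    Hydrocarbon pore thickness is conventionally a sum over net-sand layers of thickness x porosity x hydrocarbon saturation. The sketch below uses that standard definition with hypothetical layer values; the paper's workflow additionally conditions these inputs on core-calibrated log analysis:

```python
def hydrocarbon_pore_thickness(layers):
    """HPT = sum_i h_i * phi_i * (1 - Sw_i), summed over net-sand layers.

    layers: iterable of (thickness_ft, porosity_frac, water_saturation_frac)."""
    return sum(h * phi * (1.0 - sw) for h, phi, sw in layers)
```

Multiplying HPT by area and a formation volume factor then yields an OOIP estimate, which is why missed thin beds translate directly into under-booked resource.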

  2. An approach to estimating and extrapolating model error based on inverse problem methods: towards accurate numerical weather prediction

    Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in this accurate model, so NWP can be treated as an inverse problem of uncovering the unknown error term. Inverse problem models can absorb long periods of observed data to generate model error correction procedures, and thus resolve the deficiency of NWP schemes that employ only initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and space-varying model errors in both the historical and forecast periods, using recent observations and analogue phenomena of the atmosphere. A numerical experiment on Burgers' equation illustrates the substantial forecast improvement obtained using inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high-accuracy applications of NWP. (geophysics, astronomy, and astrophysics)

  3. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    Kostelich Eric J

    2011-12-01

    Full Text Available Abstract Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck).

  4. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    M. Montes-Hugo

    2014-06-01

    Full Text Available The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, we examine the accuracy of two atmospheric correction techniques (the NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC; Lee's quasi-analytical algorithm, QAA; and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL. Each model was validated against SeaWiFS images and shipboard measurements obtained during May 2000 and April 2001. In general, aph(443) estimates derived by coupling the KU and QAA models presented the smallest differences with respect to in situ determinations measured by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) produced up to a 43.4% increase in prediction error, as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary, representing an additive error of up to 24.3%. Using the SeaDAS (SeaWiFS Data Analysis System) default value for the optical cross section of phytoplankton (i.e., aph*(443) = aph(443)/chl = 0.056 m2 mg−1), the median relative bias of our chl estimates, derived from the most accurate spaceborne aph(443) retrievals with respect to in situ determinations, increased up to 29%.

  5. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Tweya Hannock

    2012-07-01

    Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records, as compared to estimates of retention based on standardized paper-based or electronic cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September] of 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. The numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter, with the intent to estimate the number of patients retained on ART. Information on the time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART when comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among …
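The pharmacy-stock arithmetic described above reduces to simple bookkeeping. The sketch below is hypothetical: the dispensing rate (one continuation bottle per patient per month, i.e. three per quarter) is an illustrative assumption, not a figure from the study.

```python
# Bottles dispensed over a quarter, from stock-card entries:
# opening stock plus deliveries, minus what remains at quarter end.
def bottles_dispensed(opening_stock, received, closing_stock):
    return opening_stock + received - closing_stock

# Estimated patients retained, assuming (hypothetically) three
# continuation bottles per patient per quarter.
def estimated_patients_retained(opening_stock, received, closing_stock,
                                bottles_per_patient_per_quarter=3):
    dispensed = bottles_dispensed(opening_stock, received, closing_stock)
    return dispensed // bottles_per_patient_per_quarter

# Example: 500 bottles on hand, 1000 received, 260 left at quarter end
print(estimated_patients_retained(500, 1000, 260))  # → 413
```

The integer division mirrors the idea that a partial patient-quarter of stock cannot represent a retained patient; the real study compared such pharmacy-derived counts against cohort-report retention numbers.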

  6. The absolute lymphocyte count accurately estimates CD4 counts in HIV-infected adults with virologic suppression and immune reconstitution

    Barnaby Young

    2014-11-01

    Full Text Available Introduction: The clinical value of monitoring CD4 counts in immune-reconstituted, virologically suppressed HIV-infected patients is limited. We investigated whether absolute lymphocyte counts (ALC) from an automated blood counting machine could accurately estimate CD4 counts. Materials and Methods: CD4 counts, ALC and HIV viral load (VL) were extracted from an electronic laboratory database for all patients in HIV care at the Communicable Diseases Centre, Tan Tock Seng Hospital, Singapore (2008–13). Virologic suppression was defined as consecutive HIV VLs 300 cells/mm3. CD4 counts were estimated using the CD4% from the first value >300 and an ALC 181–540 days later. Results: A total of 1215 periods of virologic suppression were identified from 1183 patients, with 2227 paired CD4-ALCs available for analysis. 98.3% of CD4 estimates were within 50% of the actual value, 83.3% within 25% and 40.5% within 10%. The error pattern was approximately symmetrically distributed around a mean of −6.5%, but significantly peaked and with mild positive skew (kurtosis 4.45, skewness 1.07). Causes of these errors were explored. Variability between lymphocyte counts measured by ALC and flow cytometry did not follow an apparent pattern, and contributed 32% of the total error (median absolute error 5.5%, IQR 2.6–9.3). The CD4% estimate was significantly lower than the actual value (t-test, p<0.0001). The magnitude of this difference was greater for lower values; above 25%, there was no significant difference. Precision of the CD4 estimate was similar as baseline CD4% increased; however, accuracy improved significantly, from a median 16% underestimation to 0% as baseline CD4% increased from 12 to 30. Above a CD4% baseline of 25, estimates of CD4 were within 25% of the actual value 90.2% of the time, with a median 2% underestimation. A robust (bisquare) linear regression model was developed to correct for the rise in CD4% with time, when baseline was 14–24 …
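The core of the estimator described above is the simple product CD4 ≈ CD4% × ALC, pairing the CD4 percentage from the last flow-cytometry measurement with an ALC from a later routine automated blood count. The values below are illustrative, not data from the study.

```python
# Minimal sketch of the estimator: CD4 count ≈ CD4% × ALC.
def estimate_cd4(cd4_fraction, alc_cells_per_mm3):
    """cd4_fraction: CD4% as a fraction (e.g. 0.28 for 28%);
    alc_cells_per_mm3: absolute lymphocyte count in cells/mm3."""
    return cd4_fraction * alc_cells_per_mm3

baseline_cd4_pct = 0.28   # 28% from flow cytometry at baseline (illustrative)
alc = 2100                # cells/mm3 from an automated counter months later
print(round(estimate_cd4(baseline_cd4_pct, alc)))  # → 588
```

The paper's reported errors come from two sources this sketch makes explicit: drift in CD4% between baseline and the later ALC, and disagreement between automated and flow-cytometry lymphocyte counts.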

  7. Accurate and rapid error estimation on global gravitational field from current GRACE and future GRACE Follow-On missions

    Firstly, the new combined error model of cumulative geoid height influenced by four error sources, including the inter-satellite range-rate of an interferometric laser (K-band) ranging system, the orbital position and velocity of a global positioning system (GPS) receiver and non-conservative force of an accelerometer, is established from the perspectives of the power spectrum principle in physics using the semi-analytical approach. Secondly, the accuracy of the global gravitational field is accurately and rapidly estimated based on the combined error model; the cumulative geoid height error is 1.985 × 10−1 m at degree 120 based on GRACE Level 1B measured observation errors of the year 2007 published by the US Jet Propulsion Laboratory (JPL), and the cumulative geoid height error is 5.825 × 10−2 m at degree 360 using GRACE Follow-On orbital altitude 250 km and inter-satellite range 50 km. The matching relationship of accuracy indexes from GRACE Follow-On key payloads is brought forward, and the dependability of the combined error model is validated. Finally, the feasibility of high-accuracy and high-resolution global gravitational field estimation from GRACE Follow-On is demonstrated based on different satellite orbital altitudes. (geophysics, astronomy and astrophysics)

  8. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    Corina J. Logan

    2015-06-01

    Full Text Available There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder, for male and female great-tailed grackles. We found that while females showed higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not correlate tightly with CT volumes. External skull measures had no accuracy in predicting CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex.

  9. A multilevel excess hazard model to estimate net survival on hierarchical data allowing for non-linear and non-proportional effects of covariates.

    Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien

    2016-08-15

    The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics, such as proximity to a large hospital, that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. This becomes particularly important as many studies in cancer epidemiology aim to study the effect on the excess mortality hazard of variables, such as deprivation indexes, that are often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility of estimating non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26924122

  10. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. (note)

  11. A method for estimating peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area

    Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.

    2011-01-01

    Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph, based on the watershed characteristics of drainage area and basin-development factor (BDF), were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. Overall arithmetic means of volumetric runoff coefficient (0.41 dimensionless) and runoff coefficient (0.25 dimensionless) for the 24 watersheds were used to express the rational method …
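One common closed form for a gamma unit hydrograph is q(t) = qp · [(t/Tp) · e^(1−t/Tp)]^K, with peak streamflow qp, time to peak Tp, and shape parameter K; this is a plausible parameterization for illustration, and the report's exact formulation may differ. The parameter values below are likewise illustrative.

```python
import numpy as np

# One plausible gamma unit hydrograph parameterization (hedged; the
# report's exact form may differ):
#   q(t) = qp * [ (t / Tp) * exp(1 - t / Tp) ]^K
# By construction q(Tp) = qp, so the curve peaks at the time to peak.
def gamma_unit_hydrograph(t, qp, Tp, K):
    return qp * (t / Tp * np.exp(1.0 - t / Tp)) ** K

t = np.linspace(0.01, 10.0, 2000)   # hours, illustrative grid
q = gamma_unit_hydrograph(t, qp=120.0, Tp=1.5, K=4.0)
```

Larger K values make the hydrograph narrower and more peaked around Tp, which is why a single shape parameter can be regressed against watershed characteristics such as drainage area and BDF.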

  12. Excessive Daytime Sleepiness

    Yavuz Selvi

    2016-06-01

    Full Text Available Excessive daytime sleepiness is one of the most common sleep-related patient symptoms, with prevalence in the community estimated to be as high as 18%. Patients with excessive daytime sleepiness may experience life-threatening road and work accidents, social maladjustment, decreased academic and occupational performance, and poorer health than comparable adults. Thus, excessive daytime sleepiness is a serious condition that requires investigation, diagnosis and treatment. As with most medical conditions, evaluation of excessive daytime sleepiness begins with a precise history, and various objective and subjective tools have also been developed to assess it. The most common causes of excessive daytime sleepiness are insufficient sleep hygiene, chronic sleep deprivation, medical and psychiatric conditions, sleep disorders such as obstructive sleep apnea, medications, and narcolepsy. Treatment options should address underlying contributors and promote sleep quantity by ensuring good sleep hygiene. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(2): 114-132]

  13. Estimation of excess mortality due to long-term exposure to PM2.5 in Japan using a high-resolution model for present and future scenarios

    Goto, Daisuke; Ueda, Kayo; Ng, Chris Fook Sheng; Takami, Akinori; Ariga, Toshinori; Matsuhashi, Keisuke; Nakajima, Teruyuki

    2016-09-01

    Particulate matter with a diameter of less than 2.5 μm, known as PM2.5, can affect human health, especially in elderly people. Because of the imminent aging of society in most developed countries, the human health impacts of PM2.5 must be evaluated. In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan with a high-resolution stretched grid system (∼10 km for the high-resolution model, HRM) for the present (year 2000) and the future (year 2030, as proposed by the Representative Concentration Pathway 4.5, RCP4.5). We also used the same model with a low-resolution uniform grid system (∼100 km for the low-resolution model, LRM). These calculations were conducted by nudging meteorological fields obtained from an atmosphere-ocean coupled model and providing the emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 among the elderly (over 65 years old) at different minimum PM2.5 concentration (MINPM) levels to account for uncertainty, using the simulated PM2.5 distributions in a concentration-response function. As a result, we estimated the excess mortality for all of Japan to be 31,300 (95% confidence interval: 20,700 to 42,600) people in 2000 and 28,600 (95% confidence interval: 19,000 to 38,700) people in 2030 using the HRM with a MINPM of 5.8 μg/m3. In contrast, the LRM underestimated the HRM results by approximately 30% (PM2.5 concentrations in 2000 and 2030), approximately 60% (excess mortality in 2000) and approximately 90% (excess mortality in 2030). We also found that uncertainty in the MINPM value, especially for low PM2.5 concentrations in the future (2030), can cause large variability in the estimates, ranging from 0 (MINPM of 15 μg/m3 in both HRM and LRM) to 95,000 (MINPM of 0 μg/m3 in HRM) people.
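A standard log-linear concentration-response calculation illustrates how a MINPM threshold enters an excess-mortality estimate: below the threshold no excess deaths are attributed, above it an attributable fraction 1 − exp(−β(C − C0)) is applied to baseline deaths. This is a generic sketch of the approach, not the study's function; the population, baseline rate, and β below are invented for illustration.

```python
import math

# Generic log-linear excess-mortality calculation (a sketch; the study's
# exact concentration-response function and coefficients are not given
# in the abstract, and all numbers here are illustrative).
def excess_mortality(pop, baseline_rate, beta, pm25, min_pm):
    """pop: exposed population; baseline_rate: deaths/person/year;
    beta: risk coefficient per ug/m3; pm25, min_pm: ug/m3."""
    if pm25 <= min_pm:
        return 0.0          # below MINPM, no excess deaths attributed
    attributable_fraction = 1.0 - math.exp(-beta * (pm25 - min_pm))
    return pop * baseline_rate * attributable_fraction

em = excess_mortality(pop=1_000_000, baseline_rate=0.02,
                      beta=0.006, pm25=15.0, min_pm=5.8)
print(round(em))
```

Raising min_pm shrinks the attributable fraction toward zero, which is exactly why the paper's estimates vary so strongly (0 to 95,000 people) with the assumed MINPM.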

  14. Methodological extensions of meta-analysis with excess relative risk estimates. Application to risk of second malignant neoplasms among childhood cancer survivors treated with radiotherapy

    Although radiotherapy is recognized as an established risk factor for second malignant neoplasms (SMNs), the dose response of SMNs following radiotherapy has not been well characterized. In our previous meta-analysis of the risks of SMNs occurring among children who have received radiotherapy, the small number of eligible studies precluded a detailed evaluation. Therefore, to increase the number of eligible studies, we developed a method of calculating excess relative risk (ERR) per Gy estimates from studies for which the relative risk estimates for several dose categories were available. Comparing the calculated ERR with that described in several original papers validated the proposed method. This enabled us to increase the number of studies, which we used to conduct a meta-analysis. The overall ERR per Gy estimate of radiotherapy over 26 relevant studies was 0.60 (95% CI: 0.30-1.20), which is smaller than the corresponding estimate for atomic bomb survivors exposed to radiation as young children (1.7; 95% CI: 1.1-2.5). A significant decrease in ERR per Gy with increase in age at exposure (0.85 times per annual increase) was observed in the meta-regression. Heterogeneity was suggested by Cochran's Q statistic (P < 0.001), which may be partly accounted for by age at exposure. (author)
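The conversion the abstract describes, from relative risks reported for several dose categories to a single ERR-per-Gy slope, can be sketched as an inverse-variance weighted fit of the linear model RR(D) = 1 + β·D through the origin. This is a simplified stand-in for the authors' method, and the dose categories and confidence limits below are invented for illustration.

```python
import math

# Sketch: pool category-level relative risks into one ERR-per-Gy slope
# under a linear model RR(D) = 1 + beta * D. Weights come from the
# delta-method variance of (RR - 1), derived from log-RR 95% limits.
# Simplified relative to the paper; the data are illustrative.
def err_per_gy(categories):
    """categories: list of (mean_dose_gy, rr, rr_lo95, rr_hi95)."""
    num = den = 0.0
    for dose, rr, lo, hi in categories:
        se_log_rr = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        var_excess = (rr * se_log_rr) ** 2   # approx. var of (RR - 1)
        w = dose ** 2 / var_excess           # WLS-through-origin weight
        num += w * (rr - 1.0) / dose
        den += w
    return num / den

cats = [(0.5, 1.3, 1.0, 1.7), (2.0, 2.1, 1.5, 2.9), (5.0, 4.2, 2.8, 6.3)]
print(round(err_per_gy(cats), 2))
```

Per-study slopes obtained this way could then be combined across studies with a second inverse-variance (or random-effects) step, which is the meta-analytic layer the paper adds.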

  15. Estimating Accurate Relative Spacecraft Angular Position from DSN VLBI Phases Using X-Band Telemetry or DOR Tones

    Bagri, Durgadas S.; Majid, Walid

    2009-01-01

    At present, spacecraft angular position with the Deep Space Network (DSN) is determined using group delay estimates from very long baseline interferometer (VLBI) phase measurements employing differential one-way ranging (DOR) tones. As an alternative to this approach, we propose estimating the position of a spacecraft to half-a-fringe-cycle accuracy using the time variations between measured and calculated phases, as the Earth rotates, on DSN VLBI baseline(s). Combining the fringe location of the target with the phase allows a high-accuracy estimate of the spacecraft's angular position. This can be achieved using telemetry signals with a data rate of at least 4-8 MSamples/sec, or DOR tones.

  16. Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system.

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384

  17. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    Xuebing Yuan

    2015-05-01

    Full Text Available Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively, compared to the reference path.

  18. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  19. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework is given for simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable using only a single scan of HRR and GMTI measurements, we designed an architecture that runs the motion and structure filters in parallel using multi-scan measurements. Moreover, to improve the estimation accuracy in large-noise and/or false-alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results show that the averaged root mean square errors for both motion and structure state vectors are significantly reduced by using the template information.

  20. HIV Excess Cancers JNCI

    In 2010, an estimated 7,760 new cancers were diagnosed among the nearly 900,000 Americans known to be living with HIV infection. According to the first comprehensive study in the United States, approximately half of these cancers were in excess of what would be expected.

  1. Accurate estimate of the critical exponent nu for self-avoiding walks via a fast implementation of the pivot algorithm.

    Clisby, Nathan

    2010-02-01

    We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33×10^6 steps. Consequently, the critical exponent ν for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is ν = 0.587597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773
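The exponent is read off from the scaling law for the mean-square end-to-end distance, ⟨R²⟩ ~ A·N^(2ν). The sketch below fits noise-free synthetic data generated with an assumed ν = 0.5876; real estimates like the paper's also require corrections to scaling and careful error analysis, which this toy fit omits.

```python
import numpy as np

# Toy illustration of extracting a critical exponent from scaling data:
# <R^2> ~ A * N^(2*nu), so a log-log fit of R2 vs N has slope 2*nu.
# Data are synthetic and noise-free, generated with nu = 0.5876
# (an illustrative value, not a measurement).
nu_true = 0.5876
N = np.array([1000.0, 2000.0, 5000.0, 10000.0, 20000.0])
R2 = 1.7 * N ** (2.0 * nu_true)          # amplitude 1.7 is arbitrary

slope, _intercept = np.polyfit(np.log(N), np.log(R2), 1)
nu_hat = slope / 2.0
```

With real Monte Carlo data the points scatter and subleading terms ~N^(−Δ) bend the curve at small N, so the fit window and correction terms dominate the error budget.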

  2. Estimates of Long and Short Term Soil Erosion Rates on Farmland in Semi-Arid West Morocco Using Caesium-137, Excess Lead-210 and Beryllium-7 Measurements

    The aim of the present work was to investigate both long- and short-term soil erosion and deposition rates on agricultural land in Morocco and to assess the effectiveness of soil conservation techniques by the combined use of environmental radionuclides (137Cs, excess 210Pb and 7Be) as tracers. The study area is an experimental station located in Marchouch, 68 km from Rabat (western Morocco). Experimental plots were installed in the study field to test the no-till practice under cereals as a soil conservation technique, comparing it to the conventional tillage system. Fallout 137Cs and 210Pbex allowed a retrospective assessment of long-term (50 and 100 years, respectively) soil redistribution rates, while fallout 7Be, with its short half-life (53 days), was used to document short-term soil erosion associated with short rainfall events for different tillage systems and land uses. From 137Cs and 210Pbex measurements, the rates of soil redistribution induced by water erosion were quantified using the Mass Balance 2 model. The net soil erosion rates obtained were 14.3 t ha-1 a-1 and 12.1 t ha-1 a-1 for 137Cs and 210Pbex respectively, resulting in a high sediment delivery ratio of about 92%. Data on soil redistribution generated by the use of both radionuclides are similar, indicating that the soil erosion rate did not change significantly during the last 100 years. In addition, the soil redistribution rates due to tillage were estimated using the Mass Balance 3 model. The global results obtained from 7Be measurements during the period 2004-2007 suggest that soil loss is reduced by up to 30% when no-till management is practised, compared to conventional tillage or uncultivated soil. (author)

  3. A relative risk estimation of excessive frequency of malignant tumors in population due to discharges into the atmosphere from fossil-fuel power plants and nuclear power plants

    Exposure of the population (doses to lungs, bone and whole body) due to fossil-fuel power plants (FFPP) is estimated using the example of a large modern coal-fired FFPP, taking into account the content of 226Ra, 228Ra, 210Pb, 210Po, 40K and 232Th in the fly ash, as well as radon discharges. The doses produced by these radionuclides for individuals living within 18 km of the FFPP, together with the mean collective doses over the agricultural territory of the country, are given. These values are compared with literature data on doses due to atmospheric discharges of inert radioactive gases, 60Co, 137Cs, 90Sr and 131I from nuclear power plants (NPP). It is found that the total exposure risk for the nearby population due to fly ash from a coal FFPP is about two orders of magnitude greater than the risk for individuals from the population due to discharges from an NPP under normal operating conditions. The doses produced by discharges from oil-fired FFPP are an order of magnitude lower than those from coal FFPP. The risk of excess cancer frequency due to chemical carcinogens, including some metals, contained in FFPP discharges is discussed. It is noted that a more complete evaluation of the risk from NPP requires data on doses to the population from all cycles of nuclear fuel production and radioactive waste disposal, as well as predicted collective doses per power unit of NPP due to accidents.

  4. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.
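
    As a toy illustration of the spectroscopic principle (not the paper's adaptive modelling scheme, whose details are not given here), a Beer-Lambert model with hypothetical absorptivities lets the glucose concentration be recovered by least squares across several wavelengths:

    ```python
    import numpy as np

    # Beer-Lambert: absorbance A(lambda) = eps(lambda) * l * c. With readings at
    # several wavelengths, concentration c has a least-squares solution.
    eps = np.array([0.12, 0.45, 0.30, 0.08])  # hypothetical absorptivities (L/mmol/cm)
    path = 1.0                                # optical path length in cm (assumed)

    true_c = 5.5                              # mmol/L, value to recover
    rng = np.random.default_rng(0)
    absorbance = eps * path * true_c + rng.normal(0.0, 1e-3, eps.size)  # noisy readings

    c_hat = float(eps @ absorbance) / (float(eps @ eps) * path)  # least-squares estimate
    print(round(c_hat, 2))  # ≈ 5.5
    ```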

  5. Towards accurate dose accumulation for step-and-shoot IMRT. Impact of weighting schemes and temporal image resolution on the estimation of dosimetric motion effects

    Purpose: Breathing-induced motion effects on dose distributions in radiotherapy can be analyzed using 4D CT image sequences and registration-based dose accumulation techniques. Often simplifying assumptions are made during accumulation. In this paper, we study the dosimetric impact of two aspects which may be especially critical for IMRT treatment: the weighting scheme for the dose contributions of IMRT segments at different breathing phases and the temporal resolution of 4D CT images applied for dose accumulation. Methods: Based on a continuous problem formulation, a patient- and plan-specific scheme for weighting segment dose contributions at different breathing phases is derived for use in step-and-shoot IMRT dose accumulation. Using 4D CT data sets and treatment plans for 5 lung tumor patients, dosimetric motion effects as estimated by the derived scheme are compared to effects resulting from a common equal weighting approach. Effects of reducing the temporal image resolution are evaluated for the same patients and both weighting schemes. Results: The equal weighting approach underestimates dosimetric motion effects when considering single treatment fractions. Especially interplay effects (relative misplacement of segments due to respiratory tumor motion) for IMRT segments with only a few monitor units are insufficiently represented (local point differences > 25% of the prescribed dose for larger tumor motion). The effects, however, tend to be averaged out over the entire treatment course. Regarding temporal image resolution, estimated motion effects in terms of measures of the CTV dose coverage are barely affected (in comparison to the full resolution) when using only half of the original resolution and equal weighting. In contrast, occurrence and impact of interplay effects are poorly captured for some cases (large tumor motion, undersized PTV margin) for a resolution of 10/14 phases and the more accurate patient- and plan-specific dose accumulation scheme.
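
    The contrast between equal and plan-specific weighting can be sketched for a single voxel; the phase doses and monitor units per phase below are illustrative stand-ins, not the study's data:

    ```python
    import numpy as np

    # Dose contribution of one IMRT segment to a voxel in each breathing phase (Gy).
    phase_dose = np.array([1.8, 2.0, 2.2, 2.1, 1.9])

    # Equal weighting: every phase contributes the same fraction of the segment dose.
    equal_w = np.full(phase_dose.size, 1.0 / phase_dose.size)

    # Patient/plan-specific weighting (hypothetical): proportional to the monitor
    # units delivered while the patient is in each phase.
    mu_per_phase = np.array([10.0, 40.0, 25.0, 15.0, 10.0])
    plan_w = mu_per_phase / mu_per_phase.sum()

    d_equal = float(equal_w @ phase_dose)  # simple average over phases
    d_plan = float(plan_w @ phase_dose)    # MU-weighted accumulated dose
    print(round(d_equal, 3), round(d_plan, 3))  # 2.0 vs 2.035 Gy
    ```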

  6. Attenuation studies of beta particles in glass, PVC, stainless steel for accurate activity estimation of 32P coated to coronary stents

    Restenosis, or closure of the artery due to wound healing, is one of the problems following coronary intervention such as angioplasty. Intravascular irradiation with beta particles has been shown to prevent restenosis to a good extent. In particular, beta-emitting radioisotopes such as 32P and 90Sr are ideal for local irradiation, as 95% of the dose in tissue is delivered within 4 mm of the source position. 32P-coated stents are in use as intravascular brachytherapy sources due to the high dose rate they deliver to the exposed tissues. The high radiation dose rate delivered by such an intravascular radioactive stent within a short span of implantation is an appealing approach to preventing restenosis by non-selectively killing dividing cells. A radiation dose of the order of 15 to 20 Gy needs to be delivered to the tissues by an implanted 32P stent of about 150-222 kBq activity for prevention of restenosis. However, the accuracy of the dose delivered to the implanted tissues depends on how accurately the activity of 32P in the stent is measured. The activity is quantified using a teletector prior to dispatch of the stents to the hospital. In the present paper, the doses measured with the different materials used for estimating the 32P leached from radioactive stents are given and correlated with direct teletector measurements.
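
    For the attenuation part, a simple exponential model shows how a reading taken behind a cover material is corrected back to the unattenuated value; the attenuation coefficient and thickness are illustrative assumptions, not this study's measured values:

    ```python
    import math

    # Beta transmission through a cover material: I(x) = I0 * exp(-mu * x).
    def transmitted_fraction(mu_cm, thickness_cm):
        """Fraction of the beta dose rate surviving a layer of given thickness."""
        return math.exp(-mu_cm * thickness_cm)

    # Correct a teletector reading taken behind a wall back to the bare value.
    reading = 120.0  # instrument reading behind the wall (arbitrary units)
    T = transmitted_fraction(mu_cm=3.0, thickness_cm=0.2)  # hypothetical glass wall
    print(round(reading / T, 1))  # ≈ 218.7 before attenuation
    ```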

  7. High-resolution TanDEM-X DEM: An accurate method to estimate lava flow volumes at Nyamulagira Volcano (D. R. Congo)

    Albino, F.; Smets, B.; d'Oreye, N.; Kervyn, F.

    2015-06-01

    Nyamulagira and Nyiragongo are two of the most active volcanoes in Africa, but their eruptive histories are poorly known. Assessing lava flow volumes in the region remains difficult, as field surveys are often impossible and available Digital Elevation Models (DEMs) do not have adequate spatial or temporal resolutions. We therefore use TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X) interferometry to produce a series of 0.15 arc sec (˜5 m) DEMs acquired between 2011 and 2012 over these volcanoes. TanDEM-X DEMs have an absolute vertical accuracy of 1.6 m, as determined by comparison of elevations with GPS measurements acquired around Nyiragongo. The difference between TanDEM-X-derived DEMs from before and after the 2011-2012 eruption of Nyamulagira provides an accurate thickness map of the lava flow emplaced during that activity. Values range from 3 m along the margins to 35 m in the middle, with a mean of 12.7 m. The erupted volume is 305.2 ± 36.0 × 106 m3. Height errors on thickness depend on the land covered by the flow and range from 0.4 m in old lavas to 5.5 m in dense vegetation. We also reevaluate the volume of historical eruptions at Nyamulagira since 2001 from the difference between TanDEM-X and SRTM 1 arc sec DEMs and compare them to previous work. Planimetric methods used in the literature are consistent with our results for short-duration eruptions but largely underestimate the volume of the long-lived 2011-2012 eruption. Our new estimates of erupted volumes suggest that the mean eruption rate and the magma supply rate were relatively constant at Nyamulagira during 2001-2012, respectively, 23.1 m3 s-1 and 0.9 m3 s-1.
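
    The volume estimate amounts to summing the DEM difference over the flow area and propagating a per-pixel height error. A minimal sketch on a toy grid (the 1.6 m accuracy and ~5 m posting come from the abstract; the elevation arrays are stand-ins):

    ```python
    import numpy as np

    # Lava flow volume from DEM differencing at ~5 m posting (25 m^2 per pixel).
    pre = np.zeros((4, 4))            # pre-eruption elevations (m), toy grid
    post = pre.copy()
    post[1:3, 1:3] += 10.0            # flow raises 4 pixels by 10 m

    pixel_area = 5.0 * 5.0            # m^2
    thickness = post - pre
    volume = float(thickness.sum() * pixel_area)  # m^3

    # First-order uncertainty: uncorrelated per-pixel height error sigma_h over
    # the n pixels covered by the flow.
    sigma_h = 1.6                     # m, TanDEM-X absolute vertical accuracy
    n = int((thickness > 0).sum())
    sigma_v = float(pixel_area * sigma_h * np.sqrt(n))
    print(volume, round(sigma_v, 1))  # 1000.0 m^3 ± 80.0 m^3
    ```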

  8. Excessive Sweating (Hyperhidrosis)

    Information for adults. Hyperhidrosis, the medical name for excessive sweating, involves overactive sweat glands, usually of a defined body part, including ...

  9. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations.

    McMillan, K; Bostani, M; McCollough, C; McNitt-Gray, M

    2015-01-01

    PURPOSE: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. METHODS: For 10 patients who received clinically-indicated chest (n=5) and ab...

  10. On excesses of frames

    Bakić, Damir; Berić, Tomislav

    2014-01-01

    We show that any two frames in a separable Hilbert space that are dual to each other have the same excess. Some new relations for the analysis and synthesis operators of dual frames are also derived. We then prove that pseudo-dual frames and, in particular, approximately dual frames have the same excess. We also discuss various results on frames in which excesses of frames play an important role.

  11. Is accurate and reliable blood loss estimation the 'crucial step' in early detection of postpartum haemorrhage: an integrative review of the literature

    Hancock, Angela; Weeks, Andrew D; Lavender, Dame Tina

    2015-01-01

    Background Postpartum haemorrhage (PPH) is the leading cause of maternal mortality in low-income countries and severe maternal morbidity in many high-income countries. Poor outcomes following PPH are often attributed to delays in the recognition and treatment of PPH. Experts have suggested that improving the accuracy and reliability of blood loss estimation is the crucial step in preventing death and morbidity from PPH. However, there is little guidance on how this can be achieved. The aim of...

  12. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS), values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating, as anticipated, better performance of the pp-LFER model than COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris (4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. PMID:27179237
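
    The time-to-equilibrium figures follow from a first-order uptake model; a sketch with an illustrative rate constant (the actual k depends on KPDMS-Air and sampler geometry, which are not reproduced here):

    ```python
    import math

    # First-order uptake: sampled fraction of equilibrium capacity is
    # f(t) = 1 - exp(-k t), so the time to a target fraction is t = -ln(1 - f) / k.
    def time_to_fraction(k_per_day, fraction):
        return -math.log(1.0 - fraction) / k_per_day

    k = 0.3  # 1/day, hypothetical uptake rate constant for a fairly volatile compound
    print(round(time_to_fraction(k, 0.25), 2))  # ≈ 0.96 days to 25% of capacity
    ```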

  13. Excess wind power

    Østergaard, Poul Alberg

    2005-01-01

    analyses it is analysed how excess productions are better utilised: through conversion into hydrogen or through expansion of export connections, thereby enabling sales. The results demonstrate that hydrogen production in particular is unviable under current costs, but transmission expansion could be...

  14. Is the predicted postoperative FEV1 estimated by planar lung perfusion scintigraphy accurate in patients undergoing pulmonary resection? Comparison of two processing methods

    Estimation of postoperative forced expiratory volume in 1 s (FEV1) with radionuclide lung scintigraphy is frequently used to define functional operability in patients undergoing lung resection. We conducted a study to outline the reliability of planar quantitative lung perfusion scintigraphy (QLPS) with two different processing methods to estimate the postoperative lung function in patients with resectable lung disease. Forty-one patients with a mean age of 57±12 years who underwent either a pneumonectomy (n=14) or a lobectomy (n=27) were included in the study. QLPS with Tc-99m macroaggregated albumin was performed. Three equal zones were generated for each lung [zone method (ZM)], and more precise regions of interest were drawn according to their anatomical shape in the anterior and posterior projections [lobe mapping method (LMM)] for each patient. The predicted postoperative (ppo) FEV1 values were compared with actual FEV1 values measured on postoperative day 1 (pod1 FEV1) and day 7 (pod7 FEV1). The means of preoperative FEV1 and ppoFEV1 values were 2.10±0.57 and 1.57±0.44 L, respectively. The mean pod1FEV1 (1.04±0.30 L) was lower than ppoFEV1 (p<0.05). PpoFEV1 values predicted by both the zone and LMMs overestimated the actual measured lung volumes in patients undergoing pulmonary resection in the early postoperative period. LMM is not superior to ZM. (author)
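
    The underlying prediction is the standard perfusion formula, ppoFEV1 = preoperative FEV1 × (1 − fractional perfusion of the resected region); the perfusion fraction below is an assumption chosen to match the reported means:

    ```python
    # Predicted postoperative FEV1 from quantitative lung perfusion scintigraphy.
    def ppo_fev1(preop_fev1_l, resected_perfusion_fraction):
        """ppoFEV1 = preop FEV1 * (1 - perfusion fraction of the resected region)."""
        return preop_fev1_l * (1.0 - resected_perfusion_fraction)

    preop = 2.10     # L, mean preoperative FEV1 from the abstract
    resected = 0.25  # fraction of total perfusion in the resected region (assumed)
    print(round(ppo_fev1(preop, resected), 3))  # 1.575 L, near the reported mean 1.57 L
    ```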

  15. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear–quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18–30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8–30.9 Gy) and 22.0 Gy (range, 20.2–26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models traditionally used to estimate spinal cord NTCP
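
    For reference, the LKB complication probability for a uniform dose reduces to a probit form, NTCP = Φ((D − TD50)/(m·TD50)). A minimal sketch with commonly quoted (but here assumed) cord parameters, not the paper's fitted values:

    ```python
    import math

    # LKB NTCP for a uniform dose D: t = (D - TD50) / (m * TD50), NTCP = Phi(t),
    # with Phi the standard normal CDF (the volume effect drops out for uniform dose).
    def lkb_ntcp(dose_gy, td50_gy, m):
        t = (dose_gy - td50_gy) / (m * td50_gy)
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    # TD50 = 66.5 Gy and m = 0.175 are often quoted for cord; treat as assumptions.
    print(round(lkb_ntcp(45.0, 66.5, 0.175), 4))
    ```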

  16. The Characteristics of Excess Kurtosis and Heavy Tails of Stock Market Indices and the Estimation of Risk Measures

    杨昕

    2012-01-01

    The empirical analysis of the four stock market indices DOW, Nasdaq, S&P500 and FTSE100 illustrates that the distributions of the log returns have the characteristics of excess kurtosis and heavy tails, and that the Logistic distribution fits the returns very well. At the same time, estimation formulas for the risk measures VaR and CVaR based on the Logistic distribution are given, and the risk measures of the log returns of the four stock market indices are reported.
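
    For X ~ Logistic(μ, s) the quantile function is μ + s·ln(p/(1−p)), which yields closed forms for both risk measures. A sketch with illustrative daily-return parameters (the paper's fitted values are not reproduced here):

    ```python
    import math

    def logistic_var(mu, s, alpha):
        """VaR at confidence alpha: the loss -q_p of returns at p = 1 - alpha."""
        p = 1.0 - alpha
        return -(mu + s * math.log(p / (1.0 - p)))

    def logistic_cvar(mu, s, alpha):
        """CVaR: mean loss beyond VaR, from the closed-form logistic tail mean
        E[X | X <= q_p] = mu + (s/p) * (p ln p + (1-p) ln(1-p))."""
        p = 1.0 - alpha
        tail_mean = mu + (s / p) * (p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
        return -tail_mean

    mu, s = 0.0005, 0.006  # hypothetical location/scale of daily log returns
    print(round(logistic_var(mu, s, 0.95), 4), round(logistic_cvar(mu, s, 0.95), 4))
    ```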

  17. Reducing Excessive Television Viewing.

    Jason, Leonard A.; Rooney-Rebeck, Patty

    1984-01-01

    A youngster who excessively watched television was placed on a modified token economy: earned tokens were used to activate the television for set periods of time. Positive effects resulted in the child's school work, in the amount of time his family spent together, and in his mother's perception of family social support. (KH)

  18. Accurate determination of antenna directivity

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power...

  19. Accurate estimate of the critical exponent $\nu$ for self-avoiding walks

    Clisby, Nathan

    2010-01-01

    We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to $33 \times 10^6$ steps. Consequently the critical exponent $\nu$ for three-dimensional self-avoiding walks is determined to great accuracy.

  20. An Accurate Estimate of Take-off Weight of Fighter Aircraft at Conceptual Design Stage

    杨华保

    2001-01-01

    I developed a software tool for estimating the take-off weight of fighter aircraft according to tactical and technical requirements at the conceptual design stage. Take-off weight consists of empty weight, fuel weight and payload weight. Estimating take-off weight is an iterative process, as many factors, including empty weight, fuel weight and the tactical and technical requirements, are related to take-off weight. In my software, empty weight is calculated from take-off weight by a formula deduced from statistical data relating empty weight to take-off weight. In estimating fuel weight, I divide the mission profile into several basic segments and calculate the fuel consumption of each basic segment. The software makes the iterative process of accurately estimating the take-off weight of a fighter aircraft quite easy. Data from some existing fighters confirm preliminarily that the software is reliable.
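
    The iteration described can be sketched as a fixed-point loop; the empty-weight trend and fuel fraction below are illustrative assumptions, not the paper's statistical formulas:

    ```python
    # Take-off weight W_to = W_payload + W_fuel + W_empty, solved by fixed-point
    # iteration because the empty-weight fraction itself depends on W_to.
    def empty_fraction(w_to_kg):
        """Statistical trend (assumed): empty-weight fraction falls slowly with size."""
        return 1.02 * w_to_kg ** -0.06

    def takeoff_weight(w_payload_kg, fuel_frac, tol=1e-6):
        w = 10000.0  # initial guess (kg)
        for _ in range(500):
            w_new = w_payload_kg / (1.0 - fuel_frac - empty_fraction(w))
            if abs(w_new - w) < tol:
                return w_new
            w = w_new
        raise RuntimeError("iteration did not converge")

    # fuel_frac would come from mission-segment weight fractions in a real sizing.
    w_to = takeoff_weight(w_payload_kg=2000.0, fuel_frac=0.25)
    print(round(w_to, 1))
    ```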

  1. The Virtual Diphoton Excess

    Stolarski, Daniel

    2016-01-01

    Interpreting the excesses around 750 GeV in the diphoton spectra to be the signal of a new heavy scalar decaying to photons, we point out the possibility of looking for correlated signals with virtual photons. In particular, we emphasize that the effective operator that generates the diphoton decay will also generate decays to two leptons and a photon, as well as to four leptons, independently of the new resonance couplings to $Z\gamma$ and $ZZ$. Depending on the relative sizes of these effective couplings, we show that the virtual diphoton component can make up a sizable, and sometimes dominant, contribution to the total $2\ell\gamma$ and $4\ell$ partial widths. We also discuss modifications to current experimental cuts in order to maximize the sensitivity to these virtual photon effects. Finally, we briefly comment on prospects for channels involving other Standard Model fermions as well as more exotic decay possibilities of the putative resonance.

  2. Abundance, Excess, Waste

    Rox De Luca

    2016-02-01

    Her recent work focuses on the concepts of abundance, excess and waste. These concerns translate directly into vibrant and colourful garlands that she constructs from discarded plastics collected on Bondi Beach where she lives. The process of collecting is fastidious, as is the process of sorting and grading the plastics by colour and size. This initial gathering and sorting process is followed by threading the components onto strings of wire. When completed, these assemblages stand in stark contrast to the ease of disposability associated with the materials that arrive on the shoreline as evidence of our collective human neglect and destruction of the environment around us. The contrast is heightened by the fact that the constructed garlands embody the paradoxical beauty of our plastic waste byproducts, while also evoking the ways by which those byproducts similarly accumulate in randomly assorted patterns across the oceans and beaches of the planet.

  3. The High Price of Excessive Alcohol Consumption

    2011-10-17

    This podcast is based on the October 2011 release of a report estimating the economic cost of excessive drinking. Excessive alcohol consumption cost the U.S. $223.5 billion in 2006, or about $1.90 per drink. Over three-quarters (76%) of these costs were due to binge drinking, defined as consuming 4 or more alcoholic beverages per occasion for women or 5 or more drinks per occasion for men.  Created: 10/17/2011 by National Center for Chronic Disease Prevention and Health Promotion.   Date Released: 10/17/2011.

  4. Attempted suicide: prognostic factors and estimated excess mortality

    Carlos Eduardo Leal Vidal

    2013-01-01

    This retrospective cohort study aimed to analyze the epidemiological profile of individuals who attempted suicide from 2003 to 2009 in Barbacena, Minas Gerais State, Brazil, to calculate the mortality rate from suicide and other causes, and to estimate the risk of death in these individuals. Data were collected from police reports and death certificates. Survival analysis was performed and Cox multiple regression was used. Among the 807 individuals who attempted suicide, there were 52 deaths: 12 by suicide, 10 from external causes, and 30 from other causes. Ninety percent of suicide deaths occurred within 24 months after the attempt. Risk of death was significantly greater in males, married individuals, and individuals over 60 years of age. The standardized mortality ratio showed excess mortality by suicide. The findings showed that the mortality rate among patients who had attempted suicide was higher than expected in the general population, indicating the need to improve health care for these individuals.

  6. Multidetector row computed tomography may accurately estimate plaque vulnerability: does MDCT accurately estimate plaque vulnerability? (Pro).

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal ECG and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. PMID:21532180

  7. Excess Molar Volumes and Viscosities of Binary Mixture of Diethyl Carbonate+Ethanol at Different Temperatures

    MA Peisheng; LI Nannan

    2005-01-01

    The purpose of this work was to report excess molar volumes and dynamic viscosities of the binary mixture of diethyl carbonate (DEC)+ethanol. Densities and viscosities of the binary mixture of DEC+ethanol at temperatures of 293.15 K-343.15 K and atmospheric pressure were determined over the entire composition range. Densities of the binary mixture of DEC+ethanol were measured using a vibrating U-shaped sample tube densimeter. Viscosities were determined using an Ubbelohde suspended-level viscometer. Densities are accurate to 1.0×10-5 g·cm-3, and viscosities are reproducible within ±0.003 mPa·s. From these data, excess molar volumes and deviations in viscosity were calculated. Positive excess molar volumes and negative deviations in viscosity for the DEC+ethanol system are due to the strong specific interactions. All excess molar volumes and deviations in viscosity fit the Redlich-Kister polynomial equation. The fitting parameters are presented, and the average deviations and standard deviations were also calculated. The correlation errors are very small, which proves that the correlated equations are valuable for estimating densities and viscosities of the binary mixture.
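
    The Redlich-Kister correlation can be fitted by ordinary linear least squares, since VE = x1·x2·Σk Ak(x1 − x2)^k is linear in the coefficients Ak. A sketch on synthetic data (the Ak values are illustrative, not the paper's fitted parameters):

    ```python
    import numpy as np

    # Design matrix for a Redlich-Kister expansion of an excess property.
    def rk_design(x1, order):
        x2 = 1.0 - x1
        return np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order + 1)])

    x1 = np.linspace(0.05, 0.95, 19)            # mole fractions of component 1
    true_A = np.array([1.20, -0.35, 0.10])      # cm^3/mol, illustrative coefficients
    ve = rk_design(x1, 2) @ true_A              # synthetic "measured" excess volumes

    A_fit, *_ = np.linalg.lstsq(rk_design(x1, 2), ve, rcond=None)
    print(np.round(A_fit, 3))  # recovers the generating coefficients
    ```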

  8. Does Excessive Pronation Cause Pain?

    Olesen, Christian Gammelgaard; Nielsen, RG; Rathleff, M;

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consists of ...

  9. Does Excessive Pronation Cause Pain?

    Mølgaard, Carsten Møller; Olesen Gammelgaard, Christian; Nielsen, R. G.;

    2008-01-01

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consists of ...

  10. Does excessive pronation cause pain?

    Olesen, Christian Gammelgaard; Nielsen, R.G.; Rathleff, M.;

    2008-01-01

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consists of ...

  11. Student estimations of peer alcohol consumption

    Stock, Christiane; Mcalaney, John; Pischke, Claudia;

    2014-01-01

    BACKGROUND: The Social Norms Approach, with its focus on positive behaviour and its consensus orientation, is a health promotion intervention of relevance to the context of a Health Promoting University. In particular, the approach could assist with addressing excessive alcohol consumption. AIM: This article aims to discuss the link between the Social Norms Approach and the Health Promoting University, and analyse estimations of peer alcohol consumption among European university students. METHODS: A total of 4392 students from universities in six European countries and Turkey were asked to report their own typical alcohol consumption per day and to estimate the same for their peers of the same sex. Students were classified as accurate or inaccurate estimators of peer alcohol consumption. Socio-demographic factors and personal alcohol consumption were examined as predictors for an accurate estimation...

  12. Accurate Finite Difference Algorithms

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  13. Widespread Excess Ice in Arcadia Planitia, Mars

    Bramson, Ali M; Putzig, Nathaniel E; Sutton, Sarah; Plaut, Jeffrey J; Brothers, T Charles; Holt, John W

    2015-01-01

    The distribution of subsurface water ice on Mars is a key constraint on past climate, while the volumetric concentration of buried ice (pore-filling versus excess) provides information about the process that led to its deposition. We investigate the subsurface of Arcadia Planitia by measuring the depth of terraces in simple impact craters and mapping a widespread subsurface reflection in radar sounding data. Assuming that the contrast in material strengths responsible for the terracing is the same dielectric interface that causes the radar reflection, we can combine these data to estimate the dielectric constant of the overlying material. We compare these results to a three-component dielectric mixing model to constrain composition. Our results indicate a widespread, decameters-thick layer that is excess water ice ~10^4 km^3 in volume. The accumulation and long-term preservation of this ice is a challenge for current Martian climate models.
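
    The combination of data sets amounts to equating the radar two-way delay with the terrace depth: √ε = c·Δt/(2d). A sketch with assumed numbers (not the paper's measurements), which lands near the ε ≈ 3 expected for water ice:

    ```python
    # Real dielectric constant of the layer above the radar reflector, assuming the
    # terrace-forming interface and the radar reflector are the same surface.
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def dielectric_constant(two_way_delay_s, depth_m):
        """sqrt(eps) = c * dt / (2 * d)  =>  eps = (c * dt / (2 * d))^2."""
        return (C * two_way_delay_s / (2.0 * depth_m)) ** 2

    dt = 0.25e-6  # s, two-way delay of the subsurface reflection (assumed)
    d = 21.0      # m, terrace depth from crater geometry (assumed)
    print(round(dielectric_constant(dt, d), 2))  # ≈ 3.18, consistent with excess ice
    ```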

  14. Quantum theory of excess noise

    Bardroff, P. J.; Stenholm, S

    1999-01-01

    We analyze the excess noise in the framework of the conventional quantum theory of laser-like systems. Our calculation is conceptually simple and our result also shows a correction to the semi-classical result derived earlier.

  15. Excess mortality following hip fracture

    Abrahamsen, B; van Staa, T; Ariely, R;

    2009-01-01

    This systematic literature review has shown that patients experiencing hip fracture after low-impact trauma are at considerable excess risk for death compared with non-hip-fracture/community control populations. The increased mortality risk may persist for several years thereafter, highlighting the need for interventions to reduce this risk. Patients experiencing hip fracture after low-impact trauma are at considerable risk for subsequent osteoporotic fractures and premature death. We conducted a systematic review of the literature to identify all studies that reported unadjusted and excess mortality rates for hip fracture. Although a lack of consistent study design precluded any formal meta-analysis or pooled analysis of the data, we have shown that hip fracture is associated with excess mortality (over and above mortality rates in non-hip-fracture/community control populations...

  16. Pricing Excess-of-loss Reinsurance Contracts Against Catastrophic Loss

    J. David Cummins; Lewis, Christopher M.; Phillips, Richard D.

    1998-01-01

    This paper develops a pricing methodology and pricing estimates for the proposed Federal excess-of-loss (XOL) catastrophe reinsurance contracts. The contracts, proposed by the Clinton Administration, would provide per-occurrence excess-of-loss reinsurance coverage to private insurers and reinsurers, where both the coverage layer and the fixed payout of the contract are based on insurance industry losses, not company losses. In financial terms, the Federal government would be selling earthqua...

  17. Syndromes that Mimic an Excess of Mineralocorticoids.

    Sabbadin, Chiara; Armanini, Decio

    2016-09-01

    Pseudohyperaldosteronism is characterized by a clinical picture of hyperaldosteronism with suppression of renin and aldosterone. It can be due to endogenous or exogenous substances that mimic the effector mechanisms of aldosterone, leading not only to alterations of electrolytes and hypertension, but also to an increased inflammatory reaction in several tissues. Enzymatic defects of adrenal steroidogenesis (deficiency of 17α-hydroxylase and 11β-hydroxylase), mutations of the mineralocorticoid receptor (MR) and alterations of expression or saturation of 11β-hydroxysteroid dehydrogenase type 2 (apparent mineralocorticoid excess syndrome, Cushing's syndrome, excessive intake of licorice, grapefruit or carbenoxolone) are the main causes of pseudohyperaldosteronism. In these cases treatment with dexamethasone and/or MR blockers is useful not only to normalize blood pressure and electrolytes, but also to prevent the deleterious effects of prolonged over-activation of MR in epithelial and non-epithelial tissues. Genetic alterations of the sodium channel (Liddle's syndrome) or of the sodium-chloride co-transporter (Gordon's syndrome) cause abnormal sodium and water reabsorption in the distal renal tubules and hypertension. Treatment with amiloride and thiazide diuretics, respectively, can reverse the clinical picture and normalize the renin-aldosterone system. Finally, many other more common situations can lead to an acquired pseudohyperaldosteronism, such as volume expansion due to excessive water and/or sodium intake, and the use of drugs such as contraceptives, corticosteroids, β-adrenergic agonists and NSAIDs. In conclusion, syndromes or situations that mimic aldosterone excess are not rare, and an accurate personal and pharmacological history is mandatory for a correct diagnosis, avoiding unnecessary tests and mistreatment. PMID:27251484

  18. Determination of Enantiomeric Excess of Glutamic Acids by Lab-made Capillary Array Electrophoresis

    Jun WANG; Kai Ying LIU; Li WANG; Ji Ling BAI

    2006-01-01

    Simulated enantiomeric excess of glutamic acid was determined by a lab-made sixteen-channel capillary array electrophoresis instrument with a confocal fluorescence rotary scanner. The experimental results indicated that the capillary array electrophoresis method can accurately determine the enantiomeric excess of glutamic acid and can be used as a high-throughput screening system for combinatorial asymmetric catalysis.
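Enantiomeric excess itself is a simple function of the two enantiomer peak areas; a minimal sketch, assuming equal detector response for both enantiomers:

```python
def enantiomeric_excess(area_r, area_s):
    """ee (%) from the two enantiomer peak areas of an electropherogram,
    assuming equal detector response for the R and S forms."""
    return 100.0 * abs(area_r - area_s) / (area_r + area_s)

# Hypothetical peak areas from one capillary channel:
ee = enantiomeric_excess(75.0, 25.0)   # a 3:1 ratio corresponds to 50% ee
```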

  19. Total Liability for Excessive Harm

    Cooter, Robert D; Porat, Ariel

    2004-01-01

    In many circumstances, the total harm caused by everyone is verifiable, while the harm caused by each individual is unverifiable. For example, the environmental agency can measure the total harm caused by pollution much more easily than it can measure the harm caused by each individual polluter. In these circumstances, implementing the usual liability rules or externality taxes is impossible. We propose a novel solution: Hold each participant in the activity responsible for all of the excessive harm...

  20. Diphoton Excess through Dark Mediators

    Chen, Chien-Yi; Pospelov, Maxim; Zhong, Yi-Ming

    2016-01-01

    Preliminary ATLAS and CMS results from the first 13 TeV LHC run have encountered an intriguing excess of events in the diphoton channel around the invariant mass of 750 GeV. We investigate a possibility that the current excess is due to a heavy resonance decaying to light metastable states, which in turn give displaced decays to very highly collimated $e^+e^-$ pairs. Such decays may pass the photon selection criteria, and successfully mimic the diphoton events, especially at low counts. We investigate two classes of such models, characterized by the following underlying production and decay chains: $gg \to S \to A'A' \to (e^+e^-)(e^+e^-)$ and $q\bar q \to Z' \to sa \to (e^+e^-)(e^+e^-)$, where at the first step a heavy scalar ($S$) or vector ($Z'$) resonance is produced that decays to light metastable vectors ($A'$) or (pseudo-)scalars ($s$ and $a$). Setting the parameters of the models to explain the existing excess, and taking the ATLAS detector geometry into account, we marginalize over the properties of heav...

  1. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. Firstly, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. Then the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to the refractive index detector (RID) and their universal refractive index increment (dn/dc). Accuracy of the developed method for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan was determined, and their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is much simpler, more rapid, and more accurate, with no need for individual polysaccharide standards or calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus, Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
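The dn/dc-based quantification idea can be sketched as follows. The instrument constant and peak area are hypothetical, and the ~0.146 mL/g value is the commonly used dn/dc figure for polysaccharides in aqueous solution; real MALLS/RID software performs this calibration internally:

```python
def polysaccharide_mass(rid_peak_area, dn_dc, instrument_constant):
    """Injected mass from an RI peak under the simplified model
        area = k * (dn/dc) * mass,  so  mass = area / (k * dn/dc).
    Because dn/dc is nearly universal (~0.146 mL/g) for polysaccharides,
    no per-analyte standard or calibration curve is needed."""
    return rid_peak_area / (instrument_constant * dn_dc)

# Hypothetical RI peak area and instrument constant:
mass_ug = polysaccharide_mass(rid_peak_area=2.19, dn_dc=0.146,
                              instrument_constant=0.03)
```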

  2. Study of accurate volume measurement system for plutonium nitrate solution

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for the accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, through which air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure is subject to many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the tips of the dip-tubes, non-uniformity of the temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures arising during the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system was developed with a quartz-oscillation-type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
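The two differential pressures map to density and level as sketched below; the readings are hypothetical, and a real accountancy tank uses a calibrated level-to-volume curve plus the corrections discussed in the paper rather than a constant cross-section:

```python
G = 9.80665  # standard gravity, m/s^2

def solution_density(dp_density_pa, tube_separation_m):
    """Density from the dP between two dip tubes a known height apart:
    dP = rho * g * h  =>  rho = dP / (g * h)."""
    return dp_density_pa / (G * tube_separation_m)

def solution_level(dp_level_pa, density_kg_m3):
    """Liquid height above the lowest dip tube from the second dP."""
    return dp_level_pa / (G * density_kg_m3)

# Hypothetical readings:
rho = solution_density(4903.325, 0.5)   # -> 1000 kg/m^3
level = solution_level(9806.65, rho)    # -> 1.0 m
volume_l = 0.25 * level * 1000.0        # 0.25 m^2 cross-section, in litres
```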

  3. Excess Returns and Systemic Risk for Chile and Mexico Excess Returns and Systemic Risk for Chile and Mexico

    Paul D. McNelis

    2000-03-01

    This paper is concerned with excess returns in the equity markets and the evolution of systemic risk in Chile and Mexico during the years 1989-1998, a period of financial openness, policy reform and crisis. A time-varying generalised autoregressive conditional heteroscedastic in-mean (GARCH-M) framework is used to estimate progressively more complex models of risk: the univariate own-volatility model, the bivariate market pricing model, and the trivariate intertemporal asset pricing model. The results show no evidence of a significant reduction in systemic risk; rather, excess returns have remained volatile for both countries. For Chile, excess returns are significantly related to their own lagged levels, while for Mexico excess returns are significantly related to their own lagged variances. The influence of global factors is relatively minimal compared to potential home factors.
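A GARCH-in-mean process of the kind estimated here can be sketched with a toy simulation; the parameter values are illustrative only, not the paper's estimates:

```python
import numpy as np

# GARCH(1,1)-in-mean: r_t = lam * h_t + eps_t,  eps_t ~ N(0, h_t),
# h_t = omega + alpha * eps_{t-1}^2 + beta * h_{t-1}.
rng = np.random.default_rng(0)
omega, alpha, beta, lam = 1e-5, 0.08, 0.90, 2.0   # lam: price of risk
n = 1000
h = np.empty(n)     # conditional variance
eps = np.empty(n)   # shocks
r = np.empty(n)     # excess returns
h[0] = omega / (1 - alpha - beta)                  # unconditional variance
for t in range(n):
    if t > 0:
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    r[t] = lam * h[t] + eps[t]
```

The in-mean term lam * h_t is what links expected excess returns to their own volatility, the relation the paper tests for both countries.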

  4. Excess Early Mortality in Schizophrenia

    Laursen, Thomas Munk; Nordentoft, Merete; Mortensen, Preben Bo

    2014-01-01

    Schizophrenia is often referred to as one of the most severe mental disorders, primarily because of the very high mortality rates of those with the disorder. This article reviews the literature on excess early mortality in persons with schizophrenia and suggests reasons for the high mortality as well as possible ways to reduce it. Persons with schizophrenia have an exceptionally short life expectancy. High mortality is found in all age groups, resulting in a life expectancy of approximately 20 years below that of the general population. Evidence suggests that persons with schizophrenia may not...

  5. Evaluation of Excess Thermodynamic Parameters in a Binary Liquid Mixture (Cyclohexane + O-Xylene) at Different Temperatures

    K. Narendra; Narayanamurthy, P.; CH. Srinivasu

    2010-01-01

    The ultrasonic velocity, density and viscosity of the binary liquid mixture cyclohexane + o-xylene have been determined at temperatures from 303.15 to 318.15 K over the whole composition range. The data have been utilized to estimate the excess adiabatic compressibility (βE), excess volume (VE), excess intermolecular free length (LfE), excess internal pressure (πE) and excess enthalpy (HE) at the above temperatures. The excess values have been found to be useful in estimating the st...
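The excess-volume calculation is representative of how all of these excess functions are formed: the measured mixture property minus its ideal, mole-fraction-weighted value. A sketch with hypothetical densities near ambient literature values:

```python
def molar_volume(x1, m1, m2, rho_mix):
    """Molar volume (cm^3/mol) of a binary mixture from its density,
    with molar masses in g/mol and density in g/cm^3."""
    return (x1 * m1 + (1 - x1) * m2) / rho_mix

def excess_volume(x1, m1, m2, rho_mix, rho1, rho2):
    """V^E = V_mix - (x1*V1 + x2*V2), the deviation from ideal mixing."""
    v1, v2 = m1 / rho1, m2 / rho2
    return molar_volume(x1, m1, m2, rho_mix) - (x1 * v1 + (1 - x1) * v2)

# Hypothetical equimolar cyclohexane (84.16 g/mol) + o-xylene (106.17 g/mol):
ve = excess_volume(0.5, 84.16, 106.17, rho_mix=0.830, rho1=0.774, rho2=0.880)
```

A nonzero V^E signals packing or interaction differences between the pure components and the mixture; the other excess functions (βE, LfE, πE, HE) are formed the same way from their respective measured and ideal values.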

  6. Outflows in Sodium Excess Objects

    Park, Jongwon; Yi, Sukyoung K

    2015-01-01

    van Dokkum and Conroy revisited the unexpectedly strong Na I lines at 8200 A found in some giant elliptical galaxies and interpreted them as evidence for an unusually bottom-heavy initial mass function. Jeong et al. later found a large population of galaxies showing equally extraordinary Na D doublet absorption lines at 5900 A (Na D excess objects: NEOs) and showed that their origins can be different for different types of galaxies. While a Na D excess seems to be related to the interstellar medium (ISM) in late-type galaxies, smooth-looking early-type NEOs show little or no dust extinction and hence no compelling sign of ISM contributions. To further test this finding, we measured the Doppler components of the Na D lines. We hypothesized that the ISM would have a better (albeit not definite) chance of showing a blueshifted Doppler departure from the bulk of the stellar population, due to outflows caused by either star formation or AGN activity. Many of the late-type NEOs clearly show blueshift in their Na D lines, wh...

  7. Excess electron transport in cryoobjects

    Eshchenko, D G; Brewer, J H; Cottrell, S P; Cox, S F J

    2003-01-01

    Experimental results on excess electron transport in the solid and liquid phases of Ne, Ar, and a solid N2-Ar mixture are presented and compared with those for He. The muon spin relaxation technique in frequently switching electric fields was used to study the phenomenon of delayed muonium formation: excess electrons liberated in the μ+ ionization track converge upon the positive muons and form Mu (μ+e−) atoms. This process is shown to be crucially dependent upon the electron's interaction with its environment (i.e., whether it occupies the conduction band or becomes localized in a bubble of tens of angstroms in radius) and upon its mobility in these states. The characteristic lengths involved are 10^-6-10^-4 cm, and the characteristic times range from nanoseconds to tens of microseconds. Such a microscopic length scale sometimes enables the electron to spend its entire free lifetime in a state which may not be detected by conventional macroscopic techniques. The electron transport proc...

  8. Improved manometric setup for the accurate determination of supercritical carbon dioxide sorption

    Van Hemert, P.; Bruining, H.; Rudolph, E.S.J.; Wolf, K.H.A.A.; Maas, J.G.

    2009-01-01

    An improved version of the manometric apparatus and its procedures for measuring excess sorption of supercritical carbon dioxide are presented in detail with a comprehensive error analysis. An improved manometric apparatus is necessary for accurate excess sorption measurements with supercritical carbon dioxide.

  9. [Iodine excess induced thyroid dysfunction].

    Egloff, Michael; Philippe, Jacques

    2016-04-20

    The principal sources of iodine overload, amiodarone and radiologic contrast media, are frequently used in modern medicine. The thyroid gland exerts a protective effect against iodine excess by suppressing iodine internalization into the thyrocyte and iodine organification, the Wolff-Chaikoff effect. Insufficiency of this effect or lack of escape from it leads to hyper- or hypothyroidism, respectively. Amiodarone-induced thyrotoxicosis is a complex condition marked by two different pathophysiological mechanisms with different treatments. Thyroid metabolism changes after exposure to radiologic contrast media are frequent, but they rarely need to be treated. High-risk individuals need to be identified in order to delay the exam, monitor thyroid function, or apply prophylactic measures in selected cases. PMID:27276725

  10. Excess water dynamics in hydrotalcite: QENS study

    S Mitra; A Pramanik; D Chakrabarty; R Mukhopadhyay

    2004-08-01

    Results of quasi-elastic neutron scattering (QENS) measurements on the dynamics of excess water in hydrotalcite samples with varied content of excess water are reported. The translational motion of excess water is best described by a random translational jump-diffusion model. The observed increase in translational diffusivity with the amount of excess water is attributed to the change in binding of the water molecules to the host layer.
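The random translational jump-diffusion model fitted to such QENS spectra predicts a quasielastic half-width that grows as D*Q^2 at low Q and saturates at high Q; a sketch of the Singwi-Sjölander form with illustrative parameter values:

```python
HBAR = 0.6582119569  # hbar in meV*ps

def qens_hwhm(q, d, tau):
    """Quasielastic HWHM (meV) for random translational jump diffusion:
        Gamma(Q) = hbar * D * Q^2 / (1 + D * Q^2 * tau),
    with Q in 1/Angstrom, D in Angstrom^2/ps, residence time tau in ps.
    Low Q tends to hbar*D*Q^2 (simple diffusion); high Q saturates at
    hbar/tau."""
    return HBAR * d * q * q / (1.0 + d * q * q * tau)

# Illustrative values of the order seen for water confined in layered hosts:
low_q = qens_hwhm(0.3, 2.0, 5.0)
high_q = qens_hwhm(2.0, 2.0, 5.0)   # approaches hbar/tau
```

Fitting Gamma(Q) across the measured Q range yields the diffusivity D and residence time tau, which is how the diffusivity increase with excess water content is quantified.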

  11. 10 CFR 904.10 - Excess energy.

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be...

  12. 7 CFR 985.56 - Excess oil.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified...

  13. A Discussion on Mean Excess Plots

    Ghosh, Souvik; Resnick, Sidney I.

    2009-01-01

    A widely used tool in the study of risk, insurance and extreme values is the mean excess plot. One use is for validating a generalized Pareto model for the excess distribution. This paper investigates some theoretical and practical aspects of the use of the mean excess plot.
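A minimal mean-excess computation, checked on an exponential sample, for which the mean excess function is flat; a generalized Pareto tail would instead give a straight line with nonzero slope, which is what the plot is used to diagnose:

```python
import numpy as np

def mean_excess(x, thresholds):
    """e(u) = mean(X - u | X > u) for each threshold u."""
    x = np.asarray(x, dtype=float)
    return [float((x[x > u] - u).mean()) for u in thresholds]

# For Exp(mean=2), the memoryless property makes e(u) = 2 at every u.
rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=200_000)
e = mean_excess(sample, thresholds=[0.5, 1.0, 2.0, 4.0])
```

In practice one plots e(u) against u for the order statistics of the data; approximate linearity above some threshold supports a generalized Pareto model for the excesses over that threshold.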

  14. THE IMPORTANCE OF THE STANDARD SAMPLE FOR ACCURATE ESTIMATION OF THE CONCENTRATION OF NET ENERGY FOR LACTATION IN FEEDS ON THE BASIS OF GAS PRODUCED DURING THE INCUBATION OF SAMPLES WITH RUMEN LIQUOR

    T ŽNIDARŠIČ

    2003-10-01

    The aim of this work was to examine the necessity of using a standard sample in the Hohenheim gas test. During a three-year period, 24 runs of forage samples were incubated with rumen liquor in vitro. Besides the forage samples, the standard hay sample provided by the Hohenheim University (HFT-99) was included in the experiment. Half of the runs were incubated with rumen liquor of cattle and half with rumen liquor of sheep. Gas produced during the 24 h incubation of the standard sample was measured and compared to the declared value of sample HFT-99. Besides HFT-99, 25 test samples with known digestibility coefficients determined in vivo were included in the experiment. Based on the gas production of HFT-99, it was found that the donor animal (cattle or sheep) did not significantly affect the activity of rumen liquor (41.4 vs. 42.2 ml of gas per 200 mg dry matter, P>0.1). Differences between years (41.9, 41.2 and 42.3 ml of gas per 200 mg dry matter, P>0.1) were likewise not significant. However, a variability of about 10% (from 38.9 to 43.7 ml of gas per 200 mg dry matter) was observed between runs. In the present experiment, the gas production of HFT-99 was about 6% lower than the value obtained by the Hohenheim University (41.8 vs. 44.43 ml per 200 mg dry matter). This indicates a systematic error between the laboratories. In the case of the twenty-five test samples, correction on the basis of the standard sample reduced the average difference of the in vitro estimates of net energy for lactation (NEL) from the in vivo determined values. It was concluded that, due to variation between runs and systematic differences in rumen liquor activity between the two laboratories, the results of the Hohenheim gas test have to be corrected on the basis of a standard sample.
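The correction described amounts to rescaling each run's readings by the declared-to-measured ratio of the standard sample included in that run; a sketch using the numbers from the abstract (the 40.0 ml test reading is hypothetical):

```python
def correct_gas_production(measured_ml, standard_measured_ml, standard_declared_ml):
    """Scale a run's gas reading by the declared/measured ratio of the
    standard hay sample (HFT-99) incubated in the same run."""
    return measured_ml * standard_declared_ml / standard_measured_ml

# The lab read 41.8 ml for HFT-99 against a declared 44.43 ml per 200 mg
# dry matter, so every reading in that run is scaled up by ~6%.
corrected = correct_gas_production(40.0, 41.8, 44.43)
```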

  15. Average Potential Temperature of the Upper Mantle and Excess Temperatures Beneath Regions of Active Upwelling

    Putirka, K. D.

    2006-05-01

    The question as to whether any particular oceanic island is the result of a thermal mantle plume is a question of whether volcanism is the result of passive upwelling, as at mid-ocean ridges, or active upwelling driven by thermally buoyant material. When upwelling is passive, mantle temperatures reflect average or ambient upper mantle values. In contrast, sites of thermally driven active upwelling will have elevated (or excess) mantle temperatures, driven by some source of excess heat. Skeptics of the plume hypothesis suggest that the maximum temperatures at ocean islands are similar to maximum temperatures at mid-ocean ridges (Anderson, 2000; Green et al., 2001). Olivine-liquid thermometry, when applied to Hawaii, Iceland, and global MORB, belies this hypothesis. Olivine-liquid equilibria provide the most accurate means of estimating mantle temperatures, which are highly sensitive to the forsterite (Fo) contents of olivines and the FeO content of coexisting liquids. Their application shows that mantle temperatures in the MORB source region are less than temperatures at both Hawaii and Iceland. The Siqueiros Transform may provide the most precise estimate of TpMORB because high-MgO glass compositions there have been affected only by olivine fractionation, so primitive FeOliq is known; olivine thermometry yields TpSiqueiros = 1430 ± 59°C. A global database of 22,000 MORB shows that most MORB have slightly higher FeOliq than at Siqueiros, which translates to higher calculated mantle potential temperatures. If the values for Fomax (= 91.5) and KD(Fe-Mg)ol-liq (= 0.29) at Siqueiros apply globally, then upper mantle Tp is closer to 1485 ± 59°C. Averaging this global estimate with that recovered at Siqueiros yields TpMORB = 1458 ± 78°C, which is used to calculate plume excess temperatures, Te. The estimate for TpMORB defines the convective mantle geotherm, and is consistent with estimates from sea floor bathymetry and heat flow (Stein and Stein, 1992), and
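Given TpMORB = 1458 ± 78°C, a plume excess temperature and its uncertainty follow by subtraction, with independent 1-sigma errors combined in quadrature; the site value below is hypothetical:

```python
import math

def excess_temperature(tp_site, sigma_site, tp_morb=1458.0, sigma_morb=78.0):
    """Te = Tp(site) - Tp(MORB), with 1-sigma errors added in quadrature
    (assumes the two estimates are independent)."""
    return tp_site - tp_morb, math.hypot(sigma_site, sigma_morb)

te, sigma_te = excess_temperature(1722.0, 40.0)   # hypothetical plume Tp
```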

  16. Phytoextraction of excess soil phosphorus

    Sharma, Nilesh C. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Starnes, Daniel L. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Sahi, Shivendra V. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States)]. E-mail: shiv.sahi@wku.edu

    2007-03-15

    In the search for a suitable plant to be used in P phytoremediation, several species belonging to legume, vegetable and herb crops were grown in P-enriched soils, and screened for P accumulation potentials. A large variation in P concentrations of different plant species was observed. Some vegetable species such as cucumber (Cucumis sativus) and yellow squash (Cucurbita pepo var. melopepo) were identified as potential P accumulators with >1% (dry weight) P in their shoots. These plants also displayed a satisfactory biomass accumulation while growing on a high concentration of soil P. Elevated activities of phosphomonoesterase and phytase were observed when plants were grown in P-enriched soils, possibly contributing to the high P acquisition in these species. Sunflower plants also demonstrated an increased shoot P accumulation. This study shows that the phytoextraction of phosphorus can be effective using appropriate plant species. - Crop plants such as cucumber, squash and sunflower accumulate phosphorus and thus can be used in the phytoextraction of excess phosphorus from soils.

  17. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    Dhabal, Debdas; Chakravarty, Charusita, E-mail: charus@chemistry.iitd.ac.in [Department of Chemistry, Indian Institute of Technology Delhi, New Delhi 110016 (India); Nguyen, Andrew Huy; Molinero, Valeria [Department of Chemistry, University of Utah, Salt Lake City, Utah 84112-0850 (United States); Singh, Murari [Department of Chemical Physics, Weizmann Institute of Science, Rehovot 76100 (Israel); Khatua, Prabir; Bandyopadhyay, Sanjoy [Department of Chemistry, Indian Institute of Technology Kharagpur, Kharagpur 721302 (India)

    2015-10-28

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure there is a temperature of maximum density below which the thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by Strip
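The pair term of that multiparticle correlation expansion can be sketched numerically from a radial distribution function g(r); the hard-sphere-like g(r) below is only a check case, and the triplet term that distinguishes Strip is omitted:

```python
import numpy as np

def s2_per_particle(r, g, rho):
    """Two-body excess entropy per particle, in units of kB:
        s2/kB = -2*pi*rho * Integral[ g ln g - (g - 1) ] r^2 dr
    (the pair term of the multiparticle correlation expansion)."""
    safe = np.clip(g, 1e-12, None)                       # avoid log(0)
    integrand = np.where(g > 0, g * np.log(safe) - (g - 1.0), 1.0) * r ** 2
    # trapezoidal quadrature over the r grid
    return -2.0 * np.pi * rho * float(
        np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

# Check against a step g(r): g = 0 inside r < 1, g = 1 outside, where the
# integral reduces to 1/3 and s2 = -2*pi*rho/3.
r = np.linspace(0.0, 5.0, 5001)
g = np.where(r < 1.0, 0.0, 1.0)
s2 = s2_per_particle(r, g, rho=0.5)   # close to -pi/3
```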

  18. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by Strip, is also studied. 
Strip is a

  19. 75 FR 27572 - Monthly Report of Excess Income and Annual Report of Uses of Excess Income

    2010-05-17

    ... URBAN DEVELOPMENT Monthly Report of Excess Income and Annual Report of Uses of Excess Income AGENCY... subject proposal. Project owners are permitted to retain Excess Income for projects under terms and conditions established by HUD. Owners must request to retain some or all of their Excess Income. The...

  20. 26 CFR 54.4981A-1T - Tax on excess distributions and excess accumulations (temporary).

    2010-04-01

    ... same for all individuals. (b) Base for excise tax under grandfather rule. Although the portion of any... 26 Internal Revenue 17 2010-04-01 2010-04-01 false Tax on excess distributions and excess... Tax on excess distributions and excess accumulations (temporary). The following questions and...

  1. Patterns of Excess Cancer Risk among the Atomic Bomb Survivors

    Pierce, Donald A.

    1996-05-01

    I will indicate the major epidemiological findings regarding excess cancer among the atomic-bomb survivors, with some special attention to what can be said about low-dose risks. This will be based on 1950--90 mortality follow-up of about 87,000 survivors having individual radiation dose estimates. Of these about 50,000 had doses greater than 0.005 Sv, and the remainder serve largely as a comparison group. It is estimated that for this cohort there have been about 400 excess cancer deaths among a total of about 7800. Since there are about 37,000 subjects in the dose range .005--.20 Sv, there is substantial low-dose information in this study. The person-year-sievert for the dose range under .20 Sv is greater than for any one of the 6 study cohorts of U.S., Canadian, and U.K. nuclear workers, and is equal to about 60% of the total for the combined cohorts. It is estimated, without linear extrapolation from higher doses, that for the RERF cohort there have been about 100 excess cancer deaths in the dose range under .20 Sv. Both the dose-response and age-time patterns of excess risk are very different for solid cancers and leukemia. One of the most important findings has been that the solid cancer (absolute) excess risk has steadily increased over the entire follow-up to date, similarly to the age-increase of the background risk. About 25% of the excess solid cancer deaths occurred in the last 5 years of the 1950--90 follow-up. On the contrary, most of the excess leukemia risk occurred in the first few years following exposure. The observed dose response for solid cancers is very linear up to about 3 Sv, whereas for leukemia there is statistically significant upward curvature on that range. Very little has been proposed to explain this distinction.
Although there is no hint of upward curvature or a threshold for solid cancers, the inherent difficulty of precisely estimating very small risks along with radiobiological observations that many radiation effects are nonlinear

  2. Estimation of excess numbers of viral diarrheal cases among children aged <5 years in Beijing with adjusted Serfling regression model

    贾蕾; 王小莉; 吴双胜; 马建新; 李洪军; 高志勇; 王全意

    2016-01-01

    Objective: To estimate the excess numbers of viral diarrheal cases among children aged <5 years in Beijing from 1 January 2011 to 23 May 2015. Methods: The excess numbers of diarrheal cases among children aged <5 years were estimated by using weekly outpatient visit data from two children's hospitals in Beijing and an adjusted Serfling regression model. Results: The incidence peaks of viral diarrhea were during the 8th-10th and 40th-42nd weeks in 2011, the 40th-46th weeks in 2012, the 43rd-49th weeks in 2013, and from the 45th week of 2014 to the 11th week of 2015, respectively. The excess numbers of viral diarrheal cases among children aged <5 years in the two children's hospitals were 911 (95%CI: 261-1 561), 1 998 (95%CI: 1 250-2 746), 1 645 (95%CI: 891-2 397), 2 806 (95%CI: 1 938-3 674) and 1 822 (95%CI: 614-3 031), respectively, accounting for 40.38% (95%CI: 11.57%-69.19%), 44.21% (95%CI: 27.66%-60.77%), 45.08% (95%CI: 24.42%-65.69%), 60.87% (95%CI: 42.04%-79.70%) and 66.62% (95%CI: 22.45%-110.82%) of total outpatient visits due to diarrhea during 2011-2015, respectively. In total, the excess number of viral diarrheal cases among children aged <5 years in Beijing was estimated to be 18 731 (95%CI: 10 106-27 354) from 2011 to 23 May 2015. Conclusions: Winter is the viral diarrhea season for children aged <5 years. The adjusted Serfling regression model analysis suggested that close attention should be paid to the etiologic variation of viruses causing acute gastroenteritis, especially norovirus.
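The Serfling approach can be sketched as follows. The harmonic-regression form and the synthetic weekly counts are illustrative assumptions, not the paper's exact adjusted model; the common logic is to fit a trend-plus-seasonality baseline on non-epidemic weeks and count excess as observed minus baseline over epidemic weeks.

```python
import numpy as np

# Minimal Serfling-style baseline sketch (assumed harmonic form, synthetic
# counts): fit trend + annual harmonics to non-epidemic weeks, then count
# excess visits as observed minus baseline over the epidemic weeks.

def serfling_excess(weeks, visits, epidemic):
    w = np.asarray(weeks, float)
    X = np.column_stack([np.ones_like(w), w,
                         np.sin(2 * np.pi * w / 52), np.cos(2 * np.pi * w / 52)])
    y = np.asarray(visits, float)
    ep = np.asarray(epidemic)
    beta, *_ = np.linalg.lstsq(X[~ep], y[~ep], rcond=None)  # baseline fit
    return float(np.sum((y - X @ beta)[ep]))                # excess in peaks

# Synthetic two-year series: baseline 100 visits/week, +50 in weeks 40-45
weeks = range(104)
visits = [100 + (50 if 40 <= t <= 45 else 0) for t in weeks]
epidemic = [40 <= t <= 45 for t in weeks]
print(round(serfling_excess(weeks, visits, epidemic)))  # → 300
```

Confidence intervals like those reported above would come from the baseline model's prediction uncertainty, which this sketch omits.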

  3. Towards accurate emergency response behavior

    Nuclear reactor operator emergency response behavior has persisted as a training problem owing to a lack of information. The industry needs an accurate definition of operator behavior under adverse stress conditions, and training methods that will produce the desired behavior. Newly assembled information from fifty years of research into human behavior under both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods that will produce the needed control-room behavior. The research indicates that operator response in emergencies divides into two modes: conditioned behavior and knowledge-based behavior. Methods that assure accurate conditioned behavior, and provide for the recovery of knowledge-based behavior, are described in detail

  4. Is the PAMELA Positron Excess Winos?

    Grajek, Phill; Phalen, Dan; Pierce, Aaron; Watson, Scott

    2008-01-01

    Recently the PAMELA satellite-based experiment reported an excess of galactic positrons that could be a signal of annihilating dark matter. The PAMELA data may admit an interpretation as a signal from a wino-like LSP of mass about 200 GeV, normalized to the local relic density, and annihilating mainly into W-bosons. This possibility requires the current conventional estimate for the energy loss rate of positrons be too large by roughly a factor of five. Data from anti-protons and gamma rays also provide tension with this interpretation, but there are significant astrophysical uncertainties associated with their propagation. It is not unreasonable to take this well-motivated candidate seriously, at present, in part because it can be tested in several ways soon. The forthcoming PAMELA data on higher energy positrons and the FGST (formerly GLAST) data, should provide important clues as to whether this scenario is correct. If correct, the wino interpretation implies a cosmological history in which the dark matter...

  5. Age at menarche in schoolgirls with and without excess weight

    Silvia D. Castilho; Luciana B. Nucci

    2015-01-01

    OBJECTIVE: To evaluate the age at menarche of girls, with or without weight excess, attending private and public schools in a city in Southeastern Brazil. METHODS: This was a cross-sectional study comparing the age at menarche of 750 girls from private schools with 921 students from public schools, aged between 7 and 18 years. The menarche was reported by the status quo method and age at menarche was estimated by logarithmic transformation. The girls were grouped according to body mass index ...

  6. Association between regular exercise and excessive newborn birth weight

    Owe, Katrine M.; Nystad, Wenche; Bø, Kari

    2009-01-01

    OBJECTIVE: To estimate the association between regular exercise before and during pregnancy and excessive newborn birth weight. METHODS: Using data from the Norwegian Mother and Child Cohort Study, 36,869 singleton pregnancies lasting at least 37 weeks were included. Information on regular exercise was based on answers from two questionnaires distributed in pregnancy weeks 17 and 30. Linkage to the Medical Birth Registry of Norway provided data on newborn birth weight. The main outcome me...

  7. Factors associated with excessive polypharmacy in older people

    Walckiers, Denise; Van der Heyden, Johan; Tafforeau, Jean

    2015-01-01

    Background: Older people are a growing population. They live longer, but often have multiple chronic diseases. As a consequence, they take many different kinds of medicines, while their vulnerability to pharmaceutical products is increased. The objective of this study is to describe the medicine utilization pattern of people aged 65 years and older in Belgium, and to estimate the prevalence and the determinants of excessive polypharmacy. Methods: Data were used from the Belgian Health Inte...

  8. Initial report on characterization of excess highly enriched uranium

    1996-07-01

    DOE's Office of Fissile Materials Disposition assigned to this Y-12 division the task of preparing a report on the 174.4 metric tons of excess highly enriched uranium. Characterization included identification by category, gathering existing data (assay), defining the likely processing steps needed to prepare material for transfer to a blending site, and developing a range of preliminary cost estimates for those steps. The focus is on making commercial reactor fuel the final disposition path.


  10. Damages and Expected Deaths Due to Excess NOx Emissions from 2009 to 2015 Volkswagen Diesel Vehicles.

    Holland, Stephen P; Mansur, Erin T; Muller, Nicholas Z; Yates, Andrew J

    2016-02-01

    We estimate the damages and expected deaths in the United States due to excess emissions of NOx from 2009 to 2015 Volkswagen diesel vehicles. Using data on vehicle registrations and a model of pollution transport and valuation, we estimate excess damages of $430 million and 46 excess expected deaths. Accounting for uncertainty about emissions gives a range for damages from $350 million to $500 million, and a range for excess expected deaths from 40 to 52. Our estimates incorporate significant local heterogeneity: for example, Minneapolis has the highest damages despite having fewer noncompliant vehicles than 13 other cities. Our estimated damages greatly exceed possible benefits from reduced CO2 emissions due to increased fuel economy. PMID:26720281
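The damage calculation described above reduces to summing location-level excess emissions times location-specific marginal damages from the pollution transport and valuation model. A toy sketch with made-up numbers shows how a dense area can dominate total damages with fewer excess tons:

```python
# Back-of-envelope sketch of the accounting in the study: excess NOx tons
# in each location times a location-specific marginal damage ($/ton) from
# a pollution transport and valuation model, summed. Values are made up.

locations = [
    (120.0, 30000.0),  # dense urban area: fewer tons, high damage per ton
    (300.0, 8000.0),   # rural area: more tons, low damage per ton
]
damages = sum(tons * usd_per_ton for tons, usd_per_ton in locations)
print(f"${damages / 1e6:.1f} million")  # → $6.0 million
```

Here the urban entry contributes more damage despite fewer excess tons, the same heterogeneity that puts Minneapolis first despite its relatively small noncompliant fleet.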

  11. Factors Associated With High Sodium Intake Based on Estimated 24-Hour Urinary Sodium Excretion

    Hong, Jae Won; Noh, Jung Hyun; Kim, Dong-Jun

    2016-01-01

    Abstract: Although reducing dietary salt consumption is the most cost-effective strategy for preventing the progression of cardiovascular and renal disease, policy-based approaches for monitoring sodium intake accurately, and an understanding of the factors associated with excessive sodium intake, are lacking for the improvement of public health. We investigated factors associated with high sodium intake based on the estimated 24-hour urinary sodium excretion, using data from the 2009 to 2011 Korea National H...

  12. Excess Returns and Systemic Risk for Chile and Mexico

    McNelis, Paul D.; Guay C. Lim

    2000-01-01

    This paper is concerned with excess returns in the equity markets and the evolution of systemic risk in Chile and Mexico during the years 1989-1998, a period of financial openness, policy reform, and crisis. A time-varying generalised autoregressive conditional heteroscedastic-in-mean framework is used to estimate progressively more complex models of risk. These include the univariate own-volatility model, the bivariate market pricing model, and the trivariate intertemporal asset pricing model. T...

  13. Excessive internet use among European children

    Smahel, David; Helsper, Ellen; Green, Lelia; Kalmus, Veronika; Blinka, Lukas; Ólafsson, Kjartan

    2012-01-01

    This report presents new findings and further analysis of the EU Kids Online 25 country survey regarding excessive use of the internet by children. It shows that while a number of children (29%) have experienced one or more of the five components associated with excessive internet use, very few (1%) can be said to show pathological levels of use.

  14. Excessive libido in a woman with rabies.

    Dutta, J. K.

    1996-01-01

    Rabies is endemic in India in both wildlife and humans. Human rabies kills 25,000 to 30,000 persons every year. Several types of sexual manifestations including excessive libido may develop in cases of human rabies. A laboratory proven case of rabies in an Indian woman who manifested excessive libido is presented below. She later developed hydrophobia and died.

  15. The excessively crying infant : etiology and treatment

    Akhnikh, S.; Engelberts, A.C.; Sleuwen, B.E. van; Hoir, M.P. L’; Benninga, M.A.

    2014-01-01

    Excessive crying, often described as infantile colic, is the cause of 10% to 20% of all early pediatrician visits for infants aged 2 weeks to 3 months. Although usually benign and self-limiting, excessive crying is associated with parental exhaustion and stress. However, an underlying organic cause is found in only a small minority of cases.

  16. Effects of excess ground ice on projections of permafrost in a warming climate

    In permafrost soils, ‘excess ice’, also referred to as ground ice, exists in amounts exceeding soil porosity in forms such as ice lenses and wedges. Here, we incorporate a simple representation of excess ice in the Community Land Model (CLM4.5) to investigate how excess ice affects projected permafrost thaw and associated hydrologic responses. We initialize spatially explicit excess ice obtained from the Circum-Arctic Map of Permafrost and Ground-Ice Conditions. The excess ice in the model acts to slightly reduce projected soil warming by about 0.35 °C by 2100 in a high greenhouse gas emissions scenario. The presence of excess ice slows permafrost thaw at a given location with about a 10 year delay in permafrost thaw at 3 m depth at most high excess ice locations. The soil moisture response to excess ice melt is transient and depends largely on the timing of thaw with wetter/saturated soil moisture conditions persisting slightly longer due to delayed post-thaw drainage. Based on the model projections of excess ice melt, we can estimate spatially explicit gridcell mean surface subsidence with values ranging up to 0.5 m by 2100 depending on the initial excess ice content and the extent of melt. (letter)

  17. Thermophysical and excess properties of hydroxamic acids in DMSO

    Graphical abstract: Excess molar volumes (VE) vs mole fraction (x2) of (A) N-o-tolyl-2-nitrobenzo- and (B) N-o-tolyl-4-nitrobenzo-hydroxamic acids in DMSO at 298.15, 303.15, 308.15, 313.15, and 318.15 K. Highlights: ► ρ and n of the hydroxamic acids in DMSO are reported. ► Apparent molar volumes indicate strong solute–solvent interactions. ► Limiting apparent molar expansibility and the coefficient of thermal expansion are derived. ► The behaviour of these parameters suggests that the hydroxamic acids act as structure makers. ► The excess properties are interpreted in terms of molecular interactions. -- Abstract: In this work, densities (ρ) and refractive indices (n) of N-o-tolyl-2-nitrobenzo- and N-o-tolyl-4-nitrobenzo-hydroxamic acids in dimethyl sulfoxide (DMSO) have been determined as a function of concentration at T = (298.15, 303.15, 308.15, 313.15, and 318.15) K. These measurements were carried out to evaluate some important parameters, viz. molar volume (V), apparent molar volume (Vϕ), limiting apparent molar volume (Vϕ0), slope (SV∗), molar refraction (RM) and polarizability (α). The related parameters determined are the limiting apparent molar expansivity (ϕE0), the thermal expansion coefficient (α2) and the Hepler constant (∂2Vϕ0/∂T2). Excess properties such as excess molar volume (VE), deviation from the additivity rule of refractive index (nE), and excess molar refraction (RME) have also been evaluated. The excess properties were fitted to the Redlich–Kister equations to estimate their coefficients, and standard deviations were determined. The variation of these excess parameters with composition is discussed from the viewpoint of intermolecular interactions in these solutions. The excess properties are found to be either positive or negative depending on the molecular interactions and the nature of the solutions. Further, these parameters have been interpreted in terms of solute–solvent interactions.
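The Redlich–Kister fitting step mentioned above can be sketched as follows; the expansion order and the synthetic data are illustrative assumptions, not the paper's measurements. The expansion V_E = x1*x2*sum_k A_k*(x1 - x2)**k is linear in the coefficients A_k, so a least-squares fit suffices.

```python
import numpy as np

# Sketch of fitting an excess property to the Redlich-Kister expansion,
#   V_E = x1*x2 * sum_k A_k * (x1 - x2)**k,
# by linear least squares, as is standard for V^E, n^E and R_M^E.
# The data below are synthetic; the paper's coefficients are not reproduced.

def fit_redlich_kister(x1, ve, order):
    x1 = np.asarray(x1, float)
    x2 = 1.0 - x1
    X = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order + 1)])
    A, *_ = np.linalg.lstsq(X, np.asarray(ve, float), rcond=None)
    resid = X @ A - np.asarray(ve, float)
    dof = max(len(ve) - (order + 1), 1)
    return A, float(np.sqrt(np.sum(resid ** 2) / dof))  # coeffs, std. dev.

x1 = np.linspace(0.1, 0.9, 9)
ve = x1 * (1 - x1) * (-1.0 + 0.4 * (2 * x1 - 1))   # true A0 = -1.0, A1 = 0.4
A, sd = fit_redlich_kister(x1, ve, order=1)
print(round(A[0], 4), round(A[1], 4))  # → -1.0 0.4
```

The reported standard deviation of the fit is the root-mean-square residual with the coefficient count subtracted from the degrees of freedom.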

  18. Accurate calculation of diffraction-limited encircled and ensquared energy.

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that fall outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
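One classical closed form behind such tables is the encircled energy of the ideal Airy pattern, E(v) = 1 - J0(v)^2 - J1(v)^2, with v the normalized radius; the paper treats more general (e.g. square-detector) cases, so this is only the textbook special case, sketched with a truncated power series for the Bessel functions.

```python
import math

# Encircled energy of the ideal Airy pattern for a circular aperture:
#   E(v) = 1 - J0(v)^2 - J1(v)^2,  v the normalized radius.
# A truncated power series suffices for the Bessel functions at moderate v.

def bessel_j(n, x, terms=40):
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2) ** (2 * k + n) for k in range(terms))

def encircled_energy(v):
    return 1.0 - bessel_j(0, v) ** 2 - bessel_j(1, v) ** 2

# The first dark ring (v = 3.8317, first zero of J1) encloses about 83.8%:
print(round(encircled_energy(3.8317), 3))  # → 0.838
```

The series converges rapidly here because the factorial growth in the denominator dominates; for large v, asymptotic expansions like those in the paper are preferable.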

  19. Genetics Home Reference: aromatase excess syndrome

    ... males, the increased aromatase and subsequent conversion of androgens to estrogen are responsible for the gynecomastia and limited bone growth characteristic of aromatase excess syndrome . Increased estrogen in females can cause symptoms ...

  20. Controlling police (excessive force: The American case

    Zakir Gül

    2013-09-01

    This article addresses the issue of police abuse of power, particularly police use of excessive force. Since the misuse of force by police is considered a problem, some entity must find a way to control and prevent the illegal use of coercive power. Unlike most previous studies on the use of excessive force, this study uses a path analysis. However, not all the findings are consistent with the prior studies and hypotheses. In general, the findings indicate that training may be a useful tool for decreasing the use of excessive force, thereby reducing civilians’ injuries and citizens’ complaints. The results show that ethics training in the academy is significantly related to the use of excessive force. Further, it was found that community-oriented policing training in the academy was associated with citizens’ complaints. National secondary data, collected from law enforcement agencies in the United States, are used to explore the research questions.

  1. Romanian welfare state between excess and failure

    Cristina Ciuraru-Andrica

    2012-12-01

    Timely or not, our issue can revive some prolific and sometimes diametrically opposed discussions. We address social assistance, where it is still uncertain whether, once excess has been unleashed, failure will follow inevitably, or whether there is a “Salvation Ark”. However, the difference between the excess and the failure of the welfare state is almost intangible, the reason for its potential failure being the abuses committed before the onset of the depression.

  2. Fast and Accurate Construction of Confidence Intervals for Heritability.

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
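The core idea, estimating the sampling distribution of a bounded estimator by simulation rather than relying on asymptotic normality, can be sketched generically. The toy below is not the ALBI algorithm or a real REML estimator; it is only the parametric-bootstrap principle applied to an estimate truncated to [0, 1].

```python
import random

# Parametric-bootstrap sketch of the idea behind accurate confidence
# intervals for a bounded parameter (heritability lies in [0, 1]). This is
# NOT the ALBI algorithm itself, only the bootstrap principle it refines.

def bootstrap_ci(h2_hat, simulate_estimate, n_boot=2000, alpha=0.05, seed=1):
    """simulate_estimate(h2, rng) -> one re-estimate under true value h2."""
    rng = random.Random(seed)
    sims = sorted(simulate_estimate(h2_hat, rng) for _ in range(n_boot))
    return sims[int(alpha / 2 * n_boot)], sims[int((1 - alpha / 2) * n_boot) - 1]

# Toy REML stand-in: a noisy estimate truncated to the parameter space, so
# near the boundary the sampling distribution is non-normal (mass at 0).
def toy_estimator(h2, rng, se=0.15):
    return min(1.0, max(0.0, rng.gauss(h2, se)))

lo, hi = bootstrap_ci(0.05, toy_estimator)
print(0.0 <= lo <= hi <= 1.0)  # the interval respects the [0, 1] bounds
```

A normal-approximation interval around a small estimate would extend below 0 here; the simulated distribution does not, which is the failure mode of asymptotic intervals the abstract describes.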

  3. Toward Accurate and Quantitative Comparative Metagenomics.

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  4. Accurate Modeling of Advanced Reflectarrays

    Zhou, Min

    of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  5. Accurate ab initio spin densities

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  6. Accurate thickness measurement of graphene

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  7. Magma chambers: Formation, local stresses, excess pressures, and compartments

    Gudmundsson, Agust

    2012-09-01

    Magma-chamber rupture occurs when the excess pressure reaches the in-situ tensile strength of the host rock, 0.5-9 MPa. These in-situ strength estimates are based on hydraulic fracture measurements in drill-holes worldwide, down to crustal depths of about 9 km. The measurements do not support some recent magma-chamber stress models that predict (a) extra gravity-related wall-parallel stresses at the boundaries of magma chambers and (b) magma-chamber excess pressures prior to rupture of as much as hundreds of mega-pascals, particularly at great depths. General stress models of magma chambers are of two main types: analytical and numerical. Earlier analytical models were based on a nucleus-of-strain source (a 'point pressure source') for the magma chamber, and have been very useful for rough estimates of magma-chamber depths from surface deformation during unrest periods. More recent models assume the magma chamber to be an axisymmetric ellipsoid or, in two dimensions, an ellipse of various shapes. Nearly all these models use the excess pressure in the chamber as the only loading (since lithostatic stress effects are then automatically taken into account), assume the chamber to be totally molten, and predict similar local stress fields. The predicted stress fields are generally in agreement with the world-wide stress measurements in drill-holes and, in particular, with the in-situ tensile-strength estimates. Recent numerical models consider magma chambers of various (ideal) shapes and sizes in relation to their depths below the Earth's surface. They also take into account crustal heterogeneities and anisotropies, in particular the effects of a nearby free surface and of horizontal and inclined (dipping) mechanical layering. The results show that the free surface may have strong effects on the local stresses if the chamber is comparatively close to the surface. The mechanical layering, however, may have even stronger effects. For realistic layering, and other heterogeneities, the numerical models predict complex local stresses around the chamber.

  8. Towards an accurate bioimpedance identification

    Sánchez Terrones, Benjamín; Louarroudi, E.; Bragós Bardia, Ramon; Pintelon, Rik

    2013-01-01

    This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF) considering both the output-error (OE) and the errors-in-variables (EIV) identification framework and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both t...

  9. Towards an accurate bioimpedance identification

    Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.

    2013-04-01

    This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF) considering both the output-error (OE) and the errors-in-variables (EIV) identification framework and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σn,Z, and due to the stochastic nonlinear distortions, σZ,NL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table is provided with the relative standard errors on the estimated parameters to reveal which system identification framework should be used.
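The final fitting step, a complex nonlinear least-squares fit of the spectrum to a Cole impedance model, can be sketched as follows. For brevity this sketch replaces a full CNLS solver with a grid search over the nonlinear parameters (tau, alpha), solving the linear parameters exactly at each grid point; the synthetic data and grids are assumptions for illustration.

```python
import numpy as np

# Sketch of fitting impedance spectra to the Cole model
#   Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha).
# A grid search over (tau, alpha) with the linear parameters (Rinf, dR)
# solved exactly at each grid point stands in for a full CNLS solver.

def cole(w, Rinf, dR, tau, alpha):
    return Rinf + dR / (1.0 + (1j * w * tau) ** alpha)

def fit_cole(w, Z, taus, alphas):
    best = None
    for tau in taus:
        for alpha in alphas:
            f = 1.0 / (1.0 + (1j * w * tau) ** alpha)
            # Stack real/imag parts: Z ≈ Rinf*1 + dR*f is linear in (Rinf, dR)
            A = np.vstack([np.column_stack([np.ones_like(w), f.real]),
                           np.column_stack([np.zeros_like(w), f.imag])])
            b = np.concatenate([Z.real, Z.imag])
            p, *_ = np.linalg.lstsq(A, b, rcond=None)
            err = np.linalg.norm(A @ p - b)
            if best is None or err < best[0]:
                best = (err, p[0], p[1], tau, alpha)
    return best[1:]  # Rinf, dR, tau, alpha

w = np.logspace(1, 5, 60)                       # angular frequencies, rad/s
Z = cole(w, 40.0, 60.0, 1e-3, 0.8)              # synthetic noise-free data
Rinf, dR, tau, alpha = fit_cole(w, Z, [5e-4, 1e-3, 2e-3], [0.7, 0.8, 0.9])
print(round(Rinf, 1), round(dR, 1), tau, alpha)  # → 40.0 60.0 0.001 0.8
```

A weighted CNLS fit, as in the paper, would divide each stacked residual by its frequency-dependent standard deviation (e.g. the σn,Z estimates) before the least-squares solve.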


  11. ESTIMATING IRRIGATION COSTS

    Having accurate estimates of the cost of irrigation is important when making irrigation decisions. Estimates of fixed costs are critical for investment decisions. Operating cost estimates can assist in decisions regarding additional irrigations. This fact sheet examines the costs associated with ...

  12. New vector bosons and the diphoton excess

    Jorge de Blas

    2016-08-01

    We consider the possibility that the recently observed diphoton excess at ∼750 GeV can be explained by the decay of a scalar particle (φ) to photons. If the scalar is the remnant of a symmetry-breaking sector of some new gauge symmetry, its coupling to photons can be generated by loops of the charged massive vectors of the broken symmetry. If these new W′ vector bosons carry color, they can also generate an effective coupling to gluons. In this case the diphoton excess could be entirely explained in a simplified model containing just φ and W′. On the other hand, if W′ does not carry color, we show that, provided additional colored particles exist to generate the required φ-to-gluon coupling, the diphoton excess could be explained by the same W′ commonly invoked to explain the diboson excess at ∼2 TeV. We also explore possible connections between the diphoton and diboson excesses and the anomalous tt¯ forward–backward asymmetry.

  13. Antidepressant induced excessive yawning and indifference

    Bruno Palazzo Nazar

    2015-03-01

    Introduction: Antidepressant-induced excessive yawning has been described as a possible side effect of pharmacotherapy. A syndrome of indifference has also been described as another possible side effect. The frequency of these phenomena and their physiopathology are unknown. Both are considered benign and reversible after antidepressant discontinuation, but severe cases with complications, such as temporomandibular lesions, have been described. Methods: We report two unprecedented cases in which excessive yawning and indifference occurred simultaneously as side effects of antidepressant therapy, and discuss possible physiopathological mechanisms for this co-occurrence. Case 1: A male patient presented excessive yawning (approximately 80/day) and apathy under venlafaxine XR treatment. Symptoms decreased after a switch to escitalopram, with a reduction to 50 yawns/day. Case 2: A female patient presented excessive yawning (approximately 25/day) and an inability to react to environmental stressors under desvenlafaxine. Conclusion: Induction of indifference and excessive yawning may be modulated by serotonergic and noradrenergic mechanisms. One proposal to unify these side effects would be enhancement of serotonin in the midbrain, especially in the paraventricular and raphe nuclei.

  14. Phenomenology and psychopathology of excessive indoor tanning.

    Petit, Aymeric; Karila, Laurent; Chalmin, Florence; Lejoyeux, Michel

    2014-06-01

    Excessive indoor tanning, defined by the presence of an impulse towards and repetition of tanning that leads to personal distress, has only recently been recognized as a psychiatric disorder. This finding is based on the observations of many dermatologists who report addictive relationships with tanning among patients who continue to tan despite having been given diagnoses of malignant melanoma. This article synthesizes the existing literature on excessive indoor tanning and addiction to investigate possible associations. This review focuses on the prevalence, clinical features, etiology, and treatment of this disorder. A literature review was conducted, using PubMed, Google Scholar, EMBASE and PsycINFO, to identify articles published in English from 1974 to 2013. Excessive indoor tanning may be related to addiction, obsessive-compulsive disorder, impulse control disorder, seasonal affective disorder, anorexia, body dysmorphic disorder, or depression. Excessive indoor tanning can be included in the spectrum of addictive behavior because it has clinical characteristics in common with those of classic addictive disorders. It is frequently associated with anxiety, eating disorders, and tobacco dependence. Further controlled studies are required, especially in clinical psychopathology and neurobiology, to improve our understanding of excessive indoor tanning. PMID:24601904

  15. Accurate calibration of stereo cameras for machine vision

    Li, Liangfu; Feng, Zuren; Feng, Yuanjing

    2004-01-01

    Camera calibration is an important task for machine vision, whose goal is to obtain the internal and external parameters of each camera. With these parameters, the 3D position of a scene point, identified and matched in two stereo images, can be determined by the triangulation theory. This paper presents a new accurate estimation of CCD camera parameters for machine vision. We present a fast technique to estimate the camera center with special arrangement of calibration target and t...

  16. Accurate adiabatic correction in the hydrogen molecule

    Pachucki, Krzysztof, E-mail: krp@fuw.edu.pl [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland); Komasa, Jacek, E-mail: komasa@man.poznan.pl [Faculty of Chemistry, Adam Mickiewicz University, Umultowska 89b, 61-614 Poznań (Poland)

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10{sup −12} at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H{sub 2}, HD, HT, D{sub 2}, DT, and T{sub 2} has been determined. For the ground state of H{sub 2} the estimated precision is 3 × 10{sup −7} cm{sup −1}, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  17. The Origin of the 24-micron Excess in Red Galaxies

    Brand, Kate; Armus, Lee; Assef, Roberto J; Brown, Michael J I; Cool, Richard R; Desai, Vandana; Dey, Arjun; Floc'h, Emeric Le; Jannuzi, Buell T; Kochanek, Christopher S; Melbourne, Jason; Papovich, Casey J; Soifer, B T

    2008-01-01

    Observations with the Spitzer Space Telescope have revealed a population of red-sequence galaxies with a significant excess in their 24-micron emission compared to what is expected from an old stellar population. We identify 900 red galaxies with 0.15 < z < 0.3 that have excess 24-micron emission (f24 > 0.3 mJy). We determine the prevalence of AGN and star-formation activity in all the AGES galaxies using optical line diagnostics and mid-IR color-color criteria. Using the IRAC color-color diagram from the Spitzer Shallow Survey, we find that 64% of the 24-micron excess red galaxies are likely to have strong PAH emission features in the 8-micron IRAC band. This fraction is significantly larger than the 5% of red galaxies with f24 < 0.3 mJy that are estimated to have strong PAH emission, suggesting that the infrared emission is largely due to star-formation processes. Only 15% of the 24-micron excess red galaxies have optical line diagnostics characteristic of star-formation (64% are classified as AGN and 21% are unclassifiable). The difference between the optica...

  18. Same-Sign Dilepton Excesses and Vector-like Quarks

    Chen, Chuan-Ren; Low, Ian

    2015-01-01

    Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b-prime quark is proposed. We also comment on the possibility to fit excesses in different analyses in a common framework.

  19. Same-sign dilepton excesses and vector-like quarks

    Chen, Chuan-Ren; Cheng, Hsin-Chia; Low, Ian

    2016-03-01

    Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b' quark is proposed. We also comment on the possibility to fit excesses in different analyses in a common framework.

  20. A More Accurate Fourier Transform

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
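The comparison the abstract describes can be illustrated with a minimal sketch (signal, parameters, and variable names are invented for this example, not taken from the paper): an explicit-integral (EI) amplitude estimate via the rectangle method versus the amplitude read from the nearest FFT bin, for a cosine whose frequency falls between bins.

```python
import numpy as np

# Hypothetical test signal: a cosine whose frequency falls between FFT bins,
# where FFT scalloping loss is largest.
fs = 100.0                      # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)    # 10 s of data
f0, a0 = 3.37, 2.0              # true frequency (Hz) and amplitude
x = a0 * np.cos(2 * np.pi * f0 * t)

# Explicit-integral (EI) estimate: rectangle-rule evaluation of
# |(2/T) * integral of x(t) * exp(-2*pi*i*f0*t) dt| at the known frequency f0.
T = len(t) / fs
ei_amp = abs(2.0 / T * np.sum(x * np.exp(-2j * np.pi * f0 * t)) / fs)

# FFT estimate: amplitude read from the bin nearest to f0.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
fft_amp = 2.0 * abs(X[np.argmin(abs(freqs - f0))]) / len(x)

print(ei_amp, fft_amp)  # the EI value lands much closer to the true amplitude 2.0
```

Because the off-bin frequency smears power across neighboring FFT bins, the nearest-bin amplitude underestimates the peak, while the rectangle-rule integral evaluated at the exact frequency does not, consistent with the accuracy gap the paper reports.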

  1. Singlet Scalar Resonances and the Diphoton Excess

    McDermott, Samuel D; Ramani, Harikrishnan

    2015-01-01

    ATLAS and CMS recently released the first results of searches for diphoton resonances in 13 TeV data, revealing a modest excess at an invariant mass of approximately 750 GeV. We find that it is generically possible that a singlet scalar resonance is the origin of the excess while avoiding all other constraints. We highlight some of the implications of this model and how compatible it is with certain features of the experimental results. In particular, we find that the very large total width of the excess is difficult to explain with loop-level decays alone, pointing to other interesting bounds and signals if this feature of the data persists. Finally we comment on the robust Z-gamma signature that will always accompany the model we investigate.

  2. Minimal Dilaton Model and the Diphoton Excess

    Agarwal, Bakul; Mohan, Kirtimaan A

    2016-01-01

    In light of the recent 750 GeV diphoton excesses reported by the ATLAS and CMS collaborations, we investigate the possibility of explaining this excess using the Minimal Dilaton Model. We find that this model is able to explain the observed excess with the presence of additional top partner(s), with the same charge as the top quark but with mass in the TeV region. First, we constrain the model parameters using, in addition to the 750 GeV diphoton signal strength, precision electroweak tests, single top production measurements, and the Higgs signal strength data collected in the earlier runs of the LHC. We also discuss interesting phenomenology that could arise in this model, relevant for future runs of the LHC.

  3. Excessive crying in infants with regulatory disorders.

    Maldonado-Duran, M; Sauceda-Garcia, J M

    1996-01-01

    The authors point out a correlation between regulatory disorders in infants and the problem of excessive crying. The literature describes other behavioral problems involving excessive crying in very young children, but with little emphasis on this association. The recognition and diagnosis of regulatory disorders in infants who cry excessively can help practitioners design appropriate treatment interventions. Understanding these conditions can also help parents tailor their caretaking style, so that they provide appropriate soothing and stimulation to their child. In so doing, they will be better able to develop and preserve a satisfactory parent-child relationship, as well as to maintain their own sense of competence and self-esteem as parents. PMID:8742673

  4. Mortality attributable to excess body mass Index in Iran: Implementation of the comparative risk assessment methodology

    Shirin Djalalinia; Sahar Saeedi Moghaddam; Niloofar Peykari; Amir Kasaeian; Ali Sheidaei; Anita Mansouri; Younes Mohammadi; Mahboubeh Parsaeian; Parinaz Mehdipour; Bagher Larijani; Farshad Farzadfar

    2015-01-01

    Background: The prevalence of obesity continues to rise worldwide with alarming rates in most of the world's countries. Our aim was to compare the mortality of fatal disease attributable to excess body mass index (BMI) in Iran in 2005 and 2011. Methods: Using the standard implementation of the comparative risk assessment methodology, we estimated mortality attributable to excess BMI in Iranian adults of 25-65 years old, at the national and sub-national levels for 9 attributable outcomes including; is...
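The core arithmetic of a comparative risk assessment can be sketched in a hedged form: the population attributable fraction (PAF) combines exposure prevalence with relative risk, and scales observed deaths into attributable deaths. The prevalence, relative risk, and death count below are invented placeholders, not figures from this study.

```python
# Population attributable fraction for a single dichotomous exposure:
# PAF = p(RR - 1) / (p(RR - 1) + 1), a standard comparative-risk-assessment step.

def paf(prevalence: float, relative_risk: float) -> float:
    """Population attributable fraction for one exposure level."""
    x = prevalence * (relative_risk - 1.0)
    return x / (x + 1.0)

deaths = 10_000          # hypothetical deaths from one outcome
p_excess_bmi = 0.4       # hypothetical prevalence of excess BMI
rr = 1.5                 # hypothetical relative risk for that outcome

# Attributable mortality = PAF * observed deaths.
attributable = paf(p_excess_bmi, rr) * deaths
print(round(attributable))
```

In a full assessment this calculation is repeated per outcome, age group, sex, and region, and the attributable deaths are summed, which is what a national and sub-national analysis like the one above entails.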

  5. Effects of Excess Cash, Board Attributes and Insider Ownership on Firm Value: Evidence from Pakistan

    Nadeem Ahmed Sheikh; Muhammad Imran Khan

    2016-01-01

    The purpose of this paper is to investigate whether excess cash, board attributes (i.e. board size, board independence and CEO duality) and insider ownership affect the value of the firm. Data were taken from the annual reports of non-financial firms listed on the Karachi Stock Exchange (KSE) Pakistan during 2008-2012. The pooled ordinary least squares method was used to estimate the effects of excess cash and internal governance indicators on the value of the firm. Our results indicate that...
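Pooled OLS of the kind described above can be sketched on synthetic data (a minimal illustration, not the authors' specification; all variables and coefficients are invented): firm-year observations are stacked into one sample and firm value is regressed on the explanatory variables.

```python
import numpy as np

# Synthetic firm-year panel, stacked for pooled OLS. Variables and true
# coefficients are invented for illustration only.
rng = np.random.default_rng(0)
n = 200
excess_cash = rng.normal(size=n)
board_size = rng.integers(5, 15, size=n).astype(float)
firm_value = (1.0 + 0.5 * excess_cash - 0.1 * board_size
              + rng.normal(scale=0.1, size=n))

# Pooled OLS: regress firm value on an intercept and the two regressors.
X = np.column_stack([np.ones(n), excess_cash, board_size])
beta, *_ = np.linalg.lstsq(X, firm_value, rcond=None)
print(beta)  # recovered [intercept, excess-cash effect, board-size effect]
```

Pooling treats all firm-years as one cross-section; a design choice worth noting is that it ignores firm-specific effects, which fixed-effects panel estimators would absorb.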

  6. Median ages at stages of sexual maturity and excess weight in school children

    Luciano, Alexandre P; Benedet, Jucemar; de Abreu, Luiz Carlos; Valenti, Vitor E; de Souza Almeida, Fernando; de Vasconcelos, Francisco AG; Adami, Fernando

    2013-01-01

    Background We aimed to estimate the median ages at specific stages of sexual maturity stratified by excess weight in boys and girls. Materials and method This was a cross-sectional study conducted in 2007 in Florianopolis, Brazil, with 2,339 schoolchildren between 8 and 14 years of age (1,107 boys) selected at random in two steps (by region and type of school). The schoolchildren were divided into: i) those with excess weight and ii) those without excess weight, according to the WHO 2007 cut-off po...

  7. New vector bosons and the diphoton excess

    Jorge de Blas; José Santiago; Roberto Vega-Morales(Laboratoire de Physique Théorique d’Orsay, UMR8627-CNRS, Université Paris-Sud 11, F-91405 Orsay Cedex, France)

    2016-01-01

    We consider the possibility that the recently observed diphoton excess at ∼750 GeV can be explained by the decay of a scalar particle (φ) to photons. If the scalar is the remnant of a symmetry-breaking sector of some new gauge symmetry, its coupling to photons can be generated by loops of the charged massive vectors of the broken symmetry. If these new W′ vector bosons carry color, they can also generate an effective coupling to gluons. In this case the diphoton excess could be entirely expla...

  8. Diphoton Excess as a Hidden Monopole

    Yamada, Masaki; Yonekura, Kazuya

    2016-01-01

    We provide a theory with a monopole of a strongly interacting hidden U(1) gauge symmetry that can explain the 750 GeV diphoton excess reported by ATLAS and CMS. The excess results from the resonance of the monopole, which is produced via gluon fusion and decays into two photons. At low energy the model contains only mesons and a monopole, because baryons cannot be gauge invariant under the strongly interacting Abelian symmetry. This is an advantage of our model because it leaves no unwanted relics around the BBN epoch.

  9. Accurate paleointensities - the multi-method approach

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combing all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  10. Infrared excesses in early-type stars - Gamma Cassiopeiae

    Scargle, J. D.; Erickson, E. F.; Witteborn, F. C.; Strecker, D. W.

    1978-01-01

    Spectrophotometry of the classical Be star Gamma Cas (1-4 microns, with about 2% spectral resolution) is presented. These data, together with existing broad-band observations, are accurately described by simple isothermal LTE models for the IR excess which differ from most previously published work in three ways: (1) hydrogenic bound-free emission is included; (2) the attenuation of the star by the shell is included; and (3) no assumption is made that the shell contribution is negligible in some bandpass. It is demonstrated that the bulk of the IR excess consists of hydrogenic bound-free and free-free emission from a shell of hot ionized hydrogen gas, although a small thermal component cannot be ruled out. The bound-free emission is strong, and the Balmer, Paschen, and Brackett discontinuities are correctly represented by the shell model with physical parameters as follows: a shell temperature of approximately 18,000 K, an optical depth (at 1 micron) of about 0.5, an electron density of approximately 1 trillion per cu cm, and a size of about 2 trillion cm. Phantom shells (i.e., ones which do not alter the observed spectrum of the underlying star) are discussed.

  11. Burden of Growth Hormone Deficiency and Excess in Children.

    Fideleff, Hugo L; Boquete, Hugo R; Suárez, Martha G; Azaretzky, Miriam

    2016-01-01

    Longitudinal growth results from multifactorial and complex processes that take place in the context of different genetic traits and environmental influences. Thus, in view of the difficulties in comprehension of the physiological mechanisms involved in the achievement of normal height, our ability to make a definitive diagnosis of GH impairment still remains limited. There is a myriad of controversial aspects in relation to GH deficiency, mainly related to diagnostic controversies and advances in molecular biology. This might explain the diversity in therapeutic responses and may also serve as a rationale for new "nonclassical" treatment indications for GH. It is necessary to acquire more effective tools to reach an adequate evaluation, particularly while considering the long-term implications of a correct diagnosis, the cost, and safety of treatments. On the other hand, overgrowth constitutes a heterogeneous group of different pathophysiological situations including excessive somatic and visceral growth. There are overlaps in clinical and molecular features among overgrowth syndromes, which constitute the real burden for an accurate diagnosis. In conclusion, both GH deficiency and overgrowth are a great dilemma, still not completely solved. In this chapter, we review the most burdensome aspects related to short stature, GH deficiency, and excess in children, avoiding any details about well-known issues that have been extensively discussed in the literature. PMID:26940390

  12. Excessive prices as abuse of dominance?

    la Cour, Lisbeth; Møllgaard, Peter

    2007-01-01

    firm abused its position by charging excessive prices. We also test whether tightening of the Danish competition act has altered the pricing behaviour on the market. We discuss our results in the light of a Danish competition case against the dominant cement producer that was abandoned by the authority...

  13. ORIGIN OF EXCESS (176)Hf IN METEORITES

    Thrane, Kristine; Connelly, James Norman; Bizzarro, Martin;

    2010-01-01

    After considerable controversy regarding the (176)Lu decay constant (lambda(176)Lu), there is now widespread agreement that (1.867 +/- 0.008) x 10(-11) yr(-1) as confirmed by various terrestrial objects and a 4557 Myr meteorite is correct. This leaves the (176)Hf excesses that are correlated with...

  14. Low excess air operations of oil boilers

    Butcher, T.A.; Celebi, Y.; Litzke, Wai Lin [Brookhaven National Labs., Upton, NY (United States)

    1997-09-01

    To quantify the benefits that operation at very low excess air may have on heat exchanger fouling, BNL has recently started a test project. The test allows simultaneous measurement of fouling rate, flue gas filterable soot, flue gas sulfuric acid content, and flue gas sulfur dioxide.

  15. 34 CFR 668.166 - Excess cash.

    2010-07-01

    ... funds that an institution receives from the Secretary under the just-in-time payment method. (b) Excess...; and (2) Providing funds to the institution under the reimbursement payment method or cash monitoring payment method described in § 668.163(d) and (e), respectively. (Authority: 20 U.S.C. 1094)...

  16. 43 CFR 426.12 - Excess land.

    2010-10-01

    ... irrigation water by entering into a recordable contract with the United States if the landowner qualifies... irrigation water because the landowner becomes subject to the discretionary provisions as provided in § 426.3... this section; or (iii) The excess land becomes eligible to receive irrigation water as a result...

  17. Software Cost Estimation Review

    Ongere, Alphonce

    2013-01-01

    Software cost estimation is the process of predicting the effort, the time and the cost required to complete a software project successfully. It involves size measurement of the software project to be produced, estimating and allocating the effort, drawing the project schedules, and finally, estimating the overall cost of the project. Accurate estimation of software project cost is an important factor for business and the welfare of a software organization in general. If cost and effort estimat...

  18. 38 CFR 4.46 - Accurate measurement.

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. Stroke Volume estimation using aortic pressure measurements and aortic cross sectional area: Proof of concept.

    Kamoi, S; Pretty, C G; Chiew, Y S; Pironet, A; Davidson, S; Desaive, T; Shaw, G M; Chase, J G

    2015-08-01

    Accurate stroke volume (SV) monitoring is essential for patients with cardiovascular dysfunction. However, direct SV measurements are not clinically feasible due to the highly invasive nature of the measurement devices. Current devices for indirect monitoring of SV have been shown to be inaccurate during sudden hemodynamic changes. This paper presents a novel SV estimation method using readily available aortic pressure measurements and aortic cross-sectional area, using data from a porcine experiment in which medical interventions such as fluid replacement, dobutamine infusions, and recruitment maneuvers induced SV changes in a pig with circulatory shock. Measurements of left ventricular volume, proximal aortic pressure, and descending aortic pressure waveforms were made simultaneously during the experiment. From the measured data, proximal aortic pressure was separated into reservoir and excess pressures. Beat-to-beat aortic characteristic impedance values were calculated using both aortic pressure measurements and an estimate of the aortic cross-sectional area. SV was estimated using the calculated aortic characteristic impedance and the excess pressure component of the proximal aorta. The median difference between directly measured SV and estimated SV was -1.4 ml with a 95% limit of agreement of +/- 6.6 ml. This method demonstrates that SV can be accurately captured beat-to-beat during sudden changes in hemodynamic state. This novel SV estimation could enable improved cardiac and circulatory treatment in the critical care environment by titrating treatment to its effect on SV. PMID:26736434
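The estimation chain described above can be sketched in a simplified, hedged form: assuming the water-hammer relation Z_c = rho * c / A for the characteristic impedance, stroke volume follows from integrating the excess pressure over ejection and dividing by Z_c. Every constant and the pressure waveform below are invented for illustration, not taken from the study.

```python
import numpy as np

# Toy version of the excess-pressure stroke-volume estimate, under the
# water-hammer assumption Z_c = rho * c / A. All numbers are invented.
rho = 1060.0             # blood density (kg/m^3), textbook value
c = 5.0                  # assumed aortic pulse wave speed (m/s)
A = 5e-4                 # assumed aortic cross-sectional area (m^2)
Zc = rho * c / A         # characteristic impedance (Pa*s/m^3)

dt = 1e-3                                      # 1 ms sampling interval
t = np.arange(0.0, 0.30, dt)                   # systolic ejection window (s)
p_excess = 2000.0 * np.sin(np.pi * t / 0.30)   # toy excess-pressure pulse (Pa)

# SV is approximately (1/Z_c) times the integral of excess pressure
# over ejection (rectangle rule).
sv_m3 = np.sum(p_excess) * dt / Zc
print(sv_m3 * 1e6, "ml")   # a physiologically plausible few tens of ml
```

Because both Z_c and the excess pressure can be recomputed every beat, this kind of estimate tracks beat-to-beat SV changes, which is the property the study exploits during sudden hemodynamic interventions.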

  20. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Mean Infiltration-Excess Overland Flow, 2002

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean value for infiltration-excess overland flow as estimated by the watershed model TOPMODEL, compiled for every catchment of...

  1. Fast and Provably Accurate Bilateral Filtering.

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
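For reference, the direct O(S)-per-pixel computation that the paper accelerates can be written as a plain sketch (this is the textbook baseline, not the authors' fast algorithm; parameter values are arbitrary):

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct O(S)-per-pixel bilateral filter (the baseline the paper speeds up).

    img: 2D float array in [0, 1]; sigma_s / sigma_r: spatial / range stds.
    """
    H, W = img.shape
    out = np.zeros_like(img)
    # Spatial (domain) Gaussian weights on the (2*radius+1)^2 support window.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
    pad = np.pad(img, radius, mode="edge")
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: Gaussian on intensity differences to the center.
            g_range = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            w = g_spatial * g_range
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```

Because the range kernel suppresses weights across large intensity differences, flat regions are averaged while step edges survive; the paper's contribution is reaching this result in O(1) per pixel rather than O(S).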

  2. Scalar explanation of diphoton excess at LHC

    Han, Huayong; Wang, Shaoming; Zheng, Sibo

    2016-06-01

    Inspired by the diphoton signal excess observed in the latest data of the 13 TeV LHC, we consider either a 750 GeV real scalar or pseudo-scalar responsible for this anomaly. We propose a concrete vector-like quark model, in which the vector-like fermion pairs directly couple to this scalar via Yukawa interaction. In this setting the scalar is mainly produced via gluon fusion, then decays at the one-loop level to the SM diboson channels gg, γγ, ZZ, WW. We show that for vector-like fermion pairs with exotic electric charges, such a model can account for the diphoton excess and is consistent with the data of the 8 TeV LHC simultaneously in the context of perturbative analysis.

  3. Excess plutonium disposition: The deep borehole option

    Ferguson, K.L.

    1994-08-09

    This report reviews the current status of technologies required for the disposition of plutonium in Very Deep Holes (VDH). It is in response to a recent National Academy of Sciences (NAS) report which addressed the management of excess weapons plutonium and recommended three approaches to the ultimate disposition of excess plutonium: (1) fabrication and use as a fuel in existing or modified reactors in a once-through cycle, (2) vitrification with high-level radioactive waste for repository disposition, (3) burial in deep boreholes. As indicated in the NAS report, substantial effort would be required to address the broad range of issues related to deep bore-hole emplacement. Subjects reviewed in this report include geology and hydrology, design and engineering, safety and licensing, policy decisions that can impact the viability of the concept, and applicable international programs. Key technical areas that would require attention should decisions be made to further develop the borehole emplacement option are identified.

  4. Quark Seesaw Vectorlike Fermions and Diphoton Excess

    Dev, P S Bhupal; Zhang, Yongchao

    2015-01-01

    We present a possible interpretation of the recent diphoton excess reported by the $\\sqrt s=13$ TeV LHC data in quark seesaw left-right models with vectorlike fermions proposed to solve the strong $CP$ problem without the axion. The gauge singlet real scalar field responsible for the mass of the vectorlike fermions has the right production cross section and diphoton branching ratio to be identifiable with the reported excess at around 750 GeV diphoton invariant mass. Various ways to test this hypothesis as more data accumulates at the LHC are proposed. In particular, we find that for our interpretation to work, there is an upper limit on the right-handed scale $v_R$, which depends on the Yukawa coupling of singlet Higgs field to the vectorlike fermions.

  5. Excess plutonium disposition: The deep borehole option

    This report reviews the current status of technologies required for the disposition of plutonium in Very Deep Holes (VDH). It is in response to a recent National Academy of Sciences (NAS) report which addressed the management of excess weapons plutonium and recommended three approaches to the ultimate disposition of excess plutonium: (1) fabrication and use as a fuel in existing or modified reactors in a once-through cycle, (2) vitrification with high-level radioactive waste for repository disposition, (3) burial in deep boreholes. As indicated in the NAS report, substantial effort would be required to address the broad range of issues related to deep bore-hole emplacement. Subjects reviewed in this report include geology and hydrology, design and engineering, safety and licensing, policy decisions that can impact the viability of the concept, and applicable international programs. Key technical areas that would require attention should decisions be made to further develop the borehole emplacement option are identified

  6. A scalar hint from the diboson excess?

    Cacciapaglia, Giacomo; Hashimoto, Michio

    2015-01-01

    In view of the recent diboson resonant excesses reported by both ATLAS and CMS, we suggest that a new weak singlet pseudo-scalar particle, $\\eta_{WZ}$, may decay into two weak bosons while being produced in gluon fusion at the LHC. The couplings to gauge bosons can arise as in the case of a Wess-Zumino-Witten anomaly term, thus providing an effective model in which the present observed excess in the diboson channel at the LHC can be studied in a well motivated phenomenological model. In models where the pseudo-scalar arises as a composite state, the coefficients of the anomalous couplings can be related to the fermion components of the underlying dynamics. We provide an example to test the feasibility of the idea.

  7. Excess Higgs Production in Neutralino Decays

    Howe, Kiel; Saraswat, Prashant

    2012-01-01

    The ATLAS and CMS experiments have recently claimed discovery of a Higgs boson-like particle at ~5 sigma confidence and are beginning to test the Standard Model predictions for its production and decay. In a variety of supersymmetric models, a neutralino NLSP can decay dominantly to the Higgs and the LSP. In natural SUSY models, a light third generation squark decaying through this chain can lead to large excess Higgs production while evading existing BSM searches. Such models can be observed...

  8. Excess credit and the South Korean crisis

    Panicos O. Demetriades; Fattouh, Bassam A.

    2006-01-01

    We provide a novel empirical analysis of the South Korean credit market that reveals large volumes of excess credit since the late 1970s, indicating that a sizeable proportion of total credit was being used to refinance unprofitable projects. Our findings are consistent with theoretical literature that suggests that soft budget constraints and over-borrowing were significant factors behind the Korean financial crisis of 1997-98.

  9. IDENTIFICATION OF RIVER WATER EXCESSIVE POLLUTION SOURCES

    K. J. KACHIASHVILI; D. I. MELIKDZHANIAN

    2006-01-01

    The program package for identification of river water excessive pollution sources located between two controlled cross-sections of the river is described in this paper. The software has been developed by the authors on the basis of mathematical models of pollutant transport in rivers and statistical hypothesis testing methods. The identification algorithms were developed under the assumption that the pollution sources discharge different compositions of pollutants or (at the identical c...

  10. Pharmacotherapy of Excessive Sleepiness: Focus on Armodafinil

    Michael Russo

    2009-01-01

    Excessive sleepiness (ES) is responsible for significant morbidity and mortality due to its association with cardiovascular disease, cognitive impairment, and occupational and transport accidents. ES is also detrimental to patients’ quality of life, as it affects work and academic performance, social interactions, and personal relationships. Armodafinil is the R-enantiomer of the established wakefulness-promoting agent modafinil, which is a racemic mixture of both the R- and S-enantiomers. R-...