WorldWideScience

Sample records for accurately estimate excess

  1. Accurate pose estimation for forensic identification

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate and robust to lighting changes and image degradation.

  2. Accurate estimation of indoor travel times

Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

We present the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes and travel times, together with their respective likelihoods, both for routes traveled and for sub-routes thereof. InTraTime allows the specification of temporal and other query parameters, such as time-of-day, day-of-week, or the identity of the traveling individual. As input, the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include a minimal-effort setup and self-improving operation due to unsupervised learning, as it is able to adapt implicitly to factors influencing indoor travel times such as elevators, revolving doors, or changes in building layout. We evaluate and compare the proposed InTraTime method to indoor adaptions...

  3. Excess functions and estimation of the extreme-value index

    Beirlant, Jan; Vynckier, Petra; Teugels, Josef L.

    1996-01-01

    A general class of estimators of the extreme-value index is generated using estimates of mean, median and trimmed excess functions. Special cases yield earlier proposals in the literature, such as Pickands' (1975) estimator. A particular restatement of the mean excess function yields an estimator which can be derived from the slope at the right upper tail from a generalized quantile plot. From this viewpoint algorithms can be constructed to search for the number of extremes needed to minimize...
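As a concrete illustration of the order-statistics approach above, the Pickands (1975) special case can be computed directly from three upper order statistics. The following is a minimal sketch, not the paper's general estimator class; the Pareto-tailed sample and the choice of k are illustrative assumptions:

```python
import math
import random

def pickands_estimator(sample, k):
    """Pickands (1975) estimate of the extreme-value index from the
    k-th, 2k-th and 4k-th largest observations."""
    x = sorted(sample, reverse=True)        # descending order statistics
    if 4 * k > len(x):
        raise ValueError("need at least 4k observations")
    num = x[k - 1] - x[2 * k - 1]
    den = x[2 * k - 1] - x[4 * k - 1]
    return math.log(num / den) / math.log(2.0)

# Pareto-tailed sample with true index gamma = 0.5 (X = U**-0.5)
random.seed(1)
data = [random.random() ** -0.5 for _ in range(20000)]
gamma_hat = pickands_estimator(data, k=200)   # should be near 0.5
```

In practice the choice of k (the number of extremes) drives the bias-variance trade-off, which is exactly what the quantile-plot algorithms in the abstract aim to automate.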

  4. Accurate estimator of correlations between asynchronous signals

    Toth, Bence; Kertesz, Janos

    2008-01-01

The estimation of the correlation between time series is often hampered by the asynchronicity of the signals. Cumulating data within a time window suppresses this source of noise but weakens the statistics. We present a method to estimate correlations without applying long time windows. We decompose the correlations of data cumulated over a long window using the decay of lagged correlations calculated from short-window data. This increases the accuracy of the estimated correlation significantly...
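The decomposition idea can be sketched as follows: the covariance of long-window (length q) aggregates equals a (q - |l|)-weighted sum of short-scale lagged covariances. This toy version with synthetic, regularly sampled series only demonstrates that identity, not the authors' full asynchronous-data estimator:

```python
import random

def mean(v):
    return sum(v) / len(v)

def lagged_cov(x, y, lag):
    """Sample covariance between x[t] and y[t + lag]."""
    if lag >= 0:
        a, b = x[:len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[:len(y) + lag]
    ma, mb = mean(a), mean(b)
    return sum((u - ma) * (w - mb) for u, w in zip(a, b)) / len(a)

def window_cov_from_lags(x, y, q):
    """Covariance of q-step aggregates, rebuilt from short-scale lagged
    covariances: sum over lags l of (q - |l|) * C_xy(l)."""
    return sum((q - abs(l)) * lagged_cov(x, y, l) for l in range(-q + 1, q))

random.seed(0)
n, q = 50000, 5
z = [random.gauss(0, 1) for _ in range(n + 1)]
x = [z[i] + random.gauss(0, 1) for i in range(n)]
y = [z[i + 1] + random.gauss(0, 1) for i in range(n)]   # y leads x by one step

# direct covariance of non-overlapping q-step aggregates
xs = [sum(x[i:i + q]) for i in range(0, n - q + 1, q)]
ys = [sum(y[i:i + q]) for i in range(0, n - q + 1, q)]
direct = lagged_cov(xs, ys, 0)
rebuilt = window_cov_from_lags(x, y, q)   # both should be close to 4.0
```

Here the only nonzero short-scale term is the lag of minus one, so both estimates converge to (q - 1) times that covariance.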

  5. Accurate hydrocarbon estimates attained with radioactive isotope

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  6. Star Position Estimation Improvements for Accurate Star Tracker Attitude Estimation

    Delabie, Tjorven

    2015-01-01

This paper presents several methods to improve the estimation of the star positions in a star tracker, using a Kalman filter. The accuracy with which the star positions can be estimated greatly influences the accuracy of the star tracker attitude estimate. In this paper, a Kalman filter with low computational complexity that estimates the star positions from star tracker centroiding data and gyroscope data is discussed. The performance of this Kalman filter can be increased...
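As a rough illustration of the idea, a scalar Kalman filter can fuse a gyro-propagated prediction of one star coordinate with a noisy centroid measurement. This is a much-simplified sketch, not the paper's filter; all noise levels and dynamics are made-up assumptions:

```python
import random

def kf_step(x, P, gyro_rate, dt, z, q, r):
    """One predict/update cycle for a single star coordinate:
    predict with the gyro-propagated motion, correct with a centroid
    measurement z (process noise q, measurement noise r)."""
    x_pred = x + gyro_rate * dt          # predict
    P_pred = P + q
    K = P_pred / (P_pred + r)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)    # update
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

random.seed(42)
true_x, x, P = 0.0, 0.0, 1.0
q, r = 1e-4, 0.04                        # made-up noise variances
rate, dt = 0.1, 0.1                      # constant drift rate, time step
errors = []
for _ in range(200):
    true_x += rate * dt
    z = true_x + random.gauss(0.0, 0.2)  # noisy centroid measurement
    x, P = kf_step(x, P, rate, dt, z, q, r)
    errors.append(abs(x - true_x))
```

After convergence the filtered error is well below the raw centroid noise, which is the effect the abstract exploits to sharpen the attitude estimate.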

  7. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    2009-01-01

In this paper, a second order linear differential equation is considered, and an accurate estimation method for its characteristic exponents is presented. Finally, we give some examples to verify the feasibility of our result.

  8. Accurate Parameter Estimation for Unbalanced Three-Phase System

    Yuan Chen; Hing Cheung So

    2014-01-01

Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newt...
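The αβ (Clarke) transformation mentioned in the abstract maps the three phase waveforms into an orthogonal pair; for a balanced system the result is a cosine/sine pair of the same amplitude. A minimal sketch of the amplitude-invariant form (the sample frequency and amplitude are illustrative):

```python
import math

def clarke(a, b, c):
    """Amplitude-invariant Clarke (alpha-beta) transformation of
    instantaneous three-phase values."""
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (b - c)
    return alpha, beta

# balanced three-phase sample at 50 Hz (illustrative values)
w, t = 2.0 * math.pi * 50.0, 0.002
a = math.cos(w * t)
b = math.cos(w * t - 2.0 * math.pi / 3.0)
c = math.cos(w * t + 2.0 * math.pi / 3.0)
alpha, beta = clarke(a, b, c)   # -> (cos(w*t), sin(w*t)) for a balanced system
```

An unbalanced system breaks the perfect cosine/sine structure of (alpha, beta), which is why the abstract then fits the parameters with a nonlinear least squares estimator.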

  9. Accurate quantum state estimation via "Keeping the experimentalist honest"

Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  10. Efficient and Accurate Robustness Estimation for Large Complex Networks

    Wandelt, Sebastian

    2016-01-01

Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computation effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
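A baseline for the kind of quantity being approximated: the robustness of a network under a highest-degree-first attack can be scored by averaging the largest-component fraction after each removal (one common convention, following Schneider et al.). This brute-force sketch is the expensive reference computation, not the paper's sub-quadratic algorithm:

```python
from collections import deque

def largest_cc(alive, adj):
    """Size of the largest connected component among the alive nodes."""
    seen, best = set(), 0
    for s in alive:
        if s in seen:
            continue
        seen.add(s)
        size, queue = 0, deque([s])
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def robustness(adj):
    """R = (1/N) * sum over removals of the largest-component fraction,
    removing nodes highest-degree-first (static order)."""
    n = len(adj)
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    alive, total = set(adj), 0.0
    for u in order:
        alive.discard(u)
        if alive:
            total += largest_cc(alive, adj) / n
    return total / n

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # 6-node cycle
R = robustness(ring)   # (5+4+3+2+1)/6 / 6 = 0.41666...
```

Each removal triggers a full component scan, so this reference computation is quadratic in the node count even before any betweenness-driven attack ordering, which motivates the faster estimator the abstract proposes.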

  11. Accurate pose estimation using single marker single camera calibration system

    Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

    Visual marker based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and pre-evaluation of pose-estimation errors, making the method offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose a method to accurately model the error in the estimated pose and translation of a camera using a single marker via an online method based on the Scaled Unscented Transform (SUT). Thus, the pose estimation for each marker can be estimated with highly accurate calibration results independent of the order of image sequences compared to cases when this knowledge is not used. This removes the need for having multiple markers and an offline estimation system to calculate camera pose in an AR application.

  12. Evaluation of accurate eye corner detection methods for gaze estimation

    Bengoechea, Jose Javier; Cerrolaza, Juan J.; Villanueva, Arantxa; Cabeza, Rafael

    2014-01-01

Accurate detection of the iris center and eye corners appears to be a promising approach for low cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance- and feature-based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We have demonstrated that a method based on a neural network presents the best performance even in light changing...

  13. Fast and accurate estimation for astrophysical problems in large databases

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems

  14. Towards SI-traceable radio occultation excess phase processing with integrated uncertainty estimation for climate applications

    Innerkofler, Josef; Pock, Christian; Kirchengast, Gottfried; Schwaerz, Marc; Jaeggi, Adrian; Schwarz, Jakob

    2016-04-01

    The GNSS Radio Occultation (RO) measurement technique is highly valuable for climate monitoring of the atmosphere as it provides accurate and precise measurements in the troposphere and stratosphere regions with global coverage, long-term stability, and virtually all-weather capability. The novel Reference Occultation Processing System (rOPS), currently under development at the WEGC at University of Graz aims to process raw RO measurements into essential climate variables, such as temperature, pressure, and tropospheric water vapor, in a way which is SI-traceable to the universal time standard and which includes rigorous uncertainty propagation. As part of this rOPS climate-quality processing system, accurate atmospheric excess phase profiles with new approaches integrating uncertainty propagation are derived from the raw occultation tracking data and orbit data. Regarding the latter, highly accurate orbit positions and velocities of the GNSS transmitter satellites and the RO receiver satellites in low Earth orbit (LEO) need to be determined, in order to enable high accuracy of the excess phase profiles. Using several representative test days of GPS orbit data from the CODE and IGS archives, which are available at accuracies of about 3 cm (position) / 0.03 mm/s (velocity), and employing Bernese 5.2 and Napeos 3.3.1 software packages for the LEO orbit determination of the CHAMP, GRACE, and MetOp RO satellites, we achieved robust SI-traced LEO orbit uncertainty estimates of about 5 cm (position) / 0.05 mm/s (velocity) for the daily orbits, including estimates of systematic uncertainty bounds and of propagated random uncertainties. For COSMIC RO satellites, we found decreased accuracy estimates near 10-15 cm (position) / 0.1-0.15 mm/s (velocity), since the characteristics of the small COSMIC satellite platforms and antennas provide somewhat less favorable orbit determination conditions. 
We present the setup of how we (I) used the Bernese and Napeos packages in mutual...

  15. Accurate location estimation of moving object In Wireless Sensor network

    Vinay Bhaskar Semwal

    2011-12-01

One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of saving data, under the constraints of accurate target-location estimation and limited energy. There is no mechanism to control and maintain the data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the gathered information can be fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that reduces the number of sensor nodes needed through accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, extending the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  16. How utilities can achieve more accurate decommissioning cost estimates

The number of commercial nuclear power plants that are undergoing decommissioning coupled with the economic pressure of deregulation has increased the focus on adequate funding for decommissioning. The introduction of spent-fuel storage and disposal of low-level radioactive waste into the cost analysis places even greater concern as to the accuracy of the fund calculation basis. The size and adequacy of the decommissioning fund have also played a major part in the negotiations for transfer of plant ownership. For all of these reasons, it is important that the operating plant owner reduce the margin of error in the preparation of decommissioning cost estimates. To date, all of these estimates have been prepared via the building block method. That is, numerous individual calculations defining the planning, engineering, removal, and disposal of plant systems and structures are performed. These activity costs are supplemented by the period-dependent costs reflecting the administration, control, licensing, and permitting of the program. This method will continue to be used in the foreseeable future until adequate performance data are available. The accuracy of the activity cost calculation is directly related to the accuracy of the inventory of plant system components, piping and equipment, and plant structural composition. Typically, it is left up to the cost-estimating contractor to develop this plant inventory. The data are generated by searching and analyzing property asset records, plant databases, piping and instrumentation drawings, piping system isometric drawings, and component assembly drawings. However, experience has shown that these sources may not be up to date, discrepancies may exist, there may be missing data, and the level of detail may not be sufficient. Again, typically, the time constraints associated with the development of the cost estimate preclude perfect resolution of the inventory questions. Another problem area in achieving accurate cost...

  17. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Gagandeep Singh

    2013-05-01

This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting based approach, in which every pixel votes, is adopted for potential eye center candidates. The votes are distributed over a subset of pixels which lie in a direction opposite to the gradient direction, and the weightage of votes is distributed according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using a Canny edge detector. Distributed voting is applied on the edge image to generate different eye center candidates. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information is used to choose the correct candidates for the locations of the eye center. The proposed approach was tested on the BioID face database and resulted in a better iris detection rate than the state-of-the-art. The proposed approach is robust against illumination variation, small pose variations, presence of eye glasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763

  18. Using inpainting to construct accurate cut-sky CMB estimators

    Gruetjen, H F; Liguori, M; Shellard, E P S

    2015-01-01

    The direct evaluation of manifestly optimal, cut-sky CMB power spectrum and bispectrum estimators is numerically very costly, due to the presence of inverse-covariance filtering operations. This justifies the investigation of alternative approaches. In this work, we mostly focus on an inpainting algorithm that was introduced in recent CMB analyses to cure cut-sky suboptimalities of bispectrum estimators. First, we show that inpainting can equally be applied to the problem of unbiased estimation of power spectra. We then compare the performance of a novel inpainted CMB temperature power spectrum estimator to the popular apodised pseudo-$C_l$ (PCL) method and demonstrate, both numerically and with analytic arguments, that inpainted power spectrum estimates significantly outperform PCL estimates. Finally, we study the case of cut-sky bispectrum estimators, comparing the performance of three different approaches: inpainting, apodisation and a novel low-l leaning scheme. Providing an analytic argument why the loca...

  19. Analytical estimation of control rod shadowing effect for excess reactivity measurement of HTTR

The fuel addition method is generally used for the excess reactivity measurement of the initial core. The control rod shadowing effect on the excess reactivity measurement has been estimated analytically for the High Temperature Engineering Test Reactor (HTTR). 3-dimensional whole core analyses were carried out. The movements of control rods in the measurements were simulated in the calculation. It was made clear that the value of excess reactivity strongly depends on the combination of measuring control rods and compensating control rods. The differences in excess reactivity between combinations come from the control rod shadowing effect. The shadowing effect is reduced by using multiple measuring and compensating control rods to prevent their deep insertion into the core. The measured excess reactivity in the experiments is, however, smaller than the estimated value with shadowing effect. (author)

  20. Efficient and Accurate Path Cost Estimation Using Trajectory Data

    Dai, Jian; Yang, Bin; Guo, Chenjuan; Jensen, Christian S.

    2015-01-01

    Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more ...

  1. Accurate walking and running speed estimation using wrist inertial data.

    Bertschi, M; Celka, P; Delgado-Gonzalo, R; Lemay, M; Calvo, E M; Grossenbacher, O; Renevey, Ph

    2015-08-01

In this work, we present an accelerometry-based device for robust running speed estimation integrated into a watch-like device. The estimation is based on inertial data processing, which consists of applying a leg-and-arm dynamic motion model to 3D accelerometer signals. This motion model requires a calibration procedure that can be done either over a known distance or over a constant-speed period. The protocol includes walking and running speeds between 1.8 km/h and 19.8 km/h. Preliminary results based on eleven subjects are characterized by unbiased estimations with 2nd and 3rd quartiles of the relative error dispersion in the interval ±5%. These results are comparable to accuracies obtained with classical foot pod devices. PMID:26738169

  2. Accurate determination of phase arrival times using autoregressive likelihood estimation

    G. Kvaerna

    1994-01-01

    We have investigated the potential automatic use of an onset picker based on autoregressive likelihood estimation. Both a single component version and a three component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P-phases ...

  3. Techniques of HRV accurate estimation using a photoplethysmographic sensor

    Álvarez Gómez, Laura

    2015-01-01

The student will obtain a database measuring, in at least 20 volunteers, 50 minutes of two-channel ECG, distal pulse measured at the finger, and breathing while listening to music. After the measurements, the student will develop algorithms to ascertain the proper processing of the pulse signal to estimate the heart rate variability when compared to that obtained with the ECG. The effect of breathing on errors will be assessed. In order to facilitate the study of the heart rate variability (HRV)...

  4. Accurate tempo estimation based on harmonic + noise decomposition

    Bertrand David

    2007-01-01

We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as tempo. Our proposal is validated using a manually annotated test-base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
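A heavily simplified stand-in for the periodicity-estimation stage: given an onset-strength envelope, pick the beat period that maximizes its autocorrelation within a plausible BPM range. The synthetic impulse-train envelope and all parameters are illustrative, and the harmonic + noise decomposition and dynamic-programming stages of the paper are omitted:

```python
def estimate_tempo(envelope, fs, bpm_min=60.0, bpm_max=180.0):
    """Pick the tempo whose beat period maximizes the autocorrelation of an
    onset-strength envelope sampled at fs Hz."""
    n = len(envelope)
    m = sum(envelope) / n
    x = [v - m for v in envelope]           # mean-removed envelope
    def ac(lag):
        return sum(x[i] * x[i + lag] for i in range(n - lag))
    lo = int(fs * 60.0 / bpm_max)           # shortest candidate period
    hi = int(fs * 60.0 / bpm_min)           # longest candidate period
    best_lag = max(range(lo, hi + 1), key=ac)
    return 60.0 * fs / best_lag

# synthetic envelope: one accent every 0.5 s at fs = 100 Hz, i.e. 120 BPM
fs = 100
env = [1.0 if i % 50 == 0 else 0.0 for i in range(1000)]
tempo = estimate_tempo(env, fs)
```

Restricting the search range already resolves the octave ambiguity (60 vs 120 vs 240 BPM) that the paper's multipath dynamic programming handles more robustly.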

  5. Accurate determination of phase arrival times using autoregressive likelihood estimation

    G. Kvaerna

    1994-06-01

We have investigated the potential automatic use of an onset picker based on autoregressive likelihood estimation. Both a single component version and a three component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P-phases and 0.15-0.20 s for S-phases. These accuracies are as good as for analyst picks, and are considerably better than the accuracies of the current onset procedure used for processing of regional array data at NORSAR. In another application, we have developed a generic procedure to reestimate the onsets of all types of first arriving P phases. By again applying the autoregressive likelihood technique, we have obtained automatic onset times of a quality such that 70% of the automatic picks are within 0.1 s of the best manual pick. For the onset time procedure currently used at NORSAR, the corresponding number is 28%. Clearly, automatic reestimation of first arriving P onsets using the autoregressive likelihood technique has the potential of significantly reducing the retiming efforts of the analyst.
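A closely related, simpler picker illustrates the idea of likelihood-based onset estimation: the Maeda-style AIC picker places the onset where splitting the trace into two stationary segments minimizes a two-segment AIC. This is a sketch of that stand-in, not the authors' autoregressive implementation; the synthetic trace is illustrative:

```python
import math
import random

def aic_onset(x):
    """Maeda-style two-segment AIC picker: the onset index k minimizes
    AIC(k) = k*log(var(x[:k])) + (n-k)*log(var(x[k:]))."""
    n = len(x)
    def var(seg):
        m = sum(seg) / len(seg)
        return max(sum((v - m) ** 2 for v in seg) / len(seg), 1e-12)
    best_k, best_aic = None, float("inf")
    for k in range(2, n - 1):
        aic = k * math.log(var(x[:k])) + (n - k) * math.log(var(x[k:]))
        if aic < best_aic:
            best_k, best_aic = k, aic
    return best_k

random.seed(7)
trace = [random.gauss(0.0, 0.1) for _ in range(200)] + \
        [random.gauss(0.0, 1.0) for _ in range(200)]   # onset at sample 200
pick = aic_onset(trace)   # should land within a few samples of 200
```

The full autoregressive version replaces the white-noise variance in each segment with the residual variance of a fitted AR model, which sharpens picks on colored seismic noise.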

  6. Reconciliation of excess 14C-constrained global CO2 piston velocity estimates

    Naegler, Tobias

    2011-01-01

Oceanic excess radiocarbon data is widely used as a constraint for air–sea gas exchange. However, recent estimates of the global mean piston velocity 〈k〉 from Naegler et al., Krakauer et al., Sweeney et al. and Müller et al. differ substantially despite the fact that they all are based on excess radiocarbon data from the GLODAP data base. Here I show that these estimates of 〈k〉 can be reconciled if first, the changing oceanic radiocarbon inventory due to net uptake of CO2 is taken into account...

  7. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  8. Accurate location estimation of moving object with energy constraint & adaptive update algorithms to save data

    Semwal, Vijay Bhaskar; Bhaskar, Vinay S; Sati, Meenakshi

    2011-01-01

In the research paper "Accurate estimation of the target location of object with energy constraint & Adaptive Update Algorithms to Save Data", one of the central issues in sensor networks is tracking the location of a moving object, which carries the overhead of saving data, under the constraints of accurate target-location estimation and limited energy. There is no mechanism to control and maintain the data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the gathered information can be fed back to a central air conditioning and ventilation system. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that reduces the number of sensor nodes needed through an accurate estimation of the target location. We use a minimum of three sensor nodes to get an accurate position, and can extend this to four or five to find a more accurate location...
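The "minimum of three sensor nodes" remark corresponds to classic trilateration: three range estimates determine a 2D position. A minimal, noise-free sketch (the anchor layout and target are made up; it linearizes by subtracting the first circle equation from the other two):

```python
import math

def trilaterate(anchors, dists):
    """2D position from three anchor positions and range estimates,
    linearized by subtracting the first circle equation from the others."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    a21, a22 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # nonzero for non-collinear anchors
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # made-up sensor positions
target = (3.0, 4.0)
dists = [math.dist(p, target) for p in anchors]     # ideal, noise-free ranges
est = trilaterate(anchors, dists)   # -> (3.0, 4.0)
```

With noisy ranges the same linear system is usually solved in a least-squares sense over four or more anchors, which matches the abstract's suggestion to add nodes for accuracy.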

  9. Reconciliation of excess 14C-constrained global CO2 piston velocity estimates

    Naegler, Tobias

    2009-04-01

Oceanic excess radiocarbon data is widely used as a constraint for air-sea gas exchange. However, recent estimates of the global mean piston velocity 〈k〉 from Naegler et al., Krakauer et al., Sweeney et al. and Müller et al. differ substantially despite the fact that they all are based on excess radiocarbon data from the GLODAP data base. Here I show that these estimates of 〈k〉 can be reconciled if first, the changing oceanic radiocarbon inventory due to net uptake of CO2 is taken into account; second, if realistic reconstructions of sea surface Δ14C are used; and third, if 〈k〉 is consistently reported with or without normalization to a Schmidt number of 660. These corrections applied, unnormalized estimates of 〈k〉 from these studies range between 15.1 and 18.2 cm h-1. However, none of these estimates can be regarded as the only correct value for 〈k〉. I thus propose to use the `average' of the corrected values of 〈k〉 presented here (16.5 ± 3.2 cm h-1) as the best available estimate of the global mean unnormalized piston velocity, resulting in a gross ocean-to-atmosphere CO2 flux of 76 ± 15 Pg C yr-1 for the mid-1990s.
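The Schmidt-number normalization mentioned above is conventionally the k ∝ Sc^(-1/2) scaling to Sc = 660 (CO2 in seawater at 20 °C). A one-line sketch with made-up values:

```python
def normalize_k(k, sc, sc_ref=660.0):
    """Rescale a piston velocity from Schmidt number sc to sc_ref,
    assuming the conventional k proportional to Sc**-0.5 dependence."""
    return k * (sc / sc_ref) ** 0.5

k_measured = 20.0                        # cm/h at Sc = 1320 (made-up value)
k_660 = normalize_k(k_measured, 1320.0)  # ~28.3 cm/h normalized to Sc = 660
```

Whether a reported 〈k〉 has this factor applied is exactly the consistency issue the abstract identifies as one source of the discrepancies.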

  10. 49 CFR Appendix G to Part 222 - Excess Risk Estimates for Public Highway-Rail Grade Crossings

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Excess Risk Estimates for Public Highway-Rail Grade Crossings G Appendix G to Part 222 Transportation Other Regulations Relating to Transportation... HIGHWAY-RAIL GRADE CROSSINGS Pt. 222, App. G Appendix G to Part 222—Excess Risk Estimates for...

  11. Revised estimates of influenza-associated excess mortality, United States, 1995 through 2005

    Hossain Md Monir

    2008-12-01

    Full Text Available Abstract Background Excess mortality due to seasonal influenza is thought to be substantial. However, influenza may often not be recognized as the cause of death. Imputation methods are therefore required to assess the public health impact of influenza. The purpose of this study was to obtain estimates of monthly excess mortality due to influenza that are based on an epidemiologically meaningful model. Methods and Results U.S. monthly all-cause mortality, 1995 through 2005, was hierarchically modeled as a Poisson variable with a mean that depends linearly both on seasonal covariates and on influenza-certified mortality. The model also allowed for overdispersion to account for extra variation that is not captured by the Poisson error. The coefficient associated with influenza-certified mortality was interpreted as the ratio of total influenza mortality to influenza-certified mortality. Separate models were fitted for four age categories (... Conclusion Annual estimates of influenza mortality were highly variable from year to year, but they were systematically lower than previously published estimates. The excellent fit of our model to the data suggests the validity of our estimates.
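
    The regression step can be sketched in a simplified, non-hierarchical form. The paper models all-cause deaths hierarchically with an identity link and overdispersion; the sketch below instead fits a plain log-link Poisson regression of monthly deaths on seasonal covariates and influenza-certified deaths by IRLS, on simulated data (all variable names and numbers are illustrative, not the study's):

```python
import numpy as np

def fit_poisson_irls(X, y, n_iter=100, tol=1e-10):
    """Log-link Poisson regression via iteratively reweighted least squares."""
    # initialize from an OLS fit to log-counts so IRLS starts near the optimum
    beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu        # working response
        XtW = X.T * mu                 # IRLS weights W = mu for the log link
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Illustrative monthly data: seasonal terms plus influenza-certified deaths.
rng = np.random.default_rng(0)
months = np.arange(132)                          # 11 years of months
flu_certified = rng.poisson(40, size=132).astype(float)
X = np.column_stack([
    np.ones(132),
    np.sin(2 * np.pi * months / 12),
    np.cos(2 * np.pi * months / 12),
    flu_certified / 100.0,
])
true_beta = np.array([10.0, 0.05, 0.08, 0.2])
y = rng.poisson(np.exp(X @ true_beta)).astype(float)
beta_hat = fit_poisson_irls(X, y)
```

    In the sketch, the coefficient on the influenza-certified term plays the role of the paper's total-to-certified mortality ratio; the actual study additionally handles overdispersion and the hierarchical structure.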

  12. Extended Kalman Filter with a Fuzzy Method for Accurate Battery Pack State of Charge Estimation

    Saeed Sepasi; Leon R. Roose; Marc M. Matsuura

    2015-01-01

    As the world moves toward greenhouse gas reduction, there is increasingly active work around Li-ion chemistry-based batteries as an energy source for electric vehicles (EVs), hybrid electric vehicles (HEVs) and smart grids. In these applications, the battery management system (BMS) requires an accurate online estimation of the state of charge (SOC) in a battery pack. This estimation is difficult, especially after substantial battery aging. In order to address this problem, this paper utilizes...

  13. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Wicke, Jason; Dumas, Geneviève A

    2014-06-01

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
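
    The slice-based geometry can be illustrated numerically. Below, a limb segment is approximated as a stack of elliptical slices, and mass, center of mass, and a frontal-plane moment of inertia follow from sums over slices. This is a simplification: uniform density and a point-slice inertia approximation are assumed here, whereas the paper uses sex-specific, non-uniform density functions and sectioned ellipses at segment boundaries.

```python
import numpy as np

def segment_inertia(a, b, dz, density):
    """Mass, center of mass (along the long axis), and frontal-plane
    moment of inertia of a segment modeled as stacked elliptical slices.

    a, b    : arrays of frontal/sagittal semi-axes per slice (m)
    dz      : slice thickness (m)
    density : density per slice (kg/m^3), scalar or array
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    z = (np.arange(a.size) + 0.5) * dz          # slice mid-heights
    m = np.pi * a * b * dz * density            # slice masses (ellipse area x dz)
    mass = m.sum()
    com = (m * z).sum() / mass
    # point-slice approximation: each slice contributes m_i * (z_i - com)^2
    inertia = (m * (z - com) ** 2).sum()
    return mass, com, inertia

# sanity check against a uniform cylinder: radius r, length L, density rho
r, L, rho, n = 0.05, 0.4, 1000.0, 400
mass, com, inertia = segment_inertia(np.full(n, r), np.full(n, r), L / n, rho)
```

    For the uniform cylinder the sums reproduce the analytic mass rho*pi*r^2*L, a mid-length center of mass, and a transverse moment of inertia close to m*L^2/12.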

  14. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy. PMID:26605696
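
    A minimal nearest-neighbor entropy estimator in the Kozachenko-Leonenko style illustrates the general idea behind such methods (plain Euclidean metric, brute-force distances, and our own digamma implementation; the paper's actual contribution, the proper joint treatment of the rotation-translation metric, is not reproduced in this sketch):

```python
import math
import numpy as np

def digamma(x):
    """Digamma via recurrence plus an asymptotic series."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def knn_entropy(samples, k=4):
    """Kozachenko-Leonenko differential entropy estimate (nats)."""
    n, d = samples.shape
    dist = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    r_k = np.sort(dist, axis=1)[:, k - 1]       # k-th neighbor distances
    log_ball = (d / 2) * math.log(math.pi) - math.lgamma(d / 2 + 1)
    return digamma(n) - digamma(k) + log_ball + d * float(np.mean(np.log(r_k)))

# uniform samples on the unit square: true differential entropy is 0
rng = np.random.default_rng(1)
h = knn_entropy(rng.uniform(size=(1500, 2)), k=4)
```

    The same estimator applies unchanged in six dimensions; what the paper adds is the correct handling of the rotational part of the metric and of rotation-translation correlations.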

  15. Eddy covariance observations of methane and nitrous oxide emissions. Towards more accurate estimates from ecosystems

    About 30% of the increased greenhouse gas (GHG) emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are related to land use changes and agricultural activities. In order to select effective measures, knowledge is required about GHG emissions from these ecosystems and how these emissions are influenced by management and meteorological conditions. Accurate emission values are therefore needed for all three GHGs to compile the full GHG balance. However, the current annual estimates of CH4 and N2O emissions from ecosystems have significant uncertainties, even larger than 50%. The present study showed that an advanced technique, the micrometeorological eddy covariance flux technique, can obtain more accurate estimates, with uncertainties even smaller than 10%. The current regional and global trace gas flux estimates of CH4 and N2O are possibly seriously underestimated due to incorrect measurement procedures. Accurate measurements of both gases are very important, since together they can contribute more than two-thirds of the total GHG emission. For example, the total GHG emission of a dairy farm site was estimated at 16×10³ kg ha-1 yr-1 in CO2 equivalents, of which 25% and 45% were contributed by CH4 and N2O, respectively. About 60% of the CH4 emission was emitted by ditches and their bordering edges. These emissions are not yet included in the national inventory reports. We recommend including these emissions in coming reports.
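
    The core eddy covariance computation is simply the covariance of vertical wind speed and gas concentration fluctuations over an averaging block. A bare-bones sketch on synthetic data follows; real processing adds despiking, coordinate rotation, density (WPL) corrections, and gap filling, none of which are shown here:

```python
import numpy as np

def eddy_flux(w, c):
    """Block-averaged eddy covariance flux: mean of w' * c'
    (Reynolds decomposition about the block means)."""
    return float(np.mean((w - np.mean(w)) * (c - np.mean(c))))

# synthetic 30-min block at 10 Hz: concentration partly driven by updrafts
rng = np.random.default_rng(2)
w = rng.normal(0.0, 0.3, size=18000)              # vertical wind (m/s)
c = 0.05 * w + rng.normal(0.0, 0.01, size=18000)  # gas fluctuation (arbitrary units)
flux = eddy_flux(w, c)
```

    With the synthetic relation above, the recovered flux is close to 0.05 times the wind variance, as expected from the construction of the series.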

  16. Simple, Fast and Accurate Photometric Estimation of Specific Star Formation Rate

    Stensbo-Smidt, Kristoffer; Igel, Christian; Zirm, Andrew; Pedersen, Kim Steenstrup

    2015-01-01

    Large-scale surveys make huge amounts of photometric data available. Because of the sheer number of objects, spectral data cannot be obtained for all of them. It is therefore important to devise techniques for reliably estimating physical properties of objects from photometric information alone. These estimates are needed to automatically identify interesting objects worth a follow-up investigation as well as to produce the required data for a statistical analysis of the space covered by a survey. We argue that machine learning techniques are suitable to compute these estimates accurately and efficiently. This study considers the task of estimating the specific star formation rate (sSFR) of galaxies. It is shown that a nearest neighbours algorithm can produce better sSFR estimates than traditional SED fitting. We show that we can obtain accurate estimates of the sSFR even at high redshifts using only broad-band photometry based on the u, g, r, i and z filters from the Sloan Digital Sky Survey (SDSS). We additionally...
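
    The nearest-neighbours idea is easy to sketch: predict a galaxy's sSFR as the average over its k nearest neighbours in colour space. The example below uses brute-force distances and a synthetic smooth colour-target relation standing in for log sSFR; the study itself works on real SDSS u, g, r, i, z photometry:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=10):
    """k-nearest-neighbour regression with brute-force Euclidean distances."""
    d = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

# synthetic "colours" and a smooth target standing in for log sSFR
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(3000, 2))
y = -9.5 + 0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2
Xq = rng.uniform(-0.8, 0.8, size=(200, 2))
y_true = -9.5 + 0.8 * Xq[:, 0] - 0.5 * Xq[:, 1] ** 2
y_hat = knn_predict(X, y, Xq, k=10)
```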

  17. ACCURATE LOCATION ESTIMATION OF MOVING OBJECT WITH ENERGY CONSTRAINT & ADAPTIVE UPDATE ALGORITHMS TO SAVE DATA

    Vijay Bhaskar Semwal

    2011-08-01

    Full Text Available In research paper “Accurate estimation of the target location of object with energy constraint &Adaptive Update Algorithms to Save Data” one of the central issues in sensor networks is track thelocation, of moving object which have overhead of saving data, an accurate estimation of the targetlocation of object with energy constraint .We do not have any mechanism which control and maintaindata .The wireless communication bandwidth is also very limited. Some field which is using thistechnique are flood and typhoon detection, forest fire detection, temperature and humidity and ones wehave these information use these information back to a central air conditioning and ventilation system.In this research paper, we propose protocol based on the prediction and adaptive basedalgorithm which is using less sensor node reduced by an accurate estimation of the target location. weare using minimum three sensor node to get the accurate position .We can extend it upto four or five tofind more accurate location but we have energy constraint so we are using three with accurateestimation of location help us to reduce sensor node..We show that our tracking method performs well interms of energy saving regardless of mobility pattern of the mobile target .We extends the life time ofnetwork with less sensor node. Once a new object is detected, a mobile agent will be initiated to track theroaming path of the object. The agent is mobile since it will choose the sensor closest to the object tostay. The agent may invite some nearby slave sensors to cooperatively position the object and inhibitother irrelevant (i.e., farther sensors from tracking the object. As a result, the communication andsensing overheads are greatly reduced.

  18. Accurate DOA Estimations Using Microstrip Adaptive Arrays in the Presence of Mutual Coupling Effect

    Qiulin Huang

    2013-01-01

    Full Text Available A new mutual coupling calibration method is proposed for adaptive antenna arrays and is employed in the DOA estimations to calibrate the received signals. The new method is developed via the transformation between the embedded element patterns and the isolated element patterns. The new method is characterized by the wide adaptability of element structures such as dipole arrays and microstrip arrays. Additionally, the new method is suitable not only for the linear polarization but also for the circular polarization. It is shown that accurate calibration of the mutual coupling can be obtained for the incident signals in the 3 dB beam width and the wider angle range, and, consequently, accurate 1D and 2D DOA estimations can be obtained. Effectiveness of the new calibration method is verified by a linearly polarized microstrip ULA, a circularly polarized microstrip ULA, and a circularly polarized microstrip UCA.
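
    Once mutual coupling has been calibrated out, the received snapshots feed a standard subspace DOA estimator. The sketch below runs textbook 1D MUSIC on an ideal, coupling-free half-wavelength ULA with made-up source angles; it illustrates only the estimation stage, not the paper's calibration method:

```python
import numpy as np

def music_spectrum(X, n_src, angles_deg):
    """1D MUSIC pseudo-spectrum for a half-wavelength-spaced ULA.
    X: (M, T) complex snapshot matrix; n_src: number of sources."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    _, vecs = np.linalg.eigh(R)                   # eigenvalues ascending
    En = vecs[:, : M - n_src]                     # noise subspace
    m = np.arange(M)
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-1j * np.pi * m * np.sin(th))  # steering vector, d = lambda/2
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

# two sources at -10 and 20 degrees, 8 elements, 200 snapshots
rng = np.random.default_rng(4)
M, T = 8, 200
doas = np.array([-10.0, 20.0])
A = np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(doas))))
S = (rng.normal(size=(2, T)) + 1j * rng.normal(size=(2, T))) / np.sqrt(2)
N = 0.05 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T))) / np.sqrt(2)
X = A @ S + N
grid = np.arange(-90.0, 90.0, 0.2)
p = music_spectrum(X, 2, grid)
loc = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1  # local maxima
top = loc[np.argsort(p[loc])[-2:]]
est = np.sort(grid[top])
```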

  19. Accurate state and parameter estimation in nonlinear systems with sparse observations

    Rey, Daniel; Eldridge, Michael; Kostuk, Mark [Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0374 (United States); Abarbanel, Henry D.I., E-mail: habarbanel@ucsd.edu [Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0374 (United States); Marine Physical Laboratory, Scripps Institution of Oceanography, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0374 (United States); Schumann-Bischoff, Jan [Max Planck Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077 Göttingen (Germany); Institute for Nonlinear Dynamics, Georg-August-Universität Göttingen, Am Faßberg 17, 37077 Göttingen (Germany); Parlitz, Ulrich, E-mail: ulrich.parlitz@ds.mpg.de [Max Planck Institute for Dynamics and Self-Organization, Am Faßberg 17, 37077 Göttingen (Germany); Institute for Nonlinear Dynamics, Georg-August-Universität Göttingen, Am Faßberg 17, 37077 Göttingen (Germany)

    2014-02-01

    Transferring information from observations to models of complex systems may meet impediments when the number of observations at any observation time is not sufficient. This is especially so when chaotic behavior is expressed. We show how to use time-delay embedding, familiar from nonlinear dynamics, to provide the information required to obtain accurate state and parameter estimates. Good estimates of parameters and unobserved states are necessary for good predictions of the future state of a model system. This method may be critical in allowing the understanding of prediction in complex systems as varied as nervous systems and weather prediction where insufficient measurements are typical.

  20. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Beomsoo Hwang

    2015-04-01

    Full Text Available In exoskeletal robots, the quantification of the user’s muscular effort is important to recognize the user’s motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users’ muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user’s limb accurately from the measured torque. The user’s limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user’s muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately in cases of relaxed and activated muscle conditions.

  1. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to a low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and dc component are obtained by minimizing the least square (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, an accurate frequency estimate can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated by different methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
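
    The search procedure can be sketched directly: locate the coarse FFT peak, then golden-section search the frequency while fitting amplitude, phase, and DC offset by three-parameter (linear) least squares at each candidate. This is a simplified reading of the DGSSA idea; the paper's three-sample scope determination and stopping rules are not reproduced:

```python
import numpy as np

def sine_fit_residual(x, t, f):
    """LS residual of the three-parameter sine fit at frequency f."""
    D = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(D, x, rcond=None)
    r = x - D @ coef
    return float(r @ r)

def estimate_frequency(x, fs, n_iter=60):
    n = x.size
    k = int(np.argmax(np.abs(np.fft.rfft(x))))
    a, b = (k - 1) * fs / n, (k + 1) * fs / n   # bracket around the FFT peak
    t = np.arange(n) / fs
    g = (np.sqrt(5) - 1) / 2                    # golden ratio
    c, d = b - g * (b - a), a + g * (b - a)
    fc, fd = sine_fit_residual(x, t, c), sine_fit_residual(x, t, d)
    for _ in range(n_iter):
        if fc < fd:                              # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - g * (b - a)
            fc = sine_fit_residual(x, t, c)
        else:                                    # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + g * (b - a)
            fd = sine_fit_residual(x, t, d)
    return 0.5 * (a + b)

rng = np.random.default_rng(5)
fs, n, f_true = 1000.0, 512, 50.3
t = np.arange(n) / fs
x = np.sin(2*np.pi*f_true*t + 0.7) + 0.2 + 0.01 * rng.normal(size=n)
f_hat = estimate_frequency(x, fs)
```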

  2. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they...... employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators...... that are aimed at solving this problem. These esti- mators are based on the principles of nonlinear least-squares, harmonic fitting, optimal filtering, subspace orthogonality, and shift-invariance, and they all reduce to already published methods for a high number of observations. In experiments, the...
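
    For intuition, a crude relative of the harmonic-fitting approach is harmonic summation: score each candidate fundamental by the spectral magnitude accumulated at its harmonics. This does not reproduce the paper's exact NLS, filtering, or subspace estimators, and it inherits the resolution problems the paper addresses for very low fundamentals:

```python
import numpy as np

def harmonic_summation_f0(x, fs, f0_grid, n_harm=5):
    """Pick the candidate f0 whose first n_harm harmonics collect
    the most FFT magnitude (nearest-bin lookup)."""
    mag = np.abs(np.fft.rfft(x))
    n = x.size
    best_f0, best_score = None, -1.0
    for f0 in f0_grid:
        bins = np.round(np.arange(1, n_harm + 1) * f0 * n / fs).astype(int)
        bins = bins[bins < mag.size]
        score = float(mag[bins].sum())
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

# synthetic harmonic signal with a low fundamental of 80 Hz
rng = np.random.default_rng(6)
fs, n, f0 = 8000.0, 4096, 80.0
t = np.arange(n) / fs
x = sum(np.sin(2*np.pi*h*f0*t) / h for h in range(1, 6)) + 0.05*rng.normal(size=n)
f0_hat = harmonic_summation_f0(x, fs, np.arange(50.0, 400.0, 0.5), n_harm=5)
```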

  3. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volume of interests (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a
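
    The ML-EM core behind QPlanar-style processing can be illustrated on a toy linear model: unknown organ activities a, a known projection matrix P, and the classic multiplicative update. This sketch omits everything that makes the real problem hard, namely registration of 3D VOIs to 2D projections, the physical degradation models, and background regions:

```python
import numpy as np

def mlem(P, m, n_iter=10000):
    """Maximum-likelihood expectation-maximization for m ~ Poisson(P @ a)."""
    a = np.ones(P.shape[1])                # positive initial activities
    sens = P.sum(axis=0)                   # sensitivity, P^T 1
    for _ in range(n_iter):
        proj = P @ a
        a = a * (P.T @ (m / proj)) / sens  # multiplicative EM update
    return a

rng = np.random.default_rng(7)
P = rng.uniform(0.1, 1.0, size=(20, 3))    # toy system matrix (bins x organs)
a_true = np.array([5.0, 2.0, 8.0])
m = P @ a_true                             # noiseless, consistent projections
a_hat = mlem(P, m)
```

    With noiseless, consistent data the iteration recovers the true activities; with real Poisson data it converges to the maximum-likelihood estimate instead.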

  4. Extended Kalman Filter with a Fuzzy Method for Accurate Battery Pack State of Charge Estimation

    Saeed Sepasi

    2015-06-01

    Full Text Available As the world moves toward greenhouse gas reduction, there is increasingly active work around Li-ion chemistry-based batteries as an energy source for electric vehicles (EVs), hybrid electric vehicles (HEVs) and smart grids. In these applications, the battery management system (BMS) requires an accurate online estimation of the state of charge (SOC) in a battery pack. This estimation is difficult, especially after substantial battery aging. In order to address this problem, this paper presents SOC estimation of Li-ion battery packs using a fuzzy-improved extended Kalman filter (fuzzy-IEKF) for Li-ion cells, regardless of their age. The proposed approach introduces a fuzzy method with a new class and associated membership function that determines an approximate initial value applied to the SOC estimation. Subsequently, the EKF method is used, considering a single unit model for the battery pack, to estimate the SOC for the following periods of battery use. This approach uses an adaptive model algorithm to update the model for each single cell in the battery pack. To verify the accuracy of the estimation method, tests were done on a LiFePO4 aged battery pack consisting of 120 cells connected in series with a nominal voltage of 432 V.
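
    The EKF half of the approach can be sketched with a one-state coulomb-counting model and a nonlinear open-circuit-voltage curve. The model, its parameters, and the OCV polynomial below are illustrative only; they are not the paper's pack model, and the fuzzy initialization stage is not shown:

```python
import numpy as np

def ocv(s):                      # illustrative open-circuit-voltage curve (V)
    return 3.0 + 0.7 * s + 0.1 * s**2

def docv(s):                     # its derivative, the EKF measurement Jacobian
    return 0.7 + 0.2 * s

def ekf_soc(current, voltage, dt, capacity, r_int, s0=0.5):
    """One-state EKF: SOC propagated by coulomb counting, corrected by
    the terminal-voltage measurement v = OCV(s) - R * i (discharge positive)."""
    s, P = s0, 0.1
    q, r = 1e-7, 1e-4            # process / measurement noise variances
    for i, v in zip(current, voltage):
        s = s - i * dt / capacity          # predict
        P = P + q
        H = docv(s)
        K = P * H / (H * P * H + r)        # Kalman gain
        s = s + K * (v - (ocv(s) - r_int * i))
        P = (1 - K * H) * P
    return s

# simulate a steady 1 A discharge with noisy voltage readings
rng = np.random.default_rng(8)
dt, capacity, r_int, n = 1.0, 2.0 * 3600, 0.05, 3000
i_meas = np.full(n, 1.0)
s_true = 0.9 - np.cumsum(i_meas) * dt / capacity
v_meas = ocv(s_true) - r_int * i_meas + 0.01 * rng.normal(size=n)
s_hat = ekf_soc(i_meas, v_meas, dt, capacity, r_int, s0=0.5)
```

    Even with a deliberately wrong initial SOC of 0.5, the voltage corrections pull the estimate onto the true trajectory, which is the role the paper's fuzzy stage accelerates by supplying a better starting value.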

  5. An Accurate Method for the BDS Receiver DCB Estimation in a Regional Network

    LI Xin

    2016-08-01

    Full Text Available An accurate approach for receiver differential code bias (DCB) estimation is proposed using BDS data obtained from a regional tracking network. In contrast to conventional methods for BDS receiver DCB estimation, the proposed method does not require a complicated ionosphere model, as long as the receiver DCB of one reference station is known. The main idea of the method is that the ionospheric delay normally depends strongly on the geometric range between the BDS satellite and the receiver. Therefore, the DCBs of the non-reference station receivers in the regional area can be estimated using single differences (SD) with respect to the reference stations. The numerical results show that the RMS of the errors of these estimated BDS receiver DCBs over 30 days is about 0.3 ns. Additionally, after deducting these estimated receiver DCBs, with the satellite DCBs known, the extracted diurnal VTEC shows good agreement with the diurnal VTEC obtained from GIM interpolation, indicating the reliability of the estimated receiver DCBs.

  6. Estimate of lifetime excess lung cancer risk due to indoor exposure to natural radon-222 daughters in Korea

    Lifetime excess lung cancer risk due to indoor 222Rn daughters exposure in Korea was quantitatively estimated by a modified relative risk projection model proposed by the U.S. National Academy of Science and the recent Korean life table data. The lifetime excess risk of lung cancer death attributable to annual constant exposure to Korean indoor radon daughters was estimated to be about 230/10⁶ per WLM, which seemed to be nearly in the median of the range of 150-450/10⁶ per WLM reported by the UNSCEAR in 1988. (1 fig., 2 tabs.)
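
    The life-table arithmetic behind such relative-risk projections can be sketched as follows. Every number below is an illustrative placeholder, not the Korean life-table, baseline-mortality, or dosimetry value used in the record:

```python
# Sketch of a relative-risk projection: lifetime excess lung cancer risk
# from a constant annual radon-daughter exposure, combining survival
# probabilities, baseline lung cancer mortality, and an excess relative
# risk per WLM. All inputs are toy values for illustration only.

ages = range(0, 85, 5)                                # 5-year age bands
survival = [0.99 ** (a / 2 + 1) for a in ages]        # prob. of surviving to band (toy)
baseline = [1e-5 * (1 + a / 10) ** 2 for a in ages]   # baseline lung cancer mortality (toy)
err_per_wlm = 0.01                                    # excess relative risk per WLM (toy)
annual_wlm = 0.2                                      # constant annual exposure (toy)

lifetime_excess = 0.0
for s, h, a in zip(survival, baseline, ages):
    cumulative_wlm = annual_wlm * a                   # exposure accumulated by age a
    lifetime_excess += 5 * s * h * err_per_wlm * cumulative_wlm  # 5-year band width

# normalize to deaths per million persons per WLM of lifetime exposure
per_million_per_wlm = 1e6 * lifetime_excess / (annual_wlm * 85)
```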

  7. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, SRK-II, et al., are all relatively accurate. However, for eyes that have undergone refractive surgery, such as LASIK, or eyes diagnosed with keratoconus, these equations may cause significant postoperative refractive error, which may lead to poor satisfaction after cataract surgery. Although some methods have been proposed to solve this problem, such as the Haigis-L equation[1], or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method by accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens power for a patient with keratoconus and for another post-LASIK patient agreed well with their visual outcomes after cataract surgery.

  8. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  9. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  10. Shear-wave elastography contributes to accurate tumour size estimation when assessing small breast cancers

    Aim: To assess whether the size of peritumoural stiffness (PTS) on shear-wave elastography (SWE) for small primary breast cancers (≤15 mm) was associated with size discrepancies between grey-scale ultrasound (GSUS) and final histological size and whether the addition of PTS size to GSUS size might result in more accurate tumour size estimation when compared to final histological size. Materials and methods: A retrospective analysis of 86 consecutive patients between August 2011 and February 2013 who underwent breast-conserving surgery for tumours of size ≤15 mm at ultrasound was carried out. The size of PTS was compared to mean GSUS size, mean histological size, and the extent of size discrepancy between GSUS and histology. PTS size and GSUS were combined and compared to the final histological size. Results: PTS of >3 mm was associated with a larger mean final histological size (16 versus 11.3 mm, p < 0.001). PTS size of >3 mm was associated with a higher frequency of underestimation of final histological size by GSUS of >5 mm (63% versus 18%, p < 0.001). The combination of PTS and GSUS size led to accurate estimation of the final histological size (p = 0.03). The size of PTS was not associated with margin involvement (p = 0.27). Conclusion: PTS extending beyond 3 mm from the grey-scale abnormality is significantly associated with underestimation of tumour size of >5 mm for small invasive breast cancers. Taking into account the size of PTS also led to accurate estimation of the final histological size. Further studies are required to assess the relationship of the extent of SWE stiffness and margin status.
    Highlights:
    • Peritumoural stiffness of greater than 3 mm was associated with larger tumour size.
    • Underestimation of tumour size by ultrasound was associated with peri-tumoural stiffness size.
    • Combining peri-tumoural stiffness size with ultrasound produced accurate tumour size estimation.

  11. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets.

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can be often described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant "collective" variables. Here, we analyze this issue by focusing on the optimal number of variable needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  12. Accurate satellite-derived estimates of the tropospheric ozone impact on the global radiation budget

    J. Joiner

    2009-07-01

    Full Text Available Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the radiative effect of tropospheric O3 for January and July 2005. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our derived radiative effect reflects the unadjusted (instantaneous) effect of the total tropospheric O3 rather than the anthropogenic component. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative budget. The estimates presented here can be used to evaluate various aspects of model-generated radiative forcing. For example, our derived cloud impact is to reduce the radiative effect of tropospheric ozone by ~16%. This is centered within the published range of the model-produced cloud effect on unadjusted ozone radiative forcing.

  13. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 values estimated from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm) were observed, originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments. PMID:26851474
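
    The correction amounts to calibrating an individual HR-VO2 line from the step test and applying it to thermally corrected heart rate. The numbers below are made up for illustration; in practice ΔHRT is estimated from the imposed seated rest periods per Vogt et al.:

```python
import numpy as np

def calibrate_hr_vo2(hr_cal, vo2_cal):
    """Least-squares line VO2 = a * HR + b from step-test points."""
    a, b = np.polyfit(hr_cal, vo2_cal, 1)
    return a, b

def estimate_work_vo2(hr_work, delta_hr_thermal, a, b):
    """Estimate VO2 from work HR after removing the thermal component."""
    return a * (hr_work - delta_hr_thermal) + b

# illustrative step-test calibration points (perfectly linear here)
hr_cal = np.array([80.0, 100.0, 120.0, 140.0])   # bpm
vo2_cal = np.array([0.8, 1.4, 2.0, 2.6])         # L/min
a, b = calibrate_hr_vo2(hr_cal, vo2_cal)

# a work bout at HR 150 bpm, of which 20 bpm is thermal
vo2_raw = estimate_work_vo2(150.0, 0.0, a, b)
vo2_corr = estimate_work_vo2(150.0, 20.0, a, b)
```

    With these toy numbers the uncorrected estimate exceeds the corrected one by 0.6 L/min, the kind of overestimation the study quantifies on real workers.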

  14. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  15. Infiltration-excess overland flow estimated by TOPMODEL for the conterminous United States

    U.S. Geological Survey, Department of the Interior — This 5-kilometer resolution raster (grid) dataset for the conterminous United States represents the average percentage of infiltration-excess overland flow in total...

  16. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs with j > i. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
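The core MIDAS recipe as summarized above (pair epochs one year apart, take the median slope, trim one-sided outliers, re-take the median) can be sketched as follows. The pairing tolerance and the 2-sigma scaled-MAD trim threshold are illustrative assumptions; the published algorithm has its own exact criteria and also produces an uncertainty estimate, which is omitted here:

```python
import numpy as np

def midas_trend(t, x, tol=0.01):
    """Simplified sketch of a MIDAS-style trend estimate.

    t : epochs in decimal years
    x : coordinate time series
    Returns a robust trend (units of x per year).
    """
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    slopes = []
    for i in range(len(t)):
        # pair each epoch with the one closest to 1 year later,
        # which cancels seasonal signals in the slope
        j = int(np.argmin(np.abs(t - (t[i] + 1.0))))
        if j > i and abs(t[j] - t[i] - 1.0) < tol:
            slopes.append((x[j] - x[i]) / (t[j] - t[i]))
    slopes = np.array(slopes)
    med = np.median(slopes)
    # trim outliers (e.g. slopes spanning a step), then recompute
    mad = 1.4826 * np.median(np.abs(slopes - med))
    kept = slopes[np.abs(slopes - med) < 2.0 * mad]
    return float(np.median(kept)) if kept.size else float(med)
```

On a synthetic daily series with a 3 mm/yr trend plus an annual sinusoid, the one-year pairing makes the seasonal term cancel almost exactly, so the median slope recovers the trend without any seasonal modeling.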

  17. Accurate estimation of motion blur parameters in noisy remote sensing image

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite's sensor and objects on the ground is one of the most common causes of remote sensing image degradation, and it seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so identifying the motion blur direction and length accurately is crucial for deriving the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain the parameters using the Radon transform. However, the serious noise present in actual remote sensing images often renders the stripes indistinct, making the parameters difficult to calculate and the results relatively inaccurate. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.

  18. Accurate estimation of electromagnetic parameters using FEA for Indus-2 RF cavity

    The 2.5 GeV INDUS-2 SRS has four normal-conducting bell-shaped RF cavities operating at a 505.8122 MHz fundamental frequency and a peak cavity voltage of 650 kV. The RF frequency and other parameters of the cavity need to be estimated accurately for beam dynamics and cavity electromagnetic studies at the fundamental as well as higher-order-mode (HOM) frequencies. The 2D axisymmetric result for the fundamental frequency with fine discretization using the SUPERFISH code showed a difference of ∼2.5 MHz from the design frequency, which leads to inaccurate estimation of the electromagnetic parameters. Therefore, for accuracy, a complete 3D model comprising all ports needs to be considered in the RF domain with correct boundary conditions. All ports in the cavity were modeled using the FEA tool ANSYS. Eigenmode simulation of the complete cavity model, considering the various ports, is used for estimating the parameters. A mesh convergence test for these models is performed. The methodologies adopted for calculating the different electromagnetic parameters using small macros are described. Various parameters that affect the cavity frequency are discussed in this paper. A comparison of the FEA (ANSYS) results is made with available experimental measurements. (author)

  19. Accurate estimation of the RMS emittance from single current amplifier data

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H- ion source

  20. Validation of a wrist monitor for accurate estimation of RR intervals during sleep.

    Renevey, Ph; Sola, J; Theurillat, P; Bertschi, M; Krauss, J; Andries, D; Sartori, C

    2013-01-01

    While the incidence of sleep disorders is continuously increasing in western societies, there is a clear demand for technologies to assess sleep-related parameters in ambulatory scenarios. The present study introduces a novel concept for an accurate sensor that measures RR intervals via the analysis of photo-plethysmographic signals recorded at the wrist. In a cohort of 26 subjects undergoing full-night polysomnography, the wrist device provided RR interval estimates in agreement with RR intervals as measured from standard electrocardiographic time series. The study showed an overall agreement between both approaches of 0.05 ± 18 ms. The novel wrist sensor opens the door towards a new generation of comfortable and easy-to-use sleep monitors. PMID:24110980

  1. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
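The Archimedes relation the abstract invokes is easy to state concretely: a sphere inscribed in a cylinder occupies exactly 2/3 of the cylinder's volume, so a silhouette's cross-sectional area and length give a volume estimate for near-spherical cells. The sketch below illustrates this relation only; the `unellipticity` argument is a placeholder for the paper's correction coefficient, whose exact definition is not reproduced here:

```python
import math

def biovolume(area, height, unellipticity=1.0):
    """Estimate a cell's volume from 2D silhouette measurements.

    For a sphere inscribed in a cylinder, V_sphere = (2/3) * V_cylinder,
    i.e. V = (2/3) * area * height for the circumscribing cylinder of
    the silhouette. `unellipticity` stands in for the paper's
    correction coefficient for non-spherical convex shapes.
    """
    return (2.0 / 3.0) * area * height * unellipticity
```

As a sanity check, feeding in a great-circle area of pi*r^2 and a height of 2r recovers the exact sphere volume (4/3)*pi*r^3.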

  2. Fast and Accurate Video PQoS Estimation over Wireless Networks

    Emanuele Viterbo

    2008-06-01

    This paper proposes a curve fitting technique for fast and accurate estimation of the perceived quality of streaming media contents, delivered within a wireless network. The model accounts for the effects of various network parameters such as congestion, radio link power, and video transmission bit rate. The evaluation of the perceived quality of service (PQoS is based on the well-known VQM objective metric, a powerful technique which is highly correlated to the more expensive and time consuming subjective metrics. Currently, PQoS is used only for offline analysis after delivery of the entire video content. Thanks to the proposed simple model, we can estimate in real time the video PQoS and we can rapidly adapt the content transmission through scalable video coding and bit rates in order to offer the best perceived quality to the end users. The designed model has been validated through many different measurements in realistic wireless environments using an ad hoc WiFi test bed.

  3. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation Kd and the beam attenuation c (i.e., a multiple-scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter are measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  4. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops with longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study on the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
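For context, the Jacobson-Stockmayer extrapolation referenced above takes a logarithmic form in the loop length; the 1.75 coefficient shown here is the value conventionally used in RNA nearest-neighbor energy models, quoted as an assumption rather than as this paper's fitted parameter:

```latex
\Delta G_{\text{loop}}(n) \approx \Delta G_{\text{loop}}(n_0) + 1.75\, RT \ln\!\left(\frac{n}{n_0}\right)
```

where $n$ is the loop length, $n_0$ a reference length with tabulated free energy, $R$ the gas constant, and $T$ the temperature. The abstract's finding is that this logarithmic growth works well for hairpins but deviates for bulge, internal, and multibranch loops.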

  5. Estimation of energy conservation benefits in excess air controlled gas-fired systems

    Bahadori, Alireza; Vuthaluru, Hari B. [School of Chemical and Petroleum Engineering, Curtin University of Technology, GPO Box 1987, Perth, Western Australia 6845 (Australia)

    2010-10-15

    The most significant energy consumers in energy-related industries are boilers and other gas-fired systems. The combustion efficiency term is commonly used for boilers and other fired systems, and information on either carbon dioxide (CO2) or oxygen (O2) in the exhaust gas can be used to determine it. The aim of this study is to develop a simple-to-use predictive tool, which is easier and less complicated than existing approaches and requires fewer computations, suitable for combustion engineers to predict natural gas combustion efficiency as a function of the excess air fraction and the stack temperature rise (the difference between the flue gas temperature and the combustion air inlet temperature). The results of the proposed predictive tool can be used in follow-up calculations to determine relative operating efficiency and to establish the energy conservation benefits of an excess air control program. Results show that the proposed predictive tool is in very good agreement with the reported data, with an average absolute deviation of 0.1%. It should be noted that these calculations assume complete natural gas combustion at atmospheric pressure and a negligible level of unburned combustibles. The proposed method is superior owing to its accuracy and clear numerical background, wherein the relevant coefficients can be retuned quickly for various cases. This simple-to-use approach can be of immense practical value for engineers and scientists needing a quick check on natural gas combustion efficiencies over a wide range of operating conditions without the need for any pilot plant setup or experimental trials. In particular, process and combustion engineers will find the proposed approach user friendly, involving transparent calculations with no complex expressions, for application to the design and operation of natural gas-fired systems such as furnaces and boilers. (author)

  6. Effectiveness of prediction equations in estimating energy expenditure sample of Brazilian and Spanish women with excess body weight

    Eliane Lopes Rosado

    2014-03-01

    Objective: To assess the adequacy of predictive equations for the estimation of energy expenditure (EE), compared with EE measured by indirect calorimetry, in a sample of Brazilian and Spanish women with excess body weight. Methods: This is a cross-sectional study of 92 obese adult women [26 Brazilian (G1) and 66 Spanish (G2), aged 20-50]. Weight and height were evaluated during fasting for the calculation of body mass index and the predictive equations. EE was evaluated using open-circuit indirect calorimetry with a respiratory hood. Results: In G1 and G2, the estimates obtained by Harris-Benedict, Shofield, FAO/WHO/ONU, and Henry & Rees did not differ from EE measured by indirect calorimetry, which gave higher values than the equations proposed by Owen, Mifflin-St Jeor, and Oxford. For G1 and G2 the predictive equation closest to the value obtained by indirect calorimetry was FAO/WHO/ONU (7.9% and 0.46% underestimation, respectively), followed by Harris-Benedict (8.6% and 1.5% underestimation, respectively). Conclusion: The equations proposed by FAO/WHO/ONU, Harris-Benedict, Shofield, and Henry & Rees were adequate for estimating EE in a sample of Brazilian and Spanish women with excess body weight. The other equations underestimated EE.

  7. Rapid estimation of excess mortality: nowcasting during the heatwave alert in England and Wales in June 2011

    Helen K. Green; Andrews, Nick J; Bickler, Graham; Pebody, Richard G.

    2012-01-01

    Background A Heat-Health Watch system has been established in England and Wales since 2004 as part of the national heatwave plan following the 2003 European-wide heatwave. One important element of this plan has been the development of a timely mortality surveillance system. This article reports the findings and timeliness of a daily mortality model used to ‘nowcast’ excess mortality (utilising incomplete surveillance data to estimate the number of deaths in near-real time) during a heatwave a...

  8. Self-estimation of Body Fat is More Accurate in College-age Males Compared to Females

    HANCOCK, HALLEY L.; Jung, Alan P.; Petrella, John K.

    2012-01-01

    The objective was to determine the effect of gender on the ability to accurately estimate one’s own body fat percentage. Fifty-five college-age males and 99 college-age females participated. Participants estimated their own body fat percent before having their body composition measured using a BOD POD. Participants also completed a modified Social Physique Anxiety Scale (SPAS). Estimated body fat was significantly lower compared to measured body fat percent in females (26.8±5.6% vs. 30.2±7.0%...

  9. Technical note: tree truthing: how accurate are substrate estimates in primate field studies?

    Bezanson, Michelle; Watts, Sean M; Jobin, Matthew J

    2012-04-01

    Field studies of primate positional behavior typically rely on ground-level estimates of substrate size, angle, and canopy location. These estimates potentially influence the identification of positional modes by the observer recording behaviors. In this study we aim to test ground-level estimates against direct measurements of support angles, diameters, and canopy heights in trees at La Suerte Biological Research Station in Costa Rica. After reviewing methods that have been used by past researchers, we provide data collected within trees that are compared to estimates obtained from the ground. We climbed five trees and measured 20 supports. Four observers collected measurements of each support from different locations on the ground. Diameter estimates varied from the direct tree measures by 0-28 cm (Mean: 5.44 ± 4.55). Substrate angles varied by 1-55° (Mean: 14.76 ± 14.02). Height in the tree was best estimated using a clinometer, as estimates made with a two-meter reference placed by the tree varied by 3-11 meters (Mean: 5.31 ± 2.44). We determined that the best support size estimates were those generated relative to the size of the focal animal and divided into broader categories. Support angles were best estimated in 5° increments and then checked using a Haglöf clinometer in combination with a laser pointer. We conclude that three major factors should be addressed when estimating support features: observer error (e.g., experience and distance from the target), support deformity, and how support size and angle influence the positional mode selected by an individual primate. PMID:22371099

  10. Accurate single-observer passive coherent location estimation based on TDOA and DOA

    Li Jing

    2014-08-01

    This paper investigates the problem of target position estimation with a single-observer passive coherent location (PCL) system. An approach that combines angle with time difference of arrival (ATDOA) is used to estimate the location of a target. Compared with the TDOA-only method, which needs two steps, the proposed method estimates the target position more directly. The constrained total least squares (CTLS) technique is applied in this approach. It achieves the Cramer-Rao lower bound (CRLB) when the parameter measurements are subject to small Gaussian-distributed errors. Performance analysis and the CRLB of this approach are also studied. Theory verifies that the ATDOA method attains a lower CRLB than the TDOA-only method with the same TDOA measurement error. It can also be seen that the position of the target affects the estimation precision. At the same time, the locations of the transmitters affect the precision and its gradient direction. Compared with TDOA, the ATDOA method can obtain a more precise target position estimate. Furthermore, the proposed method accomplishes target position estimation with a single transmitter, while the TDOA-only method needs at least four transmitters to get the target position. Finally, the transmitters' position errors also regularly affect the estimation precision.

  11. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer;

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the experimental ultrasound scanner SARUS and a flow rig before being tested in vivo. An 8-MHz linear array transducer is used with defocused beam emissions. In the simulations of a spinning disk phantom, a 360° uniform behavior on the angle estimation is observed with a median angle bias of 1.01° and a median...

  12. Accurate performance estimators for information retrieval based on span bound of support vector machines

    2006-01-01

    Support vector machines have met with significant success in the information retrieval field, especially in handling text classification tasks. Although various performance estimators for SVMs have been proposed, these focus only on accuracy, which is based on the leave-one-out cross-validation procedure. Information-retrieval-related performance measures are usually neglected in a kernel learning methodology. In this paper, we propose a set of information-retrieval-oriented performance estimators for SVMs, which are based on the span bound of the leave-one-out procedure. Experiments have shown that our proposed estimators are both effective and stable.

  13. Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter

    Strano, Salvatore; Terzo, Mario

    2016-06-01

    State estimation in hydraulic actuators is a fundamental tool for the detection of faults and a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear and linearization-based techniques for state estimation is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent Riccati Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth-order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted, and comparisons with the widely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE-based technique for applications characterized by non-negligible nonlinearities such as dead zones and friction.

  14. A Simple but Accurate Estimation of Residual Energy for Reliable WSN Applications

    Jae Ung Kim; Min Jae Kang; Jun Min Yi; Dong Kun Noh

    2015-01-01

    A number of studies have been actively conducted to address limited energy resources in sensor systems over wireless sensor networks. Most of these studies are based on energy-aware schemes, which take advantage of the residual energy from the sensor system’s own or neighboring nodes. However, existing sensor systems estimate residual energy based solely on voltage and current consumption, leading to inaccurate estimations because the residual energy in real batteries is affected by temperatu...

  15. Palirria : Accurate on-line parallelism estimation for adaptive work-stealing

    Varisteas, Georgios; Brorsson, Mats

    2014-01-01

    We present Palirria, a self-adapting work-stealing scheduling method for nested fork/join parallelism that can be used to estimate the number of utilizable workers and self-adapt accordingly. The estimation mechanism is optimized for accuracy, minimizing the requested resources without degrading performance. We implemented Palirria for both the Linux and Barrelfish operating systems and evaluated it on two platforms: a 48-core NUMA multiprocessor and a simulated 32-core system. Compared to st...

  16. Accurate and robust phylogeny estimation based on profile distances: a study of the Chlorophyceae (Chlorophyta)

    Rahmann Sven

    2004-06-01

    Abstract Background In phylogenetic analysis we face the problem that several subclade topologies are known or easily inferred and well supported by bootstrap analysis, but basal branching patterns cannot be unambiguously estimated by the usual methods (maximum parsimony (MP), neighbor-joining (NJ), or maximum likelihood (ML)), nor are they well supported. We represent each subclade by a sequence profile and estimate evolutionary distances between profiles to obtain a matrix of distances between subclades. Results Our estimator of profile distances generalizes the maximum likelihood estimator of sequence distances. The basal branching pattern can then be estimated by any distance-based method, such as neighbor-joining. Our method (profile neighbor-joining, PNJ) thus inherits the accuracy and robustness of profiles and the time efficiency of neighbor-joining. Conclusions Phylogenetic analysis of the Chlorophyceae with traditional methods (MP, NJ, ML, and MrBayes) reveals seven well-supported subclades, but the methods disagree on the basal branching pattern. The tree reconstructed by our method is better supported and can be confirmed by known morphological characters. Moreover, the accuracy is significantly improved, as shown by parametric bootstrap.

  17. A best-estimate plus uncertainty type analysis for computing accurate critical channel power uncertainties

    This paper provides a Critical Channel Power (CCP) uncertainty analysis methodology based on a Monte-Carlo approach. This Monte-Carlo method includes the identification of the sources of uncertainty and the development of error models for the characterization of epistemic and aleatory uncertainties associated with the CCP parameter. Furthermore, the proposed method facilitates a means to use actual operational data leading to improvements over traditional methods (e.g., sensitivity analysis) which assume parametric models that may not accurately capture the possible complex statistical structures in the system input and responses. (author)

  18. Sensor Fusion for Accurate Ego-Motion Estimation in a Moving Platform

    Chuho Yi; Jungwon Cho

    2015-01-01

    With the coming of “Internet of things” (IoT) technology, many studies have sought to apply IoT to mobile platforms, such as smartphones, robots, and moving vehicles. An estimation of ego-motion in a moving platform is an essential and important method to build a map and to understand the surrounding environment. In this paper, we describe an ego-motion estimation method using a vision sensor that is widely used in IoT systems. Then, we propose a new fusion method to improve the accuracy of m...

  19. Accurate estimation of dose distributions inside an eye irradiated with 106Ru plaques

    Background: Irradiation of intraocular tumors requires dedicated techniques, such as brachytherapy with 106Ru plaques. The currently available treatment planning system relies on the assumption that the eye is a homogeneous water sphere and on simplified radiation transport physics. However, accurate dose distributions and their assessment demand better models for both the eye and the physics. Methods: The Monte Carlo code PENELOPE, conveniently adapted to simulate the beta decay of 106Ru over 106Rh into 106Pd, was used to simulate radiation transport based on a computerized tomography scan of a patient's eye. A detailed geometrical description of two plaques (models CCA and CCB) from the manufacturer BEBIG was embedded in the computerized tomography scan. Results: The simulations were firstly validated by comparison with experimental results in a water phantom. Dose maps were computed for three plaque locations on the eyeball. From these maps, isodose curves and cumulative dose-volume histograms in the eye and for the structures at risk were assessed. For example, it was observed that a 4-mm anterior displacement with respect to a posterior placement of a CCA plaque for treating a posterior tumor would reduce from 40 to 0% the volume of the optic disc receiving more than 80 Gy. Such a small difference in anatomical position leads to a change in the dose that is crucial for side effects, especially with respect to visual acuity. The radiation oncologist has to bring these large changes in absorbed dose in the structures at risk to the attention of the surgeon, especially when the plaque has to be positioned close to relevant tissues. Conclusion: The detailed geometry of an eye plaque in computerized and segmented tomography of a realistic patient phantom was simulated accurately. Dose-volume histograms for relevant anatomical structures of the eye and the orbit were obtained with unprecedented accuracy. This represents an important step toward an optimized

  20. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.

  1. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
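
The global-minimum property above can be illustrated with the standard zero-failure Weibull reliability bound; the formula and the beta range below are a textbook-style sketch under stated assumptions, not the authors' exact derivation:

```python
def weibull_rel_lower_bound(n, T, t, conf=0.95, betas=None):
    """Conservative reliability lower bound at mission time t from a
    zero-failure test of n units each run to time T. For a fixed shape
    parameter beta, the classical zero-failure bound is
        R_L(t; beta) = alpha ** ((t / T) ** beta / n),  alpha = 1 - conf.
    Minimizing over a beta range yields a bound that needs no estimate
    of beta, illustrating the global-minimum idea discussed above."""
    if betas is None:
        betas = [0.5 + 0.1 * i for i in range(51)]  # beta in [0.5, 5.5]
    alpha = 1.0 - conf
    return min(alpha ** ((t / T) ** beta / n) for beta in betas)
```

For a mission time shorter than the test time, the minimum sits at the low end of the beta range; for longer missions, at the high end, so the scan covers both cases.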

  2. A novel method based on two cameras for accurate estimation of arterial oxygen saturation

    Liu, He; Ivanov, Kamen; Wang, Yadong; Wang, Lei

    2015-01-01

    Background: Camera-based photoplethysmographic imaging (PPGi) allows acquiring a photoplethysmogram and measuring physiological parameters such as pulse rate, respiration rate and perfusion level. It has also shown potential for estimation of arterial oxygen saturation (SaO2). However, there are some technical limitations such as optical shunting, different camera sensitivity to different light spectra, different AC-to-DC ratios (the peak-to-peak amplitude to baseline ratio) of the PP...

  3. New Method for Accurate Parameter Estimation of Induction Motors Based on Artificial Bee Colony Algorithm

    Jamadi, Mohammad; Merrikh-Bayat, Farshad

    2014-01-01

    This paper proposes an effective method for estimating the parameters of double-cage induction motors by using Artificial Bee Colony (ABC) algorithm. For this purpose the unknown parameters in the electrical model of asynchronous machine are calculated such that the sum of the square of differences between full load torques, starting torques, maximum torques, starting currents, full load currents, and nominal power factors obtained from model and provided by manufacturer is minimized. In orde...

  4. Plant DNA barcodes can accurately estimate species richness in poorly known floras.

    Craig Costion

    BACKGROUND: Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. METHODOLOGY/PRINCIPAL FINDINGS: Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies, but species richness estimation accuracy proved higher, up to 89%. All combinations that included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation that in some angiosperm families occurs as an inversion obscuring the monophyly of species. CONCLUSIONS/SIGNIFICANCE: We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways.

  5. Application of apriori error estimates for Navier-Stokes equations to accurate finite element solution

    Burda, P.; Novotný, Jaroslav; Šístek, J.

    Tenerife : WSEAS, 2005 - (Thome, N.; Gonzalez, C.; Altin, A.), s. 31-36 ISBN 960-8457-39-4. [WSEAS International Conference on Applied Mathematics /8./. Tenerife (ES), 16.12.2005-18.12.2005] R&D Projects: GA AV ČR(CZ) IAA2120201 Institutional research plan: CEZ:AV0Z20760514 Keywords : apriori estimates * Navier-Stokes equations * finite elements Subject RIV: BA - General Mathematics

  6. An application of apriori and aposteriori error estimates to accurate FEM solution of incompressible flows

    Burda, P.; Novotný, Jaroslav; Šístek, J.

    Plzeň : Západočeská univerzita v Plzni, 2006 - (Marek, I.), s. 7-30 ISBN 80-7043-426-0. [Software and algorithms of numerical mathematics. Srní, Šumava (CZ), 12.09.2006-16.09.2006] R&D Projects: GA AV ČR 1ET400760509 Institutional research plan: CEZ:AV0Z20760514 Keywords : apriori and aposteriori error estimates * finite element method * incompressible flows Subject RIV: BK - Fluid Dynamics

  7. Accurate covariance estimation of galaxy-galaxy weak lensing: limitations of jackknife covariance

    Shirasaki, Masato; Miyatake, Hironao; Takahashi, Ryuichi; Hamana, Takashi; Nishimichi, Takahiro; Murata, Ryoma

    2016-01-01

    We develop a method to simulate galaxy-galaxy weak lensing by utilizing all-sky, light-cone simulations. We populate a real catalog of source galaxies into a light-cone simulation realization, simulate the lensing effect on each galaxy, and then identify lensing halos that are considered to host galaxies or clusters of interest. We use the mock catalog to study the error covariance matrix of galaxy-galaxy weak lensing and find that the super-sample covariance (SSC), which arises from density fluctuations with length scales comparable to or greater than the size of the survey area, is the dominant source of sample variance. We then compare the full covariance with the jackknife (JK) covariance, a method that estimates the covariance from resamples of the data itself. We show that, although the JK method gives an unbiased estimator of the covariance in the shot noise or Gaussian regime, it always over-estimates the true covariance in the sample variance regime, because the JK covariance turns out to be a...
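
The delete-one jackknife covariance the authors compare against can be sketched as a generic estimator over sky-region resamples (array shapes below are illustrative):

```python
import numpy as np

def jackknife_covariance(signals):
    """Delete-one jackknife covariance for a data vector measured in
    n_regions sky patches. signals: shape (n_regions, n_bins)."""
    signals = np.asarray(signals, float)
    n = signals.shape[0]
    total = signals.sum(axis=0)
    # i-th resample: mean signal with region i left out
    resamples = (total[None, :] - signals) / (n - 1)
    diff = resamples - resamples.mean(axis=0)
    # JK prefactor (n-1)/n accounts for the strong correlation
    # between delete-one resamples
    return (n - 1) / n * diff.T @ diff
```

Because each resample reuses almost all of the data, the estimator only probes fluctuations internal to the survey footprint, which is why it cannot capture the super-sample term discussed above.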

  8. A Prototype-Based Gate-Level Cycle-Accurate Methodology for SoC Performance Exploration and Estimation

    Ching-Lung Su

    2013-01-01

    A prototype-based SoC performance estimation methodology is proposed for consumer electronics design. Traditionally, prototypes are used for system verification before SoC tapeout, without accurate SoC performance exploration or estimation. This paper carefully models the SoC prototype as a performance estimator and explores the SoC performance environment. The prototype meets the gate-level cycle-accurate requirement, covering the effects of the embedded processor, on-chip bus structure, IP design, embedded OS, GUI system, and application programs. The prototype configuration, chip post-layout simulation results, and the measured parameters of SoC prototypes were merged to model a target SoC design. The system performance was examined according to the proposed estimation models, the profiling results of the application programs ported on the prototypes, and the timing parameters from the post-layout simulation of the target SoC. The experimental results showed an average error of only 2.08% for an MPEG-4 decoder SoC at simple profile level 2 specifications.

  9. What is legally defensible and scientifically most accurate approach for estimating the intake of radionuclides in workers?

    Merits and demerits of each technique utilized for determining the intakes of radioactive materials in workers are described, with particular emphasis on the intake of thorium, uranium, and plutonium. Air monitoring at workplaces has certain flaws, which may give erroneous estimates of intake of the radionuclides. Bioassay techniques involve radiochemical determination of radionuclides in biological samples such as urine and feces, employing biokinetic models to estimate the intake from such measurements. Though highly sensitive and accurate procedures are available for the determination of these radionuclides, the biokinetic models employed produce large errors in the estimate. In vivo measurements have the fundamental problem of poor sensitivity. Also, owing to the non-availability of such facilities at most nuclear sites, transporting workers to different facilities may cost considerable financial resources. It seems difficult to defend in a court of law that determination of intake of radioactive material in workers by any individual procedure is accurate; at best, these techniques may be employed to obtain only an estimate of intake. (author)

  10. Higher Accurate Estimation of Axial and Bending Stiffnesses of Plates Clamped by Bolts

    Naruse, Tomohiro; Shibutani, Yoji

    Equivalent stiffness of clamped plates should be prescribed not only to evaluate the strength of bolted joints by the scheme of the “joint diagram” but also to enable structural analyses of practical structures with many bolted joints. We estimated the axial and bending stiffnesses of clamped plates using Finite Element (FE) analyses, taking into account the contact conditions on the bearing surfaces and between the plates. The FE models were constructed for bolted joints tightened with M8, 10, 12 and 16 bolts and plate thicknesses of 3.2, 4.5, 6.0 and 9.0 mm, and the axial and bending compliances were precisely evaluated. These compliances of clamped plates were compared with those from the VDI 2230 (2003) code, in which an equivalent conical compressive stress field in the plate is assumed. The code gives an axial stiffness 11% larger and a bending stiffness 22% larger, and it cannot be applied to clamped plates of different thicknesses; it thus gives a lower bolt stress (an unsafe estimate). We modified the vertical angle tangent, tanφ, of the equivalent cone by adding a term in the logarithm of the thickness ratio t1/t2 and fitting to the analysis results. The modified tanφ estimates the axial compliance with an error from -1.5% to 6.8% and the bending compliance with an error from -6.5% to 10%. Furthermore, the modified tanφ can take the thickness difference into consideration.

  11. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

    Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of the previous two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
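
The triangulation step described above (epipolar correspondences to a 3D needle position) is commonly implemented by linear (DLT) triangulation; a sketch, assuming known 3x4 projection matrices for two views (the toy cameras below are hypothetical, not the paper's C-arm geometry):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from its projections
    x1, x2 (image coords) in two views with 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity intrinsics, second camera shifted 1 unit in x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0, 1.0])
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
recovered = triangulate(P1, P2, x1, x2)  # recovers (0.5, 0.2, 4.0)
```

With exact correspondences the recovery is exact; in practice the needle-tip correspondences carry detection noise, which is what drives the millimeter-scale errors reported above.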

  12. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited by several error sources, the major one being the ionosphere. By augmenting GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits positional accuracy is instrumental biases. Calibration of these biases is particularly important for achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station for a one-month period, and the results confirm its validity. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias errors are of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31, respectively. The results are found to be consistent over the period.
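
The least-squares estimation described above can be sketched with synthetic data: a polynomial ionosphere term plus per-satellite instrumental biases, solved in one linear system (all numbers below are hypothetical, and the time axis is normalized for conditioning):

```python
import numpy as np

# Synthetic single-station sketch: each slant TEC observation is a
# 4th-order polynomial in (normalized) local time, modelling the
# ionosphere, plus a per-satellite bias (satellite + receiver combined).
rng = np.random.default_rng(42)
t = rng.uniform(0.0, 1.0, 300)          # local time, normalized to [0, 1]
sat = rng.integers(0, 3, 300)           # which of 3 satellites was tracked
true_poly = np.array([5.0, 2.0, -1.0, 0.5, -0.2])
true_bias = np.array([1.5, -0.7, 0.9])  # TECU-scale biases
tec = sum(c * t**k for k, c in enumerate(true_poly)) + true_bias[sat]

# Design matrix: 5 polynomial coefficients + 3 bias columns. The constant
# polynomial term and the biases are not separable (a datum ambiguity),
# so only bias *differences* are estimable; lstsq returns the
# minimum-norm solution, and a zero-mean constraint on the satellite
# biases would fix the datum in practice.
A = np.column_stack([t**k for k in range(5)] +
                    [(sat == s).astype(float) for s in range(3)])
coeffs, *_ = np.linalg.lstsq(A, tec, rcond=None)
est_bias = coeffs[5:]
```

The real algorithm fits latitude/longitude-dependent polynomials over a month of data, but the structure of the normal equations is the same.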

  13. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 tridimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  14. ProViDE: A software tool for accurate estimation of viral diversity in metagenomic samples

    Ghosh, Tarini Shankar; Mohammed, Monzoorul Haque; Komanduri, Dinakar; Mande, Sharmila Shekhar

    2011-01-01

    Given the absence of universal marker genes in the viral kingdom, researchers typically use BLAST (with stringent E-values) for taxonomic classification of viral metagenomic sequences. Since the majority of metagenomic sequences originate from hitherto unknown viral groups, using stringent E-values results in most sequences remaining unclassified. Furthermore, using less stringent E-values results in a high number of incorrect taxonomic assignments. The SOrt-ITEMS algorithm provides an approach to address the above issues. Based on alignment parameters, SOrt-ITEMS follows an elaborate work-flow for assigning reads originating from hitherto unknown archaeal/bacterial genomes. In SOrt-ITEMS, alignment parameter thresholds were generated by observing patterns of sequence divergence within and across various taxonomic groups belonging to the bacterial and archaeal kingdoms. However, many taxonomic groups within the viral kingdom lack a typical Linnean-like taxonomic hierarchy. In this paper, we present ProViDE (Program for Viral Diversity Estimation), an algorithm that uses a customized set of alignment parameter thresholds, specifically suited for viral metagenomic sequences. These thresholds capture the pattern of sequence divergence and the non-uniform taxonomic hierarchy observed within/across various taxonomic groups of the viral kingdom. Validation results indicate that the percentage of ‘correct’ assignments by ProViDE is around 1.7 to 3 times higher than that by the widely used similarity-based method MEGAN. The misclassification rate of ProViDE is around 3 to 19% (as compared to 5 to 42% by MEGAN), indicating significantly better assignment accuracy. The ProViDE software and a supplementary file (containing supplementary figures and tables referred to in this article) are available for download from http://metagenomics.atc.tcs.com/binning/ProViDE/ PMID:21544173

  15. Evaluation of the goodness of fit of new statistical size distributions with consideration of accurate income inequality estimation

    Masato Okamoto

    2012-01-01

    This paper compares the goodness-of-fit of two new types of parametric income distribution models (PIDMs), the kappa-generalized (kG) and double-Pareto lognormal (dPLN) distributions, with that of beta-type PIDMs using US and Italian data for the 2000s. The three-parameter kG model tends to estimate the Lorenz curve and income inequality indices more accurately when its likelihood value is similar to that of the beta-type PIDMs. For the first half of the 2000s in the USA, the kG outperforms the...

  16. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402 [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29 [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646 [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567 [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules

  17. Toward an Accurate and Inexpensive Estimation of CCSD(T)/CBS Binding Energies of Large Water Clusters.

    Sahu, Nityananda; Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R

    2016-07-21

    Owing to the steep scaling behavior, highly accurate CCSD(T) calculations, the contemporary gold standard of quantum chemistry, are prohibitively difficult for moderate- and large-sized water clusters even with high-end hardware. The molecular tailoring approach (MTA), a fragmentation-based technique, is found to be useful for enabling such high-level ab initio calculations. The present work reports the CCSD(T)-level binding energies of many low-lying isomers of large (H2O)n (n = 16, 17, and 25) clusters employing aug-cc-pVDZ and aug-cc-pVTZ basis sets within the MTA framework. Accurate estimation of the CCSD(T)-level binding energies [within 0.3 kcal/mol of the respective full calculation (FC) results] is achieved after applying the grafting procedure, a protocol for minimizing the errors in the MTA-derived energies arising from the approximate nature of MTA. The CCSD(T)-level grafting procedure presented here hinges upon the well-known fact that the MP2 method, which scales as O(N^5), can be a suitable starting point for approximating the highly accurate CCSD(T) energies, which scale as O(N^7). On account of requiring only an MP2-level FC on the entire cluster, the current methodology ultimately leads to a cost-effective solution for CCSD(T)-level accurate binding energies of large water clusters, even at the complete basis set limit, utilizing off-the-shelf hardware. PMID:27351269
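
The grafting idea, assuming the MTA fragmentation error measured at the cheap MP2 level carries over to CCSD(T), amounts to a one-line energy correction (a sketch of the standard MP2-based correction, not the authors' exact protocol; the energies below are invented):

```python
def grafted_ccsdt(e_mta_ccsdt, e_mta_mp2, e_fc_mp2):
    """Grafted CCSD(T) energy: correct the fragment-based (MTA) CCSD(T)
    energy with the MTA error measured at the cheaper MP2 level, for
    which a full calculation (FC) on the whole cluster is affordable."""
    return e_mta_ccsdt + (e_fc_mp2 - e_mta_mp2)

# Hypothetical energies in hartree: the MTA error at MP2
# (-99.2 - (-99.0) = -0.2) is grafted onto the MTA CCSD(T) energy.
energy = grafted_ccsdt(e_mta_ccsdt=-100.0, e_mta_mp2=-99.0, e_fc_mp2=-99.2)
```

The appeal is in the cost profile: the only full-system calculation needed is the O(N^5) MP2 one, while the O(N^7) CCSD(T) work is confined to fragments.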

  18. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    Abel B Minyoo

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.

  19. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
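
Both non-gold-standard coverage estimators reduce to simple proportions; a minimal sketch (function names are illustrative, and real surveys add sampling-design corrections):

```python
def transect_coverage(marked_seen, total_seen):
    """Mark-re-sight (transect) estimate: fraction of dogs sighted along
    transects that carry the vaccination mark (e.g. a collar)."""
    return marked_seen / total_seen

def questionnaire_coverage(vaccinated, owned):
    """Household-questionnaire estimate: vaccinated dogs (puppies
    included) over all owned dogs reported by surveyed households."""
    return vaccinated / owned
```

Excluding puppies shrinks the denominator of the questionnaire estimate and removes unmarked dogs from the transect counts, which is why both methods are only accurate when puppies are included.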

  20. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  1. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites (business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
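
Hydrocarbon pore thickness is the thickness-weighted product of porosity and hydrocarbon saturation summed over net-pay beds; a sketch of that petrophysical bookkeeping (the bed values below are invented, not North Brae data):

```python
def hydrocarbon_pore_thickness(beds):
    """HPT summed over net-pay beds. Each bed is a tuple of
    (thickness_ft, porosity, water_saturation), so each contributes
    h * phi * (1 - Sw): the standard petrophysical product."""
    return sum(h * phi * (1.0 - sw) for h, phi, sw in beds)

# Toy interval: one thick bed plus two thin-bedded turbidites; the thin
# beds only contribute if the net-sand cutoff counts them as pay.
beds = [(30.0, 0.22, 0.30), (1.5, 0.18, 0.45), (0.9, 0.15, 0.50)]
hpt = hydrocarbon_pore_thickness(beds)
```

Dropping thin beds from the net-sand count removes their terms from the sum, which is exactly the under-estimation of OOIP the abstract warns about.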

  2. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    Kostelich Eric J

    2011-12-01

    Abstract Background: Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results: We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions: The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers: This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck).
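
The ensemble update at the heart of such filters can be sketched with a stochastic ensemble Kalman update, a simplified stand-in for the LETKF actually used in the paper (dimensions and noise levels below are illustrative):

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err, rng):
    """One stochastic ensemble Kalman update: blend each forecast member
    with a perturbed observation, using covariances estimated from the
    ensemble itself. ensemble: (n_state, n_ens); H: linear obs operator."""
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies
    R = obs_err**2 * np.eye(len(obs))
    # Kalman gain K = P H^T (H P H^T + R)^-1 with P = X X^T / (n_ens - 1)
    K = (X @ HA.T) @ np.linalg.inv(HA @ HA.T + (n_ens - 1) * R)
    perturbed = obs[:, None] + obs_err * rng.standard_normal((len(obs), n_ens))
    return ensemble + K @ (perturbed - HX)
```

In the tumor-forecasting setting the state vector would hold the discretized cell-density field and the observations would come from segmented MR images; the update equation itself is model-agnostic.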

  3. An approach to estimating and extrapolating model error based on inverse problem methods: towards accurate numerical weather prediction

    Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in this accurate model, so NWP can be treated as an inverse problem that seeks to uncover the unknown error term. Inverse problem models can absorb long periods of observed data to generate model error correction procedures, thus addressing a deficiency of NWP schemes that employ only initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and spatially varying model errors in both the historical and forecast periods, using recent observations and analogous atmospheric phenomena. A numerical experiment on Burgers' equation illustrates the substantial forecast improvement obtained with the inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high-accuracy applications of NWP. (geophysics, astronomy, and astrophysics)

  4. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    M. Montes-Hugo

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian east coast. In this study, the accuracy of two atmospheric correction techniques (the NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS, the Sea-viewing Wide Field-of-View Sensor, EC; Lee's quasi-analytical algorithm, QAA; and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated against SeaWiFS images and shipboard measurements obtained during May 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations measured by high-performance liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) produced up to a 43.4% increase in prediction error, as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary, representing an additive error of up to 24.3%. Using the SeaDAS (SeaWiFS Data Analysis System) default value for the optical cross section of phytoplankton (i.e., aph*(443) = aph(443)/chl = 0.056 m2 mg−1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals with respect to in situ determinations, increased up to 29%.
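
    The chlorophyll step quoted at the end of the abstract is a simple division by the default optical cross section. A minimal sketch, using only the 0.056 m2 mg−1 value stated above (the input aph(443) value below is made up):

```python
# chl is obtained by dividing the retrieved phytoplankton absorption
# coefficient at 443 nm by the SeaDAS default optical cross section
# aph*(443) = 0.056 m^2 (mg chl)^-1, as stated in the abstract.

APH_STAR_443 = 0.056  # m^2 (mg chl)^-1

def chl_from_aph(aph_443):
    """Chlorophyll concentration (mg m^-3) from aph(443) (m^-1)."""
    return aph_443 / APH_STAR_443

print(round(chl_from_aph(0.028), 2))  # → 0.5
```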

  5. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Tweya Hannock

    2012-07-01

    Background: Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records, as compared to estimates of retention based on standardized paper-based or electronic cohort reports. Methods: Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter, with the intent of estimating the number of patients retained on ART. Information on the time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results: Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART when comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records.

  6. The absolute lymphocyte count accurately estimates CD4 counts in HIV-infected adults with virologic suppression and immune reconstitution

    Barnaby Young

    2014-11-01

    Introduction: The clinical value of monitoring CD4 counts in immune-reconstituted, virologically suppressed HIV-infected patients is limited. We investigated whether absolute lymphocyte counts (ALC) from an automated blood counting machine could accurately estimate CD4 counts. Materials and Methods: CD4 counts, ALC and HIV viral load (VL) were extracted from an electronic laboratory database for all patients in HIV care at the Communicable Diseases Centre, Tan Tock Seng Hospital, Singapore (2008–13). Virologic suppression was defined as consecutive undetectable HIV VLs, and immune reconstitution as CD4 counts >300 cells/mm3. CD4 counts were estimated using the CD4% from the first value >300 and an ALC 181–540 days later. Results: A total of 1215 periods of virologic suppression were identified from 1183 patients, with 2227 paired CD4-ALC values available for analysis. 98.3% of CD4 estimates were within 50% of the actual value, 83.3% within 25% and 40.5% within 10%. The error pattern was approximately symmetrically distributed around a mean of −6.5%, but significantly peaked and with mild positive skew (kurtosis 4.45, skewness 1.07). Causes of these errors were explored. Variability between lymphocyte counts measured by ALC and flow cytometry did not follow an apparent pattern, and contributed 32% of the total error (median absolute error 5.5%, IQR 2.6–9.3). The CD4% estimate was significantly lower than the actual value (t-test, p<0.0001). The magnitude of this difference was greater for lower values; above 25%, there was no significant difference. Precision of the CD4 estimate was similar as baseline CD4% increased, but accuracy improved significantly: from a median 16% underestimation to 0% as baseline CD4% increased from 12 to 30. Above a baseline CD4% of 25, estimates of CD4 were within 25% of the actual value 90.2% of the time, with a median 2% underestimation. A robust (bisquare) linear regression model was developed to correct for the rise in CD4% with time when baseline CD4% was 14–24.
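
    The estimation rule described in this abstract amounts to applying a previously measured CD4 percentage to a later absolute lymphocyte count. A minimal sketch (the patient values below are hypothetical, not from the study):

```python
# The abstract's rule: estimated CD4 count = baseline CD4% x later ALC.
# The CD4% is assumed to stay roughly stable during virologic suppression.

def estimate_cd4(cd4_percent, alc):
    """Estimated CD4 count (cells/mm^3) from CD4% and ALC (cells/mm^3)."""
    return cd4_percent / 100.0 * alc

def relative_error(estimate, actual):
    """Signed relative error in percent."""
    return (estimate - actual) / actual * 100.0

# Hypothetical patient: CD4% of 25 at baseline, ALC of 2000 six months later.
est = estimate_cd4(25, 2000)
print(est)                       # → 500.0
print(relative_error(est, 520))  # an underestimate of about -3.8%
```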

  7. Accurate and rapid error estimation on global gravitational field from current GRACE and future GRACE Follow-On missions

    Firstly, the new combined error model of cumulative geoid height influenced by four error sources, including the inter-satellite range-rate of an interferometric laser (K-band) ranging system, the orbital position and velocity of a global positioning system (GPS) receiver and non-conservative force of an accelerometer, is established from the perspectives of the power spectrum principle in physics using the semi-analytical approach. Secondly, the accuracy of the global gravitational field is accurately and rapidly estimated based on the combined error model; the cumulative geoid height error is 1.985 × 10−1 m at degree 120 based on GRACE Level 1B measured observation errors of the year 2007 published by the US Jet Propulsion Laboratory (JPL), and the cumulative geoid height error is 5.825 × 10−2 m at degree 360 using GRACE Follow-On orbital altitude 250 km and inter-satellite range 50 km. The matching relationship of accuracy indexes from GRACE Follow-On key payloads is brought forward, and the dependability of the combined error model is validated. Finally, the feasibility of high-accuracy and high-resolution global gravitational field estimation from GRACE Follow-On is demonstrated based on different satellite orbital altitudes. (geophysics, astronomy and astrophysics)

  8. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    Corina J. Logan

    2015-06-01

    There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder, for male and female great-tailed grackles. We found that, while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex.

  9. A multilevel excess hazard model to estimate net survival on hierarchical data allowing for non-linear and non-proportional effects of covariates.

    Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien

    2016-08-15

    The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26924122

  10. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. (note)
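
    The note's inverse solver is a full unscented Kalman filter over three tissue parameters coupled to a finite-element model. As a pared-down illustration of the sigma-point idea, the sketch below recovers a single hypothetical thermal-conductivity-like parameter from a toy analytic forward model (a stand-in for the finite-element simulation), using a two-point cubature-style measurement update rather than the full UKF.

```python
import math

# Simplified sigma-point inverse solver: recover parameter k such that a
# toy forward model matches a measured temperature. The forward model and
# all numbers are hypothetical stand-ins for the finite-element simulation.

def forward_model(k):
    """Assumed monotone map from tissue parameter to peak temperature (C)."""
    return 40.0 + 30.0 * math.sqrt(k)

def sigma_point_update(m, P, z, R):
    """One measurement update using two symmetric sigma points m +/- sqrt(P)."""
    s = math.sqrt(P)
    pts = [m - s, m + s]
    zs = [forward_model(p) for p in pts]
    z_mean = sum(zs) / 2.0
    Pzz = sum((zi - z_mean) ** 2 for zi in zs) / 2.0 + R   # innovation variance
    Pxz = sum((p - m) * (zi - z_mean) for p, zi in zip(pts, zs)) / 2.0
    K = Pxz / Pzz                                          # Kalman gain
    return m + K * (z - z_mean), P - K * Pxz

k_true = 0.52                      # "ground truth" tissue parameter
z_obs = forward_model(k_true)      # measured temperature (noise-free here)

m, P = 0.3, 0.04                   # prior guess and variance
for _ in range(10):                # iterate the update to convergence
    m, P = sigma_point_update(m, P, z_obs, R=0.01)
    P += 1e-4                      # small inflation keeps the filter responsive

print(round(m, 2))                 # → 0.52
```

The same recursion generalizes to the three-parameter vector case the note describes, at the cost of matrix square roots and cross-covariances.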

  11. A method for estimating peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area

    Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.

    2011-01-01

    Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph, based on the watershed characteristics of drainage area and basin-development factor (BDF), were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. Overall arithmetic means of the volumetric runoff coefficient (0.41, dimensionless) and runoff coefficient (0.25, dimensionless) for the 24 watersheds were used to express the rational method.
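
    The gamma unit hydrograph named in the abstract has a standard dimensionless form, q(t)/qp = [(t/Tp)·exp(1 − t/Tp)]^K, which peaks at exactly qp when t = Tp. The sketch below assumes that form (the report's exact parameterization is not reproduced above), with illustrative parameter values.

```python
import math

# Gamma unit hydrograph, dimensionless form assumed here:
#   q(t)/qp = [(t/Tp) * exp(1 - t/Tp)]^K
# qp = peak streamflow, Tp = time to peak, K = shape parameter.

def gamma_uh(t, qp, Tp, K):
    """Gamma unit hydrograph ordinate at time t (same time units as Tp)."""
    if t <= 0.0:
        return 0.0
    return qp * ((t / Tp) * math.exp(1.0 - t / Tp)) ** K

qp, Tp, K = 120.0, 1.5, 3.0   # hypothetical peak (cfs), time to peak (hr), shape

peak = gamma_uh(Tp, qp, Tp, K)
print(peak)                                # → 120.0 (maximum occurs at t = Tp)
print(round(gamma_uh(3.0, qp, Tp, K), 1))  # → 47.8 (receding limb ordinate)
```

Larger K sharpens the hydrograph around Tp; K = 1 gives the broadest shape this family allows.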

  12. Excessive Daytime Sleepiness

    Yavuz Selvi

    2016-06-01

    Excessive daytime sleepiness is one of the most common sleep-related patient symptoms, with prevalence in the community estimated to be as high as 18%. Patients with excessive daytime sleepiness may experience life-threatening road and work accidents, social maladjustment, and decreased academic and occupational performance, and have poorer health than comparable adults. Thus, excessive daytime sleepiness is a serious condition that requires investigation, diagnosis and treatment. As with most medical conditions, evaluation of excessive daytime sleepiness begins with a precise history, and various objective and subjective tools have been developed to assess it. The most common causes of excessive daytime sleepiness are insufficient sleep hygiene, chronic sleep deprivation, medical and psychiatric conditions, sleep disorders such as obstructive sleep apnea, medications, and narcolepsy. Treatment should address underlying contributors and promote adequate sleep quantity by ensuring good sleep hygiene. [Psikiyatride Guncel Yaklasimlar - Current Approaches in Psychiatry 2016; 8(2): 114-132]

  13. Estimation of excess mortality due to long-term exposure to PM2.5 in Japan using a high-resolution model for present and future scenarios

    Goto, Daisuke; Ueda, Kayo; Ng, Chris Fook Sheng; Takami, Akinori; Ariga, Toshinori; Matsuhashi, Keisuke; Nakajima, Teruyuki

    2016-09-01

    Particulate matter with a diameter of less than 2.5 μm, known as PM2.5, can affect human health, especially in elderly people. Because of the imminent aging of society in most developed countries, the human health impacts of PM2.5 must be evaluated. In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan with a high-resolution stretched grid system (∼10 km for the high-resolution model, HRM) for the present (2000) and the future (2030, as proposed by Representative Concentration Pathway 4.5, RCP4.5). We also used the same model with a low-resolution uniform grid system (∼100 km for the low-resolution model, LRM). These calculations were conducted by nudging meteorological fields obtained from an atmosphere-ocean coupled model and providing the emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 among the elderly (over 65 years old), based on different minimum PM2.5 concentration (MINPM) levels to account for uncertainty, using the simulated PM2.5 distributions in a concentration-response function. As a result, we estimated the excess mortality for all of Japan to be 31,300 (95% confidence interval: 20,700 to 42,600) people in 2000 and 28,600 (95% confidence interval: 19,000 to 38,700) people in 2030 using the HRM with a MINPM of 5.8 μg/m3. In contrast, the LRM produced underestimates of approximately 30% (PM2.5 concentrations in 2000 and 2030), approximately 60% (excess mortality in 2000) and approximately 90% (excess mortality in 2030) compared to the HRM results. We also found that uncertainty in the MINPM value, especially for low PM2.5 concentrations in 2030, can cause large variability in the estimates, ranging from 0 (MINPM of 15 μg/m3 in both HRM and LRM) to 95,000 (MINPM of 0 μg/m3 in HRM) people.
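
    Excess-mortality estimates of this kind typically combine a baseline mortality rate, an exposed population, and a concentration-response function applied above the MINPM threshold. The sketch below assumes the common log-linear relative-risk form; the paper's exact function and coefficients are not given above, so beta, the baseline rate, and the population are hypothetical.

```python
import math

# Generic attributable-mortality calculation (assumed log-linear
# concentration-response; all numeric inputs below are illustrative):
#   RR = exp(beta * (C - MINPM)) for C > MINPM, else RR = 1
#   excess deaths = baseline_rate * population * (RR - 1) / RR

def excess_deaths(pm25, min_pm, beta, baseline_rate, population):
    """Excess deaths attributable to PM2.5 exposure above MINPM."""
    if pm25 <= min_pm:
        return 0.0
    rr = math.exp(beta * (pm25 - min_pm))        # relative risk
    attributable_fraction = (rr - 1.0) / rr
    return baseline_rate * population * attributable_fraction

# Hypothetical elderly cohort exposed at 15 ug/m3 with MINPM = 5.8 ug/m3:
print(round(excess_deaths(15.0, 5.8, beta=0.01,
                          baseline_rate=0.05, population=1_000_000)))
```

The abstract's sensitivity to MINPM follows directly from this structure: raising MINPM zeroes out the contribution of every grid cell whose concentration falls below it.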

  14. Methodological extensions of meta-analysis with excess relative risk estimates. Application to risk of second malignant neoplasms among childhood cancer survivors treated with radiotherapy

    Although radiotherapy is recognized as an established risk factor for second malignant neoplasms (SMNs), the dose response of SMNs following radiotherapy has not been well characterized. In our previous meta-analysis of the risks of SMNs occurring among children who have received radiotherapy, the small number of eligible studies precluded a detailed evaluation. Therefore, to increase the number of eligible studies, we developed a method of calculating excess relative risk (ERR) per Gy estimates from studies for which the relative risk estimates for several dose categories were available. Comparing the calculated ERR with that described in several original papers validated the proposed method. This enabled us to increase the number of studies, which we used to conduct a meta-analysis. The overall ERR per Gy estimate of radiotherapy over 26 relevant studies was 0.60 (95% CI: 0.30-1.20), which is smaller than the corresponding estimate for atomic bomb survivors exposed to radiation as young children (1.7; 95% CI: 1.1-2.5). A significant decrease in ERR per Gy with increase in age at exposure (0.85 times per annual increase) was observed in the meta-regression. Heterogeneity was suggested by Cochran's Q statistic (P < 0.001), which may be partly accounted for by age at exposure. (author)
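
    The conversion the authors describe, from relative risks reported by dose category to a single ERR per Gy, can be sketched as fitting the linear excess-relative-risk model RR(d) = 1 + β·d through the category means by inverse-variance-weighted least squares. The dose categories and RRs below are made up for illustration; they are not the study's data.

```python
# Weighted least-squares slope of RR = 1 + beta * d through the origin of
# the excess scale (RR - 1), i.e. the excess relative risk per Gy.

def err_per_gy(doses, rrs, weights):
    """Slope beta of RR = 1 + beta*d from category-level relative risks."""
    num = sum(w * d * (rr - 1.0) for d, rr, w in zip(doses, rrs, weights))
    den = sum(w * d * d for d, w in zip(doses, weights))
    return num / den

doses = [1.0, 5.0, 10.0, 20.0]       # mean dose per category (Gy), hypothetical
rrs = [1.5, 4.2, 7.5, 12.0]          # reported relative risks, hypothetical
weights = [1.0, 1.0, 1.0, 1.0]       # inverse variances (equal here)

print(round(err_per_gy(doses, rrs, weights), 3))  # → 0.573
```

In practice the weights come from the confidence intervals of the published RRs, which is what lets ERR estimates from heterogeneous studies be pooled in a meta-analysis.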

  15. Estimating Accurate Relative Spacecraft Angular Position from DSN VLBI Phases Using X-Band Telemetry or DOR Tones

    Bagri, Durgadas S.; Majid, Walid

    2009-01-01

    At present, spacecraft angular position with the Deep Space Network (DSN) is determined using group delay estimates from very long baseline interferometer (VLBI) phase measurements employing differential one-way ranging (DOR) tones. As an alternative to this approach, we propose estimating the position of a spacecraft to half a fringe cycle accuracy using time variations between measured and calculated phases, as the Earth rotates, on DSN VLBI baseline(s). Combining the fringe location of the target with the phase allows a high-accuracy spacecraft angular position estimate. This can be achieved using telemetry signals of at least 4-8 MSamples/sec data rate or DOR tones.
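
    The geometry behind fringe-cycle accuracy can be illustrated with a back-of-the-envelope calculation: an interferometer with baseline B measuring a differential delay τ sees an angular offset of roughly θ = cτ/B (small-angle approximation, baseline perpendicular to the line of sight), and one fringe cycle corresponds to an angular spacing of λ/B. The baseline length and carrier frequency below are illustrative values, not figures from the paper.

```python
# Small-angle interferometry relations (illustrative numbers only):
#   theta ~= c * tau / B        angular offset from differential delay
#   fringe spacing = lambda / B one fringe cycle in angle

C = 299_792_458.0  # speed of light, m/s

def angle_from_delay(tau_s, baseline_m):
    """Angular offset (rad) from differential delay (small-angle approx.)."""
    return C * tau_s / baseline_m

def fringe_spacing(freq_hz, baseline_m):
    """Angular size of one fringe cycle (rad): wavelength / baseline."""
    return (C / freq_hz) / baseline_m

B = 8_000_000.0        # ~8000 km intercontinental baseline (illustrative)
f = 8.4e9              # X-band carrier frequency, Hz (illustrative)

theta = angle_from_delay(1e-9, B)        # one nanosecond of delay
print(f"{theta * 1e9:.1f} nrad")         # tens of nanoradians

half_fringe = 0.5 * fringe_spacing(f, B)
print(f"{half_fringe * 1e9:.2f} nrad")   # the half-fringe-cycle scale
```

The two printed scales show why resolving the fringe ambiguity and then using the phase itself buys orders of magnitude over group delay alone.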

  17. Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system.

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, its accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of a pedestrian and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
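
    The paper fuses gyroscope, accelerometer and magnetometer data in a quaternion UKF. As a much simpler illustration of what the accelerometer and magnetometer each contribute, the sketch below computes a tilt-compensated magnetic heading from a single accelerometer/magnetometer sample pair (no gyroscope, no filtering, and one common sign convention among several; the sample values are made up).

```python
import math

# Tilt-compensated compass heading: use the accelerometer to estimate roll
# and pitch, rotate the magnetic vector into the horizontal plane, then take
# the heading from its horizontal components. Axis/sign conventions vary
# between devices; the ones below are one common choice.

def tilt_compensated_heading(acc, mag):
    """Heading in degrees (0 = magnetic north) from body-frame acc/mag."""
    ax, ay, az = acc
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    mx, my, mz = mag
    # Rotate the magnetic vector into the horizontal plane.
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(-yh, xh)) % 360.0

# Level sensor pointing magnetic north: gravity on z, field in +x and -z.
print(tilt_compensated_heading((0.0, 0.0, 9.81), (0.3, 0.0, -0.4)))  # → 0.0
```

The UKF in the paper goes further by blending this magnetic heading with integrated gyroscope rates, so that short magnetic disturbances do not corrupt the estimate.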

  18. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship holds even as the number of sites approaches infinity, and places a limit on the maximum precision of node ages. However, how the placement of calibration information affects the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of tree symmetry. An empirical mammalian dataset produced results consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  19. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework of simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information is given. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable by using only one scan HRR and GMTI measurements, we designed an architecture to run the motion and structure filters in parallel by using multi-scan measurements. Moreover, to improve the estimation accuracy in large noise and/or false alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors have been significantly reduced by using the template information.

  20. HIV Excess Cancers JNCI

    In 2010, an estimated 7,760 new cancers were diagnosed among the nearly 900,000 Americans known to be living with HIV infection. According to the first comprehensive study in the United States, approximately half of these cancers were in excess of what would be expected.

  1. Accurate estimate of the critical exponent nu for self-avoiding walks via a fast implementation of the pivot algorithm.

    Clisby, Nathan

    2010-02-01

    We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33 × 10^6 steps. Consequently, the critical exponent ν for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is ν = 0.587597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773
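
    The paper's contribution is a fast implementation; the elementary pivot move itself is simple to state: pick an interior site of the walk, apply a random lattice symmetry to everything beyond it, and accept the result only if it remains self-avoiding. The naive sketch below does this on the two-dimensional square lattice (the paper works in 3D with a far more efficient data structure).

```python
import random

# Naive pivot algorithm for self-avoiding walks on the square lattice.
# Each move rotates the tail of the walk about a randomly chosen pivot
# site and rejects the proposal if any two sites collide.

ROTATIONS = [lambda x, y: (-y, x),    # 90 degrees
             lambda x, y: (-x, -y),   # 180 degrees
             lambda x, y: (y, -x)]    # 270 degrees

def pivot_once(walk):
    """Attempt one pivot move; return the new walk, or the old one if rejected."""
    n = len(walk)
    p = random.randrange(1, n - 1)    # pivot site (never an endpoint)
    rot = random.choice(ROTATIONS)
    px, py = walk[p]
    tail = []
    for x, y in walk[p + 1:]:
        rx, ry = rot(x - px, y - py)  # rotate tail about the pivot site
        tail.append((px + rx, py + ry))
    new_walk = walk[:p + 1] + tail
    if len(set(new_walk)) == n:       # self-avoidance check
        return new_walk
    return walk

random.seed(1)
walk = [(i, 0) for i in range(20)]    # start from a straight rod
for _ in range(200):
    walk = pivot_once(walk)

print(len(walk), walk[0])             # 20 sites, still anchored at (0, 0)
```

Because every accepted move is a global rearrangement, the chain decorrelates in far fewer moves than local-update algorithms, which is what makes high-precision estimates of ν feasible.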

  2. A relative risk estimation of excessive frequency of malignant tumors in population due to discharges into the atmosphere from fossil-fuel power plants and nuclear power plants

    Exposure of the population (doses to lungs, bone and whole body) due to fossil-fuel power plants (FFPP) is estimated by the example of a large modern coal-fired FFPP, taking into account the contents of 226Ra, 228Ra, 210Pb, 210Po, 40K and 232Th in the fly ash, as well as radon discharges. The doses produced by these radionuclides for individuals living within 18 km of the FFPP, together with the mean collective doses over the agricultural territory of the country, are given. These values are compared with literature data on doses due to atmospheric discharges of inert radioactive gases, 60Co, 137Cs, 90Sr and 131I from nuclear power plants (NPP). It is revealed that the total exposure risk for the nearby population due to fly ash from a coal FFPP is about two orders of magnitude greater than the risk for individuals due to discharges from an NPP under normal operating conditions. The doses produced by discharges from oil-fired FFPP are an order of magnitude lower than those from coal FFPP. The risk of excess cancer frequency due to chemical carcinogens, including some metals, contained in FFPP discharges is discussed. It is noted that a more complete evaluation of the risk from NPP requires data on doses to the population from all cycles of nuclear fuel production and radioactive waste disposal, as well as predicted information on collective doses per power unit of NPP due to accidents.

  3. Estimates of Long and Short Term Soil Erosion Rates on Farmland in Semi-Arid West Morocco Using Caesium-137, Excess Lead-210 and Beryllium-7 Measurements

    The aim of the present work was to investigate both long- and short-term soil erosion and deposition rates on agricultural land in Morocco and to assess the effectiveness of soil conservation techniques by the combined use of environmental radionuclides (137Cs, excess 210Pb and 7Be) as tracers. The study area is an experimental station located in Marchouch, 68 km from Rabat (western Morocco). Experimental plots were installed in the study field to test the no-till practice under cereals as a soil conservation technique, comparing it to the conventional tillage system. Fallout 137Cs and 210Pbex allowed a retrospective assessment of long-term (50 and 100 years, respectively) soil redistribution rates, while fallout 7Be, with its short half-life (53 days), was used to document short-term soil erosion associated with short rainfall events for different tillage systems and land uses. From 137Cs and 210Pbex measurements, the rates of soil redistribution induced by water erosion were quantified using the Mass Balance 2 model. The net soil erosion rates obtained were 14.3 t ha-1 a-1 and 12.1 t ha-1 a-1 for 137Cs and 210Pbex, respectively, resulting in a high sediment delivery ratio of about 92%. Data on soil redistribution generated by the use of both radionuclides are similar, indicating that the soil erosion rate did not change significantly during the last 100 years. In addition, the soil redistribution rates due to tillage were estimated using the Mass Balance 3 model. The overall results obtained from 7Be measurements during the period 2004-2007 suggest that soil loss is reduced by up to 30% when no-till management is practised, compared to conventional tillage or uncultivated soil. (author)

  4. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. Despite many attempts, there is to date no widely accepted and readily available non-invasive technique for measuring blood glucose. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that enables the accurate, calibration-free estimation of the glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to prove that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium
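The spectroscopic estimation described above can be illustrated with a minimal sketch (not the paper's adaptive scheme): assuming Beer-Lambert absorbances that are linear in the component concentrations, the concentrations follow from a least-squares fit across wavelengths. All absorptivity and concentration values below are hypothetical.

```python
# Hypothetical two-component example: absorptivity matrix E (wavelengths x
# components) and path-length-normalized absorbances A, solved for the
# concentration vector c by ordinary least squares (Beer-Lambert: A = E c).

def solve_normal_equations(E, A):
    """Solve min ||E c - A||^2 via the normal equations (E^T E) c = E^T A."""
    m, n = len(E), len(E[0])
    EtE = [[sum(E[k][i] * E[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    EtA = [sum(E[k][i] * A[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination (no pivoting; E^T E is positive definite here)
    for i in range(n):
        for j in range(i + 1, n):
            f = EtE[j][i] / EtE[i][i]
            for k in range(i, n):
                EtE[j][k] -= f * EtE[i][k]
            EtA[j] -= f * EtA[i]
    c = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = (EtA[i] - sum(EtE[i][k] * c[k] for k in range(i + 1, n))) / EtE[i][i]
    return c

E = [[0.8, 0.3], [0.5, 0.6], [0.2, 0.9], [0.9, 0.1]]   # hypothetical absorptivities
true_c = [5.0, 2.0]                                     # hypothetical concentrations
A = [sum(E[k][i] * true_c[i] for i in range(2)) for k in range(4)]
c_hat = solve_normal_equations(E, A)                    # recovers ~[5.0, 2.0]
```

In a real biological medium the absorptivities are not known this cleanly, which is exactly the calibration problem the paper's adaptive scheme is designed to address.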

  5. Towards accurate dose accumulation for step-and-shoot IMRT. Impact of weighting schemes and temporal image resolution on the estimation of dosimetric motion effects

    Purpose: Breathing-induced motion effects on dose distributions in radiotherapy can be analyzed using 4D CT image sequences and registration-based dose accumulation techniques. Often simplifying assumptions are made during accumulation. In this paper, we study the dosimetric impact of two aspects which may be especially critical for IMRT treatment: the weighting scheme for the dose contributions of IMRT segments at different breathing phases, and the temporal resolution of the 4D CT images applied for dose accumulation. Methods: Based on a continuous problem formulation, a patient- and plan-specific scheme for weighting segment dose contributions at different breathing phases is derived for use in step-and-shoot IMRT dose accumulation. Using 4D CT data sets and treatment plans for 5 lung tumor patients, dosimetric motion effects as estimated by the derived scheme are compared to effects resulting from a common equal-weighting approach. Effects of reducing the temporal image resolution are evaluated for the same patients and both weighting schemes. Results: The equal-weighting approach underestimates dosimetric motion effects when considering single treatment fractions. Especially interplay effects (relative misplacement of segments due to respiratory tumor motion) for IMRT segments with only a few monitor units are insufficiently represented (local point differences > 25% of the prescribed dose for larger tumor motion). The effects, however, tend to be averaged out over the entire treatment course. Regarding temporal image resolution, estimated motion effects in terms of measures of the CTV dose coverage are barely affected (in comparison to the full resolution) when using only half of the original resolution and equal weighting. In contrast, occurrence and impact of interplay effects are poorly captured for some cases (large tumor motion, undersized PTV margin) for a resolution of 10/14 phases and the more accurate patient- and plan-specific dose accumulation scheme
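The two weighting schemes compared above can be sketched as follows, assuming the per-phase segment doses have already been warped (via deformable registration) to a common reference phase. The phase doses and monitor-unit weights are illustrative, not patient data.

```python
# Illustrative numbers only: 3 breathing phases, 2 voxels.

def accumulate(phase_doses, weights):
    """Weighted per-voxel sum of the per-phase dose distributions."""
    n_vox = len(phase_doses[0])
    return [sum(w * d[v] for w, d in zip(weights, phase_doses)) for v in range(n_vox)]

phase_doses = [[2.0, 1.0], [1.0, 3.0], [1.0, 2.0]]  # Gy, per phase and voxel

equal_w = [1.0 / 3.0] * 3                 # common equal-weighting approach
mu = [10.0, 60.0, 30.0]                   # hypothetical monitor units per phase
mu_w = [m / sum(mu) for m in mu]          # plan-specific weighting

d_equal = accumulate(phase_doses, equal_w)
d_mu = accumulate(phase_doses, mu_w)      # differs when MU are unevenly spread
```

When the monitor units are spread unevenly over the breathing cycle, as for segments with only a few MU, the two schemes diverge, which is the interplay effect discussed above.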

  6. Attenuation studies of beta particles in glass, PVC, stainless steel for accurate activity estimation of 32P coated to coronary stents

    Restenosis, or closure of the artery due to wound healing, is one of the problems following coronary interventions such as angioplasty. Intravascular irradiation with beta particles has been shown to prevent restenosis to a good extent. In particular, beta-emitting radioisotopes such as 32P and 90Sr are ideal for local irradiation, as 95% of the dose is delivered within 4 mm of the source position in tissue. 32P-coated stents are in use as intravascular brachytherapy sources due to the high dose rate they deliver to the exposed tissues. The high radiation dose rate delivered by such an intravascular radioactive stent over the short span of implantation is an appealing approach to preventing restenosis by non-selectively killing dividing cells. A radiation dose of the order of 15 to 20 Gy needs to be delivered to the tissues by an implanted 32P stent of about 150-222 kBq activity for prevention of restenosis. However, the accuracy of the dose delivered to the implanted tissues depends on how accurately the activity of 32P in the stent is measured. The quantification of activity is done using a teletector prior to dispatch of the stents to the hospital. In the present paper, doses measured with the different materials used for estimating the 32P leached from radioactive stents are given and correlated with direct teletector measurements

  7. High-resolution TanDEM-X DEM: An accurate method to estimate lava flow volumes at Nyamulagira Volcano (D. R. Congo)

    Albino, F.; Smets, B.; d'Oreye, N.; Kervyn, F.

    2015-06-01

    Nyamulagira and Nyiragongo are two of the most active volcanoes in Africa, but their eruptive histories are poorly known. Assessing lava flow volumes in the region remains difficult, as field surveys are often impossible and available Digital Elevation Models (DEMs) do not have adequate spatial or temporal resolutions. We therefore use TerraSAR-X add-on for Digital Elevation Measurement (TanDEM-X) interferometry to produce a series of 0.15 arc sec (˜5 m) DEMs acquired between 2011 and 2012 over these volcanoes. TanDEM-X DEMs have an absolute vertical accuracy of 1.6 m, as determined by comparing elevations with GPS measurements acquired around Nyiragongo. The difference between TanDEM-X-derived DEMs from before and after the 2011-2012 eruption of Nyamulagira provides an accurate thickness map of the lava flow emplaced during that activity. Values range from 3 m along the margins to 35 m in the middle, with a mean of 12.7 m. The erupted volume is 305.2 ± 36.0 × 106 m3. Height errors in the thickness map depend on the land covered by the flow and range from 0.4 m in old lavas to 5.5 m in dense vegetation. We also reevaluate the volume of historical eruptions at Nyamulagira since 2001 from the difference between TanDEM-X and SRTM 1 arc sec DEMs and compare them to previous work. Planimetric methods used in the literature are consistent with our results for short-duration eruptions but largely underestimate the volume of the long-lived 2011-2012 eruption. Our new estimates of erupted volumes suggest that the mean eruption rate and the magma supply rate were relatively constant at Nyamulagira during 2001-2012, respectively, 23.1 m3 s-1 and 0.9 m3 s-1.
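The DEM-differencing volume estimate described above amounts to integrating the thickness map (post- minus pre-eruption DEM) over the pixel area. A minimal sketch, with a toy grid at an assumed ~5 m posting (25 m² pixels) and the quoted 1.6 m height accuracy:

```python
import math

def flow_volume(dh, pixel_area_m2, sigma_h_m):
    """Sum a per-pixel thickness map (m) into a volume, with a 1-sigma error
    assuming independent per-pixel height errors (added in quadrature)."""
    volume = sum(dh) * pixel_area_m2
    error = math.sqrt(len(dh)) * sigma_h_m * pixel_area_m2
    return volume, error

# toy flow: 1000 pixels of 25 m^2 at the reported mean thickness of 12.7 m
dh = [12.7] * 1000
vol, err = flow_volume(dh, 25.0, 1.6)
```

Correlated height errors (e.g. the vegetation-dependent errors noted above) would make the quadrature assumption optimistic, which is why the paper's stated uncertainty is larger in relative terms.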

  8. Excessive Sweating (Hyperhidrosis)

    ... Hyperhidrosis, the medical name for excessive sweating, involves overactive sweat glands, usually of a defined body part ...

  9. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations.

    McMillan, K; Bostani, M; McCollough, C; McNitt-Gray, M

    2015-01-01

    PURPOSE: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. METHODS: For 10 patients who received clinically-indicated chest (n=5) and ab...

  10. On excesses of frames

    Bakić, Damir; Berić, Tomislav

    2014-01-01

    We show that any two frames in a separable Hilbert space that are dual to each other have the same excess. Some new relations for the analysis and synthesis operators of dual frames are also derived. We then prove that pseudo-dual frames and, in particular, approximately dual frames have the same excess. We also discuss various results on frames in which excesses of frames play an important role.

  11. Is accurate and reliable blood loss estimation the 'crucial step' in early detection of postpartum haemorrhage: an integrative review of the literature

    Hancock, Angela; Weeks, Andrew D; Lavender, Dame Tina

    2015-01-01

    Background Postpartum haemorrhage (PPH) is the leading cause of maternal mortality in low-income countries and severe maternal morbidity in many high-income countries. Poor outcomes following PPH are often attributed to delays in the recognition and treatment of PPH. Experts have suggested that improving the accuracy and reliability of blood loss estimation is the crucial step in preventing death and morbidity from PPH. However, there is little guidance on how this can be achieved. The aim of...

  12. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS), values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, consistent with the anticipated better performance of the pp-LFER model. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. PMID:27179237
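The equilibrium times quoted above can be related to one another under a simple assumption (not stated in the abstract) of first-order sampler uptake, m(t)/m_eq = 1 − exp(−kt); the time to any fraction of equilibrium then follows from the time to 25%:

```python
import math

def time_to_fraction(f, t25_days):
    """Days to reach fraction f of equilibrium, given the time to reach 25%,
    assuming first-order uptake m(t)/m_eq = 1 - exp(-k t)."""
    k = -math.log(1.0 - 0.25) / t25_days   # rate constant from the 25% point
    return -math.log(1.0 - f) / k

# alpha-HCH is quoted above as reaching 25% of equilibrium in ~1 day, so:
t95 = time_to_fraction(0.95, t25_days=1.0)   # ~10.4 days under this model
```

This is only a kinetic sketch; the paper's actual time-to-equilibrium calculations derive the uptake rate from the measured KPDMS-Air values.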

  13. Excess wind power

    Østergaard, Poul Alberg

    2005-01-01

    analyses it is analysed how excess productions are better utilised; through conversion into hydrogen or through expansion of export connections, thereby enabling sales. The results demonstrate that particularly hydrogen production is unviable under current costs but transmission expansion could be...

  14. Is the predicted postoperative FEV1 estimated by planar lung perfusion scintigraphy accurate in patients undergoing pulmonary resection? Comparison of two processing methods

    Estimation of postoperative forced expiratory volume in 1 s (FEV1) with radionuclide lung scintigraphy is frequently used to define functional operability in patients undergoing lung resection. We conducted a study to outline the reliability of planar quantitative lung perfusion scintigraphy (QLPS) with two different processing methods for estimating the postoperative lung function of patients with resectable lung disease. Forty-one patients with a mean age of 57±12 years who underwent either a pneumonectomy (n=14) or a lobectomy (n=27) were included in the study. QLPS with Tc-99m macroaggregated albumin was performed. Three equal zones were generated for each lung [zone method (ZM)], and more precise regions of interest were drawn according to their anatomical shape in the anterior and posterior projections [lobe mapping method (LMM)] for each patient. The predicted postoperative (ppo) FEV1 values were compared with actual FEV1 values measured on postoperative day 1 (pod1 FEV1) and day 7 (pod7 FEV1). The mean preoperative FEV1 and ppoFEV1 values were 2.10±0.57 and 1.57±0.44 L, respectively. The mean pod1 FEV1 (1.04±0.30 L) was lower than the ppoFEV1 (p0.05). PpoFEV1 values predicted by both the zone and lobe mapping methods overestimated the actual measured lung volumes in patients undergoing pulmonary resection in the early postoperative period. LMM is not superior to ZM. (author)

  15. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear–quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18–30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8–30.9 Gy) and 22.0 Gy (range, 20.2–26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models traditionally used to estimate spinal cord NTCP
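The LKB calculation named above follows a standard formulation (a sketch, not the authors' exact implementation): reduce the dose-volume histogram to a generalized equivalent uniform dose, then map it through a probit function. The toy DVH below is hypothetical; the parameters are the commonly quoted Burman fits for spinal cord.

```python
import math

def ntcp_lkb(dvh, n, m, td50):
    """LKB NTCP: reduce the DVH [(fractional_volume, dose_Gy), ...] to a
    generalized EUD, then apply the probit (cumulative normal) function."""
    geud = sum(v * d ** (1.0 / n) for v, d in dvh) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# toy cord DVH: 2% of the volume at 20 Gy, the rest at 2 Gy
dvh = [(0.02, 20.0), (0.98, 2.0)]
p = ntcp_lkb(dvh, n=0.05, m=0.175, td50=66.5)   # small NTCP for this DVH
```

The volume parameter n controls how strongly the hot subvolume dominates the gEUD; the paper's maximum-likelihood refit (n = 0.023, α/β = 17.8 Gy) corresponds to an even more serial organ response than the conventional n = 0.05.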

  16. The Characteristics of Excess Kurtosis and Heavy Tails of Stock Market Indices and the Estimation of Risk Measures

    杨昕

    2012-01-01

    The empirical analysis of the four stock market indices DOW, Nasdaq, S&P500 and FTSE100 illustrates that the distributions of the log returns exhibit excess kurtosis and heavy tails, and that the Logistic distribution fits the returns very well. At the same time, estimation formulas for the risk measures VaR and CVaR based on the Logistic distribution are given, and the risk measures of the log returns of the four stock market indices are reported.
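For a Logistic(μ, s) model of returns, risk measures of the kind derived above have closed forms; a sketch with illustrative parameters (not values fitted to the four indices):

```python
import math

def var_cvar_logistic(mu, s, p):
    """Loss VaR and CVaR at lower-tail probability p for Logistic(mu, s) returns.
    Quantile: q_p = mu + s*ln(p/(1-p));
    tail mean: E[X | X <= q_p] = mu + (s/p)*(p*ln p + (1-p)*ln(1-p))."""
    var = -(mu + s * math.log(p / (1.0 - p)))
    cvar = -(mu + (s / p) * (p * math.log(p) + (1.0 - p) * math.log(1.0 - p)))
    return var, cvar

var95, cvar95 = var_cvar_logistic(mu=0.0, s=1.0, p=0.05)   # CVaR exceeds VaR
```

CVaR is always at least as large as VaR because it averages over the whole tail beyond the quantile rather than reading off a single point.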

  17. Reducing Excessive Television Viewing.

    Jason, Leonard A.; Rooney-Rebeck, Patty

    1984-01-01

    A youngster who excessively watched television was placed on a modified token economy: earned tokens were used to activate the television for set periods of time. Positive effects resulted in the child's school work, in the amount of time his family spent together, and in his mother's perception of family social support. (KH)

  18. Accurate determination of antenna directivity

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power...

  19. Accurate estimate of the critical exponent $\nu$ for self-avoiding walks

    Clisby, Nathan

    2010-01-01

    We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to $33 \times 10^6$ steps. Consequently the critical exponent $\nu$

  20. An Accurate Estimate of Take-off Weight of Fighter Aircraft at the Conceptual Design Stage

    杨华保

    2001-01-01

    On the basis of a classification of the tactical and technical requirement indices, a weight-analysis method for analysing tactical and technical requirements is proposed. The calculation of take-off weight is studied: a statistical method is proposed for computing the empty weight, and the fuel weight is computed by dividing the flight profile into segments. Verification against actual aircraft types shows that the method is feasible and the results are satisfactory. I developed software for estimating the take-off weight of fighter aircraft according to tactical and technical requirements at the conceptual design stage. Take-off weight consists of empty weight, fuel weight and payload weight. Estimating take-off weight is an iterative process, as many factors, including empty weight, fuel weight and the tactical and technical requirements, are related to take-off weight. In my software, empty weight is calculated from take-off weight by a formula deduced from statistical data relating empty weight to take-off weight. In estimating fuel weight, I divide the mission profile into several basic segments and calculate the fuel consumption of each basic segment. The software makes the iterative process of accurately estimating the take-off weight of a fighter aircraft quite easy. Data from some existing fighters confirm preliminarily that the software is reliable.
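The iterative sizing loop described above can be sketched as a fixed-point iteration. The empty-weight power law and the fixed mission fuel fraction below are illustrative placeholders, not the statistical fits used in the paper:

```python
def takeoff_weight(w_payload_kg, a=0.6, c=-0.01, fuel_frac=0.30, tol=1e-6):
    """Fixed-point iteration W0 = Wp / (1 - We/W0 - Wf/W0), with an assumed
    statistical empty-weight fraction We/W0 = a * W0**c and a fixed mission
    fuel fraction. Coefficients here are illustrative, not fitted data."""
    w0 = 10_000.0                        # initial guess, kg
    for _ in range(200):
        empty_frac = a * w0 ** c
        w0_new = w_payload_kg / (1.0 - empty_frac - fuel_frac)
        if abs(w0_new - w0) < tol:
            return w0_new
        w0 = w0_new
    return w0

w0 = takeoff_weight(2000.0)              # ~1.3e4 kg with these coefficients
```

The iteration converges because the empty-weight fraction varies only weakly with W0; in a full sizing code the fuel fraction would itself be recomputed from the mission segments (e.g. Breguet range estimates) on each pass.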

  1. The Virtual Diphoton Excess

    Stolarski, Daniel

    2016-01-01

    Interpreting the excesses around 750 GeV in the diphoton spectra to be the signal of a new heavy scalar decaying to photons, we point out the possibility of looking for correlated signals with virtual photons. In particular, we emphasize that the effective operator that generates the diphoton decay will also generate decays to two leptons and a photon, as well as to four leptons, independently of the new resonance couplings to $Z\gamma$ and $ZZ$. Depending on the relative sizes of these effective couplings, we show that the virtual diphoton component can make up a sizable, and sometimes dominant, contribution to the total $2\ell\gamma$ and $4\ell$ partial widths. We also discuss modifications to current experimental cuts in order to maximize the sensitivity to these virtual photon effects. Finally, we briefly comment on prospects for channels involving other Standard Model fermions as well as more exotic decay possibilities of the putative resonance.

  2. Abundance, Excess, Waste

    Rox De Luca

    2016-02-01

    Her recent work focuses on the concepts of abundance, excess and waste. These concerns translate directly into vibrant and colourful garlands that she constructs from discarded plastics collected on Bondi Beach where she lives. The process of collecting is fastidious, as is the process of sorting and grading the plastics by colour and size. This initial gathering and sorting process is followed by threading the components onto strings of wire. When completed, these assemblages stand in stark contrast to the ease of disposability associated with the materials that arrive on the shoreline as evidence of our collective human neglect and destruction of the environment around us. The contrast is heightened by the fact that the constructed garlands embody the paradoxical beauty of our plastic waste byproducts, while also evoking the ways by which those byproducts similarly accumulate in randomly assorted patterns across the oceans and beaches of the planet.

  3. The High Price of Excessive Alcohol Consumption

    2011-10-17

    This podcast is based on the October 2011 release of a report estimating the economic cost of excessive drinking. Excessive alcohol consumption cost the U.S. $223.5 billion in 2006, or about $1.90 per drink. Over three-quarters (76%) of these costs were due to binge drinking, defined as consuming 4 or more alcoholic beverages per occasion for women or 5 or more drinks per occasion for men. Created: 10/17/2011 by National Center for Chronic Disease Prevention and Health Promotion. Date Released: 10/17/2011.

  4. Attempted suicide: prognostic factors and estimated excess mortality

    Carlos Eduardo Leal Vidal

    2013-01-01

    This retrospective cohort study aimed to analyze the epidemiological profile of individuals who attempted suicide from 2003 to 2009 in Barbacena, Minas Gerais State, Brazil, to calculate the mortality rate from suicide and other causes, and to estimate the risk of death in these individuals. Data were collected from police reports and death certificates. Survival analysis was performed and Cox multiple regression was used. Among the 807 individuals who attempted suicide, there were 52 deaths: 12 by suicide, 10 from external causes, and 30 from other causes. Ninety percent of suicide deaths occurred within 24 months after the attempt. Risk of death was significantly greater in males, married individuals, and individuals over 60 years of age. The standardized mortality ratio showed excess mortality by suicide. The findings showed that the mortality rate among patients who had attempted suicide was higher than expected in the general population, indicating the need to improve health care for these individuals.
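The standardized mortality ratio used above is simply observed over expected deaths. A minimal sketch with illustrative counts (not the study's data), using Byar's approximation for the 95% confidence interval:

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio O/E with an approximate 95% CI
    (Byar's approximation to the exact Poisson limits on O)."""
    smr = observed / expected
    lo = observed * (1 - 1 / (9 * observed) - z / (3 * math.sqrt(observed))) ** 3 / expected
    hi = (observed + 1) * (1 - 1 / (9 * (observed + 1)) + z / (3 * math.sqrt(observed + 1))) ** 3 / expected
    return smr, lo, hi

# illustrative counts: 12 suicides observed against 0.5 expected from
# general-population rates
smr, lo, hi = smr_with_ci(12, 0.5)   # SMR = 24, CI roughly (12, 42)
```

An SMR whose lower confidence limit exceeds 1 indicates excess mortality, which is the form of evidence the study reports.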

  5. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  6. Multidetector row computed tomography may accurately estimate plaque vulnerability: does MDCT accurately estimate plaque vulnerability? (Pro).

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal ECG and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. PMID:21532180

  7. Excess Molar Volumes and Viscosities of Binary Mixture of Diethyl Carbonate+Ethanol at Different Temperatures

    MA Peisheng; LI Nannan

    2005-01-01

    The purpose of this work was to report excess molar volumes and dynamic viscosities of the binary mixture of diethyl carbonate (DEC)+ethanol. Densities and viscosities of the binary mixture of DEC+ethanol at temperatures 293.15 K-343.15 K and atmospheric pressure were determined over the entire composition range. Densities of the binary mixture of DEC+ethanol were measured by using a vibrating U-shaped sample tube densimeter. Viscosities were determined by using an Ubbelohde suspended-level viscometer. Densities are accurate to 1.0×10-5 g·cm-3, and viscosities are reproducible within ±0.003 mPa·s. From these data, excess molar volumes and deviations in viscosity were calculated. Positive excess molar volumes and negative deviations in viscosity for the DEC+ethanol system are due to the strong specific interactions. All excess molar volumes and deviations in viscosity were fitted to the Redlich-Kister polynomial equation. The fitting parameters are presented, and the average deviations and standard deviations were also calculated. The errors of correlation are very small, which shows that the correlated equations are valuable for estimating densities and viscosities of the binary mixture.
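The Redlich-Kister form used above can be evaluated directly; the coefficients below are hypothetical, not the fitted parameters of the paper:

```python
def redlich_kister(x1, coeffs):
    """Excess property at mole fraction x1 of component 1:
    V_E = x1*x2 * sum_k A_k * (x1 - x2)**k, with x2 = 1 - x1."""
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(coeffs))

A = [1.2, -0.4, 0.1]               # cm^3/mol, hypothetical coefficients
ve_mid = redlich_kister(0.5, A)    # equals A[0]/4 at equimolar composition
```

By construction the excess property vanishes for the pure components (x1 = 0 or 1), and the higher-order coefficients capture the asymmetry of the curve about the equimolar point.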

  8. Does Excessive Pronation Cause Pain?

    Olesen, Christian Gammelgaard; Nielsen, RG; Rathleff, M;

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consists of ...

  9. Does Excessive Pronation Cause Pain?

    Mølgaard, Carsten Møller; Olesen Gammelgaard, Christian; Nielsen, R. G.;

    2008-01-01

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consists of ...

  10. Does excessive pronation cause pain?

    Olesen, Christian Gammelgaard; Nielsen, R.G.; Rathleff, M.;

    2008-01-01

    Excessive pronation could be an inborn abnormality or an acquired foot disorder caused by overuse, inadequately supported shoes or inadequate foot training. When the muscles and ligaments of the foot are insufficient it can cause an excessive pronation of the foot. The current treatment consists of ...

  11. Student estimations of peer alcohol consumption

    Stock, Christiane; Mcalaney, John; Pischke, Claudia;

    2014-01-01

    BACKGROUND: The Social Norms Approach, with its focus on positive behaviour and its consensus orientation, is a health promotion intervention of relevance to the context of a Health Promoting University. In particular, the approach could assist with addressing excessive alcohol consumption. AIM: This article aims to discuss the link between the Social Norms Approach and the Health Promoting University, and analyse estimations of peer alcohol consumption among European university students. METHODS: A total of 4392 students from universities in six European countries and Turkey were asked to report their own typical alcohol consumption per day and to estimate the same for their peers of same sex. Students were classified as accurate or inaccurate estimators of peer alcohol consumption. Socio-demographic factors and personal alcohol consumption were examined as predictors for an accurate estimation...

  12. Accurate Finite Difference Algorithms

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
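As a generic illustration of how such high-order schemes are constructed (not the paper's specific algorithms), the weights of a finite-difference approximation to the d-th derivative on a given stencil follow from the moment conditions Σ_j c_j x_j^m = m! δ_{m,d}:

```python
import math
from fractions import Fraction

def fd_weights(stencil, d):
    """Weights c_j of a finite-difference approximation to the d-th derivative
    on the given stencil (offsets in grid units), from the moment conditions
    sum_j c_j * x_j**m = m! * delta(m, d), solved exactly over the rationals."""
    n = len(stencil)
    A = [[Fraction(x) ** m for x in stencil] for m in range(n)]
    b = [Fraction(0)] * n
    b[d] = Fraction(math.factorial(d))
    # Gaussian elimination with partial pivoting (exact rational arithmetic)
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [arc - f * aic for arc, aic in zip(A[r], A[i])]
            b[r] -= f * b[i]
    c = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        c[i] = (b[i] - sum(A[i][k] * c[k] for k in range(i + 1, n))) / A[i][i]
    return c

w = fd_weights([-1, 0, 1], 1)      # second-order central first derivative
```

Widening the stencil raises the order of accuracy (e.g. the five-point stencil {-2,...,2} gives the classical fourth-order weights 1/12, -2/3, 0, 2/3, -1/12); the resolution properties the abstract highlights come from optimizing beyond these pure order conditions.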

  13. Widespread Excess Ice in Arcadia Planitia, Mars

    Bramson, Ali M; Putzig, Nathaniel E; Sutton, Sarah; Plaut, Jeffrey J; Brothers, T Charles; Holt, John W

    2015-01-01

    The distribution of subsurface water ice on Mars is a key constraint on past climate, while the volumetric concentration of buried ice (pore-filling versus excess) provides information about the process that led to its deposition. We investigate the subsurface of Arcadia Planitia by measuring the depth of terraces in simple impact craters and mapping a widespread subsurface reflection in radar sounding data. Assuming that the contrast in material strengths responsible for the terracing is the same dielectric interface that causes the radar reflection, we can combine these data to estimate the dielectric constant of the overlying material. We compare these results to a three-component dielectric mixing model to constrain composition. Our results indicate a widespread, decameters-thick layer of excess water ice, ~10^4 km^3 in volume. The accumulation and long-term preservation of this ice is a challenge for current Martian climate models.
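The combination described above can be sketched directly: an independent depth estimate from crater terracing plus the radar two-way travel time to the same interface yields the real dielectric constant of the overlying layer. The delay and depth below are illustrative numbers, not values from the paper:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def dielectric_constant(two_way_time_s, depth_m):
    """Real dielectric constant of a layer of thickness depth_m whose base
    produces a radar echo at the given two-way delay: eps = (c*t / (2*d))**2."""
    return (C * two_way_time_s / (2.0 * depth_m)) ** 2

# illustrative pairing: a 40 m terrace depth and a ~0.53 microsecond delay
eps = dielectric_constant(0.53e-6, 40.0)   # ~4 for these inputs
```

The inferred dielectric constant is then compared against mixing models of ice, rock, and pore space to decide whether the layer must contain excess ice.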

  14. Quantum theory of excess noise

    Bardroff, P. J.; Stenholm, S

    1999-01-01

    We analyze the excess noise in the framework of the conventional quantum theory of laser-like systems. Our calculation is conceptually simple and our result also shows a correction to the semi-classical result derived earlier.

  15. Excess mortality following hip fracture

    Abrahamsen, B; van Staa, T; Ariely, R;

    2009-01-01

    Summary This systematic literature review has shown that patients experiencing hip fracture after low-impact trauma are at considerable excess risk for death compared with nonhip fracture/community control populations. The increased mortality risk may persist for several years thereafter, highlighting the need for interventions to reduce this risk. Patients experiencing hip fracture after low-impact trauma are at considerable risk for subsequent osteoporotic fractures and premature death. We conducted a systematic review of the literature to identify all studies that reported unadjusted and excess mortality rates for hip fracture. Although a lack of consistent study design precluded any formal meta-analysis or pooled analysis of the data, we have shown that hip fracture is associated with excess mortality (over and above mortality rates in nonhip fracture/community control populations)...

  16. Pricing Excess-of-loss Reinsurance Contracts Against Catastrophic Loss

    J. David Cummins; Lewis, Christopher M.; Phillips, Richard D.

    1998-01-01

    This paper develops a pricing methodology and pricing estimates for the proposed Federal excess-of-loss (XOL) catastrophe reinsurance contracts. The contracts, proposed by the Clinton Administration, would provide per-occurrence excess-of-loss reinsurance coverage to private insurers and reinsurers, where both the coverage layer and the fixed payout of the contract are based on insurance industry losses, not company losses. In financial terms, the Federal government would be selling earthqua...

  17. Syndromes that Mimic an Excess of Mineralocorticoids.

    Sabbadin, Chiara; Armanini, Decio

    2016-09-01

    Pseudohyperaldosteronism is characterized by a clinical picture of hyperaldosteronism with suppression of renin and aldosterone. It can be due to endogenous or exogenous substances that mimic the effector mechanisms of aldosterone, leading not only to alterations of electrolytes and hypertension, but also to an increased inflammatory reaction in several tissues. Enzymatic defects of adrenal steroidogenesis (deficiency of 17α-hydroxylase and 11β-hydroxylase), mutations of the mineralocorticoid receptor (MR) and alterations of expression or saturation of 11-hydroxysteroid dehydrogenase type 2 (apparent mineralocorticoid excess syndrome, Cushing's syndrome, excessive intake of licorice, grapefruit or carbenoxolone) are the main causes of pseudohyperaldosteronism. In these cases treatment with dexamethasone and/or MR blockers is useful not only to normalize blood pressure and electrolytes, but also to prevent the deleterious effects of prolonged over-activation of MR in epithelial and non-epithelial tissues. Genetic alterations of the sodium channel (Liddle's syndrome) or of the sodium-chloride co-transporter (Gordon's syndrome) cause abnormal sodium and water reabsorption in the distal renal tubules and hypertension. Treatment with amiloride and thiazide diuretics, respectively, can reverse the clinical picture and normalize the renin-aldosterone system. Finally, many other more common situations can lead to an acquired pseudohyperaldosteronism, such as volume expansion due to exaggerated water and/or sodium intake, or the use of drugs such as contraceptives, corticosteroids, β-adrenergic agonists and NSAIDs. In conclusion, syndromes or situations that mimic aldosterone excess are not rare, and an accurate personal and pharmacological history is mandatory for a correct diagnosis, avoiding unnecessary tests and mistreatment. PMID:27251484

  18. Determination of Enantiomeric Excess of Glutamic Acids by Lab-made Capillary Array Electrophoresis

    Jun WANG; Kai Ying LIU; Li WANG; Ji Ling BAI

    2006-01-01

    Simulated enantiomeric excess of glutamic acid was determined by a lab-made sixteen-channel capillary array electrophoresis with confocal fluorescent rotary scanner. The experimental results indicated that the capillary array electrophoresis method can accurately determine the enantiomeric excess of glutamic acid and can be used for high-throughput screening system for combinatorial asymmetric catalysis.
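
    The screened quantity itself is a simple peak-area ratio: with the two enantiomer peak areas A_D and A_L resolved by the capillary array, ee = |A_D − A_L| / (A_D + A_L) × 100%. A minimal sketch with hypothetical areas:

```python
def enantiomeric_excess(area_d: float, area_l: float) -> float:
    """Percent enantiomeric excess from resolved D- and L-enantiomer peak areas."""
    return abs(area_d - area_l) / (area_d + area_l) * 100.0

# Hypothetical electropherogram areas: 75% D-glutamate, 25% L-glutamate
print(enantiomeric_excess(75.0, 25.0))  # 50.0 (% ee)
```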

  19. Total Liability for Excessive Harm

    Cooter, Robert D; Porat, Ariel

    2004-01-01

    In many circumstances, the total harm caused by everyone is verifiable, while the harm caused by each individual is unverifiable. For example, an environmental agency can measure the total harm caused by pollution far more easily than it can measure the harm caused by each individual polluter. In these circumstances, implementing the usual liability rules or externality taxes is impossible. We propose a novel solution: hold each participant in the activity responsible for all of the excessive harm...

  20. Diphoton Excess through Dark Mediators

    Chen, Chien-Yi; Pospelov, Maxim; Zhong, Yi-Ming

    2016-01-01

    Preliminary ATLAS and CMS results from the first 13 TeV LHC run have encountered an intriguing excess of events in the diphoton channel around the invariant mass of 750 GeV. We investigate a possibility that the current excess is due to a heavy resonance decaying to light metastable states, which in turn give displaced decays to very highly collimated $e^+e^-$ pairs. Such decays may pass the photon selection criteria, and successfully mimic the diphoton events, especially at low counts. We investigate two classes of such models, characterized by the following underlying production and decay chains: $gg \\to S\\to A'A'\\to (e^+e^-)(e^+e^-)$ and $q\\bar q \\to Z' \\to sa\\to (e^+e^-)(e^+e^-)$, where at the first step a heavy scalar, $S$, or vector, $Z'$, resonances are produced that decay to light metastable vectors, $A'$, or (pseudo-)scalars, $s$ and $a$. Setting the parameters of the models to explain the existing excess, and taking the ATLAS detector geometry into account, we marginalize over the properties of heav...

  1. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was used to separate natural polysaccharides. The molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 kDa and CM-arabinan; average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on universal dn/dc is much simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349

  2. Study of accurate volume measurement system for plutonium nitrate solution

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, in which the air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure contains many sources of error, such as the precision of the pressure transducer, fluctuation of back-pressure, generation of bubbles at the front of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures in the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system was developed with a quartz-oscillation type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
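
    The two differential pressures map onto density and level through hydrostatics: the pressure difference between two dip-tubes with a known vertical separation gives the density ρ = ΔP/(g·h_sep), after which the level above the reference tube is h = ΔP_level/(ρ·g). A hedged sketch of this chain; the pressures, tube separation, and constant tank cross-section below are illustrative, and a real accountancy tank would use a calibrated height-volume table together with the corrections discussed above:

```python
G = 9.80665  # standard gravity, m/s^2

def solution_density(dp_density_pa: float, tube_separation_m: float) -> float:
    """Density from the differential pressure across two dip-tubes h_sep apart."""
    return dp_density_pa / (G * tube_separation_m)

def liquid_level(dp_level_pa: float, density_kg_m3: float) -> float:
    """Surface level above the lower dip-tube from the level differential pressure."""
    return dp_level_pa / (G * density_kg_m3)

rho = solution_density(1500.0, 0.10)   # ~1529.6 kg/m^3 for these inputs
level = liquid_level(15000.0, rho)     # 1.0 m by construction here
area_m2 = 0.5                          # hypothetical constant cross-section
print(round(rho, 1), round(level, 3), round(area_m2 * level, 3), "m^3")
```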

  3. Excess Returns and Systemic Risk for Chile and Mexico

    Paul D. McNelis

    2000-03-01

    This paper is concerned with excess returns in the equity markets and the evolution of systemic risk in Chile and Mexico during the years 1989-1998, a period of financial openness, policy reform and crisis. A time-varying generalised autoregressive conditional heteroscedastic-in-mean framework is used to estimate progressively more complex models of risk: the univariate own-volatility model, the bivariate market pricing model, and the trivariate intertemporal asset pricing model. The results show no evidence of a significant reduction in systemic risk; rather, excess returns have remained volatile for both countries. For Chile, excess returns are significantly related to their own lagged levels, while for Mexico excess returns are significantly related to their own lagged variances. The influence of global factors is relatively minimal compared to potential home factors.

  4. Excess Early Mortality in Schizophrenia

    Laursen, Thomas Munk; Nordentoft, Merete; Mortensen, Preben Bo

    2014-01-01

    Schizophrenia is often referred to as one of the most severe mental disorders, primarily because of the very high mortality rates of those with the disorder. This article reviews the literature on excess early mortality in persons with schizophrenia and suggests reasons for the high mortality as well as possible ways to reduce it. Persons with schizophrenia have an exceptionally short life expectancy. High mortality is found in all age groups, resulting in a life expectancy of approximately 20 years below that of the general population. Evidence suggests that persons with schizophrenia may not...

  5. Evaluation of Excess Thermodynamic Parameters in a Binary Liquid Mixture (Cyclohexane + O-Xylene) at Different Temperatures

    K. Narendra; Narayanamurthy, P.; CH. Srinivasu

    2010-01-01

    The ultrasonic velocity, density and viscosity in binary liquid mixture cyclohexane with o-xylene have been determined at different temperatures from 303.15 to 318.15 K over the whole composition range. The data have been utilized to estimate the excess adiabatic compressibility (βE), excess volumes (VE), excess intermolecular free length (LfE), excess internal pressure (πE) and excess enthalpy (HE) at the above temperatures. The excess values have been found to be useful in estimating the st...
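
    Each of these excess functions is the measured mixture property minus its ideal-mixing value; for the molar volume, V^E = (x1·M1 + x2·M2)/ρ_mix − x1·M1/ρ1 − x2·M2/ρ2. A sketch using the molar masses and approximate room-temperature densities of cyclohexane and o-xylene; the mixture density here is hypothetical, not a measured value from the study:

```python
def excess_molar_volume(x1: float, m1: float, rho1: float,
                        m2: float, rho2: float, rho_mix: float) -> float:
    """Excess molar volume V^E (cm^3/mol) of a binary liquid mixture."""
    x2 = 1.0 - x1
    v_ideal = x1 * m1 / rho1 + x2 * m2 / rho2   # ideal-mixing molar volume
    v_real = (x1 * m1 + x2 * m2) / rho_mix      # measured molar volume
    return v_real - v_ideal

# Cyclohexane (M = 84.16 g/mol, rho ~ 0.774 g/cm^3) + o-xylene
# (M = 106.17 g/mol, rho ~ 0.876 g/cm^3), with a hypothetical mixture density:
ve = excess_molar_volume(0.5, 84.16, 0.7739, 106.17, 0.8755, 0.8240)
print(round(ve, 3))  # positive deviation of ~0.48 cm^3/mol for these inputs
```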

  6. Outflows in Sodium Excess Objects

    Park, Jongwon; Yi, Sukyoung K

    2015-01-01

    van Dokkum and Conroy revisited the unexpectedly strong Na I lines at 8200 Å found in some giant elliptical galaxies and interpreted them as evidence for an unusually bottom-heavy initial mass function. Jeong et al. later found a large population of galaxies showing equally extraordinary Na D doublet absorption lines at 5900 Å (Na D excess objects: NEOs) and showed that their origins can be different for different types of galaxies. While a Na D excess seems to be related to the interstellar medium (ISM) in late-type galaxies, smooth-looking early-type NEOs show little or no dust extinction and hence no compelling sign of ISM contributions. To further test this finding, we measured the Doppler components in the Na D lines. We hypothesized that the ISM would have a better (albeit not definite) chance of showing a blueshifted Doppler departure from the bulk of the stellar population, due to outflows caused by either star formation or AGN activities. Many of the late-type NEOs clearly show blueshift in their Na D lines, wh...

  7. Excess electron transport in cryoobjects

    Eshchenko, D G; Brewer, J H; Cottrell, S P; Cox, S F J

    2003-01-01

    Experimental results on excess electron transport in solid and liquid phases of Ne, Ar, and solid N2-Ar mixture are presented and compared with those for He. The muon spin relaxation technique in frequently switching electric fields was used to study the phenomenon of delayed muonium formation: excess electrons liberated in the μ+ ionization track converge upon the positive muons and form Mu (μ+e-) atoms. This process is shown to be crucially dependent upon the electron's interaction with its environment (i.e., whether it occupies the conduction band or becomes localized in a bubble of tens of angstroms in radius) and upon its mobility in these states. The characteristic lengths involved are 10^-6-10^-4 cm, and the characteristic times range from nanoseconds to tens of microseconds. Such a microscopic length scale sometimes enables the electron to spend its entire free lifetime in a state which may not be detected by conventional macroscopic techniques. The electron transport proc...

  8. Improved manometric setup for the accurate determination of supercritical carbon dioxide sorption

    Van Hemert, P.; Bruining, H.; Rudolph, E.S.J.; Wolf, K.H.A.A.; Maas, J.G.

    2009-01-01

    An improved version of the manometric apparatus and its procedures for measuring excess sorption of supercritical carbon dioxide are presented in detail with a comprehensive error analysis. An improved manometric apparatus is necessary for accurate excess sorption measurements with supercritical car

  9. [Iodine excess induced thyroid dysfunction].

    Egloff, Michael; Philippe, Jacques

    2016-04-20

    The principal sources of iodine overload, amiodarone and radiologic contrast media, are frequently used in modern medicine. The thyroid gland exerts a protective effect against iodine excess by suppressing iodine internalization into the thyrocyte and iodine organification, the Wolff-Chaikoff effect. Insufficiency of this effect leads to hyperthyroidism, while failure to escape from it leads to hypothyroidism. Amiodarone-induced thyrotoxicosis is a complex condition marked by two different pathophysiological mechanisms with different treatments. Changes in thyroid metabolism after exposure to radiologic contrast media are frequent, but they rarely need to be treated. High-risk individuals need to be identified in order to delay the exam, monitor thyroid function, or apply prophylactic measures in selected cases. PMID:27276725

  10. Excess water dynamics in hydrotalcite: QENS study

    S Mitra; A Pramanik; D Chakrabarty; R Mukhopadhyay

    2004-08-01

    Results of the quasi-elastic neutron scattering (QENS) measurements on the dynamics of excess water in hydrotalcite sample with varied content of excess water are reported. Translational motion of excess water can be best described by random translational jump diffusion model. The observed increase in translational diffusivity with increase in the amount of excess water is attributed to the change in binding of the water molecules to the host layer.
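
    In the random translational jump-diffusion model invoked here, the quasi-elastic line width grows as Γ(Q) = ħDQ²/(1 + DQ²τ): diffusive (ħDQ²) at small Q and saturating at ħ/τ, the inverse residence time, at large Q, so a higher diffusivity D steepens the low-Q rise. A sketch with illustrative parameter values, not the fitted ones:

```python
HBAR_MEV_PS = 0.6582  # hbar in meV*ps

def jump_diffusion_hwhm(q: float, diff_coeff: float, residence_time: float) -> float:
    """HWHM (meV) of the quasi-elastic Lorentzian in the random translational
    jump-diffusion model; q in 1/angstrom, diff_coeff in angstrom^2/ps, time in ps."""
    x = diff_coeff * q * q
    return HBAR_MEV_PS * x / (1.0 + x * residence_time)

# Illustrative: D = 0.2 A^2/ps (i.e. 2e-5 cm^2/s), residence time 1.25 ps
print(round(jump_diffusion_hwhm(1.0, 0.2, 1.25), 4))   # 0.1053 meV (diffusive regime)
print(round(jump_diffusion_hwhm(10.0, 0.2, 1.25), 3))  # 0.506 meV, near hbar/tau = 0.527
```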

  11. 10 CFR 904.10 - Excess energy.

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Excess energy. 904.10 Section 904.10 Energy DEPARTMENT OF ENERGY GENERAL REGULATIONS FOR THE CHARGES FOR THE SALE OF POWER FROM THE BOULDER CANYON PROJECT Power Marketing § 904.10 Excess energy. (a) If excess Energy is determined by the United States to be...

  12. 7 CFR 985.56 - Excess oil.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Excess oil. 985.56 Section 985.56 Agriculture... HANDLING OF SPEARMINT OIL PRODUCED IN THE FAR WEST Order Regulating Handling Volume Limitations § 985.56 Excess oil. Oil of any class in excess of a producer's applicable annual allotment shall be identified...

  13. A Discussion on Mean Excess Plots

    Ghosh, Souvik; Resnick, Sidney I.

    2009-01-01

    A widely used tool in the study of risk, insurance and extreme values is the mean excess plot. One use is for validating a generalized Pareto model for the excess distribution. This paper investigates some theoretical and practical aspects of the use of the mean excess plot.
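
    The empirical mean excess function behind such a plot is e(u) = mean of (X − u) over the observations exceeding u; plotting e(u) against u and checking for linearity is the usual generalized-Pareto diagnostic. A small self-contained sketch; for exponential data the plot should be flat at the scale parameter, which makes a convenient sanity check:

```python
import random

def mean_excess(data, threshold):
    """Empirical mean excess e(u): average of (x - u) over observations x > u."""
    exceedances = [x - threshold for x in data if x > threshold]
    return sum(exceedances) / len(exceedances)

random.seed(0)
sample = [random.expovariate(0.5) for _ in range(200_000)]  # exponential, mean 2.0
# Memorylessness: the mean excess of exponential data stays at the scale (2.0)
for u in (0.5, 1.0, 2.0):
    print(round(mean_excess(sample, u), 2))  # each value near 2.0
```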

  14. THE IMPORTANCE OF THE STANDARD SAMPLE FOR ACCURATE ESTIMATION OF THE CONCENTRATION OF NET ENERGY FOR LACTATION IN FEEDS ON THE BASIS OF GAS PRODUCED DURING THE INCUBATION OF SAMPLES WITH RUMEN LIQUOR

    T ŽNIDARŠIČ

    2003-10-01

    The aim of this work was to examine the necessity of using the standard sample in the Hohenheim gas test. During a three-year period, 24 runs of forage samples were incubated with rumen liquor in vitro. Beside the forage samples, the standard hay sample provided by the Hohenheim University (HFT-99) was included in the experiment. Half of the runs were incubated with rumen liquor of cattle and half with rumen liquor of sheep. Gas produced during the 24 h incubation of the standard sample was measured and compared to the declared value of sample HFT-99. Beside HFT-99, 25 test samples with known digestibility coefficients determined in vivo were included in the experiment. Based on the gas production of HFT-99, it was found that the donor animal (cattle or sheep) did not significantly affect the activity of rumen liquor (41.4 vs. 42.2 ml of gas per 200 mg dry matter, P>0.1). Differences between years (41.9, 41.2 and 42.3 ml of gas per 200 mg dry matter, P>0.1) were also not significant. However, a variability of about 10% (from 38.9 to 43.7 ml of gas per 200 mg dry matter) was observed between runs. In the present experiment, the gas production of HFT-99 was about 6% lower than the value obtained by the Hohenheim University (41.8 vs. 44.43 ml per 200 mg dry matter). This indicates a systematic error between the laboratories. In the case of the twenty-five test samples, correction on the basis of the standard sample reduced the average difference between the in vitro estimates of net energy for lactation (NEL) and the in vivo determined values. It was concluded that, due to variation between runs and systematic differences in rumen liquor activity between the two laboratories, the results of the Hohenheim gas test have to be corrected on the basis of the standard sample.

  15. Average Potential Temperature of the Upper Mantle and Excess Temperatures Beneath Regions of Active Upwelling

    Putirka, K. D.

    2006-05-01

    The question as to whether any particular oceanic island is the result of a thermal mantle plume, is a question of whether volcanism is the result of passive upwelling, as at mid-ocean ridges, or active upwelling, driven by thermally buoyant material. When upwelling is passive, mantle temperatures reflect average or ambient upper mantle values. In contrast, sites of thermally driven active upwellings will have elevated (or excess) mantle temperatures, driven by some source of excess heat. Skeptics of the plume hypothesis suggest that the maximum temperatures at ocean islands are similar to maximum temperatures at mid-ocean ridges (Anderson, 2000; Green et al., 2001). Olivine-liquid thermometry, when applied to Hawaii, Iceland, and global MORB, belie this hypothesis. Olivine-liquid equilibria provide the most accurate means of estimating mantle temperatures, which are highly sensitive to the forsterite (Fo) contents of olivines, and the FeO content of coexisting liquids. Their application shows that mantle temperatures in the MORB source region are less than temperatures at both Hawaii and Iceland. The Siqueiros Transform may provide the most precise estimate of TpMORB because high MgO glass compositions there have been affected only by olivine fractionation, so primitive FeOliq is known; olivine thermometry yields TpSiqueiros = 1430 ±59°C. A global database of 22,000 MORB show that most MORB have slightly higher FeOliq than at Siqueiros, which translates to higher calculated mantle potential temperatures. If the values for Fomax (= 91.5) and KD (Fe-Mg)ol-liq (= 0.29) at Siqueiros apply globally, then upper mantle Tp is closer to 1485 ± 59°C. Averaging this global estimate with that recovered at Siqueiros yields TpMORB = 1458 ± 78°C, which is used to calculate plume excess temperatures, Te. The estimate for TpMORB defines the convective mantle geotherm, and is consistent with estimates from sea floor bathymetry and heat flow (Stein and Stein, 1992), and

  16. Phytoextraction of excess soil phosphorus

    Sharma, Nilesh C. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Starnes, Daniel L. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States); Sahi, Shivendra V. [Department of Biology, Western Kentucky University, 1906 College Heights Boulevard 11080, Bowling Green, KY 42101-1080 (United States)]. E-mail: shiv.sahi@wku.edu

    2007-03-15

    In the search for a suitable plant to be used in P phytoremediation, several species belonging to legume, vegetable and herb crops were grown in P-enriched soils, and screened for P accumulation potentials. A large variation in P concentrations of different plant species was observed. Some vegetable species such as cucumber (Cucumis sativus) and yellow squash (Cucurbita pepo var. melopepo) were identified as potential P accumulators with >1% (dry weight) P in their shoots. These plants also displayed a satisfactory biomass accumulation while growing on a high concentration of soil P. The elevated activities of phosphomonoesterase and phytase were observed when plants were grown in P-enriched soils, this possibly contributing to high P acquisition in these species. Sunflower plants also demonstrated an increased shoot P accumulation. This study shows that the phytoextraction of phosphorus can be effective using appropriate plant species. - Crop plants such as cucumber, squash and sunflower accumulate phosphorus and thus can be used in the phytoextraction of excess phosphorus from soils.

  17. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    Dhabal, Debdas; Chakravarty, Charusita, E-mail: charus@chemistry.iitd.ac.in [Department of Chemistry, Indian Institute of Technology Delhi, New Delhi 110016 (India); Nguyen, Andrew Huy; Molinero, Valeria [Department of Chemistry, University of Utah, Salt Lake City, Utah 84112-0850 (United States); Singh, Murari [Department of Chemical Physics, Weizmann Institute of Science, Rehovot 76100 (Israel); Khatua, Prabir; Bandyopadhyay, Sanjoy [Department of Chemistry, Indian Institute of Technology Kharagpur, Kharagpur 721302 (India)

    2015-10-28

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by Strip, is also studied.

  18. Excess entropy and crystallization in Stillinger-Weber and Lennard-Jones fluids

    Molecular dynamics simulations are used to contrast the supercooling and crystallization behaviour of monatomic liquids that exemplify the transition from simple to anomalous, tetrahedral liquids. As examples of simple fluids, we use the Lennard-Jones (LJ) liquid and a pair-dominated Stillinger-Weber liquid (SW16). As examples of tetrahedral, water-like fluids, we use the Stillinger-Weber model with variable tetrahedrality parameterized for germanium (SW20), silicon (SW21), and water (SW23.15 or mW model). The thermodynamic response functions show clear qualitative differences between simple and water-like liquids. For simple liquids, the compressibility and the heat capacity remain small on isobaric cooling. The tetrahedral liquids in contrast show a very sharp rise in these two response functions as the lower limit of liquid-phase stability is reached. While the thermal expansivity decreases with temperature but never crosses zero in simple liquids, in all three tetrahedral liquids at the studied pressure, there is a temperature of maximum density below which thermal expansivity is negative. In contrast to the thermodynamic response functions, the excess entropy on isobaric cooling does not show qualitatively different features for simple and water-like liquids; however, the slope and curvature of the entropy-temperature plots reflect the heat capacity trends. Two trajectory-based computational estimation methods for the entropy and the heat capacity are compared for possible structural insights into supercooling, with the entropy obtained from thermodynamic integration. The two-phase thermodynamic estimator for the excess entropy proves to be fairly accurate in comparison to the excess entropy values obtained by thermodynamic integration, for all five Lennard-Jones and Stillinger-Weber liquids. The entropy estimator based on the multiparticle correlation expansion that accounts for both pair and triplet correlations, denoted by Strip, is also studied. 
Strip is a
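
    The pair term that dominates such multiparticle correlation expansions has a closed form, s2/kB = -2πρ ∫ [g(r) ln g(r) − g(r) + 1] r² dr: it vanishes for an ideal gas (g ≡ 1) and grows more negative as liquid structure develops. A numerical sketch with a crude model g(r), not data from the simulations discussed:

```python
import math

def pair_excess_entropy(r, g, rho):
    """Two-body excess entropy per particle (units of k_B), trapezoidal rule:
    s2 = -2*pi*rho * Integral [ g ln g - (g - 1) ] r^2 dr."""
    integrand = []
    for ri, gi in zip(r, g):
        term = gi * math.log(gi) - (gi - 1.0) if gi > 0.0 else 1.0  # g -> 0 limit
        integrand.append(term * ri * ri)
    total = sum((integrand[i] + integrand[i + 1]) * (r[i + 1] - r[i]) / 2.0
                for i in range(len(r) - 1))
    return -2.0 * math.pi * rho * total

r = [0.02 * i for i in range(1, 500)]  # radii in reduced units
g_ideal = [1.0] * len(r)
g_liquid = [0.0 if ri < 1.0 else 1.0 + 0.4 * math.exp(-8.0 * (ri - 1.1) ** 2)
            for ri in r]
print(pair_excess_entropy(r, g_ideal, 0.8) == 0.0)   # True: no structure, no s2
print(pair_excess_entropy(r, g_liquid, 0.8) < 0.0)   # True: structure lowers entropy
```

    Since g ln g − g + 1 is non-negative for all g, this pair contribution is always negative or zero, which is why growing structure on cooling pulls the excess entropy down.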

  19. 26 CFR 54.4981A-1T - Tax on excess distributions and excess accumulations (temporary).

    2010-04-01

    ... same for all individuals. (b) Base for excise tax under grandfather rule. Although the portion of any... 26 Internal Revenue 17 2010-04-01 2010-04-01 false Tax on excess distributions and excess... Tax on excess distributions and excess accumulations (temporary). The following questions and...

  20. 75 FR 27572 - Monthly Report of Excess Income and Annual Report of Uses of Excess Income

    2010-05-17

    ... URBAN DEVELOPMENT Monthly Report of Excess Income and Annual Report of Uses of Excess Income AGENCY... subject proposal. Project owners are permitted to retain Excess Income for projects under terms and conditions established by HUD. Owners must request to retain some or all of their Excess Income. The...

  1. Patterns of Excess Cancer Risk among the Atomic Bomb Survivors

    Pierce, Donald A.

    1996-05-01

    I will indicate the major epidemiological findings regarding excess cancer among the atomic-bomb survivors, with some special attention to what can be said about low-dose risks. This will be based on 1950--90 mortality follow-up of about 87,000 survivors having individual radiation dose estimates. Of these about 50,000 had doses greater than 0.005 Sv, and the remainder serve largely as a comparison group. It is estimated that for this cohort there have been about 400 excess cancer deaths among a total of about 7800. Since there are about 37,000 subjects in the dose range .005--.20 Sv, there is substantial low-dose information in this study. The person-year-sievert for the dose range under .20 Sv is greater than for any one of the 6 study cohorts of U.S., Canadian, and U.K. nuclear workers, and is equal to about 60% of the total for the combined cohorts. It is estimated, without linear extrapolation from higher doses, that for the RERF cohort there have been about 100 excess cancer deaths in the dose range under .20 Sv. Both the dose-response and age-time patterns of excess risk are very different for solid cancers and leukemia. One of the most important findings has been that the solid cancer (absolute) excess risk has steadily increased over the entire follow-up to date, similarly to the age-increase of the background risk. About 25% of the excess solid cancer deaths occurred in the last 5 years of the 1950--90 follow-up. In contrast, most of the excess leukemia risk occurred in the first few years following exposure. The observed dose response for solid cancers is very linear up to about 3 Sv, whereas for leukemia there is statistically significant upward curvature on that range. Very little has been proposed to explain this distinction.
Although there is no hint of upward curvature or a threshold for solid cancers, the inherent difficulty of precisely estimating very small risks along with radiobiological observations that many radiation effects are nonlinear
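
    The arithmetic behind an excess-death estimate is observed minus expected, with the expected count built from a comparison-group death rate applied to the cohort's person-years at risk. A minimal sketch; the rate and person-year figures are purely illustrative, not the RERF data:

```python
def excess_deaths(observed_deaths: float, person_years: float,
                  reference_rate_per_py: float) -> float:
    """Excess deaths = observed - expected, where expected applies the
    comparison-group death rate to the exposed cohort's person-years."""
    expected = reference_rate_per_py * person_years
    return observed_deaths - expected

# Hypothetical: 7800 observed deaths over 2,000,000 person-years, against a
# comparison rate of 3.7 deaths per 1000 person-years
print(round(excess_deaths(7800, 2_000_000, 3.7e-3), 1))  # 400.0
```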

  2. Estimation of excess numbers of viral diarrheal cases among children aged <5 years in Beijing with adjusted Serfling regression model

    贾蕾; 王小莉; 吴双胜; 马建新; 李洪军; 高志勇; 王全意

    2016-01-01

    Objective To estimate the excess numbers of viral diarrheal cases among children aged <5 years in Beijing from 1 January 2011 to 23 May 2015. Methods The excess numbers of diarrheal cases among children aged <5 years were estimated by using weekly outpatient visit data from two children's hospitals in Beijing and an adjusted Serfling regression model. Results The incidence peaks of viral diarrhea were during the 8th-10th and 40th-42nd weeks in 2011, the 40th-46th weeks in 2012, the 43rd-49th weeks in 2013, and from the 45th week of 2014 to the 11th week of 2015, respectively. The excess numbers of viral diarrheal cases among children aged <5 years in the two children's hospitals were 911 (95%CI: 261-1,561), 1,998 (95%CI: 1,250-2,746), 1,645 (95%CI: 891-2,397), 2,806 (95%CI: 1,938-3,674) and 1,822 (95%CI: 614-3,031), respectively, accounting for 40.38% (95%CI: 11.57%-69.19%), 44.21% (95%CI: 27.66%-60.77%), 45.08% (95%CI: 24.42%-65.69%), 60.87% (95%CI: 42.04%-79.70%) and 66.62% (95%CI: 22.45%-110.82%) of total outpatient visits due to diarrhea during 2011-2015, respectively. In total, the excess number of viral diarrheal cases among children aged <5 years in Beijing was estimated to be 18,731 (95%CI: 10,106-27,354) from 2011 to 23 May 2015. Conclusions Winter is the season of viral diarrhea for children aged <5 years. The adjusted Serfling regression model analysis suggested that close attention should be paid to the etiologic variation of the viruses causing acute gastroenteritis, especially norovirus.
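A Serfling-type baseline regression of the kind used above can be sketched as follows. The model form (linear trend plus an annual harmonic, fitted to non-epidemic weeks only) is the classical one; the paper's "adjusted" variant and the real outpatient counts are not reproduced here, so all numbers below are synthetic.

```python
import numpy as np

# Minimal Serfling-type baseline regression sketch. Weekly outpatient counts
# are synthetic, generated from an assumed trend-plus-seasonality baseline.
rng = np.random.default_rng(0)
weeks = np.arange(1, 209)                          # four years of weekly data
baseline_true = 50 + 0.05 * weeks + 15 * np.sin(2 * np.pi * weeks / 52.18)
counts = rng.poisson(baseline_true).astype(float)

# Hypothetical epidemic period: winter weeks get extra viral-diarrhea visits.
epidemic = (weeks % 52 < 8) | (weeks % 52 > 45)
counts[epidemic] += 30

# Fit the cyclic regression (intercept, trend, annual harmonic) on
# non-epidemic weeks only, then predict the baseline for every week.
t = weeks
X = np.column_stack([np.ones_like(t, dtype=float), t,
                     np.sin(2 * np.pi * t / 52.18),
                     np.cos(2 * np.pi * t / 52.18)])
coef, *_ = np.linalg.lstsq(X[~epidemic], counts[~epidemic], rcond=None)
expected = X @ coef

# Excess cases = observed minus model baseline, summed over epidemic weeks.
excess = np.sum(counts[epidemic] - expected[epidemic])
print(round(excess))
```

Excluding the epidemic weeks from the fit is the key design choice: it keeps the seasonal baseline from being inflated by the very excess one is trying to measure.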

  3. Towards accurate emergency response behavior

    Nuclear reactor operator emergency response behavior has persisted as a training problem through a lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge-based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge-based behavior, are described in detail

  4. Is the PAMELA Positron Excess Winos?

    Grajek, Phill; Phalen, Dan; Pierce, Aaron; Watson, Scott

    2008-01-01

    Recently the PAMELA satellite-based experiment reported an excess of galactic positrons that could be a signal of annihilating dark matter. The PAMELA data may admit an interpretation as a signal from a wino-like LSP of mass about 200 GeV, normalized to the local relic density, and annihilating mainly into W-bosons. This possibility requires that the current conventional estimate for the energy loss rate of positrons be too large by roughly a factor of five. Data from anti-protons and gamma rays also provide tension with this interpretation, but there are significant astrophysical uncertainties associated with their propagation. It is not unreasonable to take this well-motivated candidate seriously, at present, in part because it can be tested in several ways soon. The forthcoming PAMELA data on higher energy positrons and the FGST (formerly GLAST) data should provide important clues as to whether this scenario is correct. If correct, the wino interpretation implies a cosmological history in which the dark matter...

  5. Association between regular exercise and excessive newborn birth weight

    Owe, Katrine M.; Nystad, Wenche; Bø, Kari

    2009-01-01

    OBJECTIVE: To estimate the association between regular exercise before and during pregnancy and excessive newborn birth weight. METHODS: Using data from the Norwegian Mother and Child Cohort Study, 36,869 singleton pregnancies lasting at least 37 weeks were included. Information on regular exercise was based on answers from two questionnaires distributed in pregnancy weeks 17 and 30. Linkage to the Medical Birth Registry of Norway provided data on newborn birth weight. The main outcome me...

  6. Age at menarche in schoolgirls with and without excess weight

    Silvia D. Castilho; Luciana B. Nucci

    2015-01-01

    OBJECTIVE: To evaluate the age at menarche of girls, with or without weight excess, attending private and public schools in a city in Southeastern Brazil. METHODS: This was a cross-sectional study comparing the age at menarche of 750 girls from private schools with 921 students from public schools, aged between 7 and 18 years. The menarche was reported by the status quo method and age at menarche was estimated by logarithmic transformation. The girls were grouped according to body mass index ...

  7. Initial report on characterization of excess highly enriched uranium

    NONE

    1996-07-01

    DOE's Office of Fissile Materials Disposition assigned to this Y-12 division the task of preparing a report on the 174.4 metric tons of excess highly enriched U. Characterization included identification by category, gathering existing data (assay), defining the likely needed processing steps for prepping for transfer to a blending site, and developing a range of preliminary cost estimates for those steps. Focus is on making commercial reactor fuel as a final disposition path.

  8. Factors associated with excessive polypharmacy in older people

    Walckiers, Denise; Van der Heyden, Johan; Tafforeau, Jean

    2015-01-01

    Background Older people are a growing population. They live longer, but often have multiple chronic diseases. As a consequence, they take many different kinds of medicines, while their vulnerability to pharmaceutical products is increased. The objective of this study is to describe the medicine utilization pattern in people aged 65 years and older in Belgium, and to estimate the prevalence and the determinants of excessive polypharmacy. Methods Data were used from the Belgian Health Inte...

  10. Damages and Expected Deaths Due to Excess NOx Emissions from 2009 to 2015 Volkswagen Diesel Vehicles.

    Holland, Stephen P; Mansur, Erin T; Muller, Nicholas Z; Yates, Andrew J

    2016-02-01

    We estimate the damages and expected deaths in the United States due to excess emissions of NOx from 2009 to 2015 Volkswagen diesel vehicles. Using data on vehicle registrations and a model of pollution transport and valuation, we estimate excess damages of $430 million and 46 excess expected deaths. Accounting for uncertainty about emissions gives a range for damages from $350 million to $500 million, and a range for excess expected deaths from 40 to 52. Our estimates incorporate significant local heterogeneity: for example, Minneapolis has the highest damages despite having fewer noncompliant vehicles than 13 other cities. Our estimated damages greatly exceed possible benefits from reduced CO2 emissions due to increased fuel economy. PMID:26720281

  11. Excess Returns and Systemic Risk for Chile and Mexico Excess Returns and Systemic Risk for Chile and Mexico

    McNelis, Paul D.; Guay C. Lim

    2000-01-01

    This paper is concerned with excess returns in the equity markets and the evolution of systemic risk in Chile and Mexico during the years 1989-1998, a period of financial openness, policy reform and crisis. A time-varying generalised autoregressive conditional heteroscedastic-in-mean framework is used to estimate progressively more complex models of risk. They include the univariate own-volatility model, the bivariate market pricing model, and the trivariate intertemporal asset pricing model. T...

  12. Factors Associated With High Sodium Intake Based on Estimated 24-Hour Urinary Sodium Excretion

    Hong, Jae Won; Noh, Jung Hyun; Kim, Dong-Jun

    2016-01-01

    Abstract Although reducing dietary salt consumption is the most cost-effective strategy for preventing progression of cardiovascular and renal disease, policy-based approaches for monitoring sodium intake accurately, and understanding of the factors associated with excessive sodium intake, are lacking for the improvement of public health. We investigated factors associated with high sodium intake based on the estimated 24-hour urinary sodium excretion, using data from the 2009 to 2011 Korea National H...

  13. Excessive libido in a woman with rabies.

    Dutta, J. K.

    1996-01-01

    Rabies is endemic in India in both wildlife and humans. Human rabies kills 25,000 to 30,000 persons every year. Several types of sexual manifestations including excessive libido may develop in cases of human rabies. A laboratory proven case of rabies in an Indian woman who manifested excessive libido is presented below. She later developed hydrophobia and died.

  14. Excessive internet use among European children

    Smahel, David; Helsper, Ellen; Green, Lelia; Kalmus, Veronika; Blinka, Lukas; Ólafsson, Kjartan

    2012-01-01

    This report presents new findings and further analysis of the EU Kids Online 25 country survey regarding excessive use of the internet by children. It shows that while a number of children (29%) have experienced one or more of the five components associated with excessive internet use, very few (1%) can be said to show pathological levels of use.

  15. The excessively crying infant : etiology and treatment

    Akhnikh, S.; Engelberts, A.C.; Sleuwen, B.E. van; Hoir, M.P. L’; Benninga, M.A.

    2014-01-01

    Excessive crying, often described as infantile colic, is the cause of 10% to 20% of all early pediatrician visits of infants aged 2 weeks to 3 months. Although usually benign and self-limiting, excessive crying is associated with parental exhaustion and stress. However, an underlying organic cause i

  16. Effects of excess ground ice on projections of permafrost in a warming climate

    In permafrost soils, ‘excess ice’, also referred to as ground ice, exists in amounts exceeding soil porosity in forms such as ice lenses and wedges. Here, we incorporate a simple representation of excess ice in the Community Land Model (CLM4.5) to investigate how excess ice affects projected permafrost thaw and associated hydrologic responses. We initialize spatially explicit excess ice obtained from the Circum-Arctic Map of Permafrost and Ground-Ice Conditions. The excess ice in the model acts to slightly reduce projected soil warming by about 0.35 °C by 2100 in a high greenhouse gas emissions scenario. The presence of excess ice slows permafrost thaw at a given location with about a 10 year delay in permafrost thaw at 3 m depth at most high excess ice locations. The soil moisture response to excess ice melt is transient and depends largely on the timing of thaw with wetter/saturated soil moisture conditions persisting slightly longer due to delayed post-thaw drainage. Based on the model projections of excess ice melt, we can estimate spatially explicit gridcell mean surface subsidence with values ranging up to 0.5 m by 2100 depending on the initial excess ice content and the extent of melt. (letter)
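The subsidence bookkeeping described in the letter can be sketched as a simple sum over soil layers: when excess ice (ice beyond soil porosity) melts, the surface drops by the melted ice volume per unit area. The layer thicknesses, excess-ice fractions, and melt fractions below are hypothetical placeholders, not CLM4.5 output.

```python
# Gridcell-mean surface subsidence from excess-ice melt, layer by layer.
# All values are hypothetical, for illustration only.
layer_thickness_m = [0.5, 1.0, 1.5]      # soil layer thicknesses (m)
excess_ice_frac = [0.30, 0.20, 0.05]     # volumetric excess-ice fraction per layer
melt_frac = [1.0, 0.8, 0.0]              # fraction of each layer's excess ice melted

# Subsidence = sum over layers of (thickness * excess-ice fraction * melt fraction)
subsidence = sum(dz * f_ice * f_melt
                 for dz, f_ice, f_melt
                 in zip(layer_thickness_m, excess_ice_frac, melt_frac))
print(subsidence)  # metres of gridcell-mean surface lowering
```

With these placeholder values the deepest layer contributes nothing because its excess ice has not yet thawed, mirroring the depth-dependent delay in thaw described above.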

  17. Thermophysical and excess properties of hydroxamic acids in DMSO

    Graphical abstract: Excess molar volumes (VE) vs mole fraction (x2) of (A) N-o-tolyl-2-nitrobenzo- and (B) N-o-tolyl-4-nitrobenzo-hydroxamic acids in DMSO at 298.15, 303.15, 308.15, 313.15, and 318.15 K. Highlights: ► ρ and n of the hydroxamic acids in DMSO are reported. ► Apparent molar volume indicates superior solute-solvent interactions. ► Limiting apparent molar expansibility and coefficient of thermal expansion are derived. ► The behaviour of these parameters suggests that the hydroxamic acids act as structure makers. ► The excess properties are interpreted in terms of molecular interactions. -- Abstract: In this work, densities (ρ) and refractive indices (n) of N-o-tolyl-2-nitrobenzo- and N-o-tolyl-4-nitrobenzo-hydroxamic acids have been determined in dimethyl sulfoxide (DMSO) as a function of their concentrations at T = (298.15, 303.15, 308.15, 313.15, and 318.15) K. These measurements were carried out to evaluate some important parameters, viz, molar volume (V), apparent molar volume (Vϕ), limiting apparent molar volume (Vϕ0), slope (SV∗), molar refraction (RM) and polarizability (α). The related parameters determined are the limiting apparent molar expansivity (ϕE0), the thermal expansion coefficient (α2) and the Hepler constant (∂2Vϕ0/∂T2). Excess properties such as the excess molar volume (VE), the deviation from the additivity rule of refractive index (nE), and the excess molar refraction (RME) have also been evaluated. The excess properties were fitted to the Redlich-Kister equations to estimate their coefficients, and standard deviations were determined. The variations of these excess parameters with composition are discussed from the viewpoint of intermolecular interactions in these solutions. The excess properties are found to be either positive or negative depending on the molecular interactions and the nature of the solutions. Further, these parameters have been interpreted in terms of solute
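The Redlich-Kister fitting step mentioned in the abstract can be sketched as an ordinary least-squares problem. The coefficients and data points below are synthetic (the measured hydroxamic-acid values are not reproduced), so the block only illustrates the mechanics of the fit:

```python
import numpy as np

# Redlich-Kister expansion of an excess property:
#   Y^E(x) = x(1-x) * sum_k A_k (2x-1)^k
# Here the data are generated from known coefficients, so the fit should
# recover them; real excess-property data would carry measurement noise.
A_true = [-1.2, 0.4, 0.1]
x = np.linspace(0.05, 0.95, 19)          # mole fractions

def redlich_kister(x, A):
    return x * (1 - x) * sum(a * (2 * x - 1)**k for k, a in enumerate(A))

y = redlich_kister(x, A_true)

# Design matrix: column k is x(1-x)(2x-1)^k, so the fit is linear in A_k.
X = np.column_stack([x * (1 - x) * (2 * x - 1)**k for k in range(3)])
A_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma = np.sqrt(np.mean((y - X @ A_fit)**2))   # standard deviation of the fit
print(A_fit, sigma)
```

Because the expansion is linear in the coefficients A_k, no nonlinear optimizer is needed; the standard deviation sigma is the quantity quoted alongside the coefficients in tables of this kind.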

  18. Genetics Home Reference: aromatase excess syndrome

    ... males, the increased aromatase and subsequent conversion of androgens to estrogen are responsible for the gynecomastia and limited bone growth characteristic of aromatase excess syndrome . Increased estrogen in females can cause symptoms ...

  19. Controlling police (excessive) force: The American case

    Zakir Gül

    2013-09-01

    This article addresses the issue of police abuse of power, particularly police use of excessive force. Since the misuse of force by police is considered a problem, some entity must discover a way to control and prevent the illegal use of coercive power. Unlike most of the previous studies on the use of excessive force, this study uses a path analysis. However, not all the findings are consistent with the prior studies and hypotheses. In general, findings indicate that training may be a useful tool in terms of decreasing the use of excessive force, thereby reducing civilians' injuries and citizens' complaints. The results show that ethics training in the academy is significantly related to the use of excessive force. Further, it was found that community-oriented policing training in the academy was associated with citizens' complaints. National secondary data, collected from law enforcement agencies in the United States, are used to explore the research questions.

  20. Accurate calculation of diffraction-limited encircled and ensquared energy.

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a large square or rectangular detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
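For a circular aperture, the encircled energy of the diffraction-limited PSF has the classical closed form EE(v) = 1 - J0(v)^2 - J1(v)^2, with v the normalized radius; the paper's series and tables refine such expressions. A minimal dependency-free sketch, evaluating the Bessel functions from their power series:

```python
import math

# Encircled energy of the diffraction-limited PSF of a circular aperture:
#   EE(v) = 1 - J0(v)^2 - J1(v)^2
# The Bessel functions are evaluated from their power series,
#   J_n(x) = sum_m (-1)^m / (m! (m+n)!) * (x/2)^(2m+n),
# which converges quickly for the moderate arguments used here.
def bessel_j(n, x, terms=40):
    return sum((-1)**m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2)**(2 * m + n) for m in range(terms))

def encircled_energy(v):
    return 1.0 - bessel_j(0, v)**2 - bessel_j(1, v)**2

# The first dark ring of the Airy pattern sits at v = 1.2197*pi; the classical
# result is that it encloses about 83.8% of the total energy.
print(encircled_energy(0.0), encircled_energy(1.2197 * math.pi))
```

For production work one would use a library Bessel implementation rather than the raw series, but the closed form itself is exact for the unobscured circular aperture.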

  1. Romanian welfare state between excess and failure

    Cristina Ciuraru-Andrica

    2012-12-01

    Timely or not, our topic can revive some prolific, sometimes diametrically opposed, discussions. We address social assistance, where at this moment it is still uncertain whether, once the excess is unleashed, failure will come inevitably, or whether there is a "Salvation Ark". However, the difference between the excess and the failure of the welfare state is almost intangible, the reason for its potential failure being in fact the abuses committed before the start of the depression.

  2. Fast and Accurate Construction of Confidence Intervals for Heritability.

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
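The core idea above, estimating the distribution of the heritability estimator by re-simulating under the fitted model rather than relying on asymptotics, can be sketched in a toy setting. The one-way random-effects model, sample sizes, and ANOVA estimator below are stand-ins chosen for brevity; this is not the ALBI algorithm or a genomic LMM.

```python
import numpy as np

# Schematic parametric bootstrap for a variance-component ratio (a
# "heritability" analogue) in a one-way random-effects model y_ij = u_i + e_ij.
# The idea: re-simulate under the fitted model, re-estimate, and read a
# percentile confidence interval off the estimator's bootstrap distribution.
rng = np.random.default_rng(1)
n_groups, n_per = 200, 5

def simulate(h2):
    u = rng.normal(0, np.sqrt(h2), size=(n_groups, 1))        # shared component
    e = rng.normal(0, np.sqrt(1 - h2), size=(n_groups, n_per))  # residual
    return u + e

def estimate_h2(y):
    # ANOVA estimator of sigma_u^2 / (sigma_u^2 + sigma_e^2), clipped at 0,
    # which is exactly the boundary effect that distorts asymptotic intervals.
    msb = n_per * np.var(y.mean(axis=1), ddof=1)
    msw = np.mean(np.var(y, axis=1, ddof=1))
    s2u = max((msb - msw) / n_per, 0.0)
    return s2u / (s2u + msw)

y = simulate(0.25)
h2_hat = estimate_h2(y)

# Percentile bootstrap CI from the estimator's distribution under h2_hat.
boot = np.array([estimate_h2(simulate(h2_hat)) for _ in range(500)])
lo, hi = np.quantile(boot, [0.025, 0.975])
print(h2_hat, lo, hi)
```

Because the bootstrap distribution is built from the estimator itself, it automatically reflects the clipping at 0 and 1 that breaks the normal-approximation intervals criticized in the abstract.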

  3. Toward Accurate and Quantitative Comparative Metagenomics.

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  4. Accurate ab initio spin densities

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  5. Accurate Modeling of Advanced Reflectarrays

    Zhou, Min

    of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  6. Accurate thickness measurement of graphene

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  7. Magma chambers: Formation, local stresses, excess pressures, and compartments

    Gudmundsson, Agust

    2012-09-01

    in-situ tensile strength of the host rock, 0.5-9 MPa. These in-situ strength estimates are based on hydraulic fracture measurements in drill-holes worldwide down to crustal depths of about 9 km. These measurements do not support some recent magma-chamber stress models that predict (a) extra gravity-related wall-parallel stresses at the boundaries of magma chambers and (b) magma-chamber excess pressures prior to rupture of as much as hundreds of mega-pascals, particularly at great depths. General stress models of magma chambers are of two main types: analytical and numerical. Earlier analytical models were based on a nucleus-of-strain source (a 'point pressure source') for the magma chamber, and have been very useful for rough estimates of magma-chamber depths from surface deformation during unrest periods. More recent models assume the magma chamber to be axisymmetric ellipsoids or, in two dimensions, ellipses of various shapes. Nearly all these models use the excess pressure in the chamber as the only loading (since lithostatic stress effects are then automatically taken into account), assume the chamber to be totally molten, and predict similar local stress fields. The predicted stress fields are generally in agreement with the worldwide stress measurements in drill-holes and, in particular, with the in-situ tensile-strength estimates. Recent numerical models consider magma chambers of various (ideal) shapes and sizes in relation to their depths below the Earth's surface. They also take into account crustal heterogeneities and anisotropies, in particular the effects of a nearby free surface and of horizontal and inclined (dipping) mechanical layering. The results show that the free surface may have strong effects on the local stresses if the chamber is comparatively close to the surface. The mechanical layering, however, may have even stronger effects. For realistic layering, and other heterogeneities, the numerical models predict complex local stresses around

  8. Towards an accurate bioimpedance identification

    Sánchez Terrones, Benjamín; Louarroudi, E.; Bragós Bardia, Ramon; Pintelon, Rik

    2013-01-01

    This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both t...

  9. Towards an accurate bioimpedance identification

    Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.

    2013-04-01

    This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To show the overwhelming accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σnZ, and the stochastic nonlinear distortions, σZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Ultimately, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table is provided with the relative standard errors on the estimated parameters to reveal which system identification framework should be used.
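The final Cole-model fitting step can be sketched as follows. The paper uses a weighted CNLS solver; here, to stay dependency-free, a coarse grid search over (tau, alpha) is combined with a linear complex least-squares solve for (Rinf, R0), and the impedance data are synthetic rather than measured:

```python
import numpy as np

# Sketch of fitting the Cole impedance model
#   Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)^alpha)
# to multisine EIS data. For fixed (tau, alpha) the model is linear in
# (Rinf, R0), so a grid search plus complex least squares suffices here.
w = 2 * np.pi * np.logspace(1, 5, 40)            # angular frequencies (rad/s)
true = dict(Rinf=20.0, R0=100.0, tau=1e-3, alpha=0.8)

def cole(w, Rinf, R0, tau, alpha):
    return Rinf + (R0 - Rinf) / (1 + (1j * w * tau)**alpha)

Z = cole(w, **true)                              # synthetic, noiseless spectrum

best = (np.inf, None)
for tau in np.logspace(-4, -2, 41):
    for alpha in np.linspace(0.5, 1.0, 26):
        B = 1.0 / (1 + (1j * w * tau)**alpha)
        X = np.column_stack([1 - B, B])          # Z = Rinf*(1-B) + R0*B
        coef, *_ = np.linalg.lstsq(X, Z, rcond=None)
        rss = np.sum(np.abs(Z - X @ coef)**2)
        if rss < best[0]:
            best = (rss, (coef[0].real, coef[1].real, tau, alpha))

rss, (Rinf, R0, tau, alpha) = best
print(Rinf, R0, tau, alpha)
```

A real CNLS fit would additionally weight each frequency by the estimated error variances (the σnZ and σZNL of the abstract), which is precisely what distinguishes the weighted from the unweighted fit compared in the paper.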

  11. ESTIMATING IRRIGATION COSTS

    Having accurate estimates of the cost of irrigation is important when making irrigation decisions. Estimates of fixed costs are critical for investment decisions. Operating cost estimates can assist in decisions regarding additional irrigations. This fact sheet examines the costs associated with ...

  12. New vector bosons and the diphoton excess

    Jorge de Blas

    2016-08-01

    We consider the possibility that the recently observed diphoton excess at ∼750 GeV can be explained by the decay of a scalar particle (φ) to photons. If the scalar is the remnant of a symmetry-breaking sector of some new gauge symmetry, its coupling to photons can be generated by loops of the charged massive vectors of the broken symmetry. If these new W′ vector bosons carry color, they can also generate an effective coupling to gluons. In this case the diphoton excess could be entirely explained in a simplified model containing just φ and W′. On the other hand, if W′ does not carry color, we show that, provided additional colored particles exist to generate the required φ-to-gluon coupling, the diphoton excess could be explained by the same W′ commonly invoked to explain the diboson excess at ∼2 TeV. We also explore possible connections between the diphoton and diboson excesses and the anomalous tt̄ forward-backward asymmetry.

  13. Antidepressant induced excessive yawning and indifference

    Bruno Palazzo Nazar

    2015-03-01

    Introduction Antidepressant-induced excessive yawning has been described as a possible side effect of pharmacotherapy. A syndrome of indifference has also been described as another possible side effect. The frequency of these phenomena and their physiopathology are unknown. They are both considered benign and reversible after antidepressant discontinuation, but severe cases with complications, such as temporomandibular lesions, have been described. Methods We report two unprecedented cases in which excessive yawning and indifference occurred simultaneously as side effects of antidepressant therapy, and discuss possible physiopathological mechanisms for this co-occurrence. Case 1: A male patient presented excessive yawning (approximately 80/day) and apathy after venlafaxine XR treatment. Symptoms were reduced after a switch to escitalopram, with a reduction to 50 yawns/day. Case 2: A female patient presented excessive yawning (approximately 25/day) and inability to react to environmental stressors with desvenlafaxine. Conclusion Induction of indifference and excessive yawning may be modulated by serotonergic and noradrenergic mechanisms. One proposal to unify these side effects would be enhancement of serotonin in the midbrain, especially the paraventricular and raphe nuclei.

  14. Phenomenology and psychopathology of excessive indoor tanning.

    Petit, Aymeric; Karila, Laurent; Chalmin, Florence; Lejoyeux, Michel

    2014-06-01

    Excessive indoor tanning, defined by the presence of an impulse towards and repetition of tanning that leads to personal distress, has only recently been recognized as a psychiatric disorder. This finding is based on the observations of many dermatologists who report addictive relationships with tanning salons among their patients, even in those already diagnosed with malignant melanoma. This article synthesizes the existing literature on excessive indoor tanning and addiction to investigate possible associations. This review focuses on the prevalence, clinical features, etiology, and treatment of this disorder. A literature review was conducted, using PubMed, Google Scholar, EMBASE and PsycINFO, to identify articles published in English from 1974 to 2013. Excessive indoor tanning may be related to addiction, obsessive-compulsive disorder, impulse control disorder, seasonal affective disorder, anorexia, body dysmorphic disorder, or depression. Excessive indoor tanning can be included in the spectrum of addictive behavior because it shares clinical characteristics with classic addictive disorders. It is frequently associated with anxiety, eating disorders, and tobacco dependence. Further controlled studies are required, especially in clinical psychopathology and neurobiology, to improve our understanding of excessive indoor tanning. PMID:24601904

  15. Accurate calibration of stereo cameras for machine vision

    Li, Liangfu; Feng, Zuren; Feng, Yuanjing

    2004-01-01

    Camera calibration is an important task for machine vision, whose goal is to obtain the internal and external parameters of each camera. With these parameters, the 3D positions of a scene point, which is identified and matched in two stereo images, can be determined by the triangulation theory. This paper presents a new accurate estimation of CCD camera parameters for machine vision. We present a fast technique to estimate the camera center with special arrangement of calibration target and t...

  16. Accurate adiabatic correction in the hydrogen molecule

    Pachucki, Krzysztof, E-mail: krp@fuw.edu.pl [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland); Komasa, Jacek, E-mail: komasa@man.poznan.pl [Faculty of Chemistry, Adam Mickiewicz University, Umultowska 89b, 61-614 Poznań (Poland)

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10{sup −12} at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H{sub 2}, HD, HT, D{sub 2}, DT, and T{sub 2} has been determined. For the ground state of H{sub 2} the estimated precision is 3 × 10{sup −7} cm{sup −1}, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  17. Same-Sign Dilepton Excesses and Vector-like Quarks

    Chen, Chuan-Ren; Low, Ian

    2015-01-01

    Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b-prime quark is proposed. We also comment on the possibility to fit excesses in different analyses in a common framework.

  18. Same-sign dilepton excesses and vector-like quarks

    Chen, Chuan-Ren; Cheng, Hsin-Chia; Low, Ian

    2016-03-01

    Multiple analyses from ATLAS and CMS collaborations, including searches for ttH production, supersymmetric particles and vector-like quarks, observed excesses in the same-sign dilepton channel containing b-jets and missing transverse energy in the LHC Run 1 data. In the context of little Higgs theories with T parity, we explain these excesses using vector-like T-odd quarks decaying into a top quark, a W boson and the lightest T-odd particle (LTP). For heavy vector-like quarks, decay topologies containing the LTP have not been searched for at the LHC. The bounds on the masses of the T-odd quarks can be estimated in a simplified model approach by adapting the search limits for top/bottom squarks in supersymmetry. Assuming a realistic decay branching fraction, a benchmark with a 750 GeV T-odd b' quark is proposed. We also comment on the possibility to fit excesses in different analyses in a common framework.

  19. The Origin of the 24-micron Excess in Red Galaxies

    Brand, Kate; Armus, Lee; Assef, Roberto J; Brown, Michael J I; Cool, Richard R; Desai, Vandana; Dey, Arjun; Floc'h, Emeric Le; Jannuzi, Buell T; Kochanek, Christopher S; Melbourne, Jason; Papovich, Casey J; Soifer, B T

    2008-01-01

    Observations with the Spitzer Space Telescope have revealed a population of red-sequence galaxies with a significant excess in their 24-micron emission compared to what is expected from an old stellar population. We identify 900 red galaxies with 0.15 < z < 0.3 showing 24-micron excesses (f24 > 0.3 mJy). We determine the prevalence of AGN and star-formation activity in all the AGES galaxies using optical line diagnostics and mid-IR color-color criteria. Using the IRAC color-color diagram from the Spitzer Shallow Survey, we find that 64% of the 24-micron excess red galaxies are likely to have strong PAH emission features in the 8-micron IRAC band. This fraction is significantly larger than the 5% of red galaxies with f24 < 0.3 mJy that are estimated to have strong PAH emission, suggesting that the infrared emission is largely due to star-formation processes. Only 15% of the 24-micron excess red galaxies have optical line diagnostics characteristic of star-formation (64% are classified as AGN and 21% are unclassifiable). The difference between the optica...

  20. A More Accurate Fourier Transform

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
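    The comparison above can be sketched numerically. The snippet below contrasts the two estimators of peak frequency on a synthetic cosine whose frequency falls between FFT bins; the sampling parameters and scan grid are illustrative choices, not the paper's setup:

```python
import numpy as np

def dft_at_freq(t, y, f):
    """Rectangle-rule approximation of the explicit Fourier integral
    Y(f) = integral y(t) exp(-2*pi*i*f*t) dt, evaluated at one frequency f."""
    dt = t[1] - t[0]                        # uniform sample spacing
    return np.sum(y * np.exp(-2j * np.pi * f * t)) * dt

# Synthetic signal: a cosine at 3.37 Hz, deliberately off the FFT bin grid.
fs, T = 100.0, 4.0                          # sample rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
f_true = 3.37
y = np.cos(2 * np.pi * f_true * t)

# FFT estimate: the peak is quantized to the bin grid (spacing 1/T = 0.25 Hz).
spec = np.abs(np.fft.rfft(y))
f_fft = np.fft.rfftfreq(len(y), 1.0 / fs)[np.argmax(spec)]

# Explicit-integral (EI) estimate: scan a fine frequency grid.
f_grid = np.arange(3.0, 4.0, 0.001)
f_ei = f_grid[np.argmax([abs(dft_at_freq(t, y, f)) for f in f_grid])]

print(f_fft, f_ei)   # the EI peak lands much closer to 3.37 Hz
```

    Because the EI scan evaluates the integral at arbitrary frequencies, it is not limited to the 1/T bin spacing of the FFT, which is the mechanism behind the accuracy differences reported above.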

  1. Minimal Dilaton Model and the Diphoton Excess

    Agarwal, Bakul; Mohan, Kirtimaan A

    2016-01-01

    In light of the recent 750 GeV diphoton excesses reported by the ATLAS and CMS collaborations, we investigate the possibility of explaining this excess using the Minimal Dilaton Model. We find that this model is able to explain the observed excess with the presence of additional top partner(s), with the same charge as the top quark, but with mass in the TeV region. First, we constrain the model parameters using, in addition to the 750 GeV diphoton signal strength, precision electroweak tests, single top production measurements, and Higgs signal strength data collected in the earlier runs of the LHC. In addition, we discuss interesting phenomenology that could arise in this model, relevant for future runs of the LHC.

  2. Excessive crying in infants with regulatory disorders.

    Maldonado-Duran, M; Sauceda-Garcia, J M

    1996-01-01

    The authors point out a correlation between regulatory disorders in infants and the problem of excessive crying. The literature describes other behavioral problems involving excessive crying in very young children, but with little emphasis on this association. The recognition and diagnosis of regulatory disorders in infants who cry excessively can help practitioners design appropriate treatment interventions. Understanding these conditions can also help parents tailor their caretaking style, so that they provide appropriate soothing and stimulation to their child. In so doing, they will be better able to develop and preserve a satisfactory parent-child relationship, as well as to maintain their own sense of competence and self-esteem as parents. PMID:8742673

  3. Singlet Scalar Resonances and the Diphoton Excess

    McDermott, Samuel D; Ramani, Harikrishnan

    2015-01-01

    ATLAS and CMS recently released the first results of searches for diphoton resonances in 13 TeV data, revealing a modest excess at an invariant mass of approximately 750 GeV. We find that it is generically possible that a singlet scalar resonance is the origin of the excess while avoiding all other constraints. We highlight some of the implications of this model and how compatible it is with certain features of the experimental results. In particular, we find that the very large total width of the excess is difficult to explain with loop-level decays alone, pointing to other interesting bounds and signals if this feature of the data persists. Finally we comment on the robust Z-gamma signature that will always accompany the model we investigate.

  4. Effects of Excess Cash, Board Attributes and Insider Ownership on Firm Value: Evidence from Pakistan

    Nadeem Ahmed Sheikh; Muhammad Imran Khan

    2016-01-01

    The purpose of this paper is to investigate whether excess cash, board attributes (i.e. board size, board independence and CEO duality) and insider ownership affect the value of the firm. Data were taken from annual reports of non-financial firms listed on the Karachi Stock Exchange (KSE) Pakistan during 2008-2012. The pooled ordinary least squares method was used to estimate the effects of excess cash and internal governance indicators on the value of the firm. Our results indicate that...
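    A pooled OLS regression of this kind can be sketched as follows. The data are synthetic, the variable names (excess cash, board size, insider ownership, a firm-value proxy q) only mirror the study's setup, and the coefficients are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200                                    # pooled firm-year observations

# Synthetic regressors (all illustrative).
excess_cash = rng.uniform(0.0, 0.3, n)     # excess cash scaled by assets
board_size = rng.integers(5, 15, n).astype(float)
insider_own = rng.uniform(0.0, 0.5, n)     # insider ownership share

# Assumed data-generating process for the firm-value proxy q.
q = (1.0 + 0.8 * excess_cash - 0.02 * board_size
     + 0.5 * insider_own + 0.05 * rng.standard_normal(n))

# Pooled OLS: stack all firm-years and regress with an intercept.
X = np.column_stack([np.ones(n), excess_cash, board_size, insider_own])
beta, *_ = np.linalg.lstsq(X, q, rcond=None)
print(np.round(beta, 3))  # [intercept, excess cash, board size, insider ownership]
```

    Pooling simply stacks all firm-year observations into one regression, ignoring panel structure; with enough observations the estimated coefficients recover the assumed effects.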

  5. Mortality attributable to excess body mass Index in Iran: Implementation of the comparative risk assessment methodology

    Shirin Djalalinia; Sahar Saeedi Moghaddam; Niloofar Peykari; Amir Kasaeian; Ali Sheidaei; Anita Mansouri; Younes Mohammadi; Mahboubeh Parsaeian; Parinaz Mehdipour; Bagher Larijani; Farshad Farzadfar

    2015-01-01

    Background: The prevalence of obesity continues to rise worldwide, with alarming rates in most countries. Our aim was to compare the mortality from fatal diseases attributable to excess body mass index (BMI) in Iran in 2005 and 2011. Methods: Using a standard implementation of the comparative risk assessment methodology, we estimated mortality attributable to excess BMI in Iranian adults of 25-65 years old, at the national and sub-national levels, for 9 attributable outcomes including: is...
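    The core arithmetic of such an attributable-mortality estimate is the population attributable fraction (PAF). A minimal sketch, with assumed prevalences and relative risks rather than the paper's Iranian data:

```python
def paf(prevalences, relative_risks):
    """Population attributable fraction for a categorical exposure:
    PAF = sum(p_i * (RR_i - 1)) / (sum(p_i * (RR_i - 1)) + 1)."""
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalences, relative_risks))
    return excess / (excess + 1.0)

# Illustrative BMI categories: overweight (25 <= BMI < 30), obese (BMI >= 30).
p = [0.35, 0.20]     # assumed prevalences in adults aged 25-65
rr = [1.3, 1.9]      # assumed relative risks of the fatal outcome
deaths = 10000       # assumed total deaths from the outcome

fraction = paf(p, rr)
attributable = fraction * deaths      # deaths attributable to excess BMI
print(round(fraction, 3), round(attributable))  # -> 0.222 2218
```

    Attributable mortality is then the PAF multiplied by total deaths from the outcome; the comparison across years repeats this with year-specific prevalences.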

  6. Median ages at stages of sexual maturity and excess weight in school children

    Luciano, Alexandre P; Benedet, Jucemar; de Abreu, Luiz Carlos; Valenti, Vitor E; de Souza Almeida, Fernando; de Vasconcelos, Francisco AG; Adami, Fernando

    2013-01-01

    Background We aimed to estimate the median ages at specific stages of sexual maturity stratified by excess weight in boys and girls. Materials and method This was a cross-sectional study made in 2007 in Florianopolis, Brazil, with 2,339 schoolchildren between 8 to 14 years of age (1,107 boys) selected at random in two steps (by region and type of school). The schoolchildren were divided into: i) those with excess weight and ii) those without excess weight, according to the WHO 2007 cut-off po...

  7. New vector bosons and the diphoton excess

    Jorge de Blas; José Santiago; Roberto Vega-Morales(Laboratoire de Physique Théorique d’Orsay, UMR8627-CNRS, Université Paris-Sud 11, F-91405 Orsay Cedex, France)

    2016-01-01

    We consider the possibility that the recently observed diphoton excess at ∼750 GeV can be explained by the decay of a scalar particle (φ) to photons. If the scalar is the remnant of a symmetry-breaking sector of some new gauge symmetry, its coupling to photons can be generated by loops of the charged massive vectors of the broken symmetry. If these new W′ vector bosons carry color, they can also generate an effective coupling to gluons. In this case the diphoton excess could be entirely expla...

  8. Diphoton Excess as a Hidden Monopole

    Yamada, Masaki; Yonekura, Kazuya

    2016-01-01

    We provide a theory with a monopole of a strongly-interacting hidden U(1) gauge symmetry that can explain the 750-GeV diphoton excess reported by ATLAS and CMS. The excess results from the resonance of the monopole, which is produced via gluon fusion and decays into two photons. At low energy, our model contains only mesons and a monopole, because baryons cannot be gauge invariant under a strongly interacting Abelian symmetry. This is an advantage of our model because there are no unwanted relics around the BBN epoch.

  9. Accurate paleointensities - the multi-method approach

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combing all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  10. Infrared excesses in early-type stars - Gamma Cassiopeiae

    Scargle, J. D.; Erickson, E. F.; Witteborn, F. C.; Strecker, D. W.

    1978-01-01

    Spectrophotometry of the classical Be star Gamma Cas (1-4 microns, with about 2% spectral resolution) is presented. These data, together with existing broad-band observations, are accurately described by simple isothermal LTE models for the IR excess which differ from most previously published work in three ways: (1) hydrogenic bound-free emission is included; (2) the attenuation of the star by the shell is included; and (3) no assumption is made that the shell contribution is negligible in some bandpass. It is demonstrated that the bulk of the IR excess consists of hydrogenic bound-free and free-free emission from a shell of hot ionized hydrogen gas, although a small thermal component cannot be ruled out. The bound-free emission is strong, and the Balmer, Paschen, and Brackett discontinuities are correctly represented by the shell model with physical parameters as follows: a shell temperature of approximately 18,000 K, an optical depth (at 1 micron) of about 0.5, an electron density of approximately 1 trillion per cu cm, and a size of about 2 trillion cm. Phantom shells (i.e., ones which do not alter the observed spectrum of the underlying star) are discussed.

  11. Burden of Growth Hormone Deficiency and Excess in Children.

    Fideleff, Hugo L; Boquete, Hugo R; Suárez, Martha G; Azaretzky, Miriam

    2016-01-01

    Longitudinal growth results from multifactorial and complex processes that take place in the context of different genetic traits and environmental influences. Thus, in view of the difficulties in comprehension of the physiological mechanisms involved in the achievement of normal height, our ability to make a definitive diagnosis of GH impairment still remains limited. There is a myriad of controversial aspects in relation to GH deficiency, mainly related to diagnostic controversies and advances in molecular biology. This might explain the diversity in therapeutic responses and may also serve as a rationale for new "nonclassical" treatment indications for GH. It is necessary to acquire more effective tools to reach an adequate evaluation, particularly while considering the long-term implications of a correct diagnosis, the cost, and safety of treatments. On the other hand, overgrowth constitutes a heterogeneous group of different pathophysiological situations including excessive somatic and visceral growth. There are overlaps in clinical and molecular features among overgrowth syndromes, which constitute the real burden for an accurate diagnosis. In conclusion, both GH deficiency and overgrowth are a great dilemma, still not completely solved. In this chapter, we review the most burdensome aspects related to short stature, GH deficiency, and excess in children, avoiding any details about well-known issues that have been extensively discussed in the literature. PMID:26940390

  12. Low excess air operations of oil boilers

    Butcher, T.A.; Celebi, Y.; Litzke, Wai Lin [Brookhaven National Labs., Upton, NY (United States)

    1997-09-01

    To quantify the benefits that operation at very low excess air may have on heat exchanger fouling, BNL has recently started a test project. The test allows simultaneous measurement of the fouling rate, flue gas filterable soot, flue gas sulfuric acid content, and flue gas sulfur dioxide.

  13. ORIGIN OF EXCESS (176)Hf IN METEORITES

    Thrane, Kristine; Connelly, James Norman; Bizzarro, Martin;

    2010-01-01

    After considerable controversy regarding the (176)Lu decay constant (lambda(176)Lu), there is now widespread agreement that (1.867 +/- 0.008) x 10(-11) yr(-1) as confirmed by various terrestrial objects and a 4557 Myr meteorite is correct. This leaves the (176)Hf excesses that are correlated with...

  14. 34 CFR 668.166 - Excess cash.

    2010-07-01

    ... funds that an institution receives from the Secretary under the just-in-time payment method. (b) Excess...; and (2) Providing funds to the institution under the reimbursement payment method or cash monitoring payment method described in § 668.163(d) and (e), respectively. (Authority: 20 U.S.C. 1094)...

  15. 43 CFR 426.12 - Excess land.

    2010-10-01

    ... irrigation water by entering into a recordable contract with the United States if the landowner qualifies... irrigation water because the landowner becomes subject to the discretionary provisions as provided in § 426.3... this section; or (iii) The excess land becomes eligible to receive irrigation water as a result...

  16. Excessive prices as abuse of dominance?

    la Cour, Lisbeth; Møllgaard, Peter

    2007-01-01

    firm abused its position by charging excessive prices. We also test whether tightening of the Danish competition act has altered the pricing behaviour on the market. We discuss our results in the light of a Danish competition case against the dominant cement producer that was abandoned by the authority...

  17. Software Cost Estimation Review

    Ongere, Alphonce

    2013-01-01

    Software cost estimation is the process of predicting the effort, the time and the cost required to complete a software project successfully. It involves size measurement of the software project to be produced, estimating and allocating the effort, drawing up the project schedules, and finally, estimating the overall cost of the project. Accurate estimation of software project cost is an important factor for business and the welfare of a software organization in general. If cost and effort estimat...

  18. 38 CFR 4.46 - Accurate measurement.

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. Stroke Volume estimation using aortic pressure measurements and aortic cross sectional area: Proof of concept.

    Kamoi, S; Pretty, C G; Chiew, Y S; Pironet, A; Davidson, S; Desaive, T; Shaw, G M; Chase, J G

    2015-08-01

    Accurate Stroke Volume (SV) monitoring is essential for patients with cardiovascular dysfunction. However, direct SV measurements are not clinically feasible due to the highly invasive nature of measurement devices. Current devices for indirect monitoring of SV are shown to be inaccurate during sudden hemodynamic changes. This paper presents a novel SV estimation using readily available aortic pressure measurements and aortic cross sectional area, using data from a porcine experiment in which medical interventions such as fluid replacement, dobutamine infusions, and recruitment maneuvers induced SV changes in a pig with circulatory shock. Measurements of left ventricular volume, proximal aortic pressure, and descending aortic pressure waveforms were made simultaneously during the experiment. From the measured data, proximal aortic pressure was separated into reservoir and excess pressures. Beat-to-beat aortic characteristic impedance values were calculated using both aortic pressure measurements and an estimate of the aortic cross sectional area. SV was estimated using the calculated aortic characteristic impedance and the excess pressure component of the proximal aorta. The median difference between directly measured SV and estimated SV was -1.4 ml, with 95% limits of agreement of +/- 6.6 ml. This method demonstrates that SV can be accurately captured beat-to-beat during sudden changes in hemodynamic state. This novel SV estimation could enable improved cardiac and circulatory treatment in the critical care environment by titrating treatment to its effect on SV. PMID:26736434
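    A minimal sketch of the final SV calculation described above, using the standard reservoir-excess pressure relations (flow is approximately excess pressure divided by characteristic impedance, and SV is flow integrated over the beat). The waveform and impedance value below are illustrative assumptions, not the experimental data:

```python
import numpy as np

def stroke_volume(p_excess, zc, dt):
    """Estimate stroke volume (ml) for one beat.

    p_excess : sampled excess-pressure waveform over the beat (mmHg)
    zc       : aortic characteristic impedance (mmHg*s/ml)
    dt       : sampling interval (s)
    """
    q = np.asarray(p_excess) / zc        # flow waveform, ml/s
    return float(np.sum(q) * dt)         # rectangle-rule integral over the beat

# Synthetic beat: half-sine excess pressure over a 0.3 s ejection period.
dt = 0.005
t = np.arange(0.0, 0.3, dt)
p_exc = 40.0 * np.sin(np.pi * t / 0.3)   # peak 40 mmHg (assumed)
zc = 0.12                                # mmHg*s/ml (assumed)

sv = stroke_volume(p_exc, zc, dt)
print(round(sv, 1), "ml")
```

    In the paper's method, zc itself is recomputed beat-to-beat from the aortic pressure measurements and the cross-sectional-area estimate, which is what lets the estimate track sudden hemodynamic changes.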

  20. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Mean Infiltration-Excess Overland Flow, 2002

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean value for infiltration-excess overland flow as estimated by the watershed model TOPMODEL, compiled for every catchment of...

  1. Fast and Provably Accurate Bilateral Filtering.

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
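    For reference, the target computation, i.e. the direct O(S)-per-pixel Gaussian bilateral filter that the fast algorithm approximates, can be sketched as follows; the window size and kernel parameters are illustrative:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, W=5):
    """Brute-force bilateral filter with Gaussian spatial and range kernels."""
    H, Wd = img.shape
    out = np.zeros_like(img)
    ax = np.arange(-W, W + 1)
    gx, gy = np.meshgrid(ax, ax)
    spatial = np.exp(-(gx**2 + gy**2) / (2.0 * sigma_s**2))  # fixed spatial kernel
    pad = np.pad(img, W, mode='edge')
    for i in range(H):
        for j in range(Wd):
            patch = pad[i:i + 2*W + 1, j:j + 2*W + 1]
            # Range kernel: down-weight pixels that differ from the center value.
            rng = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            w = spatial * rng
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

# Edge-preserving smoothing of a noisy step edge.
gen = np.random.default_rng(0)
step = np.hstack([np.zeros((16, 16)), np.ones((16, 16))])
noisy = step + 0.05 * gen.standard_normal(step.shape)
smoothed = bilateral_filter(noisy)  # noise reduced, step edge kept sharp
```

    The paper's algorithm reproduces this output with N+1 spatial filterings of pointwise-transformed images, removing the dependence on the window size S.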

  2. Quark Seesaw Vectorlike Fermions and Diphoton Excess

    Dev, P S Bhupal; Zhang, Yongchao

    2015-01-01

    We present a possible interpretation of the recent diphoton excess reported by the $\\sqrt s=13$ TeV LHC data in quark seesaw left-right models with vectorlike fermions proposed to solve the strong $CP$ problem without the axion. The gauge singlet real scalar field responsible for the mass of the vectorlike fermions has the right production cross section and diphoton branching ratio to be identifiable with the reported excess at around 750 GeV diphoton invariant mass. Various ways to test this hypothesis as more data accumulates at the LHC are proposed. In particular, we find that for our interpretation to work, there is an upper limit on the right-handed scale $v_R$, which depends on the Yukawa coupling of singlet Higgs field to the vectorlike fermions.

  3. Excess plutonium disposition: The deep borehole option

    This report reviews the current status of technologies required for the disposition of plutonium in Very Deep Holes (VDH). It is in response to a recent National Academy of Sciences (NAS) report which addressed the management of excess weapons plutonium and recommended three approaches to the ultimate disposition of excess plutonium: (1) fabrication and use as a fuel in existing or modified reactors in a once-through cycle, (2) vitrification with high-level radioactive waste for repository disposition, (3) burial in deep boreholes. As indicated in the NAS report, substantial effort would be required to address the broad range of issues related to deep bore-hole emplacement. Subjects reviewed in this report include geology and hydrology, design and engineering, safety and licensing, policy decisions that can impact the viability of the concept, and applicable international programs. Key technical areas that would require attention should decisions be made to further develop the borehole emplacement option are identified

  4. Scalar explanation of diphoton excess at LHC

    Han, Huayong; Wang, Shaoming; Zheng, Sibo

    2016-06-01

    Inspired by the diphoton signal excess observed in the latest data of the 13 TeV LHC, we consider either a 750 GeV real scalar or a pseudo-scalar responsible for this anomaly. We propose a concrete vector-like quark model, in which the vector-like fermion pairs couple directly to this scalar via Yukawa interactions. In this setting the scalar is mainly produced via gluon fusion, then decays at the one-loop level to the SM diboson channels gg, γγ, ZZ, WW. We show that for vector-like fermion pairs with exotic electric charges, such a model can account for the diphoton excess and is simultaneously consistent with the 8 TeV LHC data in the context of a perturbative analysis.

  5. Excess plutonium disposition: The deep borehole option

    Ferguson, K.L.

    1994-08-09

    This report reviews the current status of technologies required for the disposition of plutonium in Very Deep Holes (VDH). It is in response to a recent National Academy of Sciences (NAS) report which addressed the management of excess weapons plutonium and recommended three approaches to the ultimate disposition of excess plutonium: (1) fabrication and use as a fuel in existing or modified reactors in a once-through cycle, (2) vitrification with high-level radioactive waste for repository disposition, (3) burial in deep boreholes. As indicated in the NAS report, substantial effort would be required to address the broad range of issues related to deep bore-hole emplacement. Subjects reviewed in this report include geology and hydrology, design and engineering, safety and licensing, policy decisions that can impact the viability of the concept, and applicable international programs. Key technical areas that would require attention should decisions be made to further develop the borehole emplacement option are identified.

  6. A scalar hint from the diboson excess?

    Cacciapaglia, Giacomo; Hashimoto, Michio

    2015-01-01

    In view of the recent diboson resonant excesses reported by both ATLAS and CMS, we suggest that a new weak-singlet pseudo-scalar particle, $\eta_{WZ}$, may decay into two weak bosons while being produced in gluon fusion at the LHC. The couplings to gauge bosons can arise as in the case of a Wess-Zumino-Witten anomaly term, thus providing a well-motivated phenomenological model in which the presently observed excess in the diboson channel at the LHC can be studied. In models where the pseudo-scalar arises as a composite state, the coefficients of the anomalous couplings can be related to the fermion components of the underlying dynamics. We provide an example to test the feasibility of the idea.

  7. Excess credit and the South Korean crisis

    Panicos O. Demetriades; Fattouh, Bassam A.

    2006-01-01

    We provide a novel empirical analysis of the South Korean credit market that reveals large volumes of excess credit since the late 1970s, indicating that a sizeable proportion of total credit was being used to refinance unprofitable projects. Our findings are consistent with theoretical literature that suggests that soft budget constraints and over-borrowing were significant factors behind the Korean financial crisis of 1997-98.

  8. IDENTIFICATION OF RIVER WATER EXCESSIVE POLLUTION SOURCES

    K. J. KACHIASHVILI; D. I. MELIKDZHANIAN

    2006-01-01

    This paper describes a program package for identification of sources of excessive river-water pollution located between two monitored cross-sections of a river. The software has been developed by the authors on the basis of mathematical models of pollutant transport in rivers and statistical hypothesis testing methods. The identification algorithms were developed under the assumption that the pollution sources discharge different compositions of pollutants or (at the identical c...

  9. Pharmacotherapy of Excessive Sleepiness: Focus on Armodafinil

    Michael Russo

    2009-01-01

    Excessive sleepiness (ES) is responsible for significant morbidity and mortality due to its association with cardiovascular disease, cognitive impairment, and occupational and transport accidents. ES is also detrimental to patients’ quality of life, as it affects work and academic performance, social interactions, and personal relationships. Armodafinil is the R-enantiomer of the established wakefulness-promoting agent modafinil, which is a racemic mixture of both the R- and S-enantiomers. R-...

  10. Excess Higgs Production in Neutralino Decays

    Howe, Kiel; Saraswat, Prashant

    2012-01-01

    The ATLAS and CMS experiments have recently claimed discovery of a Higgs boson-like particle at ~5 sigma confidence and are beginning to test the Standard Model predictions for its production and decay. In a variety of supersymmetric models, a neutralino NLSP can decay dominantly to the Higgs and the LSP. In natural SUSY models, a light third generation squark decaying through this chain can lead to large excess Higgs production while evading existing BSM searches. Such models can be observed...

  11. Contrast induced hyperthyroidism due to iodine excess

    Mushtaq, Usman; Price, Timothy; Laddipeerla, Narsing; Townsend, Amanda; Broadbridge, Vy

    2009-01-01

    Iodine induced hyperthyroidism is a thyrotoxic condition caused by exposure to excessive iodine. Historically this type of hyperthyroidism has been described in areas of iodine deficiency. With advances in medicine, iodine induced hyperthyroidism has been observed following the use of drugs containing iodine—for example, amiodarone, and contrast agents used in radiological imaging. In elderly patients it is frequently difficult to diagnose and control contrast related hyperthyroidism, as most...

  12. Leptophilic dark matter in Galactic Center excess

    Lu, Bo-Qiang; Zong, Hong-Shi

    2016-04-01

    Herein we explore the possibility of explaining the gamma-ray excess in the Galactic Center within a dark matter scenario. After taking into account the constraints from both the AMS-02 experiment and the Fermi-LAT gamma-ray observations of dwarf spheroidal satellite galaxies, we find that the τ lepton channel is the only permitted channel for interpreting the Galactic Center excess. Tau-leptophilic dark matter provides a well-motivated framework in which the dark matter can couple dominantly to the τ lepton at tree level. We describe the interactions in a general effective field theory approach using higher-dimensional operators, which allows for a model-independent analysis. We consider the constraints from the measurement of the DM relic density by the Planck experiment and from the AMS-02 cosmic-ray experiment, and find that most of the interaction operators, except O7, O9 and O12, are excluded. Due to quantum fluctuations, even in such a scenario there are loop-induced dark matter-nucleon interactions. We calculate the dark matter-nucleon scattering cross-section at loop level, and when the limits on this cross-section from direct detection experiments are also taken into account, we find that the operators remaining available to account for the Galactic Center excess are O9 and O12.

  13. Negative excess noise in gated quantum wires

    The electrical current noise of a quantum wire is expected to increase with increasing applied voltage. We show that this intuition can be wrong. Specifically, we consider a single-channel quantum wire with impurities and with a capacitive coupling to a metallic gate, and find that its excess noise, defined as the change in the noise caused by the finite voltage, can be negative at zero temperature. This feature is present both for large (c >> cq) and small (c << cq) capacitive coupling, where c is the geometrical and cq the quantum capacitance of the wire. In particular, for c >> cq, negativity of the excess noise can occur at finite frequency when the transmission coefficients are energy dependent, i.e. in the presence of Fabry-Perot resonances or band curvature. In the opposite regime c << cq, a nontrivial voltage dependence of the noise arises even for energy-independent transmission coefficients: at zero frequency the noise decreases with voltage as a power law when c < cq/3, while, at finite frequency, regions of negative excess noise are present due to Andreev-type resonances.

  14. Search for bright stars with infrared excess

    Bright stars, stars with visual magnitude smaller than 6.5, can be studied using a small telescope. In general, if stars are assumed to radiate as black bodies, then the color in the infrared (IR) region should be equal to zero. Infrared data from IRAS observations at 12 and 25 μm with good flux quality are used to search for bright stars (from the Bright Star Catalogue) with infrared excess. In magnitude scale, stars with IR excess are defined as stars with IR color m12 − m25 > 0, where m12 − m25 = −2.5 log(F12/F25) + 1.56, and F12 and F25 are the flux densities in Jansky at 12 and 25 μm, respectively. Stars of similar spectral type are expected to have similar colors. The existence of infrared excess within the same spectral type indicates the existence of circumstellar dust, the origin of which is probably the remnant of pre-main-sequence evolution during star formation, post-AGB evolution, or a physical process such as the rotation of those stars.
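The color criterion above amounts to a one-line calculation on catalog fluxes; a minimal sketch in Python (the flux values in the example are hypothetical, not drawn from the catalogues):

```python
import math

def ir_color(f12_jy, f25_jy):
    """IR color m12 - m25 from IRAS flux densities (Jansky) at 12 and 25 micron,
    using the zero-point offset quoted in the abstract."""
    return -2.5 * math.log10(f12_jy / f25_jy) + 1.56

def has_ir_excess(f12_jy, f25_jy):
    """A star is flagged as having IR excess when m12 - m25 > 0."""
    return ir_color(f12_jy, f25_jy) > 0.0

# Hypothetical star: F12 = 2.0 Jy, F25 = 1.0 Jy
# m12 - m25 = -2.5*log10(2.0) + 1.56 ≈ 0.81 > 0, so it would be flagged.
```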

  15. Effects of Excess Cash, Board Attributes and Insider Ownership on Firm Value: Evidence from Pakistan

    Nadeem Ahmed Sheikh

    2016-04-01

    The purpose of this paper is to investigate whether excess cash, board attributes (i.e., board size, board independence and CEO duality) and insider ownership affect the value of the firm. Data were taken from annual reports of non-financial firms listed on the Karachi Stock Exchange (KSE), Pakistan, during 2008-2012. A pooled ordinary least squares method was used to estimate the effects of excess cash and internal governance indicators on the value of the firm. Our results indicate that excess cash is significantly negatively related to firm value. Excess cash along with board size is significant and negatively related to firm value. Excess cash along with insider ownership is significant and negatively related to firm value. The control variables leverage and dividends are positively, while firm size is negatively, related to firm value in all regressions. In sum, the empirical results indicate that excess cash, board size and insider ownership have material effects on the value of the firm. Moreover, the findings of this study help managers understand the impact of excess cash and internal governance measures on firm value, and provide support to regulatory authorities in framing regulations that improve the level of corporate governance in the country.

  16. Convergence Insufficiency/Divergence Insufficiency Convergence Excess/Divergence Excess: Some Facts and Fictions

    Edward Khawam; Bachir Abiad; Alaa Boughannam; Joanna Saade; Ramzi Alameddine

    2015-01-01

    Great discrepancies are often encountered between the distance fixation and the near-fixation esodeviations and exodeviations. They are all attributed to either anomalies of the AC/A ratio or anomalies of the fusional convergence or divergence amplitudes. We report a case with pseudoconvergence insufficiency and another one with pseudoaccommodative convergence excess. In both cases, conv./div. excess and insufficiency were erroneously attributed to anomalies of the AC/A ratio or to anomalies ...

  17. Can clinicians accurately assess esophageal dilation without fluoroscopy?

    Bailey, A D; Goldner, F

    1990-01-01

    This study questioned whether clinicians could determine the success of esophageal dilation accurately without the aid of fluoroscopy. Twenty patients were enrolled with the diagnosis of distal esophageal stenosis, including benign peptic stricture (17), Schatzki's ring (2), and squamous cell carcinoma of the esophagus (1). Dilation attempts using only Maloney dilators were monitored fluoroscopically by the principal investigator, with the physician and patient unaware of the findings. Physicians then predicted whether or not their dilations were successful, and they examined various features to determine their usefulness in predicting successful dilation. They were able to predict successful dilation accurately in 97% of the cases studied; however, their predictions of unsuccessful dilation were correct only 60% of the time. Features helpful in predicting passage included easy passage of the dilator (98%) and the patient feeling the dilator in the stomach (95%). Excessive resistance suggesting unsuccessful passage was an unreliable feature and was often due to the dilator curling in the stomach. When Maloney dilators are used to dilate simple distal strictures, if the physician predicts successful passage, he is reliably accurate without the use of fluoroscopy; however, if unsuccessful passage is suspected, fluoroscopy must be used for confirmation. PMID:2210278

  18. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture.

    Gao, Zhiquan; Yu, Yao; Zhou, Yu; Du, Sidan

    2015-01-01

    Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain. PMID:26402681

  19. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture

    Zhiquan Gao

    2015-09-01

    Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain.

  20. Identifying opportunities to reduce excess nitrogen in croplands while maintaining current crop yields

    West, P. C.; Mueller, N. D.; Foley, J. A.

    2011-12-01

    Use of synthetic nitrogen fertilizer has greatly contributed to the increased crop yields brought about by the Green Revolution. Unfortunately, it has also contributed to substantial excess nitrogen in the environment. Application of excess nitrogen is not only a waste of the energy and other resources used to produce, transport and apply it; it also pollutes aquatic ecosystems and has led to the development of more than 200 hypoxic, or "dead," zones in coastal areas around the world. How can we decrease use of excess nitrogen without compromising crop yields? To help address this challenge, our study (1) quantified hot spots of excess nitrogen, and (2) estimated how much nitrogen reduction is possible in these areas while still maintaining yields. We estimated excess nitrogen for major crops using a mass balance approach and global spatial data sets of crop area and yield, fertilizer application rates, and nitrogen deposition. Hot spots of excess nitrogen were identified by quantifying the smallest area within large river basins that contributed 25% and 50% of the total load within each basin. Nitrogen reduction scenarios were developed using a yield response model to estimate the nitrogen application rates needed to maintain current yields. Our research indicated that excess nitrogen is concentrated in very small portions of croplands within river basins, with 25% of the total nitrogen load in each basin coming from ~10% of the cropland, and 50% from ~25% of the cropland. Targeting reductions in application rates in these hot spots can allow us to maintain current crop yields while greatly reducing nitrogen loading to coastal areas and creating the opportunity to reallocate resources to boost yields on nitrogen-limited croplands elsewhere.
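The mass-balance approach described above can be sketched as a simple nitrogen budget; the field values and the per-tonne nitrogen-removal coefficient below are hypothetical placeholders, not the study's data:

```python
def excess_nitrogen(fertilizer_kg_ha, deposition_kg_ha, fixation_kg_ha,
                    yield_t_ha, n_removed_per_t):
    """Simple mass balance: excess N = N inputs - N removed in harvested crop.
    All input rates in kg N per hectare; yield in tonnes per hectare."""
    inputs = fertilizer_kg_ha + deposition_kg_ha + fixation_kg_ha
    removed = yield_t_ha * n_removed_per_t
    return inputs - removed

# Hypothetical maize field: 180 kg/ha fertilizer, 10 kg/ha deposition,
# no biological fixation, 9 t/ha yield removing ~14 kg N per tonne of grain.
surplus = excess_nitrogen(180.0, 10.0, 0.0, 9.0, 14.0)  # 190 - 126 = 64 kg N/ha
```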

  1. Excess mortality associated with influenza epidemics in Portugal, 1980 to 2004.

    Baltazar Nunes

    BACKGROUND: Influenza epidemics have a substantial impact on human health, by increasing mortality from pneumonia and influenza, respiratory and circulatory diseases, and all causes. This paper provides estimates of excess mortality rates associated with influenza virus circulation for 7 causes of death and 8 age groups in Portugal during the period 1980-2004. METHODOLOGY/PRINCIPAL FINDINGS: We compiled monthly mortality time series data by age for all-cause mortality, cerebrovascular diseases, ischemic heart diseases, diseases of the respiratory system, chronic respiratory diseases, and pneumonia and influenza. We also used a control outcome, deaths from injuries. Age- and cause-specific baseline mortality was modelled by the ARIMA approach; excess deaths attributable to influenza were calculated by subtracting expected deaths from observed deaths during influenza epidemic periods. Influenza was associated with a seasonal average of 24.7 all-cause excess deaths per 100,000 inhabitants, approximately 90% of which were among seniors over 65 years. Excess mortality was 3-6 fold higher during seasons dominated by the A(H3N2) subtype than seasons dominated by A(H1N1)/B. A high excess mortality impact was also seen in children under the age of four years. Seasonal excess mortality rates from all the studied causes of death were highly correlated with each other (Pearson correlation range, 0.65 to 0.95; P < 0.05). By contrast, there was no correlation with excess mortality from injuries. CONCLUSIONS/SIGNIFICANCE: Our excess mortality approach is specific to influenza virus activity and produces influenza-related mortality rates for Portugal that are similar to those published for other countries. Our results indicate that all-cause excess mortality is a robust indicator of influenza burden in Portugal, and could be used to monitor the impact of influenza epidemics in this country. Additional studies are warranted to confirm these findings in other ...
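The observed-minus-expected calculation described above can be sketched as follows; the study fits an ARIMA baseline, which is replaced here by a simple per-calendar-month mean over non-epidemic years, and the death counts are invented toy data:

```python
from statistics import mean

def excess_deaths(observed, epidemic_months):
    """Excess mortality = observed - expected deaths during epidemic months.

    observed: dict mapping (year, month) -> deaths.
    epidemic_months: set of (year, month) flagged as influenza epidemic periods.
    Expected deaths for a month are the mean of the same calendar month in
    non-epidemic periods (a stand-in for the study's ARIMA baseline).
    """
    baseline = {}
    for m in range(1, 13):
        vals = [d for (y, mo), d in observed.items()
                if mo == m and (y, mo) not in epidemic_months]
        baseline[m] = mean(vals) if vals else 0.0
    return sum(observed[(y, m)] - baseline[m] for (y, m) in epidemic_months)

# Invented toy data: January deaths over three years, epidemic in 2003.
obs = {(2001, 1): 900.0, (2002, 1): 920.0, (2003, 1): 1100.0}
print(excess_deaths(obs, {(2003, 1)}))  # 1100 - mean(900, 920) = 190.0
```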

  2. [Excess weight and disability among the elderly in Argentina].

    Monteverde, Malena

    2015-12-01

    The aim of this paper is to analyze the relationship between excess weight and the condition of disability among elderly people in Argentina and to assess the extent to which a protective factor could be operating that reduces or mitigates the effect of overweight on the loss of functional skills in people over 64 years of age. In order to do so, microdata from Argentina's 2009 National Survey of Risk Factors [Encuesta Nacional de Factores de Riesgo] was utilized. To measure the association among overweight, obesity and disability status, as well as the interaction of weight status and age, logistic regression models were estimated. The results indicate that although overweight and obesity have a positive net effect on the occurrence of disabilities, this effect is lower among people 64 years of age and older. This result could be suggesting that among older people a protective factor is at work that, while not reversing the direct relationship between excess weight and disability, seems to attenuate it. PMID:26676594

  3. Excess-electron and excess-hole states of charged alkali halide clusters

    Honea, Eric C.; Homer, Margie L.; Whetten, R. L.

    1990-12-01

    Charged alkali halide clusters from a He-cooled laser vaporization source have been used to investigate two distinct cluster states corresponding to the excess-electron and excess-hole states of the crystal. The production method is UV-laser vaporization of an alkali metal rod into a halogen-containing He flow stream, resulting in variable cluster composition and cooling sufficient to stabilize weakly bound forms. Detection of charged clusters is accomplished without subsequent ionization by pulsed-field time-of-flight mass spectrometry of the skimmed cluster beam. Three types of positively charged sodium fluoride cluster are observed, each corresponding to a distinct physical situation: Na_nF^+_(n-1) (purely ionic form), Na_(n+1)F^+_(n-1) (excess-electron form), and Na_nF^+_n (excess-hole form). The purely ionic clusters exhibit an abundance pattern similar to that observed in sputtering and fragmentation experiments and are explained by the stability of completed cubic microlattice structures. The excess-electron clusters, in contrast, exhibit very strong abundance maxima at n = 13 and 22, corresponding to the all-odd series (2n + 1 = j×k×l; j, k, l odd). Their high relative stability is explained by the ease of Na(0) loss except when the excess electron localizes in a lattice site to complete a cuboid structure. These may correspond to the internal F-center state predicted earlier. A localized electron model incorporating structural simulation results can account for the observed pattern. The excess-hole clusters, which had been proposed as intermediates in the ionization-induced fragmentation of neutral alkali halide clusters (AHCs), exhibit a smaller variation in stability, indicating that the hole might not be well localized.

  4. Quirky explanations for the diphoton excess

    Curtin, David; Verhaaren, Christopher B.

    2016-03-01

    We propose two simple quirk models to explain the recently reported 750 GeV diphoton excesses at ATLAS and CMS. It is already well known that a real singlet scalar ϕ with Yukawa couplings ϕX̄X to vectorlike fermions X with mass m_X > m_ϕ/2 can easily explain the observed signal, provided X carries both SM color and electric charge. We instead consider first the possibility that the pair production of a fermion, charged under both SM gauge groups and a confining SU(3)_v gauge group, is responsible. If pair produced it forms a quirky bound state, which promptly annihilates into gluons, photons, v-gluons and possibly SM fermions. This is an extremely minimal model to explain the excess, but is already in some tension with existing displaced searches, as well as dilepton and dijet resonance bounds. We therefore propose a hybrid quirk-scalar model, in which the fermion of the simple ϕX̄X toy model is charged under the additional SU(3)_v confining gauge group. Constraints on the new heavy fermion X are then significantly relaxed. The main additional signals of this model are possible dilepton, dijet and diphoton resonances at ~2 TeV or more from quirk annihilation, and the production of v-glueballs through quirk annihilation and ϕ decay. The glueballs can give rise to spectacular signatures, including displaced vertices and events with leptons, photons and Z-bosons. If the quirk-scalar model is responsible for the 750 GeV excess it should be discovered in one of these channels with 20 or 300 fb^-1 of LHC Run 2 data.

  5. Di-photon excess illuminates dark matter

    Backović, Mihailo; Mariotti, Alberto; Redigolo, Diego

    2016-01-01

    We propose a simplified model of dark matter with a scalar mediator to accommodate the di-photon excess recently observed by the ATLAS and CMS collaborations. Decays of the resonance into dark matter can easily account for a relatively large width of the scalar resonance, while the magnitude of the total width combined with the constraint on dark matter relic density leads to sharp predictions on the parameters of the Dark Sector. Under the assumption of a rather large width, the model predic...

  6. Diboson Excess from a New Strong Force

    Georgi, Howard; Nakai, Yuichiro

    2016-01-01

    We explore a "partial unification" model that could explain the diphoton event excess around $750\,\rm GeV$ recently reported by the LHC experiments. A new strong gauge group is combined with the ordinary color and hypercharge gauge groups. The VEV responsible for the combination is of the order of the $SU(2)\times U(1)$ breaking scale, but the coupling of the new physics to standard model particles is suppressed by the strong interaction of the new gauge group. This simple extension of the...

  7. Subcorneal hematomas in excessive video game play.

    Lennox, Maria; Rizzo, Jason; Lennox, Luke; Rothman, Ilene

    2016-01-01

    We report a case of subcorneal hematomas caused by excessive video game play in a 19-year-old man. The hematomas occurred in a setting of thrombocytopenia secondary to induction chemotherapy for acute myeloid leukemia. It was concluded that thrombocytopenia subsequent to prior friction from heavy use of a video game controller allowed for traumatic subcorneal hemorrhage of the hands. Using our case as a springboard, we summarize other reports with video game associated pathologies in the medical literature. Overall, cognizance of the popularity of video games and related pathologies can be an asset for dermatologists who evaluate pediatric patients. PMID:26919354

  8. Quirky Explanations for the Diphoton Excess

    Curtin, David; Verhaaren, Christopher B.

    2015-01-01

    We propose two simple quirk models to explain the recently reported 750 GeV diphoton excesses at ATLAS and CMS. It is already well-known that a real singlet scalar $\phi$ with Yukawa couplings $\phi \bar X X$ to vector-like fermions $X$ with mass $m_X > m_\phi/2$ can easily explain the observed signal, provided $X$ carries both SM color and electric charge. We instead consider first the possibility that the pair production of a fermion, charged under both SM gauge groups and a confining $SU(3...

  9. Strategy Guideline. Accurate Heating and Cooling Load Calculations

    Burdick, Arlan [IBACOS, Inc., Pittsburgh, PA (United States)

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  10. Strategy Guideline: Accurate Heating and Cooling Load Calculations

    Burdick, A.

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  11. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Mark Shortis

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation a...

  12. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other method is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1dB) of separate radiometric measurements.

  13. A multiple more accurate Hardy-Littlewood-Polya inequality

    Qiliang Huang

    2012-11-01

    By introducing multi-parameters and conjugate exponents and using Euler-Maclaurin's summation formula, we estimate the weight coefficient and prove a multiple more accurate Hardy-Littlewood-Polya (H-L-P) inequality, which is an extension of some earlier published results. We also prove that the constant factor in the new inequality is the best possible, and obtain its equivalent forms.

  14. An accurate and robust gyroscope-based pedometer.

    Lim, Yoong P; Brown, Ian T; Khoo, Joshua C T

    2008-01-01

    Pedometers are known to have step-estimation issues, mainly attributable to their inherent acceleration-based sensing. A pedometer based on a micro-machined gyroscope (which has better immunity to acceleration) is proposed. Through syntactic data recognition using a priori knowledge of the human shank's dynamics, and temporally precise detection of heel strikes enabled by wavelet decomposition, an accurate and robust pedometer is obtained. PMID:19163737

  15. Connecting the LHC diphoton excess to the Galactic center gamma-ray excess

    Huang, Xian-Jun; Zhou, Yu-Feng

    2016-01-01

    The recent LHC Run-2 data have shown a possible excess in diphoton events, suggesting the existence of a new resonance $\phi$ with mass $M\sim 750$~GeV. If $\phi$ plays the role of a portal particle connecting the Standard Model and the invisible dark sector, the diphoton excess should be correlated with another photon excess, namely, the excess in the diffuse gamma rays towards the Galactic center, which can be interpreted by the annihilation of dark matter (DM). We investigate the necessary conditions for a consistent explanation of the two photon excesses, especially the requirement on the width-to-mass ratio $\Gamma/M$ and the $\phi$ decay channels, in a collection of DM models where the DM particle can be scalar, fermionic or vector, and $\phi$ can be generated through $s$-channel $gg$ fusion or $q\bar q$ annihilation. We show that the minimally required $\Gamma/M$ is determined by a single parameter proportional to $(m_{\chi}/M)^{n}$, where the integer $n$ depends on the nature of the DM particle. We fi...

  16. Effective interpretations of a diphoton excess

    Berthier, Laure; Shepherd, William; Trott, Michael

    2015-01-01

    We discuss some consistency tests that must be passed for a successful explanation of a diphoton excess at larger mass scales, generated by a scalar or pseudoscalar state, possibly of a composite nature, decaying to two photons. Scalar states at mass scales above the electroweak scale decaying significantly into photon final states generically lead to modifications of Standard Model Higgs phenomenology. We characterise this effect using the formalism of Effective Field Theory (EFT) and study the modification of the effective couplings to photons and gluons of the Higgs. The modification of Higgs phenomenology comes about in a variety of ways. For scalar $0^+$ states, the Higgs and the heavy boson can mix. Lower energy phenomenology gives a limit on the mixing angle, which gets generated at one loop in any such theory explaining the diphoton excess. Even if the mixing angle is set to zero, we demonstrate that a relation exists between lower energy Higgs data and a massive scalar decaying to diphoton final states...

  17. Effective interpretations of a diphoton excess

    Berthier, Laure; Cline, James M.; Shepherd, William; Trott, Michael

    2016-04-01

    We discuss some consistency tests that must be passed for a successful explanation of a diphoton excess at larger mass scales, generated by a scalar or pseudoscalar state, possibly of a composite nature, decaying to two photons. Scalar states at mass scales above the electroweak scale decaying significantly into photon final states generically lead to modifications of Standard Model Higgs phenomenology. We characterise this effect using the formalism of Effective Field Theory (EFT) and study the modification of the effective couplings to photons and gluons of the Higgs. The modification of Higgs phenomenology comes about in a variety of ways. For scalar 0+ states, a component of the Higgs and the heavy boson can mix. Lower energy phenomenology gives a limit on the mixing angle, which gets generated at one loop in any theory explaining the diphoton excess. Even if the mixing angle is set to zero, we demonstrate that a relation exists between lower energy Higgs data and a massive scalar decaying to diphoton final states. If the new boson is a pseudoscalar, we note that if it is composite, it is generic to have an excited scalar partner that can mix with a component of the Higgs, which has a stronger coupling to photons. In the case of a pseudoscalar, we also characterize how lower energy Higgs phenomenology is directly modified using EFT, even without assuming a scalar partner of the pseudoscalar state. We find that naturalness concerns can be accommodated, and that pseudoscalar models are more protected from lower energy constraints.

  18. Quantifying Protein Disorder through Measures of Excess Conformational Entropy.

    Rajasekaran, Nandakumar; Gopi, Soundhararajan; Narayan, Abhishek; Naganathan, Athi N

    2016-05-19

    Intrinsically disordered proteins (IDPs) and proteins with a large degree of disorder are abundant in the proteomes of eukaryotes and viruses, and play a vital role in cellular homeostasis and disease. One fundamental question that has been raised on IDPs is the process by which they offset the entropic penalty involved in transitioning from a heterogeneous ensemble of conformations to a much smaller collection of binding-competent states. However, this has been a difficult problem to address, as the effective entropic cost of fixing residues in a folded-like conformation from disordered amino acid neighborhoods is itself not known. Moreover, there are several examples where the sequence complexity of disordered regions is as high as well-folded regions. Disorder in such cases therefore arises from excess conformational entropy determined entirely by correlated sequence effects, an entropic code that is yet to be identified. Here, we explore these issues by exploiting the order-disorder transitions of a helix in Pbx-Homeodomain together with a dual entropy statistical mechanical model to estimate the magnitude and sign of the excess conformational entropy of residues in disordered regions. We find that a mere 2.1-fold increase in the number of allowed conformations per residue (∼0.7 k_BT favoring the unfolded state) relative to a well-folded sequence, or ∼2^N additional conformations for an N-residue sequence, is sufficient to promote disorder under physiological conditions. We show that this estimate is quite robust and helps in rationalizing the thermodynamic signatures of disordered regions in important regulatory proteins, modeling the conformational folding-binding landscapes of IDPs, quantifying the stability effects characteristic of disordered protein loops and their subtle roles in determining the partitioning of folding flux in ordered domains. In effect, the dual entropy model we propose provides a statistical thermodynamic basis for the relative...
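The quoted figures can be checked with a line of arithmetic: a 2.1-fold increase in allowed conformations per residue gives an entropy gain of k_B ln 2.1 ≈ 0.74 k_B per residue, i.e. roughly 0.7 k_BT of free energy favoring the unfolded state, and a multiplicative factor of about 2^N for an N-residue sequence:

```python
import math

# Entropy gain per residue (in units of k_B) from a 2.1-fold increase in the
# number of allowed conformations: dS = k_B * ln(2.1)
delta_s_per_residue = math.log(2.1)   # ~0.74, so T*dS ~ 0.7 k_B*T per residue

# For an N-residue disordered sequence the multiplicity grows roughly as
# 2.1**N, i.e. ~2**N additional conformations (N = 20 is an arbitrary example).
N = 20
extra_conformations = 2.1 ** N
```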

  19. Extragalactic Gamma Ray Excess from Coma Supercluster Direction

    Pantea Davoudifar; S. Jalil Fatemi

    2011-09-01

    The origin of the extragalactic diffuse gamma ray background is not accurately known, especially because our estimates depend on many models that must be considered either to compute the galactic diffuse gamma ray intensity or to account for the contribution of other extragalactic structures while surveying a specific portion of the sky. A more precise analysis of EGRET data, however, makes it possible to estimate the diffuse gamma ray emission in the Coma supercluster (i.e., Coma/A1367 supercluster) direction, with an intensity I(> 30 MeV) ≃ 1.9 × 10^-6 cm^-2 s^-1, which is considered to be an upper limit for the diffuse gamma ray emission due to the Coma supercluster. The related total intensity (on average) is calculated to be ∼ 5% of the actual diffuse extragalactic background. The calculated intensity makes it possible to constrain the origin of the extragalactic diffuse gamma ray background.

  20. FGK 22 $\\mu$m Excess Stars in LAMOST DR2 Stellar Catalog

    Wu, Chao-Jian; Liu, Kang; Li, Tan-Da; Yang, Ming; Lam, Man I; Yang, Fan; Wu, Yue; Zhang, Yong; Hou, Yonghui; Li, Guangwei

    2016-01-01

    Since the release of the LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope) catalog, we have the opportunity to use the LAMOST DR2 stellar catalog and the \emph{WISE All-sky catalog} to search for 22 $\mu$m excess candidates. In this paper, we present 10 FGK candidates which show an excess in the infrared (IR) at 22 $\mu$m, all of them newly identified. Of these 10 stars, five are F type and five are G type. The criterion for selecting candidates is $K_s-[22]_{\mu m}\geq0.387$. In addition, we present the spectral energy distributions (SEDs) covering wavelengths from the optical to the mid-infrared band. Most of the candidates show an obvious excess beginning at the 12 $\mu$m band, and three even show excess from 3.4 $\mu$m. To characterize the amount of dust, we also estimate the fractional luminosity of the ten 22 $\mu$m excess candidates.
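
    The selection criterion above is a simple color cut. A minimal sketch of applying such a cut to a merged catalog (the rows and column layout here are hypothetical, not the authors' pipeline):

```python
# Hypothetical merged LAMOST x WISE rows: (name, Ks_mag, W4_mag),
# where W4 is the WISE 22-micron band. The paper's criterion is
# Ks - [22] >= 0.387; a larger color means more 22-micron excess.
stars = [
    ("star_a", 9.10, 8.40),  # Ks - [22] = 0.70 -> candidate
    ("star_b", 9.50, 9.30),  # Ks - [22] = 0.20 -> rejected
    ("star_c", 8.75, 8.20),  # Ks - [22] = 0.55 -> candidate
]

THRESHOLD = 0.387
candidates = [name for name, ks, w4 in stars if ks - w4 >= THRESHOLD]
print(candidates)  # ['star_a', 'star_c']
```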

  1. A case of postmenopausal androgen excess.

    Lambrinoudaki, Irene; Dafnios, Nikos; Kondi-Pafiti, Agathi; Triantafyllou, Nikos; Karopoulou, Evangelia; Papageorgiou, Anastasia; Augoulea, Areti; Armeni, Eleni; Creatsa, Maria; Vlahos, Nikolaos

    2015-10-01

    Ovarian steroid cell tumors are very rare but potentially life-threatening neoplasms. They represent less than 0.1% of all ovarian tumors, typically present in premenopausal women and frequently manifest with virilization. Signs of hyperandrogenism may appear in postmenopausal women due to tumorous and non-tumorous adrenal and ovarian causes, as well as due to the normal aging process. In any case, a steroid cell tumor should be suspected in postmenopausal women who present with rapidly progressive androgen excess symptoms. This report describes a case of a 67-year-old postmenopausal woman with signs of hyperandrogenism, in whom an ovarian steroid cell tumor was diagnosed and treated by laparoscopic bilateral salpingo-oophorectomy and synchronous hysterectomy. PMID:26287476

  2. The Diphoton Excess Inspired Electroweak Baryogenesis

    Chao, Wei

    2016-01-01

    A resonance in the diphoton channel with invariant mass $m_{\gamma \gamma} = 750$ GeV was claimed by the ATLAS and CMS collaborations at the run-2 LHC with $\sqrt{s}=13$ TeV. In this paper, we explain this diphoton excess as a pseudo-scalar singlet, which is produced at the LHC through gluon fusion with exotic scalar quarks running in the loop. We point out that the scalar singlet might trigger a two-step electroweak phase transition, which can be strongly first order. By assuming there are CP violations associated with interactions of the scalar quarks, the model is found to be able to generate adequate baryon asymmetry of the Universe through the electroweak baryogenesis mechanism. Constraints on the model are studied.

  3. Faint Infrared-Excess Field Galaxies FROGs

    Moustakas, L A; Zepf, S E; Bunker, A J

    1997-01-01

    Deep near-infrared and optical imaging surveys in the field reveal a curious population of galaxies that are infrared-bright (I-K>4), yet with relatively blue optical colors (V-I20, is high enough that if placed at z>1 as our models suggest, their space densities are about one-tenth of phi-*. The colors of these ``faint red outlier galaxies'' (fROGs) may derive from exceedingly old underlying stellar populations, a dust-embedded starburst or AGN, or a combination thereof. Determining the nature of these fROGs, and their relation with the I-K>6 ``extremely red objects,'' has implications for our understanding of the processes that give rise to infrared-excess galaxies in general. We report on an ongoing study of several targets with HST & Keck imaging and Keck/LRIS multislit spectroscopy.

  4. Excess plutonium disposition using ALWR technology

    The Office of Nuclear Energy of the Department of Energy chartered the Plutonium Disposition Task Force in August 1992. The Task Force was created to assess the range of practicable means of disposition of excess weapons-grade plutonium. Within the Task Force, working groups were formed to consider: (1) storage, (2) disposal, and (3) fission options for this disposition, and a separate group to evaluate nonproliferation concerns of each of the alternatives. As a member of the Fission Working Group, the Savannah River Technology Center acted as a sponsor for light water reactor (LWR) technology. The information contained in this report details the submittal that was made to the Fission Working Group of the technical assessment of LWR technology for plutonium disposition. The following aspects were considered: (1) proliferation issues, (2) technical feasibility, (3) technical availability, (4) economics, (5) regulatory issues, and (6) political acceptance

  5. Di-photon excess illuminates dark matter

    Backović, Mihailo; Mariotti, Alberto; Redigolo, Diego

    2016-03-01

    We propose a simplified model of dark matter with a scalar mediator to accommodate the di-photon excess recently observed by the ATLAS and CMS collaborations. Decays of the resonance into dark matter can easily account for a relatively large width of the scalar resonance, while the magnitude of the total width combined with the constraint on dark matter relic density leads to sharp predictions on the parameters of the Dark Sector. Under the assumption of a rather large width, the model predicts a signal consistent with a ˜ 300 GeV dark matter particle and a ˜ 750 GeV scalar mediator in channels with large missing energy. This prediction is not yet severely bounded by LHC Run I searches and will be accessible at the LHC Run II in the jet plus missing energy channel with more luminosity. Our analysis also considers astrophysical constraints, pointing out that future direct detection experiments will be sensitive to this scenario.

  6. Excessive anticoagulation with warfarin or phenprocoumon may have multiple causes

    Meegaard, Peter Martin; Holck, Line H V; Pottegård, Anton;

    2012-01-01

    Excessive anticoagulation with vitamin K antagonists is a serious condition with a substantial risk of an adverse outcome. We thus found it of interest to review a large case series to characterize the underlying causes of excessive anticoagulation....

  7. A multiple regression analysis for accurate background subtraction in 99Tcm-DTPA renography

    A technique for accurate background subtraction in 99Tcm-DTPA renography is described. The technique is based on a multiple regression analysis of the renal curves and separate heart and soft tissue curves which together represent background activity. It is compared, in over 100 renograms, with a previously described linear regression technique. Results show that the method provides accurate background subtraction, even in very poorly functioning kidneys, thus enabling relative renal filtration and excretion to be accurately estimated. (author)
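
    The method regresses the renal curve on separate heart and soft-tissue curves and subtracts the fitted combination. A minimal numpy sketch on synthetic curves, fitting the background coefficients over early frames where renal uptake is negligible (an illustrative assumption, not the paper's implementation):

```python
import numpy as np

# Synthetic time-activity curves (arbitrary counts per frame).
t = np.arange(60, dtype=float)
heart = 1000.0 * np.exp(-t / 30.0)      # vascular background curve
tissue = 200.0 + 2.0 * t                # soft-tissue background curve
kidney = 50.0 * t * np.exp(-t / 20.0)   # true renal signal
kidney[:5] = 0.0                        # assume negligible early uptake
observed = kidney + 0.30 * heart + 0.60 * tissue  # renal ROI curve

# Multiple regression: fit the early frames (before renal uptake)
# as a linear combination of the two background curves.
early = slice(0, 5)
X = np.column_stack([heart[early], tissue[early]])
coef, *_ = np.linalg.lstsq(X, observed[early], rcond=None)

# Subtract the fitted background contribution over the whole study.
corrected = observed - np.column_stack([heart, tissue]) @ coef
print(np.round(coef, 2))  # recovers the mixing fractions [0.3, 0.6]
```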

  8. Guaranteed investment contracts: distributed and undistributed excess return

    Kristian R. Miltersen; Persson, Svein-Arne

    2000-01-01

    Annual minimum rate of return guarantees are analyzed together with rules for distribution of positive excess return, i.e. investment returns in excess of the guaranteed minimum return. Together with the level of the annual minimum rate of return guarantee both the customer's and the insurer's fractions of the positive excess return are determined so that the market value of the insurer's capital inflow (determined by the fraction of the positive excess return) equals the market value of the ...

  9. The Avoidable Excess Burden of Broad-Based U.S. Taxes

    Nicolaus Tideman; Ebere Akobundu; Andrew Johns; Prapaiporn Wutthicharoen

    2002-01-01

    The excess burden of taxes can be reduced by shifting taxes from labor and capital onto land and by replacing progressive taxes with proportional taxes. This article uses a dynamic general equilibrium model to develop estimates of the magnitudes of reduction in excess burden that can be achieved in the United States by (1) incrementally shifting revenue from five broad-based taxes to land, (2) replacing the current progressive income tax with a flat tax, and (3) shifting as much taxation as p...

  10. Childhood maltreatment and the risk of pre-pregnancy obesity and excessive gestational weight gain.

    Diesel, Jill C; Bodnar, Lisa M; Day, Nancy L; Larkby, Cynthia A

    2016-07-01

    The objective of this study was to estimate whether maternal history of childhood maltreatment was associated with pre-pregnancy obesity or excessive gestational weight gain. Pregnant women (n = 472) reported pre-pregnancy weight and height and gestational weight gain and were followed up to 16 years post-partum when they reported maltreatment on the Childhood Trauma Questionnaire (CTQ). CTQ score ranged from no maltreatment (25) to severe maltreatment (125). Prenatal mental health modified the association between CTQ score and maternal weight (P childhood may contribute to pre-pregnancy obesity and excessive gestational weight gain. PMID:25138565

  11. A Systematic Search for Low-mass Field Stars with Large Infrared Excesses

    Theissen, Christopher; West, Andrew A.

    2016-06-01

    We present a systematic search for low-mass field stars exhibiting extreme infrared (IR) excesses. One potential cause of the IR excess is the collision of terrestrial worlds. Our input stars are from the Motion Verified Red Stars (MoVeRS) catalog. Candidate stars are then selected based on large deviations (3σ) between their measured Wide-field Infrared Survey Explorer (WISE) 12 μm flux and their expected flux (as estimated from stellar models). We investigate the stellar mass and time dependence for stars showing extreme IR excesses, using photometric colors from the Sloan Digital Sky Survey (SDSS) and Galactic height as proxies for mass and time, respectively. Using a Galactic kinematic model, we estimate the completeness for our sample as a function of line-of-sight through the Galaxy, estimating the number of low-mass stars that should exhibit extreme IR excesses within a local volume. The potential for planetary collisions to occur over a large range of stellar masses and ages has serious implications for the habitability of planetary systems around low-mass stars.
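
    The 3σ selection above compares each star's measured WISE flux to a model-predicted flux. A toy version of such a significance cut (values and column layout hypothetical, not the authors' catalog):

```python
# Hypothetical rows: (name, measured 12-micron flux, model-predicted
# flux, measurement uncertainty), all in the same flux units. A star
# is flagged when the measurement exceeds the model by more than
# 3 times its uncertainty.
stars = [
    ("m_dwarf_1", 12.0, 10.0, 0.5),  # (12.0-10.0)/0.5 = 4.0 sigma -> flag
    ("m_dwarf_2", 10.8, 10.0, 0.5),  # 1.6 sigma -> not flagged
]

flagged = [
    name for name, f_obs, f_model, err in stars
    if (f_obs - f_model) / err > 3.0
]
print(flagged)  # ['m_dwarf_1']
```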

  12. Modelling and Identification of Oxygen Excess Ratio of Self-Humidified PEM Fuel Cell System

    Edi Leksono

    2012-07-01

    One essential parameter in fuel cell operation is the oxygen excess ratio, which describes the ratio between the reacted and supplied oxygen in the cathode. The oxygen excess ratio relates to fuel cell safety and lifetime. This paper describes the development of an air feed model and oxygen excess ratio calculation for a commercial self-humidified PEM fuel cell system with 1 kW output power. The model was developed from measured data, which were limited to open-loop operation, in order to obtain the relationship between the oxygen excess ratio and the stack output current and fan motor voltage. The identification yielded a fourth-order linear polynomial ARX model with 56.26% best fit (loss function = 0.0159, FPE = 0.0159) and a second-order nonlinear ARX model with 75 wavenet estimator units with 84.95% best fit (loss function = 0.0139). Linearization of the second-order ARX model yielded 78.18% best fit (loss function = 0.0009, FPE = 0.0009).
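
    ARX identification of the kind described above reduces to a linear least-squares fit of lagged inputs and outputs. A minimal sketch on synthetic data (the system coefficients here are invented for illustration, not the fuel cell model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic second-order ARX system:
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + noise
a1, a2, b1, b2 = 1.2, -0.5, 0.8, 0.3
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = (a1 * y[k - 1] + a2 * y[k - 2]
            + b1 * u[k - 1] + b2 * u[k - 2]
            + 0.01 * rng.standard_normal())

# Build the lagged-regressor matrix and solve by least squares.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 2))  # close to [1.2, -0.5, 0.8, 0.3]
```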

  13. 30 CFR 75.401-1 - Excessive amounts of dust.

    2010-07-01

    § 75.401-1 Excessive amounts of dust. The term “excessive amounts of dust” means coal and float coal dust in the air in such amounts as to create the potential of an explosion hazard.

  14. 46 CFR 154.546 - Excess flow valve: Closing flow.

    2010-10-01

    § 154.546 Excess flow valve: Closing flow. (a) The rated closing flow of vapor or liquid cargo for an excess flow valve must be specially approved by the Commandant (CG-522).

  15. The LHC diphoton excess as a W-ball

    Arbuzov, B A

    2016-01-01

    We consider a possibility of the 750 GeV diphoton excess at the LHC to correspond to a heavy $WW$ zero spin resonance. The resonance appears due to the would-be anomalous triple interaction of the weak bosons, which is defined by the well-known coupling constant $\lambda$. The $\gamma\gamma\,\,750\, GeV$ anomaly may correspond to a weak isotopic spin 0 pseudoscalar state. We obtain estimates for the effect, which qualitatively agree with ATLAS data. Effects are predicted in a production of $W^+ W^-, (Z,\gamma) (Z,\gamma)$ via resonance $X_{PS}$ with $M_{PS} \simeq 750\,GeV$, which could be reliably checked at the upgraded LHC at $\sqrt{s}\,=\,13\, TeV$. In the framework of an approach to the spontaneous generation of the triple anomalous interaction, its coupling constant is estimated to be $\lambda = -\,0.020\pm 0.005$, in agreement with existing restrictions. A specific prediction of the hypothesis is a significant effect in the decay channel $X_{PS} \to \gamma\,l^+\,l^-\,(l = e,\,\mu)$, whose branching ratio occurs t...

  16. Laboratory Building for Accurate Determination of Plutonium

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An

  17. Can blind persons accurately assess body size from the voice?

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  18. A Comprehensive Census of Nearby Infrared Excess Stars

    Cotten, Tara H.; Song, Inseok

    2016-07-01

    The conclusion of the Wide-Field Infrared Survey Explorer (WISE) mission presents an opportune time to summarize the history of using excess emission in the infrared as a tracer of circumstellar material and exploit all available data for future missions such as the James Webb Space Telescope. We have compiled a catalog of infrared excess stars from peer-reviewed articles and perform an extensive search for new infrared excess stars by cross-correlating the Tycho-2 and all-sky WISE (AllWISE) catalogs. We define a significance of excess in four spectral type divisions and select stars showing greater than either 3σ or 5σ significance of excess in the mid- and far-infrared. Through procedures including spectral energy distribution fitting and various image analyses, each potential excess source was rigorously vetted to eliminate false positives. The infrared excess stars from the literature and the new stars found through the Tycho-2 and AllWISE cross-correlation produced nearly 500 “Prime” infrared excess stars, of which 74 are new sources of excess, and >1200 “Reserved” stars, of which 950 are new sources of excess. The stars in the main catalog are nearby and bright, and either demonstrate excess in more than one passband or have infrared spectroscopy confirming the infrared excess. This study identifies stars that display a spectral energy distribution suggestive of a secondary or post-protoplanetary generation of dust, and they are ideal targets for future optical and infrared imaging observations. The final catalogs of stars summarize the past work using infrared excess to detect dust disks, and with the most extensive compilation of infrared excess stars (∼1750) to date, we investigate various relationships among stellar and disk parameters.

  19. Excessive daytime sleepiness (Sonolência excessiva)

    Lia Rita Azeredo Bittencourt

    2005-05-01

    Sleepiness is a biological function, defined as an increased probability of falling asleep. Excessive sleepiness (ES), or hypersomnia, refers to an increased propensity to sleep, with a subjective compulsion to sleep, involuntary napping, and sleep attacks when sleep is inappropriate. The main causes of excessive sleepiness are chronic sleep deprivation (insufficient sleep), obstructive sleep apnea-hypopnea syndrome (OSAHS), narcolepsy, restless legs syndrome/periodic limb movements (RLS/PLM), circadian rhythm disorders, use of drugs and medications, and idiopathic hypersomnia. The main consequences are impaired performance in school, at work, and in family and social relationships, neuropsychological and cognitive alterations, and an increased risk of accidents. Treatment of excessive sleepiness should be directed at the specific causes: increased sleep time and sleep hygiene for voluntary sleep deprivation, CPAP (Continuous Positive Airway Pressure) for obstructive sleep apnea-hypopnea syndrome, exercise and dopaminergic agents for restless legs syndrome/periodic limb movements, phototherapy and melatonin for circadian rhythm disorders, withdrawal of drugs that cause excessive sleepiness, and use of wakefulness-promoting agents.

  20. Software cost estimation

    Heemstra, F.J.

    1992-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be estimated? (4) What can software project management expect from SCE models, how accurate are estimations which are made using these kind of models, and what are the pros and cons of cost estimatio...

  1. Production of ethanol from excess ethylene

    Kadhim, Adam S.; Carlsen, Kim B.; Bisgaard, Thomas

    2012-01-01

    of the designed process. The resulting design utilizes 75 million kg/year ethylene feed in order to obtain an ethyl alcohol production of 90.5 million kg/year. The total capital investment has been estimated to 43 million USD and the total product cost without depreciation estimated to 58.5 million...... USD. Furthermore, computer aided economic analysis method has been applied to investigate the potential economic improvements. This analysis helps to define targets for improvement, which are then achieved through heat and mass integration as well as mathematical optimization. In the final step, the...

  2. STUDY OF MOLECULAR INTERACTIONS IN BINARY MIXTURES USING EXCESS PARAMETERS

    Narendra Kolla

    2014-01-01

    Speeds of sound, densities and viscosities of the binary mixture of anisaldehyde with nonanol were measured over the entire mole fraction range at (303.15, 308.15, 313.15 and 318.15) K and normal atmospheric pressure. The excess molar volume, V_m^E, excess internal pressure, π^E, excess enthalpy, H^E, excess Gibbs free energy of activation for viscous flow, G^{*E}, and excess viscosity, η^E, have been calculated using the experimental data. The V_m^E values are positive whereas ...

  3. 75 FR 30846 - Monthly Report of Excess Income and Annual Report of Uses of Excess Income (Correction)

    2010-06-02

    ... URBAN DEVELOPMENT Monthly Report of Excess Income and Annual Report of Uses of Excess Income (Correction... comments on the subject proposal. Project owners are permitted to retain Excess Income for projects under... Income. The request must be submitted at least 90 days before the beginning of each fiscal year, or...

  4. Misperceived pre-pregnancy body weight status predicts excessive gestational weight gain: findings from a US cohort study

    Rifas-Shiman Sheryl L

    2008-12-01

    Background: Excessive gestational weight gain promotes poor maternal and child health outcomes. Weight misperception is associated with weight gain in non-pregnant women, but no data exist during pregnancy. The purpose of this study was to examine the association of misperceived pre-pregnancy body weight status with excessive gestational weight gain. Methods: At study enrollment, participants in Project Viva reported weight, height, and perceived body weight status by questionnaire. Our study sample comprised 1537 women who had either normal or overweight/obese pre-pregnancy BMI. We created 2 categories of pre-pregnancy body weight status misperception: normal weight women who identified themselves as overweight ('overassessors') and overweight/obese women who identified themselves as average or underweight ('underassessors'). Women who correctly perceived their body weight status were classified as either normal weight or overweight/obese accurate assessors. We performed multivariable logistic regression to determine the odds of excessive gestational weight gain according to 1990 Institute of Medicine guidelines. Results: Of the 1029 women with normal pre-pregnancy BMI, 898 (87%) accurately perceived and 131 (13%) overassessed their weight status. 508 women were overweight/obese, of whom 438 (86%) accurately perceived and 70 (14%) underassessed their pre-pregnancy weight status. By the end of pregnancy, 823 women (54%) gained excessively. Compared with normal weight accurate assessors, the adjusted odds of excessive gestational weight gain was 2.0 (95% confidence interval [CI]: 1.3, 3.0) in normal weight overassessors, 2.9 (95% CI: 2.2, 3.9) in overweight/obese accurate assessors, and 7.6 (95% CI: 3.4, 17.0) in overweight/obese underassessors. Conclusion: Misperceived pre-pregnancy body weight status was associated with excessive gestational weight gain among both normal weight and overweight/obese women, with the greatest likelihood of excessive

  5. An assessment of planting flexibility options to reduce the excessive application of nitrogen fertilizer in the United States of America

    W-Y Huang; N D Uri

    1992-01-01

    The analysis in this paper is directed at estimating the marginal value of a base acre, the nitrogen rate of fertilizer use, the corn yield, and the excess nitrogen fertilizer application rate under alternative policy options designed to encourage planting flexibility in response to changing relative agricultural commodity prices. Encouragement of planting flexibility via an option of detaching deficiency payments from the base acreage is an effective way to reduce the excessive application r...

  6. Excessive internet use and depressive disorders.

    Mihajlović, Goran; Hinić, Darko; Damjanović, Aleksandar; Gajić, Tomislav; Dukić-Dejanović, Slavica

    2008-03-01

    Recent studies of Internet influence on behavioural disorders of its users have created quite a polarised ambience. On the one hand, there are those who believe that the Internet is a new, better medium for enabling various patterns of communication and social relations. On the other hand, others maintain that Internet use can lead to social isolation and other forms of psychological disorders, for example depression. The aim of this work is a review of research attempts to confirm a connection between increased Internet use and psychological disorders, in the first place depression. The number of studies on this subject has so far been limited. This is mainly because depression and similar disorders are serious distortions in basic psychological processes; this suggests how difficult it may be to work with such examinees, and how complex it may appear to distinguish etiological factors. These facts do not lessen the importance of the aim itself, i.e. defining potential consequences of excessive Internet use when it comes to psychological wellbeing, since the Internet is expected to become a basic form of social interaction in the near future, and consequently one of the major factors of socialisation and constitution of one's psychological identity. Due to that fact, the aim of this work is to indicate methodological and conceptual flaws of the studies which have attempted to make a connection between mood disorders and the Internet, so as to establish the basis for future studies of the psychological consequences of Internet development. PMID:18376325

  7. Pharmacotherapy of Excessive Sleepiness: Focus on Armodafinil

    Michael Russo

    2009-01-01

    Excessive sleepiness (ES) is responsible for significant morbidity and mortality due to its association with cardiovascular disease, cognitive impairment, and occupational and transport accidents. ES is also detrimental to patients’ quality of life, as it affects work and academic performance, social interactions, and personal relationships. Armodafinil is the R-enantiomer of the established wakefulness-promoting agent modafinil, which is a racemic mixture of both the R- and S-enantiomers. R-modafinil has a longer half-life and is present at higher circulating concentrations than the S-enantiomer following chronic administration of modafinil and may therefore be the enantiomer predominantly responsible for the beneficial effects of the racemic compound. Armodafinil has been approved by the Food and Drug Administration for the improvement of ES associated with narcolepsy, shift-work disorder, and obstructive sleep apnea following a program of randomized, placebo-controlled clinical trials. This comprehensive medication review discusses the pharmacologic profile of armodafinil and the current evidence regarding its efficacy, safety, and tolerability; appraises patient-reported outcomes data; and suggests additional indications in which armodafinil may be of use.

  8. Cool WISPs for stellar cooling excesses

    Giannotti, Maurizio; Irastorza, Igor; Redondo, Javier; Ringwald, Andreas

    2016-05-01

    Several stellar systems (white dwarfs, red giants, horizontal branch stars and possibly the neutron star in the supernova remnant Cassiopeia A) show a mild preference for a non-standard cooling mechanism when compared with theoretical models. This exotic cooling could be provided by Weakly Interacting Slim Particles (WISPs), produced in the hot cores and abandoning the star unimpeded, contributing directly to the energy loss. Taken individually, these excesses do not show a strong statistical weight. However, if one mechanism could consistently explain several of them, the hint could be significant. We analyze the hints in terms of neutrino anomalous magnetic moments, minicharged particles, hidden photons and axion-like particles (ALPs). Among them, the ALP or a massless HP represent the best solution. Interestingly, the hinted ALP parameter space is accessible to the next generation proposed ALP searches, such as ALPS II and IAXO and the massless HP requires a multi TeV energy scale of new physics that might be accessible at the LHC.

  9. Diboson Excess from a New Strong Force

    Georgi, Howard

    2016-01-01

    We explore a "partial unification" model that could explain the diphoton event excess around $750 \\, \\rm GeV$ recently reported by the LHC experiments. A new strong gauge group is combined with the ordinary color and hypercharge gauge groups. The VEV responsible for the combination is of the order of the $SU(2)\\times U(1)$ breaking scale, but the coupling of the new physics to standard model particles is suppressed by the strong interaction of the new gauge group. This simple extension of the standard model has a rich phenomenology, including composite particles of the new confining gauge interaction, a coloron and a $Z'$ which are rather weakly coupled to standard model particles, and massive vector bosons charged under both the ordinary color and hypercharge gauge groups and the new strong gauge group. The new scalar glueball could have mass of around $750 \\, \\rm GeV$, be produced by gluon fusion and decay into two photons, both through loops of the new massive vector bosons. The simplest version of the mod...

  10. Quirky Explanations for the Diphoton Excess

    Curtin, David

    2015-01-01

    We propose two simple quirk models to explain the recently reported 750 GeV diphoton excesses at ATLAS and CMS. It is already well-known that a real singlet scalar $\\phi$ with Yukawa couplings $\\phi \\bar X X$ to vector-like fermions $X$ with mass $m_X > m_\\phi/2$ can easily explain the observed signal, provided $X$ carries both SM color and electric charge. We instead consider first the possibility that the pair production of a fermion, charged under both SM gauge groups and a confining $SU(3)_v$ gauge group, is responsible. If pair produced it forms a quirky bound state, which promptly annihilates into gluons, photons, or v-gluons. This has the advantage of being able to explain a sizable width for the diphoton resonance, but is already in some tension with existing displaced searches and dijet resonance bounds. We therefore propose a hybrid Quirk-Scalar model, in which the fermion of the simple $\\phi \\bar X X$ toy model is charged under the additional $SU(3)_v$ confining gauge group. Constraints on the new ...

  11. Medical Ethics and Protection from Excessive Radiation

    Among artificial sources of ionizing radiation, people are most often exposed to those emanating from X-ray diagnostic equipment. However, responsible usage of X-ray diagnostic methods may considerably reduce the general exposure to radiation. Research on rational access to X-ray diagnostic methods conducted at the X-ray Cabinet of the Tresnjevka Health Center was followed by a control survey eight years later of the rational methods applied, which showed that the number of unnecessary diagnostic examinations was reduced by 34% and the diagnostic indications were 10-40% more precise. The results therefore proved that radiation problems were reduced accordingly. The measures applied consisted of additional training organized for health care workers and a better education of the population. The basic element was then the awareness of both health care workers and the patients that excessive radiation should be avoided. The condition for achieving this lies in the moral responsibility of protecting the patients' health. A radiologist, being the person that promotes and carries out this moral responsibility, should organize and hold continual additional training of medical doctors, as well as education for the patients, and apply modern equipment. The basis of such an approach should be established by implementing medical ethics at all medical schools and faculties, together with the promotion of a wider intellectual and moral integrity of each medical doctor. (author)

  12. What controls deuterium excess in global precipitation?

    S. Pfahl

    2014-04-01

    The deuterium excess (d) of precipitation is widely used in the reconstruction of past climatic changes from ice cores. However, its most common interpretation as moisture source temperature cannot directly be inferred from present-day water isotope observations. Here, we use a new empirical relation between d and near-surface relative humidity (RH) together with reanalysis data to globally predict d of surface evaporation from the ocean. The very good quantitative agreement of the predicted hemispherically averaged seasonal cycle with observed d in precipitation indicates that moisture source relative humidity, and not sea surface temperature, is the main driver of d variability on seasonal timescales. Furthermore, we review arguments for an interpretation of long-term palaeoclimatic d changes in terms of moisture source temperature, and we conclude that there remains no sufficient evidence that would justify neglecting the influence of RH on such palaeoclimatic d variations. Hence, we suggest that either the interpretation of d variations in palaeorecords should be adapted to reflect climatic influences on RH during evaporation, in particular atmospheric circulation changes, or new arguments for an interpretation in terms of moisture source temperature will have to be provided based on future research.
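    The deuterium excess referred to here has a standard definition, Dansgaard's d = δD − 8·δ¹⁸O, with both delta values in per mil; a minimal sketch (the sample values below are illustrative, not data from this study):

    ```python
    # Deuterium excess: d = dD - 8 * d18O (standard Dansgaard definition),
    # with both delta values in per mil. Sample values are illustrative.
    def deuterium_excess(delta_d, delta_18o):
        """Return d-excess in per mil from dD and d18O in per mil."""
        return delta_d - 8.0 * delta_18o

    # A sample on the Global Meteoric Water Line (dD = 8 * d18O + 10)
    # has a d-excess of exactly 10 per mil:
    print(deuterium_excess(-70.0, -10.0))  # 10.0
    ```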

  13. Di-photon excess at LHC and the gamma ray excess at the Galactic Centre

    Hektor, Andi

    2016-01-01

    Motivated by the recent indications for a 750 GeV resonance in the di-photon final state at the LHC, in this work we analyse the compatibility of the excess with the broad photon excess detected at the Galactic Centre. Intriguingly, by analysing the parameter space of an effective model in which a 750 GeV pseudoscalar particle mediates the interaction between the Standard Model and a scalar dark sector, we prove the compatibility of the two signals. We show, however, that the LHC mono-jet searches and the Fermi LAT measurements strongly limit the viable parameter space. We comment on the possible impact of cosmic antiproton flux measurements by the AMS-02 experiment.

  14. Convergence Insufficiency/Divergence Insufficiency Convergence Excess/Divergence Excess: Some Facts and Fictions.

    Khawam, Edward; Abiad, Bachir; Boughannam, Alaa; Saade, Joanna; Alameddine, Ramzi

    2015-01-01

    Great discrepancies are often encountered between the distance fixation and the near-fixation esodeviations and exodeviations. They are all attributed to either anomalies of the AC/A ratio or anomalies of the fusional convergence or divergence amplitudes. We report a case with pseudoconvergence insufficiency and another one with pseudoaccommodative convergence excess. In both cases, conv./div. excess and insufficiency were erroneously attributed to anomalies of the AC/A ratio or to anomalies of the fusional amplitudes. Our purpose is to show that numerous factors, other than anomalies in the AC/A ratio or anomalies in the fusional conv. or divergence amplitudes, can contaminate either the distance or the near deviations. This results in significant discrepancies between the distance and the near deviations despite a normal AC/A ratio and normal fusional amplitudes, leading to erroneous diagnoses and inappropriate treatment models. PMID:26351603

  15. An empirical study of serial correlation in stock returns : cause-effect relationship for excess returns from momentum trading in the Norwegian market

    Brodin, Maximilian; Abusdal, Øyvind

    2008-01-01

    This paper documents a maximum theoretical excess return on the market of 3.8% monthly from momentum trading in Norway and estimates the economic excess return to be marginally higher than 1% per month when accounting for microstructure influences. We find that the excess returns of various momentum strategies are not explained by systematic risk or exposure to other factors such as size or book‐to‐market value. We uncover a positive correlation between types of investor an...

  16. Nathan Field's theatre of excess: youth culture and bodily excess on the early modern stage

    Orman, S.

    2014-01-01

    This dissertation argues for the reappraisal of Jacobean boy actors by acknowledging their status as youths. Focussing on the repertory of The Children of the Queen’s Revels and using the acting and playwriting career of Nathan Field as an extensive case-study, it argues, via an investigation into cultural and theatrical bodily excess, that the theatre was a profoundly significant space in which youth culture was shaped and problematised. In defining youth culture as a space for the assertion...

  17. Invariant Image Watermarking Using Accurate Zernike Moments

    Ismail A. Ismail

    2010-01-01

    Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used directly as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometric and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than the approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.
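    The rotation invariance of Zernike moment magnitudes that the watermarking scheme relies on can be checked numerically. A toy sketch (not the authors' algorithm) evaluating the order-(2,2) moment on a polar grid, which avoids the geometric error of square-pixel sampling; here the radial polynomial is R_22(r) = r**2 and the test pattern is an arbitrary choice:

    ```python
    import cmath, math

    def zernike_Z22(f, alpha=0.0, nr=128, nth=256):
        """Z_22 of f(r, theta - alpha) by midpoint quadrature on the unit disk.
        Z_nm = (n+1)/pi * integral of f * R_nm(r) * exp(-i*m*theta) * r dr dtheta,
        with n = m = 2 and R_22(r) = r**2."""
        dr, dth = 1.0 / nr, 2 * math.pi / nth
        total = 0j
        for i in range(nr):
            r = (i + 0.5) * dr
            for j in range(nth):
                th = (j + 0.5) * dth
                total += f(r, th - alpha) * r**2 * cmath.exp(-2j * th) * r
        return (3.0 / math.pi) * total * dr * dth

    f = lambda r, th: r**2 * math.cos(2 * th)   # simple test pattern
    Z = zernike_Z22(f)                           # unrotated
    Zr = zernike_Z22(f, alpha=0.7)               # same pattern rotated by 0.7 rad
    print(abs(Z), abs(Zr))  # magnitudes agree; only the phase changes
    ```

    Rotating the pattern multiplies Z_22 by a pure phase exp(-2i*alpha), so the magnitude is unchanged, which is why it can carry a watermark robust to rotation.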

  18. Excess molar volumes for CO{sub 2}-CH{sub 4}-N{sub 2} mixtures

    Seitz, J.C. (Oak Ridge National Lab., TN (United States) Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States). Dept. of Geological Sciences); Blencoe, J.G.; Joyce, D.B. (Oak Ridge National Lab., TN (United States)); Bodnar, R.J. (Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States). Dept. of Geological Sciences)

    1992-01-01

    Vibrating-tube densimetry experiments are being performed to determine the excess molar volumes of single-phase CO{sub 2}-CH{sub 4}-N{sub 2} gas mixtures at pressures as high as 3500 bars and temperatures up to 500{degrees}C. In our initial experiments, we determined the P-V-T properties of: (1) CO{sub 2}-CH{sub 4}, CO{sub 2}-N{sub 2}, CH{sub 4}-N{sub 2}, and CO{sub 2}-CH{sub 4}-N{sub 2} mixtures at 1000 bars and 50{degrees}C; and (2) CO{sub 2}-CH{sub 4} mixtures from 100 to 1000 bars at 100{degrees}C. Excess molar volumes in the binary subsystems are very accurately represented by two-parameter Margules equations. Experimentally determined excess molar volumes are in fair to poor agreement with predictions from published equations of state. Geometric projection techniques based on binary system data yield calculated excess molar volumes for CO{sub 2}-CH{sub 4}-N{sub 2} mixtures that are in good agreement with our experimental data. 7 refs., 8 figs.
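    The two-parameter Margules form used for the binary excess molar volumes is V{sup E} = x1*x2*(A21*x1 + A12*x2); a minimal sketch with illustrative placeholder parameters (not the constants fitted in this study):

    ```python
    # Two-parameter Margules representation of a binary excess molar volume:
    #   V_E = x1 * x2 * (A21 * x1 + A12 * x2)
    # A12 and A21 are fitted constants; the values passed below are
    # illustrative placeholders, not parameters from this study.
    def margules_vE(x1, A12, A21):
        x2 = 1.0 - x1
        return x1 * x2 * (A21 * x1 + A12 * x2)

    # The excess volume vanishes at both pure endmembers, as required:
    print(margules_vE(0.0, 1.5, 2.0), margules_vE(1.0, 1.5, 2.0))  # 0.0 0.0
    print(margules_vE(0.5, 1.5, 2.0))  # 0.4375 at the equimolar point
    ```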

  19. Using identity by descent estimation with dense genotype data to detect positive selection.

    Han, Lide; Abney, Mark

    2013-02-01

    Identification of genomic loci and segments that are identical by descent (IBD) allows inference on problems such as relatedness detection, IBD disease mapping, heritability estimation and detection of recent or ongoing positive selection. Here, employing a novel statistical method, we use IBD to find signals of selection in the Maasai from Kinyawa, Kenya (MKK). In doing so, we demonstrate the advantage of statistical tools that can probabilistically estimate IBD sharing without having to thin genotype data because of linkage disequilibrium (LD), and that allow for both inbreeding and more than one allele to be shared IBD. We use our novel method, GIBDLD, to estimate IBD sharing between all pairs of individuals at all genotyped SNPs in the MKK, and, by looking for genomic regions showing excess IBD sharing in unrelated pairs, find loci that are known to have undergone recent selection (eg, the LCT gene and the HLA region) as well as many novel loci. Intriguingly, those loci that show the highest amount of excess IBD, with the exception of HLA, also show a substantial number of unrelated pairs sharing all four of their alleles IBD. In contrast to other IBD detection methods, GIBDLD provides accurate probabilistic estimates at each locus for all nine possible IBD sharing states between a pair of individuals, thus allowing for consanguinity, while also modeling LD, thus removing the need to thin SNPs. These characteristics will prove valuable for those doing genetic studies, and estimating IBD, in the wide variety of human populations. PMID:22781100

  20. Observation-based global biospheric excess radiocarbon inventory 1963-2005

    Naegler, Tobias; Levin, Ingeborg

    2009-09-01

    For the very first time, we present an observation-based estimate of the temporal development of the biospheric excess radiocarbon (14C) inventory IB14,E, i.e., the change in the biospheric 14C inventory relative to prebomb times (1940s). IB14,E was calculated for the period 1963-2005 with a simple budget approach as the difference between the accumulated excess 14C production by atmospheric nuclear bomb tests and the nuclear industry and observation-based reconstructions of the excess 14C inventories in the atmosphere and the ocean. IB14,E increased from the late 1950s onward to maximum values between 126 and 177 × 1026 atoms 14C between 1981 and 1985. In the early 1980s, the biosphere turned from a sink to a source of excess 14C. Consequently, IB14,E decreased to values of 108-167 × 1026 atoms 14C in 2005. The uncertainty of IB14,E is dominated by uncertainties in the total bomb 14C production and the oceanic excess 14C inventory. Unfortunately, atmospheric Δ14CO2 from the early 1980s lack the necessary precision to reveal the expected small change in the amplitude and phase of atmospheric Δ14C seasonal cycle due to the sign flip in the biospheric net 14C flux during that time.
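    The budget approach described above reduces to a subtraction: the biospheric excess inventory equals the accumulated excess production minus the observed atmospheric and oceanic excess inventories. A minimal sketch (all numbers are illustrative placeholders, not the reconstructed values):

    ```python
    # Budget approach: biospheric excess 14C inventory = cumulative excess
    # production (bomb tests + nuclear industry) minus the observed excess
    # inventories in the atmosphere and ocean. Units: 1e26 atoms 14C.
    # All input values below are illustrative placeholders.
    def biospheric_excess_14c(produced, atmosphere, ocean):
        return produced - atmosphere - ocean

    print(biospheric_excess_14c(600.0, 300.0, 150.0))  # 150.0
    ```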

  1. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  2. Simple, Accurate, and Robust Nonparametric Blind Super-Resolution

    Shao, Wen-Ze; Elad, Michael

    2015-01-01

    This paper proposes a simple, accurate, and robust approach to single image nonparametric blind Super-Resolution (SR). This task is formulated as a functional to be minimized with respect to both an intermediate super-resolved image and a nonparametric blur-kernel. The proposed approach includes a convolution consistency constraint which uses a non-blind learning-based SR result to better guide the estimation process. Another key component is the unnatural bi-l0-l2-norm regularization imposed...

  3. Sub-millimeter to centimeter excess emission from the Magellanic Clouds. I. Global spectral energy distribution

    Israel, F P; Raban, D; Reach, W T; Bot, C; Oonk, J B R; Ysard, N; Bernard, J P

    2010-01-01

    In order to reconstruct the global SEDs of the Magellanic Clouds over eight decades in spectral range, we combined literature flux densities representing the entire LMC and SMC respectively, and complemented these with maps extracted from the WMAP and COBE databases covering the missing 23--90 GHz (13--3.2 mm) range and the poorly sampled 1.25--250 THz (240--1.25 micron) range. We have discovered a pronounced excess of emission from both Magellanic Clouds, but especially the SMC, at millimeter and sub-millimeter wavelengths. We also determined accurate thermal radio fluxes and very low global extinctions for both LMC and SMC. Possible explanations are briefly considered but as long as the nature of the excess emission is unknown, the total dust masses and gas-to-dust ratios of the Magellanic Clouds cannot reliably be determined.

  4. Water coordination structures and the excess free energy of the liquid

    Merchant, Safir; Asthagiri, D

    2011-01-01

    For a distinguished water molecule, the solute water, we assess the contribution of each coordination state to its excess chemical potential, using a molecular aufbau approach. In this approach, we define a coordination sphere, the inner-shell, and separate the excess chemical potential into packing, outer-shell, and local chemical contributions; the coordination state is defined by the number of solvent water molecules within the coordination sphere. The packing term accounts for the free energy of creating a solute-free coordination sphere in the liquid. The outer-shell term accounts for the interaction of the solute with the fluid outside the coordination sphere and it is accurately described by a Gaussian model of hydration for coordination radii greater than the minimum of the oxygen-oxygen pair correlation function. Consistent with the conventional radial cut-off used for defining hydrogen-bonds in liquid water, theory helps identify a chemically meaningful coordination radius. The local chemical contri...

  5. Estimating Cloud Cover

    Moseley, Christine

    2007-01-01

    The purpose of this activity was to help students understand the percentage of cloud cover and make more accurate cloud cover observations. Students estimated the percentage of cloud cover represented by simulated clouds and assigned a cloud cover classification to those simulations. (Contains 2 notes and 3 tables.)

  6. Child mortality estimation: consistency of under-five mortality rate estimates using full birth histories and summary birth histories.

    Romesh Silva

    BACKGROUND: Given the lack of complete vital registration data in most developing countries, for many countries it is not possible to accurately estimate under-five mortality rates from vital registration systems. Heavy reliance is often placed on direct and indirect methods for analyzing data collected from birth histories to estimate under-five mortality rates. Yet few systematic comparisons of these methods have been undertaken. This paper investigates whether analysts should use both direct and indirect estimates from full birth histories, and under what circumstances indirect estimates derived from summary birth histories should be used. METHODS AND FINDINGS: Using Demographic and Health Surveys data from West Africa, East Africa, Latin America, and South/Southeast Asia, I quantify the differences between direct and indirect estimates of under-five mortality rates, analyze data quality issues, note the relative effects of these issues, and test whether these issues explain the observed differences. I find that indirect estimates are generally consistent with direct estimates, after adjustment for fertility change and birth transference, but don't add substantial additional insight beyond direct estimates. However, choice of direct or indirect method was found to be important in terms of both the adjustment for data errors and the assumptions made about fertility. CONCLUSIONS: Although adjusted indirect estimates are generally consistent with adjusted direct estimates, some notable inconsistencies were observed for countries that had experienced either a political or economic crisis or stalled health transition in their recent past. This result suggests that when a population has experienced a smooth mortality decline or only short periods of excess mortality, both adjusted methods perform equally well. 
However, the observed inconsistencies identified suggest that the indirect method is particularly prone to bias resulting from violations

  7. How many standard area diagram sets are needed for accurate disease severity assessment

    Standard area diagram sets (SADs) are widely used in plant pathology: a rater estimates disease severity by comparing an unknown sample to actual severities in the SADs and interpolates an estimate as accurately as possible (although some SADs have been developed for categorizing disease too). Most ...

  8. Radiation. A buzz word for excessive fears

    The necessity of accepting that risk is an inherent part of daily life and also of acquiring a sense of perspective with respect to such risks, especially with respect to radiation, is discussed. Estimations of radiation risks are examined and compared to other risk factors such as overweight and cigarette smoking. It is stated that public perception of radiation has a direct bearing on the use of nuclear power, that balancing risks and benefits must become a standard approach to evaluating environmental matters and that the present crisis in confidence over energy requires this approach. (UK)

  9. Accurate atomic data for industrial plasma applications

    Griesmann, U.; Bridges, J.M.; Roberts, J.R.; Wiese, W.L.; Fuhr, J.R. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)

    1997-12-31

    Reliable branching fraction, transition probability and transition wavelength data for radiative dipole transitions of atoms and ions in plasma are important in many industrial applications. Optical plasma diagnostics and modeling of the radiation transport in electrical discharge plasmas (e.g. in electrical lighting) depend on accurate basic atomic data. NIST has an ongoing experimental research program to provide accurate atomic data for radiative transitions. The new NIST UV-vis-IR high resolution Fourier transform spectrometer has become an excellent tool for accurate and efficient measurements of numerous transition wavelengths and branching fractions in a wide wavelength range. Recently, the authors have also begun to employ photon counting techniques for very accurate measurements of branching fractions of weaker spectral lines with the intent to improve the overall accuracy for experimental branching fractions to better than 5%. They have now completed their studies of transition probabilities of Ne I and Ne II. The results agree well with recent calculations and for the first time provide reliable transition probabilities for many weak intercombination lines.

  10. More accurate picture of human body organs

    Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)

  11. Excess Weapons Plutonium Immobilization in Russia

    Jardine, L.; Borisov, G.B.

    2000-04-15

    The joint goal of the Russian work is to establish a full-scale plutonium immobilization facility at a Russian industrial site by 2005. To achieve this requires that the necessary engineering and technical basis be developed in these Russian projects and the needed Russian approvals be obtained to conduct industrial-scale immobilization of plutonium-containing materials at a Russian industrial site by the 2005 date. This meeting and future work will provide the basis for joint decisions. Supporting R&D projects are being carried out at Russian Institutes that directly support the technical needs of Russian industrial sites to immobilize plutonium-containing materials. Special R&D on plutonium materials is also being carried out to support excess weapons disposition in Russia and the US, including nonproliferation studies of plutonium recovery from immobilization forms and accelerated radiation damage studies of the US-specified plutonium ceramic for immobilizing plutonium. This intriguing and extraordinary cooperation on certain aspects of the weapons plutonium problem is now progressing well and much work with plutonium has been completed in the past two years. Because much excellent and unique scientific and engineering technical work has now been completed in Russia in many aspects of plutonium immobilization, this meeting in St. Petersburg was both timely and necessary to summarize, review, and discuss these efforts among those who performed the actual work. The results of this meeting will help the US and Russia jointly define the future direction of the Russian plutonium immobilization program, and make it an even stronger and more integrated Russian program. The two objectives for the meeting were to: (1) Bring together the Russian organizations, experts, and managers performing the work into one place for four days to review and discuss their work with each other; and (2) Publish a meeting summary and a proceedings to compile reports of all the excellent

  12. The Excessive Profits of Defense Contractors: Evidence and Determinants

    Wang, Chong; Miguel, Joseph San

    2012-01-01

    A long controversial issue that divides academics, government officials, elected representatives, and the U.S. defense industry is whether defense contractors earn abnormal or excessive profits at the expense of taxpayers. Using an innovative industry-year-size matched measure of excessive profit, we demonstrate three findings. First, when compared with their industry peers, defense contractors earn excessive profits. This result is evident when profit is measured by Return ...

  13. Futures trading and the excess comovement of commodity prices

    Le Pen, Yannick; Sévi, Benoît

    2013-01-01

    We empirically reinvestigate the issue of excess comovement of commodity prices initially raised in Pindyck and Rotemberg (1990) and show that excess comovement, when it exists, can be related to hedging and speculative pressure in commodity futures markets. Excess comovement appears when commodity prices remain correlated even after adjusting for the impact of common factors. While Pindyck and Rotemberg and following contributions examine this issue using a relevant but arbitrary set of con...

  14. Increasing Trends in the Excess Comovement of Commodity Prices

    Ohashi, Kazuhiko; OKIMOTO, Tatsuyoshi

    2013-01-01

    In this paper, we investigate whether excess correlations among seemingly unrelated commodity returns have increased recently, and if so, how they were achieved. To this end, we generalize the model of excess comovement, originated by Pindyck and Rotemberg (1990) and extended by Deb, Trivedi, and Varangis (1996), to develop the smooth-transition dynamic conditional correlation (STDCC) model that can capture long-run trends and short-run dynamics in excess comovements. Using commodity returns ...

  15. Excess Demand and Cost Relationships Among Kentucky Nursing Homes

    Mark A Davis; Freeman, James W.

    1994-01-01

    This article examines the influence of excess demand on nursing home costs. Previous work indicates that excess demand, reflected in a pervasive shortage of nursing home beds, constrains market competition and patient care expenditures. According to this view, nursing homes located in under-bedded markets can reduce costs and quality with impunity because there is no pressure to compete for residents. Predictions based on the excess demand argument were tested using 1989 data from a sample of...

  16. Fluctuation theorems for excess and housekeeping heats for underdamped systems

    Lahiri, Sourabh; Jayannavar, A. M.

    2013-01-01

    We present a simple derivation of the integral fluctuation theorems for the excess and housekeeping heats for an underdamped Langevin system, without using the concept of dual dynamics. In conformity with earlier results, we find that the fluctuation theorem for the housekeeping heat holds when the steady state distributions are symmetric in velocity, whereas there is no such requirement for the excess heat. We first prove the integral fluctuation theorem for the excess heat, and then show that it nat...

  17. Potentiometric titration of excess cadmium in cadmium selenide

    A simple and rapid potentiometric technique for determining excess cadmium in CdSe has been developed. Reaction with AgNO3 is used for sample treatment. Silver, formed in the reaction of AgNO3 with excess Cd, is determined with the help of KI. When using the given method of analysis, the relative standard deviation is 0.08-0.21. The real detection limit of excess cadmium is 9×10⁻⁷ g

  18. Are exchange rates excessively volatile? And what does "excessively volatile" mean, anyway?

    Gordon M. Bodnar; Leonardo Bartolini

    1996-01-01

    Using data for the major currencies from 1973 to 1994, we apply recent tests of asset price volatility to re-examine whether exchange rates have been excessively volatile with respect to the predictions of the monetary model of the exchange rate and of standard extensions that allow for sticky prices, sluggish money adjustment, and time-varying risk premia. Consistent with previous evidence from regression-based tests, most of the models that we examine are rejected by our volatility-based te...

  19. A Comprehensive Census of Nearby Infrared Excess Stars

    Cotten, Tara H

    2016-01-01

    The conclusion of the WISE mission presents an opportune time to summarize the history of using excess emission in the infrared as a tracer of circumstellar material and exploit all available data for future missions such as JWST. We have compiled a catalog of infrared excess stars from peer-reviewed articles and perform an extensive search for new infrared excess stars by cross-correlating the Tycho-2 and AllWISE catalogs. We define a significance of excess in four spectral type divisions and select stars showing greater than either 3$\sigma$ or 5$\sigma$ significance of excess in the mid- and far-infrared. Through procedures including SED fitting and various image analyses, each potential excess source was rigorously vetted to eliminate false-positives. The infrared excess stars from the literature and the new stars found through the Tycho-2 and AllWISE cross-correlation produced nearly 500 `Prime' infrared excess stars and $\geq$1200 `Reserved' stars. The main catalog of infrared excess stars are nearby, b...

  20. Poverty, time, and place: variation in excess mortality across selected US populations, 1980-1990

    Geronimus, A T; Bound, J.; Waidmann, T. A.

    1999-01-01

    STUDY OBJECTIVE: To describe variation in levels and causes of excess mortality and temporal mortality change among young and middle aged adults in a regionally diverse set of poor local populations in the USA. DESIGN: Using standard demographic techniques, death certificate and census data were analysed to make sex specific population level estimates of 1980 and 1990 death rates for residents of selected areas of concentrated poverty. For comparison, data for whites and blacks nationwi...

  1. Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Mean Infiltration-Excess Overland Flow, 2002

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean value for infiltration-excess overland flow as estimated by the watershed model TOPMODEL, compiled for every MRB_E2RF1...

  2. Excess enthalpy, density, and speed of sound determination for the ternary mixture (methyl tert-butyl ether + 1-butanol + n-hexane)

    Mascato, Eva [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain); Mariano, Alejandra [Laboratorio de Fisicoquimica, Departamento de Quimica, Facultad de Ingenieria, Universidad Nacional del Comahue, 8300 Neuquen (Argentina); Pineiro, Manuel M. [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain)], E-mail: mmpineiro@uvigo.es; Legido, Jose Luis [Departamento de Fisica Aplicada, Facultade de Ciencias, Universidade de Vigo, E-36310 Vigo (Spain); Paz Andrade, M.I. [Departamento de Fisica Aplicada, Facultade de Fisica, Universidade de Santiago de Compostela, E-15706 Santiago de Compostela (Spain)

    2007-09-15

    Density ({rho}) and speed of sound (u) from T = 288.15 K to T = 308.15 K, and excess molar enthalpies (h{sup E}) at T = 298.15 K, have been measured over the entire composition range for (methyl tert-butyl ether + 1-butanol + n-hexane). In addition, excess molar volumes, V{sup E}, and excess isentropic compressibilities, {kappa}{sub s}{sup E}, were calculated from the experimental data. Finally, the experimental excess enthalpy results are compared with the estimations obtained by applying the group-contribution models of UNIFAC (in the versions of Dang and Tassios, Larsen et al., and Gmehling et al.) and DISQUAC.
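    The isentropic compressibility behind {kappa}{sub s}{sup E} follows from density and speed of sound through the Newton-Laplace relation, kappa_s = 1/(rho*u**2); a minimal sketch (the inputs are illustrative, roughly water-like values, not data from this study):

    ```python
    # Isentropic compressibility from density and speed of sound via the
    # Newton-Laplace relation: kappa_s = 1 / (rho * u**2). Excess values
    # kappa_s^E then follow by subtracting an ideal-mixture reference
    # (not shown). Inputs below are illustrative, roughly water-like.
    def kappa_s(rho, u):
        """rho in kg/m^3, u in m/s -> isentropic compressibility in 1/Pa."""
        return 1.0 / (rho * u * u)

    print(kappa_s(997.0, 1497.0))  # about 4.5e-10 1/Pa
    ```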

  3. Accurate 3D quantification of the bronchial parameters in MDCT

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

    The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically-modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm in diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error less than 5.1%.

  4. How dusty is alpha Centauri? Excess or non-excess over the infrared photospheres of main-sequence stars

    Wiegert, J; Thébault, P; Olofsson, G; Mora, A; Bryden, G; Marshall, J P; Eiroa, C; Montesinos, B; Ardila, D; Augereau, J C; Aran, A Bayo; Danchi, W C; del Burgo, C; Ertel, S; Fridlund, M C W; Hajigholi, M; Krivov, A V; Pilbratt, G L; Roberge, A; White, G J

    2014-01-01

    [Abridged] Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby binary α Centauri have higher than solar metallicities, which is thought to promote giant planet formation. We aim to determine the level of emission from debris in the α Cen system. Having already detected the temperature minimum, Tmin, of α Cen A, we here attempt to do so also for the companion α Cen B. Using the α Cen stars as templates, we study possible effects Tmin may have on the detectability of unresolved dust discs around other stars. We use Herschel and APEX photometry to determine the stellar spectral energy distributions. In addition, we use APEX for spectral line mapping to study the complex background around α Cen seen in the photometric images. Models of stellar atmospheres and discs are used to estimate the amount of debris around these stars. For solar-type stars, a fractional dust luminosity f_d ~ 2e-7 could account for SEDs that do not exhibit the Tmin-effect. Slight excesses ...

  5. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  6. How Accurate is inv(A)*b?

    Druinsky, Alex

    2012-01-01

    Several widely-used textbooks lead the reader to believe that solving a linear system of equations Ax = b by multiplying the vector b by a computed inverse inv(A) is inaccurate. Virtually all other textbooks on numerical analysis and numerical linear algebra advise against using computed inverses without stating whether this is accurate or not. In fact, under reasonable assumptions on how the inverse is computed, x = inv(A)*b is as accurate as the solution computed by the best backward-stable solvers. This fact is not new, but obviously obscure. We review the literature on the accuracy of this computation and present a self-contained numerical analysis of it.
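The abstract's claim is easy to check numerically. A minimal sketch with an arbitrary well-conditioned test matrix (not taken from the paper), comparing the residuals of a backward-stable solve against the explicit-inverse route:

```python
# Numerical sketch: for a reasonably conditioned A, x = inv(A) @ b is about
# as accurate as a backward-stable solve. A and b are arbitrary test data.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonal shift keeps A well conditioned
b = rng.standard_normal(n)

x_solve = np.linalg.solve(A, b)    # LU with partial pivoting (backward stable)
x_inv = np.linalg.inv(A) @ b       # explicit inverse, then matrix-vector product

# Normwise relative residuals of both solutions are of the same tiny order.
residuals = [np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
             for x in (x_solve, x_inv)]
print(residuals)
```

Both residuals land near machine precision here, consistent with the paper's point that the inverse route is not automatically inaccurate.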

  7. Accurate guitar tuning by cochlear implant musicians.

    Thomas Lu

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electrical analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  8. Accurate guitar tuning by cochlear implant musicians.

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electrical analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  9. Accurate Finite Difference Methods for Option Pricing

    Persson, Jonas

    2006-01-01

    Stock options are priced numerically using space- and time-adaptive finite difference methods. European options on one and several underlying assets are considered. These are priced with adaptive numerical algorithms including a second order method and a more accurate method. For American options we use the adaptive technique to price options on one stock with and without stochastic volatility. In all these methods emphasis is put on the control of errors to fulfill predefined tolerance level...

  10. Accurate, reproducible measurement of blood pressure.

    Campbell, N. R.; Chockalingam, A; Fodor, J. G.; McKay, D. W.

    1990-01-01

    The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine con...

  11. Accurate variational forms for multiskyrmion configurations

    Jackson, A.D.; Weiss, C.; Wirzba, A.; Lande, A.

    1989-04-17

    Simple variational forms are suggested for the fields of a single skyrmion on a hypersphere, S3(L), and of a face-centered cubic array of skyrmions in flat space, R3. The resulting energies are accurate at the level of 0.2%. These approximate field configurations provide a useful alternative to brute-force solutions of the corresponding Euler equations.

  12. Efficient Accurate Context-Sensitive Anomaly Detection

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. The proposed method can thereby detect more attacks, while retaining good performance.

  13. Towards accurate modeling of moving contact lines

    Holmgren, Hanna

    2015-01-01

    The present thesis treats the numerical simulation of immiscible incompressible two-phase flows with moving contact lines. The conventional Navier–Stokes equations combined with a no-slip boundary condition leads to a non-integrable stress singularity at the contact line. The singularity in the model can be avoided by allowing the contact line to slip. Implementing slip conditions in an accurate way is not straight-forward and different regularization techniques exist where ad-hoc procedures ...

  14. Accurate phase-shift velocimetry in rock

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  15. Excess mortality monitoring in England and Wales during the influenza A(H1N1) 2009 pandemic.

    Hardelid, P; Andrews, N; Pebody, R

    2011-09-01

    We present the results from a novel surveillance system for detecting excess all-cause mortality by age group in England and Wales developed during the pandemic influenza A(H1N1) 2009 period from April 2009 to March 2010. A Poisson regression model was fitted to age-specific mortality data from 1999 to 2008 and used to predict the expected number of weekly deaths in the absence of extreme health events. The system included adjustment for reporting delays. During the pandemic, excess all-cause mortality was seen in the 5-14 years age group, where mortality was flagged as being in excess for 1 week after the second peak in pandemic influenza activity; and in age groups >45 years during a period of very cold weather. This new system has utility for rapidly estimating excess mortality for other acute public health events such as extreme heat or cold weather. PMID:21439100
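A minimal sketch of the detection idea in this record, assuming a fixed historical baseline and a simple Poisson upper bound. The paper's actual model also adjusts for trend, seasonality and reporting delays; the weekly counts below are illustrative, not surveillance data:

```python
# Hedged sketch of excess-mortality flagging: a week is flagged when the
# observed death count exceeds an upper quantile of the Poisson distribution
# implied by the expected (baseline) count for that week.
from scipy.stats import poisson

def flag_excess(observed, expected, alpha=0.01):
    """Return indices of weeks whose observed deaths exceed the
    Poisson 1-alpha upper bound around the expected count."""
    return [week for week, (obs, exp) in enumerate(zip(observed, expected))
            if obs > poisson.ppf(1 - alpha, exp)]

# Illustrative counts: week 2 spikes well above a flat baseline of 100.
print(flag_excess(observed=[95, 104, 140, 101], expected=[100] * 4))  # → [2]
```

Ordinary week-to-week noise (104 against a baseline of 100) stays below the bound; only the genuine spike is flagged.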

  16. "Excess Demand Functions with Incomplete Markets — A Global Result"

    Momi, Takeshi

    2002-01-01

    "The purpose of this paper is to give a global characterization of excess demand functions in a two period exchange economy with in-completenreal asset markets. We show that continuity, homogeneity and Walras’ law characterize the aggregate excess demand functions on any compact price set which maintains the dimension of the budget set."

  17. Excess demand functions with incomplete markets, a global result

    Momi, Takeshi

    2002-01-01

    The purpose of this paper is to give a global characterization of excess demand functions in a two period exchange economy with incomplete real asset markets. We show that continuity, homogeneity and Walras' law characterize the aggregate excess demand functions on any compact price set which maintains the dimension of the budget set.

  18. Criminal Liability of Managers for Excessive Risk-Taking?

    Tosza, S.T.

    2016-01-01

    The aim of the thesis was to analyse and evaluate the criminalisation of excessively risky decisions taken by managers of limited liability companies. The potentially disastrous consequences of excessive risk-taking were powerfully highlighted by the most recent financial crunch, although its danger

  19. On the thermal properties of nuclear matter with neutron excess

    The schematic model of nuclear matter proposed by Gomes, Walecka and Weisskopf, which was generalized to finite temperatures including interacting Fermi particle aspects, is extended here to include nuclear matter with neutron excess. The level density parameter as a function of neutron excess is calculated, as is the temperature dependence of the equilibrium Fermi momentum. (author)

  20. 14 CFR 158.39 - Use of excess PFC revenue.

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Use of excess PFC revenue. 158.39 Section...) AIRPORTS PASSENGER FACILITY CHARGES (PFC'S) Application and Approval § 158.39 Use of excess PFC revenue. (a) If the PFC revenue remitted to the public agency, plus interest earned thereon, exceeds the...

  1. 34 CFR Appendix A to Part 300 - Excess Costs Calculation

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false Excess Costs Calculation A Appendix A to Part 300 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION... CHILDREN WITH DISABILITIES Pt. 300, App. A Appendix A to Part 300—Excess Costs Calculation Except...

  2. Business cycle fluctuations and excess sensitivity of private consumption

    Gert Peersman; Lorenzo Pozzi

    2007-01-01

    We investigate whether business cycle fluctuations affect the degree of excess sensitivity of private consumption growth to disposable income growth. Using multivariate state space methods and quarterly US data for the period 1965-2000 we find that excess sensitivity is significantly higher during recessions.

  3. 30 CFR 75.323 - Actions for excessive methane.

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Actions for excessive methane. 75.323 Section... excessive methane. (a) Location of tests. Tests for methane concentrations under this section shall be made.... (1) When 1.0 percent or more methane is present in a working place or an intake air course,...

  4. The Role of Alcohol Advertising in Excessive and Hazardous Drinking.

    Atkin, Charles K.; And Others

    1983-01-01

    Examined the influence of advertising on excessive and dangerous drinking in a survey of 1,200 adolescents and young adults who were shown advertisements depicting excessive consumption themes. Results indicated that advertising stimulates consumption levels, which leads to heavy drinking and drinking in dangerous situations. (JAC)

  5. 'Excess heat' induced by deuterium flux in palladium film

    An early work at NASA, USA was repeated at INFICON Balzers, Liechtenstein, in 2005. It is a confirmation of the correlation between excess heat and the deuterium flux permeating through the Pd film. The maximum excess power density is of the order of 100 W/cm3 (Pd). (author)

  6. Excess supply and low demand; Ueberangebot und Unterkonsum

    Anon.

    2010-04-15

    There is currently an excess supply of natural gas in the market. Is this a sustainable development or a temporary phenomenon? Marc Hall, managing director of Bayerngas GmbH (Munich, Federal Republic of Germany), attributes the plentiful quantities of natural gas on the market not to excess supply but to low industrial demand caused by the economic crisis.

  7. 7 CFR 51.1014 - Excessively rough texture.

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Excessively rough texture. 51.1014 Section 51.1014 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... texture. Excessively rough texture means that the skin is badly ridged or very decidedly lumpy....

  8. 12 CFR 740.3 - Advertising of excess insurance.

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Advertising of excess insurance. 740.3 Section 740.3 Banks and Banking NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS ACCURACY OF ADVERTISING AND NOTICE OF INSURED STATUS § 740.3 Advertising of excess insurance....

  9. A Practical Approach For Excess Bandwidth Distribution for EPONs

    Elrasad, Amr

    2014-03-09

    This paper introduces a novel approach called Delayed Excess Scheduling (DES), which practically reuses the excess bandwidth in EPON systems. DES is suitable for industrial deployment as it requires no timing constraint and achieves better performance compared to previously reported schemes.

  10. 19 CFR 10.625 - Refunds of excess customs duties.

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Refunds of excess customs duties. 10.625 Section 10.625 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... and Apparel Goods § 10.625 Refunds of excess customs duties. (a) Applicability. Section 205 of...

  11. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  12. Estimation of physical parameters in induction motors

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik;

    1994-01-01

    Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors.

  13. Mechanisms linking excess adiposity and carcinogenesis promotion

    Ana I. Pérez-Hernández

    2014-05-01

    Obesity constitutes one of the most important metabolic diseases, being associated with the development of insulin resistance and increased cardiovascular risk. The association between obesity and cancer has also been well established for several tumor types, such as breast cancer in postmenopausal women, colorectal cancer and prostate cancer. Cancer is the leading cause of death in developed countries and the second in developing countries, with high incidence rates around the world. Furthermore, it has been estimated that 15-20% of all cancer deaths may be attributable to obesity. Tumor growth is regulated by interactions between tumor cells and their tissue microenvironment. In this sense, obesity may lead to cancer development through dysfunctional adipose tissue and altered signaling pathways. In this review, three main pathways relating obesity and cancer development are examined: (i) inflammatory changes leading to macrophage polarization and an altered adipokine profile; (ii) development of insulin resistance; and (iii) adipose tissue hypoxia. Since obesity and cancer both have a high prevalence, the association between these conditions is of great public health significance, and studies showing the mechanisms by which obesity leads to cancer development and progression are needed to improve the prevention and management of these diseases.

  14. Deuterium excess in precipitation and its climatological significance

    The climatological significance of the deuterium excess parameter for tracing precipitation processes is discussed with reference to data collected within the IAEA/WMO Global Network for Isotopes in Precipitation (GNIP) programme. Annual and monthly variations in deuterium excess, and their primary relationships with δ18O, temperature, vapour pressure and relative humidity are used to demonstrate fundamental controls on deuterium excess for selected climate stations and transects. The importance of deuterium excess signals arising from ocean sources versus signals arising from air mass modification during transport over the continents is reviewed and relevant theoretical development is presented. While deuterium excess shows considerable promise as a quantitative index of precipitation processes, the effectiveness of current applications using GNIP is largely dependent on analytical uncertainty (∼2.1 per mille), which could be improved to better than 1 per mille through basic upgrades in routine measurement procedures for deuterium analysis. (author)
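The deuterium excess parameter discussed in this record is conventionally defined (following Dansgaard) as d = δ²H − 8·δ¹⁸O, in per mille. A one-line sketch with illustrative values, not GNIP data:

```python
# Deuterium excess: d = delta-2H - 8 * delta-18O (per mille, VSMOW scale).
# The sample values below are illustrative, not measured GNIP data.

def d_excess(delta_2h, delta_18o):
    """Deuterium excess in per mille."""
    return delta_2h - 8.0 * delta_18o

# Water on the Global Meteoric Water Line (δ2H = 8·δ18O + 10) has d = 10:
print(d_excess(-70.0, -10.0))  # → 10.0
```

Departures from d = 10 are what carry the climatological signal: they reflect humidity and temperature conditions at the ocean source and air-mass modification during transport.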

  15. Identification of source for excess methane in marine atmosphere over Arabian coast

    Kameswara Rao, D.; Jani, R. A.

    2012-12-01

    Systematic air sampling was carried out on board 'Sagar Pachmi' in the coastal region of the Arabian Sea along the cruise track from Cochin to Goa during November 2010 to find the source of excess methane in the marine atmosphere over the Arabian Sea. Ambient air was collected into 10 L SS cylinders at 7 bar pressure, and also in 200 cc evacuated glass bulbs, for isotopic and concentration measurements from a height of ~5 m above the sea surface at different latitude intervals. The carbon isotopic composition (δ13C) of these samples was measured using a dual-inlet IRMS after the methane was converted into CO2 by a conventional method. Methane concentrations in these samples vary from 1880 to 1943 ppbv, whereas their δ13C values vary in a narrow range of -44.9 to -46.5‰. The CH4 concentrations are higher than tropospheric values (1775 ppbv) over the coastal waters of the Arabian Sea from Kanyakumari to Mumbai during November-December from 2003-2007. An excess of ~7 to 11% methane is estimated in these samples. In general, it is believed that CH4 concentrations in the marine atmosphere are related to emissions from the Arabian Sea due to upwelling, which brings methane-rich water to the surface. The excess methane in these samples comes either from methane-rich surface waters or is wind-blown from the land surface to the sampling location. Our data suggest that the excess methane must have come from land to the ocean surface, since the wind direction is NE for these samples. Also, CH4 concentrations increase with increasing wind speed, which again indicates that the CH4 emissions must have come from the land surface. The δ13C values of CH4 in these samples are enriched compared to the tropospheric value (-47.1‰), which indicates that the excess methane is of thermogenic type and probably came from land. We estimated the δ13C of CH4 for this excess methane; the values vary between -39 and -25‰. It indicates that the source for

  16. Cardioprotective aspirin users and their excess risk of upper gastrointestinal complications

    García Rodríguez Luis A

    2006-09-01

    Background: To balance the cardiovascular benefits of low-dose aspirin against the gastrointestinal harm it causes, studies have considered the coronary heart disease risk for each individual but not their gastrointestinal risk profile. We characterized the gastrointestinal risk profile of low-dose aspirin users in real clinical practice, and estimated the excess risk of upper gastrointestinal complications attributable to aspirin among patients with different gastrointestinal risk profiles. Methods: To characterize aspirin users in terms of major gastrointestinal risk factors (i.e., advanced age, male sex, prior ulcer history and use of non-steroidal anti-inflammatory drugs), we used The General Practice Research Database in the United Kingdom and the Base de Datos para la Investigación Farmacoepidemiológica en Atención Primaria in Spain. To estimate the baseline risk of upper gastrointestinal complications according to major gastrointestinal risk factors, and the excess risk attributable to aspirin within levels of these factors, we used previously published meta-analyses of both absolute and relative risks of upper gastrointestinal complications. Results: Over 60% of aspirin users are above 60 years of age, 4 to 6% have a recent history of peptic ulcers, and over 13% use other non-steroidal anti-inflammatory drugs. The estimated average excess risk of upper gastrointestinal complications attributable to aspirin is around 5 extra cases per 1,000 aspirin users per year. However, the excess risk varies in parallel with the underlying gastrointestinal risk and might be above 10 extra cases per 1,000 person-years in over 10% of aspirin users. Conclusion: In addition to the cardiovascular risk, the underlying gastrointestinal risk factors have to be considered when balancing the harms and benefits of aspirin use for an individual patient. The gastrointestinal harms may offset the cardiovascular benefits in certain groups of patients where the
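The "extra cases per 1,000 users per year" figures in this record rest on standard attributable-risk arithmetic: excess incidence = baseline incidence × (relative risk − 1). A hedged sketch with illustrative numbers, not the study's estimates:

```python
# Attributable (excess) risk sketch: excess = baseline * (RR - 1).
# Both the baseline rate and the relative risk below are illustrative
# assumptions, not figures taken from the study.

def excess_cases_per_1000(baseline_per_1000, relative_risk):
    """Extra cases per 1,000 person-years attributable to the exposure."""
    return baseline_per_1000 * (relative_risk - 1.0)

# A baseline of 5 complications per 1,000 person-years doubled by exposure:
print(excess_cases_per_1000(baseline_per_1000=5.0, relative_risk=2.0))  # → 5.0
```

This is why the excess risk "varies in parallel with the underlying gastrointestinal risk": the same relative risk applied to a higher baseline yields proportionally more excess cases.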

  17. Warm Dust around Cool Stars: Field M Dwarfs with WISE 12 or 22 Micron Excess Emission

    Theissen, Christopher A

    2014-01-01

    Using the SDSS DR7 spectroscopic catalog, we searched the WISE AllWISE catalog to investigate the occurrence of warm dust, as inferred from IR excesses, around field M dwarfs (dMs). We developed SDSS/WISE color selection criteria to identify 175 dMs (from 70,841) that show IR flux greater than typical dM photosphere levels at 12 and/or 22 μm, including seven new stars within the Orion OB1 footprint. We characterize the dust populations inferred from each IR excess, and investigate the possibility that these excesses could arise from ultracool binary companions by modeling combined SEDs. Our observed IR fluxes are greater than levels expected from ultracool companions (>3σ). We also estimate that the probability the observed IR excesses are due to chance alignments with extragalactic sources is <0.1%. Using SDSS spectra we measure surface-gravity-dependent features (K, Na, and CaH 3), and find <15% of our sample indicate low surface gravities. Examining tracers of youth (Hα, UV fl...

  18. Chromosome aberrations produced by radiation: The relationship between excess acentric fragments and dicentrics

    Most chromosome aberrations produced by ionizing radiation develop from DNA double-strand breaks (DSBs). Published data on the yield and variance of excess acentric fragments after in vitro irradiation of human lymphocytes were compared with corresponding data on dicentrics. At low LET the number of excess acentric fragments is about 60% of the number of dicentrics, independent of dose and perhaps of dose rate, suggesting that dicentrics and excess acentric fragments arise from similar kinetics rather than from fundamentally different reactions. Only a weak dependence of the ratio on LET is observed. These results are quantified using generalizations of models for pairwise DSB interactions suggested by Brewen and Brock based on data for marsupial cells. By allowing singly incomplete and some "doubly incomplete" exchanges, the models can also account for the experimental observation that the dispersion for excess acentric fragments, a measure of cell-to-cell variance, is systematically larger than the dispersion for dicentrics. Numerical estimates of an incompleteness parameter are derived. 47 refs., 8 figs., 4 tabs

  19. Excess Specific Heat of Ptfe and Pctfe at Low Temperatures: Approximation Details

    Bogdanova, Nina B

    2008-01-01

    Approximation of the previously estimated excess specific heat C^excess/T^5 of two fluoropolymers, PTFE and PCTFE, is presented using the Orthonormal Polynomial Expansion Method (OPEM). The new type of weighting functions in OPEM involves the experimental errors at every point of the studied thermal characteristic. The investigated temperature dependence of the function C^excess/T^5 is described in the whole temperature ranges 0.4 to 8 K and 2.5 to 7 K for PTFE and PCTFE respectively, as well as in two subintervals, (0.4 to 2) K and (2.5 to 8) K, for PTFE. Numerical results of the deviations between the given C^excess/T^5 data and their approximating values are given. The usual polynomial coefficients obtained from the orthonormal ones in our OPEM approach, and the absolute, relative and specific sensitivities of the studied thermal characteristic calculated at every point, are also presented. The approximation parameters of this type of thermal characteristic are shown in Figures and Tables.

  20. Excess science accommodation capabilities and excess performance capabilities assessment for Mars Geoscience and Climatology Orbiter: Extended study

    Clark, K.; Flacco, A.; Kaskiewicz, P.; Lebsock, K.

    1983-01-01

    The excess science accommodation and excess performance capabilities of a candidate spacecraft bus for the Mars Geoscience and Climatology Orbiter (MGCO) mission are assessed. The appendices are included to support the conclusions obtained during this contract extension. The appendices address the mission analysis, the attitude determination and control, the propulsion subsystem, and the spacecraft configuration.

  1. Niche Genetic Algorithm with Accurate Optimization Performance

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on a crowding mechanism, a novel niche genetic algorithm was proposed which can record the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although using this method takes more time compared with the standard GA, it is really worth applying to cases that demand high solution precision.

  2. How accurately can we calculate thermal systems?

    The objective was to determine how accurately simple reactor lattice integral parameters can be determined, considering user input, differences in the methods, source data and the data processing procedures and assumptions. Three simple square lattice test cases with different fuel to moderator ratios were defined. The effect of the thermal scattering models were shown to be important and much bigger than the spread in the results. Nevertheless, differences of up to 0.4% in the K-eff calculated by continuous energy Monte Carlo codes were observed even when the same source data were used. (author)

  3. Accurate diagnosis is essential for amebiasis

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is among the most widely distributed parasites in the world. In particular, Entamoeba histolytica infection is a significant health problem in amebiasis-endemic areas of developing countries, with a significant impact on infant mortality [1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. At the same time, improvements in the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge of the disease.

  4. Investigations on Accurate Analysis of Microstrip Reflectarrays

    Zhou, Min; Sørensen, S. B.; Kim, Oleksiy S.;

    2011-01-01

    An investigation on accurate analysis of microstrip reflectarrays is presented. Sources of error in reflectarray analysis are examined and solutions to these issues are proposed. The focus is on two sources of error, namely the determination of the equivalent currents to calculate the radiation pattern, and the inaccurate mutual coupling between array elements due to the lack of periodicity. To serve as reference, two offset reflectarray antennas have been designed, manufactured and measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. Comparisons of simulated and measured data are...

  5. Molar excess enthalpies at T = 298.15 K for (1-alkanol + dibutylether) systems

    Mozo, Ismael; Garcia De La Fuente, Isaias [G.E.T.E.F., Departamento de Fisica Aplicada, Facultad de Ciencias, Universidad de Valladolid, 47071 Valladolid (Spain); Gonzalez, Juan Antonio, E-mail: jagl@termo.uva.e [G.E.T.E.F., Departamento de Fisica Aplicada, Facultad de Ciencias, Universidad de Valladolid, 47071 Valladolid (Spain); Cobos, Jose Carlos [G.E.T.E.F., Departamento de Fisica Aplicada, Facultad de Ciencias, Universidad de Valladolid, 47071 Valladolid (Spain)

    2010-01-15

    Molar excess enthalpies, H{sub m}{sup E}, at T = 298.15 K and atmospheric pressure have been measured using a Tian-Calvet microcalorimeter for the (methanol, 1-propanol, 1-butanol, 1-pentanol, 1-hexanol, 1-octanol, or 1-decanol + dibutylether) systems. Experimental results have been compared with those obtained from the ERAS, DISQUAC, and Dortmund UNIFAC models. DISQUAC and ERAS yield similar H{sub m}{sup E} results. Larger differences between experimental and calculated H{sub m}{sup E} values are obtained from UNIFAC. ERAS represents quite accurately the excess molar volumes, V{sub m}{sup E}, of these systems. The excess molar internal energy at constant volume, U{sub V,m}{sup E}, is nearly constant for the solutions with the longer 1-alkanols. This points out that the different interactional contributions to this magnitude counterbalance each other. Interactions between unlike molecules are stronger in the methanol systems. The same behaviour is observed in mixtures with dipropylether.

  6. The e-index, complementing the h-index for excess citations.

    Chun-Ting Zhang

    BACKGROUND: The h-index has already been used by major citation databases to evaluate the academic performance of individual scientists. Although effective and simple, the h-index suffers from some drawbacks that limit its use in accurately and fairly comparing the scientific output of different researchers. These drawbacks include information loss and low resolution: the former refers to the fact that, beyond the h^2 citations for papers in the h-core, excess citations are completely ignored, whereas the latter means that it is common for a group of researchers to have an identical h-index. METHODOLOGY/PRINCIPAL FINDINGS: To solve these problems, I here propose the e-index, where e^2 represents the ignored excess citations, in addition to the h^2 citations for h-core papers. Citation information can be completely depicted by using the h-index together with the e-index, which are independent of each other. Some other h-type indices, such as A and R, are h-dependent, have information redundancy with h, and therefore, when used together with h, mask the real differences in excess citations of different researchers. CONCLUSIONS/SIGNIFICANCE: Although simple, the e-index is a necessary h-index complement, especially for evaluating highly cited scientists or for precisely comparing the scientific output of a group of scientists having an identical h-index.
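The definitions above translate directly into code: h is the largest rank whose paper has at least that many citations, and e^2 is the citation count of the h-core in excess of h^2. A minimal sketch (the citation list is invented for illustration):

```python
def h_index(citations):
    # h = largest rank i such that the i-th most cited paper has >= i citations.
    cits = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cits, start=1):
        if c >= i:
            h = i
    return h

def e_index(citations):
    # e^2 = citations of the h-core papers in excess of the h^2 minimum.
    cits = sorted(citations, reverse=True)
    h = h_index(citations)
    excess = sum(cits[:h]) - h * h
    return excess ** 0.5

cits = [10, 8, 5, 4, 3]   # h = 4; h-core holds 27 citations, so e^2 = 27 - 16 = 11
print(h_index(cits), e_index(cits))
```

Two researchers with the same h but different h-core citation totals get different e values, which is exactly the resolution the abstract argues the h-index lacks.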

  7. Diagnostic accuracy of the defining characteristics of the excessive fluid volume diagnosis in hemodialysis patients

    Maria Isabel da Conceição Dias Fernandes

    2015-12-01

    Objective: to evaluate the accuracy of the defining characteristics of the excess fluid volume nursing diagnosis of NANDA International in patients undergoing hemodialysis. Method: this was a cross-sectional diagnostic accuracy study, performed in two stages. The first, involving 100 patients from a dialysis clinic and a university hospital in northeastern Brazil, investigated the presence and absence of the defining characteristics of excess fluid volume. In the second stage, these characteristics were evaluated by diagnostician nurses, who judged the presence or absence of the diagnosis. To analyze the measures of accuracy, the sensitivity, specificity, and positive and negative predictive values were calculated. Approval was given by the Research Ethics Committee under authorization No. 148.428. Results: the most sensitive indicator was edema, and the most specific were pulmonary congestion, adventitious breath sounds and restlessness. Conclusion: the most accurate defining characteristics, considered valid for the diagnostic inference of excess fluid volume in patients undergoing hemodialysis, were edema, pulmonary congestion, adventitious breath sounds and restlessness. In the presence of these, the nurse may safely assume the presence of the diagnosis studied.
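The accuracy measures named in the record are the standard 2x2-table quantities. A minimal sketch, with invented counts for one defining characteristic (not the study's data):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    # Standard 2x2 diagnostic-accuracy measures:
    # rows = characteristic present/absent, columns = diagnosis present/absent.
    return {
        "sensitivity": tp / (tp + fn),   # characteristic present when diagnosis is
        "specificity": tn / (tn + fp),   # characteristic absent when diagnosis is not
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for a single characteristic (e.g. edema) in 100 patients.
m = diagnostic_accuracy(tp=45, fp=10, fn=5, tn=40)
```

A highly sensitive characteristic (high sensitivity) is useful for ruling the diagnosis out when absent; a highly specific one (like the pulmonary congestion finding above) supports ruling it in when present.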

  8. Prevalence of excessive screen time and associated factors in adolescents

    Joana Marcela Sales de Lucena

    2015-12-01

    Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study of 2874 high school adolescents aged 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyzed were: sociodemographic characteristics (gender, age, economic class, and skin color), physical activity and nutritional status of the adolescents. Results: The prevalence of excessive screen time was 79.5% (95%CI 78.1-81.1) and it was higher in males (84.3%) than in females (76.1%; p<0.001). In multivariate analysis, male adolescents, those aged 14-15 years, and those in the highest economic class had higher odds of exposure to excessive screen time. The level of physical activity and nutritional status of the adolescents were not associated with excessive screen time. Conclusions: The prevalence of excessive screen time was high and varied according to the sociodemographic characteristics of the adolescents. It is necessary to develop interventions to reduce excessive screen time among adolescents, particularly in the subgroups with higher exposure.

  9. Land application for disposal of excess water: an overview

    Water management is an important factor in the operation of uranium mines in the Alligator Rivers Region, located in the Wet-Dry tropics. For many project designs, especially open cut operations, sole reliance on evaporative disposal of waste water is ill-advised in years when the Wet season is above average. Instead, spray irrigation, or the application of excess water to suitable areas of land, has been practised at both Nabarlek and Ranger. The method depends on water losses by evaporation from spray droplets, from vegetation surfaces and from the ground surface; residual water is carried to the groundwater system by percolation. The solutes are largely transferred to the soils, where heavy metals and metallic radionuclides attach to particles in the soil profile with varying efficiency depending on soil type. Major solutes that can occur in waste water from uranium mines are less successfully immobilised in soil: sulphate is essentially conservative and not bound within the soil profile, while ammonia is affected by soil reactions leading to its decomposition. The retrospective viewpoint of history indicates the application of a technology inadequately researched for local conditions. The consequences at Nabarlek have been the death of trees on one application area and the creation of contaminated groundwater which has moved into the biosphere down gradient and affected the ecology of a local stream. At Ranger, the outcome of land application has been less severe in the short term, but the effective adsorption of radionuclides in surface soils has led to dose estimates which will necessitate restrictions on future public access unless extensive rehabilitation is carried out. 2 refs., 1 tab

  10. Excess of (236)U in the northwest Mediterranean Sea.

    Chamizo, E; López-Lora, M; Bressac, M; Levy, I; Pham, M K

    2016-09-15

    In this work, we present the first (236)U results in the northwestern Mediterranean. (236)U is studied in a seawater column sampled at the DYFAMED (Dynamics of Atmospheric Fluxes in the Mediterranean Sea) station (Ligurian Sea, 43°25'N, 07°52'E). The obtained (236)U/(238)U atom ratios in the dissolved phase, ranging from about 2×10(-9) at 100 m depth to about 1.5×10(-9) at 2350 m depth, indicate that anthropogenic (236)U dominates the whole seawater column. The corresponding deep-water column inventory (12.6 ng/m(2) or 32.1×10(12) atoms/m(2)) exceeds by a factor of 2.5 the one expected from global fallout at similar latitudes (5 ng/m(2) or 13×10(12) atoms/m(2)), evidencing the influence of local or regional (236)U sources in the western Mediterranean basin. On the other hand, the input of (236)U associated with Saharan dust outbreaks is evaluated. An additional (236)U annual deposition of about 0.2 pg/m(2) is estimated, based on the study of atmospheric particles collected in Monaco during different Saharan dust intrusions. The results obtained for the corresponding suspended solids collected at the DYFAMED station indicate that about 64% of that (236)U stays in solution in seawater. Overall, this source accounts for about 0.1% of the (236)U inventory excess observed at the DYFAMED station. The influence of the so-called Chernobyl fallout and of the radioactive effluents produced by the nuclear installations located in the Mediterranean basin might explain the inventory gap; however, further studies are necessary to reach a conclusion about its origin. PMID:27262827

  11. Accurate radiative transfer calculations for layered media.

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  12. Accurate basis set truncation for wavefunction embedding

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  13. Accurate determination of characteristic relative permeability curves

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  14. Accurate shear measurement with faint sources

    Zhang, Jun; Foucaud, Sebastien [Center for Astronomy and Astrophysics, Department of Physics and Astronomy, Shanghai Jiao Tong University, 955 Jianchuan road, Shanghai, 200240 (China); Luo, Wentao, E-mail: betajzhang@sjtu.edu.cn, E-mail: walt@shao.ac.cn, E-mail: foucaud@sjtu.edu.cn [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Nandan Road 80, Shanghai, 200030 (China)

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  15. Mortality Attributable to Excess Body Mass Index in Iran: Implementation of the Comparative Risk Assessment Methodology

    Djalalinia, Shirin; Moghaddam, Sahar Saeedi; Peykari, Niloofar; Kasaeian, Amir; Sheidaei, Ali; Mansouri, Anita; Mohammadi, Younes; Parsaeian, Mahboubeh; Mehdipour, Parinaz; Larijani, Bagher; Farzadfar, Farshad

    2015-01-01

    Background: The prevalence of obesity continues to rise worldwide, with alarming rates in most countries. Our aim was to compare the mortality from fatal diseases attributable to excess body mass index (BMI) in Iran in 2005 and 2011. Methods: Using the standard comparative risk assessment methodology, we estimated mortality attributable to excess BMI in Iranian adults 25–65 years old, at the national and sub-national levels, for 9 attributable outcomes: ischemic heart disease (IHD), stroke, hypertensive heart disease, diabetes mellitus (DM), colon cancer, cancer of the body of the uterus, breast cancer, kidney cancer, and pancreatic cancer. Results: In 2011, at the national level, excess BMI was responsible for 39.5% of the total deaths in adults 25–65 years old attributed to the 9 BMI-paired outcomes. Of these, 55.0% were males. The highest mortality was attributed to IHD (55.7%), followed by stroke (19.3%) and DM (12.0%). Based on the population attributable fraction estimates for 2011, except for colon cancer, the remaining 6 common outcomes were higher for women than for men. Conclusions: Despite the priority of the problem, there is currently no comprehensive program to prevent or control obesity in Iran. The present results show a growing need for comprehensive national and sub-national health policies and intervention programs in Iran. PMID:26644906
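The comparative risk assessment machinery rests on population attributable fractions. A minimal sketch using Levin's formula, with invented prevalence, relative-risk, and death counts (not the Iranian estimates):

```python
def paf(prevalence, relative_risk):
    # Levin's population attributable fraction for a single exposure level:
    # PAF = p(RR - 1) / (p(RR - 1) + 1)
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

def attributable_deaths(total_deaths, prevalence, relative_risk):
    # Deaths from an outcome that would not have occurred absent the exposure.
    return total_deaths * paf(prevalence, relative_risk)

# Invented numbers: 50% exposure prevalence, RR = 2 for some outcome,
# and 900 total deaths from that outcome.
deaths = attributable_deaths(900, 0.5, 2.0)   # -> 300.0
```

In practice the assessment sums such attributable deaths over exposure levels (here, BMI strata) and outcomes; the single-level formula above is the building block.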

  16. Spitzer 24 um Excesses for Bright Galactic Stars in Bootes and First Look Survey Fields

    Hovhannisyan, L R; Weedman, D W; Le Floc'h, E; Houck, J R; Soifer, B T; Brand, K; Dey, A; Jannuzi, B T

    2009-01-01

    Optically bright Galactic stars with 24 um fluxes > 1 mJy are identified in Spitzer mid-infrared surveys within 8.2 square degrees of the Bootes field of the NOAO Deep Wide-Field Survey and within 5.5 square degrees of the First Look Survey (FLS). 128 stars are identified in Bootes and 140 in the FLS, and their photometry is given. (K-[24]) colors are determined using K magnitudes from the 2MASS survey for all stars, in order to search for excess 24 um luminosity compared to that arising from the stellar photosphere. Of the combined sample of 268 stars, 141 are of spectral types F, G, or K, and 17 of these 141 stars have 24 um excesses with (K-[24]) > 0.2 mag. Using limits on absolute magnitude derived from proper motions, at least 8 of the FGK stars with excesses are main sequence stars, and estimates derived from the distribution of apparent magnitudes indicate that all 17 are main sequence stars. These estimates lead to the conclusion that between 9% and 17% of the main sequence FGK field stars in these samples have 24 u...

  17. The 750 GeV diphoton excess and SUSY

    Heinemeyer, S.

    2016-01-01

    The LHC experiments ATLAS and CMS have reported an excess in the diphoton spectrum at ~750 GeV. At the same time the motivation for Supersymmetry (SUSY) remains unbowed. Consequently, we briefly review the proposals to explain this excess in SUSY, focusing on "pure" (N)MSSM solutions. We then review in more detail a proposal to realize this excess within the NMSSM. In this particular scenario a Higgs boson with mass around 750 GeV decays to two light pseudo-scalar Higgs bosons. Via mixing...

  18. The soft X-ray excess in Einstein quasar spectra

    Masnou, J. L.; Wilkes, B. J.; Elvis, M.; Mcdowell, J. C.; Arnaud, K. A.

    1992-01-01

    An SNR-limited subsample of 14 quasars from the Wilkes and Elvis (1987) sample is presently investigated for low-energy excess above a high-energy power law in the X-ray spectra obtained by the Einstein Imaging Proportional Counter. A significant excess that is 1-6 times as strong as the high-energy component at 0.2 keV is noted in eight of the 14 objects. In the case of 3C273, multiple observations show the excess to be variable.

  19. 750 GeV Diphoton Excess from Cascade Decay

    Huang, Fa Peng; Liu, Ze Long; Wang, Yan

    2015-01-01

    Motivated by the recent 750 GeV diphoton excess observed by the ATLAS and CMS collaborations, we propose a simplified model to explain this excess. Model-independent constraints and predictions on the allowed couplings for generating the observed diphoton excess are studied in detail, and the compatibility between Run 1 and Run 2 data is considered simultaneously. We demonstrate that the possible four-photon signal can be used to test this scenario, and also explain the interesting deviation for a diphoton mass of about 1.6 TeV at ATLAS, where the local significance is 2.8 sigma.

  20. Thermodynamic properties of binary mixtures of tetrahydropyran with pyridine and isomeric picolines: Excess molar volumes, excess molar enthalpies and excess isentropic compressibilities

    Research highlights: → Densities, ρ, and speeds of sound, u, of tetrahydropyran (i) + pyridine or α-, β- or γ-picoline (j) binary mixtures at 298.15, 303.15 and 308.15 K, and excess molar enthalpies, HE, of the same set of mixtures at 308.15 K, have been measured as a function of composition. → The observed density and speed of sound values have been employed to determine excess molar volumes, VE, and excess isentropic compressibilities, κSE. → The topology of the constituents of the mixtures has been utilized (Graph theory) successfully to predict the VE, HE and κSE data of the investigated mixtures. → Thermodynamic data of the various mixtures have also been analyzed in terms of the Prigogine-Flory-Patterson (PFP) theory. - Abstract: Densities, ρ, and speeds of sound, u, of tetrahydropyran (i) + pyridine or α-, β- or γ-picoline (j) binary mixtures at 298.15, 303.15 and 308.15 K, and excess molar enthalpies, HE, of the same set of mixtures at 308.15 K, have been measured as a function of composition using an Anton Paar vibrating-tube digital density and sound analyzer (model DSA 5000) and a 2-drop micro-calorimeter, respectively. The resulting density and speed of sound data of the investigated mixtures have been utilized to predict excess molar volumes, VE, and excess isentropic compressibilities, κSE. The observed data have been analyzed in terms of (i) Graph theory and (ii) Prigogine-Flory-Patterson theory. It has been observed that the VE, HE and κSE data predicted by Graph theory compare well with their experimental values.

  1. Accurate LAI retrieval method based on PROBA/CHRIS data

    W. Fan

    2009-11-01

    Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI at regional scales accurately. Variations of the background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative method (DSD) is proposed in this paper. This method can estimate LAI accurately by analyzing the canopy anisotropy, and the effect of the background can also be effectively removed, so the inversion precision and the dynamic range can be improved remarkably, as has been proved by numerical simulations. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach, by which the data can be de-noised in the spectral and spatial dimensions synchronously. It is shown that the filtering method can remove random noise effectively; therefore, the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; hyperspectral and multi-angular images of the study region were acquired by the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against ground truth from 11 sites. It is shown that, with the innovative filtering method applied, the new LAI inversion method is accurate and effective.

  2. Accurate characterisation of post moulding shrinkage of polymer parts

    Neves, L. C.; De Chiffre, L.; González-Madruga, D.;

    2015-01-01

    The work deals with experimental determination of the shrinkage of polymer parts after injection moulding. A fixture for length measurements on 8 parts at the same time was designed and manufactured in Invar, mounted with 8 electronic gauges, and provided with 3 temperature sensors. The fixture was used to record the length at a well-defined position on each part continuously, starting from approximately 10 minutes after moulding and covering a time period of 7 days. Two series of shrinkage curves were analysed, and length values after stabilisation were extracted and compared for all 16 parts. Values were compensated for the effect of temperature variations during the measurements. Prediction of the length after stabilisation was carried out by fitting data at different stages of shrinkage. Uncertainty estimations were carried out and a procedure for the accurate characterisation of...

  3. How accurately can 21cm tomography constrain cosmology?

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions: on the 21 cm signal itself, on detector noise, on uncertainties in the reionization history, and on the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment's sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk≈0.0002 and Δmν≈0.007 eV, and give a 4σ detection of the spectral-index running predicted by the simplest inflation models.

  4. Prospects for Measuring the Positron Excess with the Cherenkov Telescope Array

    Vandenbroucke, Justin; Wood, Matthew; Colin, Pierre

    2015-01-01

    The excess of positrons in cosmic rays above ~10 GeV has been a puzzle since it was discovered. Possible interpretations of the excess have been suggested, including acceleration in a local supernova remnant or annihilation of dark matter particles. To discriminate between these scenarios, the positron fraction must be measured at higher energies. One technique to perform this measurement is the Earth-Moon spectrometer: observing the deflection of the positron and electron moon shadows by the Earth's magnetic field. The measurement has been attempted by previous imaging atmospheric Cherenkov telescopes without success. The Cherenkov Telescope Array (CTA) will have unprecedented sensitivity and background rejection that could make this measurement successful for the first time. In addition, the possibility of using silicon photomultipliers in some of the CTA telescopes could greatly increase the feasibility of making observations near the moon. Estimates of the capabilities of CTA to measure the positron...

  5. Protective effect of D-ribose against inhibition of rats testes function at excessive exercise

    Chigrinskiy E.A.

    2011-09-01

    An increasing number of research studies point to participation in endurance exercise training as having significant detrimental effects upon reproductive hormonal profiles in men. The means used for prevention and correction of fatigue are ineffective for sexual function recovery and have contraindications and numerous side effects. The search for substances that effectively restore body functions after overtraining while sparing the reproductive function, and that have no contraindications precluding their long and frequent use, is an important line of research. One candidate substance is ribose, already used for the correction of fatigue in athletes in some sports. We studied the role of ribose deficit in the metabolism of the testes under conditions of excessive exercise and the potential of ribose for restoring the endocrine function of these organs. 45 male Wistar rats weighing 240±20 g were used in this study. The animals were divided into 3 groups (n=15): control; excessive exercise; and excessive exercise with ribose treatment. Plasma concentrations of lactic, β-hydroxybutyric and uric acids, luteinizing hormone, and total and free testosterone were measured by biochemical and ELISA methods. The superoxide dismutase, catalase, glutathione peroxidase, glutathione reductase and glucose-6-phosphate dehydrogenase activities, and the uric acid, malondialdehyde, glutathione, ascorbic acid and testosterone levels, were estimated in the testes samples. Acute disorders of purine metabolism develop in rat testes under conditions of excessive exercise. These disorders are characterized by enhanced catabolism and reduced reutilization of purine mononucleotides and activation of oxidative stress against the background of reduced activities of the pentose phosphate pathway and antioxidant system. Administration of D-ribose to rats subjected to excessive exercise improves purine reutilization and stimulates the pentose phosphate pathway...

  6. A Bayesian Framework for Combining Valuation Estimates

    Yee, Kenton K

    2007-01-01

    Obtaining more accurate equity value estimates is the starting point for stock selection, value-based indexing in a noisy market, and beating benchmark indices through tactical style rotation. Unfortunately, discounted cash flow, method of comparables, and fundamental analysis typically yield discrepant valuation estimates. Moreover, the valuation estimates typically disagree with market price. Can one form a superior valuation estimate by averaging over the individual estimates, including market price? This article suggests a Bayesian framework for combining two or more estimates into a superior valuation estimate. The framework justifies the common practice of averaging over several estimates to arrive at a final point estimate.
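One concrete instance of such a combination, assuming independent Gaussian errors around each estimate (an assumption for illustration, not necessarily the article's exact model), is the precision-weighted average, which is also the Bayesian posterior mean under a flat prior:

```python
def combine_estimates(estimates, variances):
    # Under independent Gaussian errors, the posterior mean is the
    # precision-weighted average and the posterior variance is the
    # inverse of the summed precisions (precision = 1/variance).
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    mean = sum(e * p for e, p in zip(estimates, precisions)) / total
    return mean, 1.0 / total

# Hypothetical inputs: DCF says 42, comparables say 50, market price is 47,
# with analyst-assigned variances expressing confidence in each source.
mean, var = combine_estimates([42.0, 50.0, 47.0], [4.0, 9.0, 1.0])
```

The combined estimate is pulled toward the most precise input (here, market price) yet still adjusts it using the other valuations, and its variance is smaller than any single input's, which is the sense in which the combination is "superior".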

  7. Excess Molar Volume of Binary Systems Containing Mesitylene

    Morávková, L.

    2013-05-01

    Full Text Available This paper presents a review of density measurements for binary systems containing 1,3,5-trimethylbenzene (mesitylene) with a variety of organic compounds at atmospheric pressure. Literature data for the binary systems were divided into nine basic groups by the type of organic compound combined with mesitylene. The excess molar volumes calculated from the experimental density values have been compared with literature data. Densities were measured by a few experimental methods, namely using a pycnometer, a dilatometer or a commercial apparatus. An overview of the experimental data and of the shape of the excess molar volume curve versus mole fraction is presented in this paper. The excess molar volumes were correlated by the Redlich–Kister equation. The standard deviations for fitting of excess molar volume versus mole fraction are compared. The literature data found cover a wide temperature range from (288.15 to 343.15) K.
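    The Redlich-Kister correlation mentioned above is a linear least-squares fit of V^E = x1*x2 * sum_i A_i*(x1 - x2)^i against mole fraction. A minimal sketch; the data points and coefficients below are synthetic, for illustration only.

```python
import numpy as np

def rk_design(x1, order):
    """Design matrix for a Redlich-Kister polynomial of the given order."""
    x2 = 1.0 - x1
    return np.column_stack([x1 * x2 * (x1 - x2) ** i for i in range(order + 1)])

# Synthetic excess-molar-volume data generated from known coefficients:
x1 = np.linspace(0.05, 0.95, 19)
true_A = np.array([-1.2, 0.4, 0.1])            # hypothetical A_i, cm^3/mol
vE = rk_design(x1, 2) @ true_A

# Fit and compute the standard deviation of the fit, as compared in the review:
A_fit, *_ = np.linalg.lstsq(rk_design(x1, 2), vE, rcond=None)
resid = vE - rk_design(x1, 2) @ A_fit
sigma = np.sqrt(np.sum(resid**2) / (len(x1) - len(A_fit)))
```

On noiseless synthetic data the fit recovers the coefficients exactly; on real measurements sigma quantifies the correlation quality.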

  8. A Little Excess Weight May Boost Colon Cancer Survival

    ... 158930.html A Little Excess Weight May Boost Colon Cancer Survival Researchers saw an effect, but experts stress ... a surprise, a new study found that overweight colon cancer patients tended to have better survival than their ...

  9. Gene Linked to Excess Male Hormones in Female Infertility Disorder

    ... News Releases News Release Tuesday, April 15, 2014 Gene linked to excess male hormones in female infertility ... form cyst-like structures. A variant in a gene active in cells of the ovary may lead ...

  10. Passive samplers accurately predict PAH levels in resident crayfish.

    Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A

    2016-02-15

    Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time- and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues, using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4±1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests

  11. Accurate thermodynamic characterization of a synthetic coal mine methane mixture

    Highlights: • Accurate density data of a 10 components synthetic coal mine methane mixture are presented. • Experimental data are compared with the densities calculated from the GERG-2008 equation of state. • Relative deviations in density were within a 0.2% band at temperatures above 275 K. • Densities at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations. -- Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuel reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations

  12. Accurate Telescope Mount Positioning with MEMS Accelerometers

    Mészáros, László; Pál, András; Csépány, Gergely

    2014-01-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a method completely independent from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.

  13. Accurate sky background modelling for ESO facilities

    Full text: Ground-based measurements such as high resolution spectroscopy are heavily influenced by several physical processes. Amongst others, line absorption/emission, airglow from OH molecules, and scattering of photons within the earth's atmosphere make observations, in particular from facilities like the future European Extremely Large Telescope, a challenge. Additionally, emission from unresolved extrasolar objects, the zodiacal light, the moon and even thermal emission from the telescope and the instrument contribute significantly to the broad band background over a wide wavelength range. In our talk we review these influences and give an overview of how they can be accurately modeled to increase the overall precision of spectroscopic and imaging measurements. (author)

  14. Accurate valence band width of diamond

    An accurate width is determined for the valence band of diamond by imaging photoelectron momentum distributions for a variety of initial- and final-state energies. The experimental result of 23.0±0.2 eV agrees well with first-principles quasiparticle calculations (23.0 and 22.88 eV) and significantly exceeds the local-density-functional width, 21.5±0.2 eV. This difference quantifies effects of creating an excited hole state (with associated many-body effects) in a band measurement vs studying ground-state properties treated by local-density-functional calculations. copyright 1997 The American Physical Society

  15. Generalized estimating equations

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields. Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th

  16. Accurate Weather Forecasting for Radio Astronomy

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
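    The layer-by-layer radiative transfer step described above can be sketched roughly as follows: each atmospheric layer emits according to its temperature and opacity, attenuated by the layers between it and the ground. The layer values are invented, and this toy model ignores refraction and frequency dependence.

```python
import math

def sky_brightness(layers):
    """Zenith radiative transfer for a stack of (opacity_nepers, temperature_K)
    layers ordered from the ground upward. Returns (total opacity, sky
    brightness temperature in K) as seen by a ground-based receiver."""
    tau_below = 0.0   # opacity accumulated between the layer and the ground
    t_sky = 0.0
    for tau_i, temp_i in layers:
        # Layer emission, attenuated by everything beneath it:
        t_sky += temp_i * (1.0 - math.exp(-tau_i)) * math.exp(-tau_below)
        tau_below += tau_i
    return tau_below, t_sky

# Three hypothetical layers (e.g. from a mesoscale forecast profile):
layers = [(0.01, 280.0), (0.02, 260.0), (0.005, 230.0)]
tau, t_sky = sky_brightness(layers)
```

The resulting t_sky is the atmospheric contribution that adds to Tsys at the observing wavelength.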

  17. A Novel Algorithm for the Estimation of the Surfactant Surface Excess at Emulsion Interfaces

    Urbina-Villalba, German; Di Scipio, Sabrina; Garcia-Valera, Neyda

    2014-01-01

    When the theoretical values of the interfacial tension (resulting from the homogeneous distribution of ionic surfactant molecules among the interfaces of emulsion drops) are plotted against the total surfactant concentration, they produce a curve comparable to the Gibbs adsorption isotherm. However, the actual isotherm takes into account the solubility of the surfactant in the aqueous bulk phase. Hence, assuming that the total surfactant population is only distributed among the available oil...

  18. Variables of excessive computer internet use in childhood and adolescence

    Thalemann, Ralf

    2010-01-01

    The aim of this doctoral thesis is the characterization of excessive computer and video gaming in terms of a behavioral addiction. Therefore, the development of a diagnostic psychometric instrument was central to differentiate between normal and pathological computer gaming in adolescence. In study 1, 323 children were asked about their video game playing behavior to assess the prevalence of pathological computer gaming. Data suggest that excessive computer and video game players use thei...

  19. When does the mean excess plot look linear?

    Ghosh, Souvik; Resnick, Sidney I.

    2010-01-01

    In risk analysis, the mean excess plot is a commonly used exploratory plotting technique for confirming iid data is consistent with a generalized Pareto assumption for the underlying distribution, since in the presence of such a distribution thresholded data have a mean excess plot that is roughly linear. Does any other class of distributions share this linearity of the plot? Under some extra assumptions, we are able to conclude that only the generalized Pareto family has this property.
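    The linearity property can be checked numerically: for generalized Pareto data with shape xi < 1 and scale beta, the mean excess function e(u) = E[X - u | X > u] equals (beta + xi*u)/(1 - xi), i.e. it is linear in u with slope xi/(1 - xi). A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
xi, beta = 0.25, 1.0
u_unif = rng.uniform(size=200_000)
x = beta / xi * (u_unif ** -xi - 1.0)    # GPD sample via inverse transform

def mean_excess(x, u):
    """Empirical mean excess: average of (X - u) over observations above u."""
    exceed = x[x > u]
    return (exceed - u).mean()

# The empirical slope between two thresholds approaches xi / (1 - xi) = 1/3:
slope = mean_excess(x, 2.0) - mean_excess(x, 1.0)
```

In practice the plot is drawn over many thresholds u and inspected for approximate linearity, with the fitted slope giving a rough shape estimate.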

  20. Diphoton excess in phenomenological spin-2 resonance scenarios

    Martini, Antony; Mawatari, Kentarou; Sengupta, Dipan

    2016-04-01

    We provide a possible explanation of a 750 GeV diphoton excess recently reported by both the ATLAS and CMS collaborations in the context of phenomenological spin-2 resonance scenarios, where the independent effective couplings of the resonance with gluons, quarks and photons are considered. We find a parameter region where the excess can be accounted for without conflicting with dijet constraints. We also show that the kinematical distributions might help to determine the couplings to gluons and quarks.

  1. ATLAS on-Z excess through vector-like quarks

    Endo, Motoi; Takaesu, Yoshitaro

    2016-07-01

    We investigate the possibility that the excess observed in the leptonic-Z + jets + E̸T ATLAS SUSY search is due to production of vector-like quarks U, which decay to first-generation quarks and Z bosons. We find that the excess can be explained at the 2σ (up to 1.4σ) level while satisfying the constraints from the other LHC searches. The preferred mass and branching ratio are around 610 GeV and 0.3-0.45, respectively.

  2. Fetal Programming of Obesity: Maternal Obesity and Excessive Weight Gain

    Seray Kabaran

    2014-01-01

    The prevalence of obesity is an increasing health problem throughout the world. Maternal pre-pregnancy weight, maternal nutrition and maternal weight gain are among the factors that can cause childhood obesity. Both maternal obesity and excessive weight gain increase the risks of excessive fetal weight gain and high birth weight. Rapid weight gain during fetal period leads to changes in the newborn body composition. Specifically, the increase in body fat ratio in the early periods is associat...

  3. Rent Seeking and the Excess Burden of Taxation

    Kahana, Nava; Klunover, Doron

    2014-01-01

    The social costs of rent seeking are generally evaluated with respect to rent dissipation. A common assumption is complete rent dissipation so that the value of a contested rent is the value of social loss. When rent seekers earn taxable income, there is interdependence between the social cost of rent seeking through rent dissipation and the excess burden of taxation. Through the addition of substitution to rent seeking beyond leisure, rent seeking increases the excess burden of taxation unde...

  4. Excess turnover and employment growth: firm and match heterogeneity

    Centeno, Mário; Machado, Carla; Novo, Álvaro A.

    2009-01-01

    Portuguese firms engage in intense reallocation: most employers simultaneously hire and separate from workers, resulting in large heterogeneity of flows and excess turnover. Large and older firms have lower flows but high excess turnover rates. In small firms, hires and separations move symmetrically during expansion and contraction periods; large firms, on the contrary, adjust their employment levels by reducing entry rather than by increasing separations. Most hires and separations are on fix...

  5. Limiting excessive postoperative blood transfusion after cardiac procedures. A review.

    Ferraris, V A; Ferraris, S P

    1995-01-01

    Analysis of blood product use after cardiac operations reveals that a few patients ( 80%). The risk factors that predispose a minority of patients to excessive blood use include patient-related factors, transfusion practices, drug-related causes, and procedure-related factors. Multivariate studies suggest that patient age and red blood cell volume are independent patient-related variables that predict excessive blood product transfusion aft...

  6. Dynamics of excess electrons in atomic and molecular clusters

    Young, Ryan Michael

    2011-01-01

    Femtosecond time-resolved photoelectron imaging (TRPEI) is applied to the study of excess electrons in clusters as well as to microsolvated anion species. This technique can be used to perform explicit time-resolved as well as one-color (single- or multiphoton) studies on gas phase species. The first part of this dissertation details time-resolved studies done on atomic clusters with an excess electron, the excited-state dynamics of solvated molecular anions, and charge-transfer dynamics to...

  7. Asymmetric Dark Matter Models and the LHC Diphoton Excess

    Frandsen, Mads T.; Shoemaker, Ian M.

    2016-01-01

    The existence of dark matter (DM) and the origin of the baryon asymmetry are persistent indications that the SM is incomplete. More recently, the ATLAS and CMS experiments have observed an excess of diphoton events with invariant mass of about 750 GeV. One interpretation of this excess is decays...... have for models of asymmetric DM that attempt to account for the similarity of the dark and visible matter abundances....

  8. Diphoton excess in phenomenological spin-2 resonance scenarios

    Martini, Antony; Sengupta, Dipan

    2016-01-01

    We provide a possible explanation of a 750 GeV diphoton excess recently reported by both the ATLAS and CMS collaborations in the context of phenomenological spin-2 resonance scenarios, where the independent effective couplings of the resonance with gluons, quarks and photons are considered. We find a parameter region where the excess can be accounted for without conflicting with dijet constraints. We also show that the kinematical distributions might help to determine the couplings to gluons and quarks.

  9. Excess pore water pressure induced in the foundation of a tailings dyke at Muskeg River Mine, Fort McMurray

    Eshraghian, A.; Martens, S. [Klohn Crippen Berger Ltd., Calgary, AB (Canada)

    2010-07-01

    This paper discussed the effect of staged construction on the generation and dissipation of excess pore water pressure within the foundation clayey units of the External Tailings Facility dyke. Data were compiled from piezometers installed within the dyke foundation and used to estimate the dissipation parameters of the clayey units for a selected area of the foundation. Spatial and temporal variations in the pore water pressure generation parameters were explained. Understanding how excess pore water pressure is generated and dissipates is critical to optimizing dyke design and performance. Piezometric data were shown to be useful in improving estimates of construction-induced pore water pressure and dissipation rates within the clay layers of the foundation during dyke construction. In staged construction, a controlled rate of load application is used to increase foundation stability. Excess pore water pressure dissipates after each load application, so the most critical stability condition occurs immediately after each load step. Slow loading allows this pressure to dissipate between steps, whereas under fast loading the pressure from previous load steps persists. The dyke design must therefore account for both the rate of loading and the rate of pore pressure dissipation. Controlling the rate of loading, and hence the rate of stress-induced excess pore water pressure generation, is important to dyke stability during construction. Effective stress-strength parameters for the foundation require predictions of the pore water pressure induced during staged construction. It was found that both direct and indirect loading generate excess pore water pressure in the foundation clays. 2 refs., 2 tabs., 11 figs.

  10. Distribution system state estimation

    Wang, Haibin

    With the development of automation in distribution systems, distribution SCADA and many other automated meters have been installed on distribution systems. Distribution Management Systems (DMS) have also been further developed and made more sophisticated. It is therefore possible and useful to apply state estimation techniques to distribution systems. However, distribution systems have many features that differ from transmission systems, so the state estimation technology used in transmission systems cannot be applied directly. This project's goal was to develop a state estimation algorithm suitable for distribution systems. Because of the limited number of real-time measurements in distribution systems, the state estimator cannot acquire enough real-time measurements for convergence, so pseudo-measurements are necessary for a distribution system state estimator. A load estimation procedure is proposed which can provide estimates of real-time customer load profiles, which can be treated as pseudo-measurements for the state estimator. The algorithm utilizes a newly installed AMR system to calculate more accurate load estimates. A branch-current-based three-phase state estimation algorithm is developed and tested. This method chooses the magnitude and phase angle of the branch current as the state variables, and thus makes the formulation of the Jacobian matrix less complicated. The algorithm decouples the three phases, which is computationally efficient. Additionally, the algorithm is less sensitive to the line parameters than node-voltage-based algorithms. The algorithm has been tested on three IEEE radial test feeders, for both accuracy and convergence speed. Due to economic constraints, the number of real-time measurements that can be installed on distribution systems is limited, so it is important to decide what kinds of measurement devices to install and where to install them. Some rules of meter placement based

  11. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies, however, provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when it is delayed: travelers prefer the route with the best reported conditions, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
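    The boundedly rational route choice rule described above can be sketched as follows; the travel times and threshold value are illustrative only.

```python
import random

def choose_route(t_reported_a, t_reported_b, BR, rng=random):
    """Pick a route given reported travel times: indifferent (50/50) when the
    reported difference is within the bounded-rationality threshold BR,
    otherwise take the faster-reported route."""
    if abs(t_reported_a - t_reported_b) <= BR:
        return rng.choice(("A", "B"))
    return "A" if t_reported_a < t_reported_b else "B"

rng = random.Random(42)
# Reported times differ by 0.5 min, below BR = 1.0, so choices split evenly:
picks = [choose_route(10.0, 10.5, BR=1.0, rng=rng) for _ in range(10_000)]
share_a = picks.count("A") / len(picks)
```

With BR = 0 this reduces to the always-follow-the-feedback behavior that, per the abstract, amplifies oscillations when the information is delayed.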

  12. Excess Molar Volumes and Excess Molar Enthalpies in Binary Systems N-alkyl-triethylammonium bis(trifluoromethylsulfonyl)imide + Methanol

    Machanová, Karolina; Troncoso, J.; Jacquemin, J.; Bendová, Magdalena

    2014-01-01

    Vol. 363, Feb 15 (2014), pp. 156-166. ISSN 0378-3812 Institutional support: RVO:67985858 Keywords: ionic liquids * excess properties * binary mixtures Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 2.200, year: 2014

  13. Excess Barium as a Paleoproductivity Proxy: A Reevaluation

    Eagle, M.; Paytan, A.

    2001-12-01

    Marine barite may serve as a proxy to reconstruct past export production (Dymond, 1992). In most studies sedimentary barite accumulation is not measured directly; instead a parameter termed excess barium (Baexs), also referred to as biogenic barium, is used to estimate the barite content. Baexs is defined as the total Ba concentration in the sediment minus the Ba associated with terrigenous material. Baexs is calculated by normalization to a constant Ba/Al ratio, typically the average shale ratio. This application assumes that (1) all the Ba besides the fraction associated with terrigenous Al is in the form of barite (the phase related to productivity), (2) the shale Ba/Al ratio is constant in space and time, and (3) all of the Al is associated with terrigenous matter. If these assumptions do not hold, however, this approach leads to significant errors in calculating export production rates. To test the validity of the use of Baexs as a proxy for barite we compared the Baexs in a wide range of core top sediments from different oceanic settings to the barite content in the same cores. We found that Baexs frequently overestimated the Ba fraction associated with barite, and in several cases significant Baexs was measured in cores where no barite was observed. We have also used a sequential leaching protocol (Collier and Edmond 1984) to determine Ba association with organic matter, carbonates, Fe-Mn hydroxides and silicates. While terrigenous Ba remains an important fraction, in our samples 25-95% of non-barite Ba was derived from other fractions, with Fe-Mn oxides contributing the most Ba. In addition we found that the Ba/Al ratio in the silicate fraction of our samples varied considerably from site to site. The above results suggest that at least two of the underlying assumptions for employing Baexs to reconstruct paleoproductivity are not always valid, and previously published data (Murray and Leinen 1993) indicate that the third assumption may also not hold in every
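    The Baexs normalization under discussion can be sketched as follows. The shale Ba/Al mass ratio of 0.0075 is a commonly assumed average-shale value, and the sample numbers are hypothetical.

```python
BA_AL_SHALE = 0.0075   # assumed average-shale Ba/Al mass ratio

def ba_excess(ba_total_ppm, al_percent):
    """Excess Ba (ppm): total Ba minus the terrigenous fraction inferred
    from Al via a constant shale Ba/Al ratio."""
    al_ppm = al_percent * 1e4          # convert wt% Al to ppm
    return ba_total_ppm - al_ppm * BA_AL_SHALE

# A hypothetical sediment with 800 ppm total Ba and 6 wt% Al:
excess = ba_excess(800.0, 6.0)         # 800 - 60000 * 0.0075 = 350 ppm
```

As the abstract argues, this number equals the barite-hosted Ba only if all three stated assumptions hold; Ba in Fe-Mn oxides or other phases inflates it.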

  14. Accurate geolocation of rfi sources in smos imagery based on superresolution algorithms

    Hyuk, Park; Camps Carmona, Adriano José; González Gambau, Veronica

    2014-01-01

    Accurate geolocation of SMOS RFI sources is very important for effectively switching off illegal emitters in the protected L-band. We present a novel approach for the geolocation of SMOS RFI sources, based on Direction of Arrival (DOA) estimation techniques normally used in sensor arrays. The MUSIC DOA estimation algorithm is tailored for SMOS RFI source detection. In the test results, the proposed MUSIC method shows improved performance in terms of angular/spatial resolution.

  15. Parameter Estimation

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian;

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.

  16. Accurate measurement of oxygen consumption in children undergoing cardiac catheterization.

    Li, Jia

    2013-01-01

    Oxygen consumption (VO(2)) is an important part of hemodynamic assessment using the direct Fick principle in children undergoing cardiac catheterization, so accurate measurement of VO(2) is vital. Obviously, any error in the measurement of VO(2) will translate directly into an equivalent percentage under- or overestimation of blood flows and vascular resistances. It remains common practice to estimate VO(2) values from published predictive equations. Among these, the LaFarge equation is the most commonly used and gives the closest estimation with the least bias and limits of agreement. However, considerable errors are introduced by the LaFarge equation, particularly in children younger than 3 years of age. Respiratory mass spectrometry remains the "state-of-the-art" method, allowing highly sensitive, rapid and simultaneous measurement of multiple gas fractions. The AMIS 2000 quadrupole respiratory mass spectrometer system has been adapted to measure VO(2) in children under mechanical ventilation with pediatric ventilators during cardiac catheterization. The small sampling rate, fast response time and long tubes make the equipment a unique and powerful tool for bedside continuous measurement of VO(2) during cardiac catheterization for both clinical and research purposes. PMID:22488802

  17. Accurate fission data for nuclear safety

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is a flexible design: for studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies of 1-30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  18. Automatic classification and accurate size measurement of blank mask defects

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. Blank mask defect impact analysis directly depends on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking of defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge required makes the manual defect review process an arduous task, in addition to being prone to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools; use of CDSEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g. particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  19. Occurrence of invasive pneumococcal disease and number of excess cases due to influenza

    Penttinen Pasi

    2006-03-01

    Full Text Available Abstract Background Influenza is characterized by seasonal outbreaks, often with a high rate of morbidity and mortality. It is also known to cause a significant number of secondary bacterial infections. Streptococcus pneumoniae is the main pathogen causing secondary bacterial pneumonia after influenza, and influenza could consequently contribute to acquiring Invasive Pneumococcal Disease (IPD). Methods In this study, we aim to investigate the relation between influenza and IPD by estimating the yearly excess of IPD cases due to influenza. For this purpose, we use influenza periods as an indicator of influenza activity as a risk factor in the subsequent analysis. The statistical modeling was done in two modes. First, we constructed two negative binomial regression models; for each model, we estimated the contribution of influenza and calculated the number of excess IPD cases. Also, for each model, we investigated several lag-time periods between influenza and IPD. Second, we constructed an "influenza-free" baseline and calculated the differences between the IPD data (observed cases) and the baseline (expected cases) in order to estimate the yearly additional number of IPD cases due to influenza. Both modes were calculated using zero- to four-week lag times. Results The analysis shows a yearly increase of 72–118 IPD cases due to influenza, which corresponds to 6–10% per year or 12–20% per influenza season. Also, a lag time of one to three weeks appears to be of significant importance in the relation between IPD and influenza. Conclusion This epidemiological study confirms the association between influenza and IPD. Furthermore, negative binomial regression models can be used to calculate the number of excess cases of IPD related to influenza.
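    The first modeling mode described above can be sketched as follows. This is a minimal illustration on simulated weekly counts, not the study's data; the influenza-period indicator, the seasonal term, and all coefficients are assumptions, and the lag-time analysis is omitted for brevity.

    ```python
    # Sketch: negative binomial regression of weekly IPD counts on an
    # influenza-period indicator, with excess cases computed against an
    # "influenza-free" counterfactual. Synthetic data, illustrative only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_weeks = 520  # ten years of weekly counts
    influenza = (np.arange(n_weeks) % 52 < 8).astype(float)  # 8-week flu periods
    season = np.sin(2 * np.pi * np.arange(n_weeks) / 52)     # background seasonality

    # Simulate weekly IPD counts with a modest influenza effect
    mu = np.exp(1.5 + 0.3 * influenza + 0.2 * season)
    ipd = rng.poisson(mu)

    X = sm.add_constant(np.column_stack([influenza, season]))
    model = sm.GLM(ipd, X, family=sm.families.NegativeBinomial(alpha=0.1)).fit()

    # Excess cases: fitted counts minus the counterfactual with influenza set to 0
    X0 = X.copy()
    X0[:, 1] = 0.0
    excess = model.predict(X) - model.predict(X0)
    print(f"estimated excess IPD cases per year: {excess.sum() / 10:.1f}")
    ```

    Lagged versions of the influenza indicator (shifted by one to four weeks) would be swapped in for the unlagged column to reproduce the lag-time comparison.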

  20. The Excess Liquidity of the Open Economy and its Management

    Yonghong TU

    2011-01-01

    Full Text Available The excess liquidity of the open economy has become one of the main factors influencing monetary markets, financial markets and even the whole macroeconomy. In the post-crisis era, many countries have implemented loose monetary policies, especially the quantitative easing policy in the U.S., which worsened the excess-liquidity situation. Against this background, it is all the more meaningful to study the excess liquidity of the open economy and its management for the developing countries' economic recovery and development, inflation control, economic structural adjustment and optimization, and social-economic stability. This paper starts with a close study of the related theories of excess liquidity and its transmission mechanisms, and then analyzes the current situation and causes of excess liquidity in the BRICs, taken as representative of the developing countries. It then argues that the main cause of excess liquidity in the developing countries is the financial system, including loose monetary policies, financial innovation, petrodollars, East Asian dollars, US dollar hegemony, overcapacity, trade supply, savings supply, the surge of foreign exchange reserves, etc. With the help of impulse-response analysis from a VAR model, this paper examines the impact of the global liquidity surge on America, the euro zone, Japan, China, India, Russia, Brazil, etc., and reaches the following conclusions: 1. Global excess liquidity keeps increasing, quickly in the developing countries and slowly in the developed countries. 2. The spillover effect of global excess liquidity spreads mainly through GDP and prices, and for most countries the international factor has more influence on rising prices than the domestic factor. Besides, GDP has also been pushed to grow fast, which in turn becomes the main driving force of the growth in the quantity of money. 3. The openness and development
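    The impulse-response exercise from a VAR model mentioned above can be sketched as follows. This is a minimal two-variable example on simulated data; the series names, the spillover coefficient, and the lag order are illustrative assumptions, not the paper's actual specification.

    ```python
    # Sketch: fit a VAR on synthetic liquidity/price series and read off the
    # impulse response of prices to a liquidity shock.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(1)
    n = 200
    liquidity = np.zeros(n)
    prices = np.zeros(n)
    for t in range(1, n):
        liquidity[t] = 0.6 * liquidity[t - 1] + rng.normal(scale=1.0)
        # prices respond to lagged liquidity, mimicking a spillover channel
        prices[t] = 0.4 * prices[t - 1] + 0.3 * liquidity[t - 1] + rng.normal(scale=0.5)

    data = pd.DataFrame({"liquidity_growth": liquidity, "price_growth": prices})
    results = VAR(data).fit(2)   # VAR with two lags
    irf = results.irf(10)        # impulse responses over 10 periods

    # Response of price growth to a unit liquidity shock (non-orthogonalized)
    response = irf.irfs[:, 1, 0]
    print(response[:4])
    ```

    A positive response at lags one and beyond is the kind of evidence the paper's conclusion 2 appeals to; on real data one would also check orthogonalized responses and confidence bands via `irf.plot()`.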

  1. Reference Crop Evapotranspiration estimation using Artificial Neural Networks

    Chowdhary Archana; Shrivastava R.K.

    2010-01-01

    Improved water management requires accurate scheduling of irrigation, which in turn requires an accurate estimation of crop evapotranspiration. Crop coefficients are used to estimate crop evapotranspiration from weather-based reference evapotranspiration. Reference evapotranspiration is an important quantity for computing the irrigation demands of various crops. Monthly reference evapotranspiration is estimated by the FAO Penman-Monteith method and irrigation requirements for the system are est...
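    The FAO Penman-Monteith reference evapotranspiration mentioned above can be computed as follows. This is a sketch of the standard FAO-56 equation for a daily time step; the example input values are plausible placeholders, not data from the study.

    ```python
    # FAO-56 Penman-Monteith reference evapotranspiration (ET0), daily step.
    # All inputs in the standard FAO-56 SI units.
    def et0_fao56(delta, rn, g, gamma, t_mean, u2, es, ea):
        """ET0 in mm/day.

        delta: slope of the saturation vapour pressure curve [kPa/degC]
        rn: net radiation [MJ/m2/day], g: soil heat flux [MJ/m2/day]
        gamma: psychrometric constant [kPa/degC], t_mean: mean air temp [degC]
        u2: wind speed at 2 m [m/s], es/ea: saturation/actual vapour pressure [kPa]
        """
        num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
        den = delta + gamma * (1.0 + 0.34 * u2)
        return num / den

    # Example for a warm day (illustrative values)
    val = et0_fao56(delta=0.189, rn=13.3, g=0.14, gamma=0.0666,
                    t_mean=21.5, u2=2.08, es=2.56, ea=1.41)
    print(f"ET0 = {val:.2f} mm/day")
    ```

    An ANN approach such as the one in this record would typically be trained to reproduce ET0 values computed this way from a reduced set of weather inputs.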

  3. Waterlike structural and excess entropy anomalies in liquid beryllium fluoride.

    Agarwal, Manish; Chakravarty, Charusita

    2007-11-22

    The relationship between structural order metrics and the excess entropy is studied using the transferable rigid ion model (TRIM) of beryllium fluoride melt, which is known to display waterlike thermodynamic anomalies. The order map for liquid BeF2, plotted between translational and tetrahedral order metrics, shows a structurally anomalous regime, similar to that seen in water and silica melt, corresponding to a band of state points for which average tetrahedral (q(tet)) and translational (tau) order are strongly correlated. The tetrahedral order parameter distributions further substantiate the analogous structural properties of BeF2, SiO2, and H2O. A region of excess entropy anomaly can be defined within which the pair correlation contribution to the excess entropy (S2) shows an anomalous rise with isothermal compression. Within this region of anomalous entropy behavior, q(tet) and S2 display a strong negative correlation, indicating the connection between the thermodynamic and the structural anomalies. The existence of this region of excess entropy anomaly must play an important role in determining the existence of diffusional and mobility anomalies, given the excess entropy scaling of transport properties observed in many liquids. PMID:17963376
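    For reference, the pair-correlation contribution to the excess entropy discussed above is, for a one-component fluid, conventionally obtained from the radial distribution function g(r) (for a multicomponent melt such as BeF2 it generalizes to a density-weighted sum over species pairs):

    ```latex
    S_2 = -2\pi\rho k_B \int_0^{\infty} \left[\, g(r)\ln g(r) - g(r) + 1 \,\right] r^2 \, dr
    ```

    An anomalous rise of S_2 under isothermal compression then corresponds to the structure becoming less ordered as density increases.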

  4. Investigation of excess thyroid cancer incidence in Los Alamos County

    Athas, W.F.

    1996-04-01

    Los Alamos County (LAC) is home to the Los Alamos National Laboratory, a U.S. Department of Energy (DOE) nuclear research and design facility. In 1991, the DOE funded the New Mexico Department of Health to conduct a review of cancer incidence rates in LAC in response to citizen concerns over what was perceived as a large excess of brain tumors and a possible relationship to radiological contaminants from the Laboratory. The study found no unusual or alarming pattern in the incidence of brain cancer; however, a fourfold excess of thyroid cancer was observed during the late 1980s. A rapid review of the medical records for cases diagnosed between 1986 and 1990 failed to demonstrate that the thyroid cancer excess had resulted from enhanced detection. Surveillance activities subsequently undertaken to monitor the trend revealed that the excess persisted into 1993. A feasibility assessment of further studies was made, and ultimately, an investigation was conducted to document the epidemiologic characteristics of the excess in detail and to explore possible causes through a case-series records review. Findings from the investigation are the subject of this report.

  5. Treating both wastewater and excess sludge with an innovative process

    HE Sheng-bing; WANG Bao-zhen; WANG Lin; JIANG Yi-feng

    2003-01-01

    The innovative process consists of a biological unit for wastewater treatment and an ozonation unit for excess sludge treatment. An aerobic membrane bioreactor (MBR) was used to remove organics and nitrogen, and an anaerobic reactor was added to the biological unit to release the phosphorus contained in the aerobic sludge and thus enhance phosphorus removal. The excess sludge produced in the MBR was fed to an ozone contact column and reacted with ozone; the ozonated sludge was then returned to the MBR for further biological treatment. Experimental results showed that this process could remove organics, nitrogen and phosphorus efficiently: the removals for COD, NH3-N, TN and TP were 93.17%, 97.57%, 82.77% and 79.5%, respectively. Batch tests indicated the specific nitrification and denitrification rates. Under the test conditions, the sludge concentration in the MBR was kept at 5000-6000 mg/L, and the wasted sludge was ozonated at an ozone dosage of 0.10 kg O3/kg SS. During the experimental period of two months, no excess sludge was wasted, and zero withdrawal of excess sludge was achieved. An economic analysis found that the additional ozonation operating cost for treating both wastewater and excess sludge was only 0.045 RMB Yuan (USD 0.0054) per m3 of wastewater.

  6. Prenatal programming: adverse cardiac programming by gestational testosterone excess.

    Vyas, Arpita K; Hoang, Vanessa; Padmanabhan, Vasantha; Gilbreath, Ebony; Mietelka, Kristy A

    2016-01-01

    Adverse events during the prenatal and early postnatal period of life are associated with the development of cardiovascular disease in adulthood. Prenatal exposure to excess testosterone (T) in sheep induces adverse reproductive and metabolic programming, leading to polycystic ovarian syndrome, insulin resistance and hypertension in the female offspring. We hypothesized that prenatal T excess disrupts insulin signaling in the cardiac left ventricle, leading to adverse cardiac programming. Left ventricular tissues were obtained from 2-year-old female sheep treated prenatally with T or oil (control) from days 30-90 of gestation. Molecular markers of insulin signaling and cardiac hypertrophy were analyzed. Prenatal T excess increased the gene expression of molecular markers involved in insulin signaling and those associated with cardiac hypertrophy and stress, including insulin receptor substrate-1 (IRS-1), phosphatidylinositol-3 kinase (PI3K), mammalian target of rapamycin complex 1 (mTORC1), nuclear factor of activated T cells-c3 (NFATc3), and brain natriuretic peptide (BNP), compared to controls. Furthermore, prenatal T excess increased the phosphorylation of PI3K, AKT and mTOR. Myocardial disarray (multifocal) and an increase in cardiomyocyte diameter were evident on histological investigation in T-treated females. These findings support adverse left ventricular remodeling by prenatal T excess. PMID:27328820

  7. Fast and Accurate Computation Tools for Gravitational Waveforms from Binary Systems with any Orbital Eccentricity

    Pierro, V; Spallicci, A D; Laserra, E; Recano, F

    2001-01-01

    The relevance of orbital eccentricity in the detection of gravitational radiation from (steady-state) binary stars is emphasized. Computationally effective (fast and accurate) tools for constructing gravitational wave templates from binary stars with any orbital eccentricity are introduced, including tight estimation criteria for the pertinent truncation and approximation errors.

  8. Battery Management Systems: Accurate State-of-Charge Indication for Battery-Powered Applications

    Pop, V.; Bergveld, H.J.; Danilov, D.; Regtien, P.P.L.; Notten, P.H.L.

    2008-01-01

    Battery Management Systems – Universal State-of-Charge indication for portable applications describes the field of State-of-Charge (SoC) indication for rechargeable batteries. With the emergence of battery-powered devices with an increasing number of power-hungry features, accurately estimating the

  9. Accurate mass error correction in liquid chromatography time-of-flight mass spectrometry based metabolomics

    Mihaleva, V.V.; Vorst, O.F.J.; Maliepaard, C.A.; Verhoeven, H.A.; Vos, de C.H.; Hall, R.D.; Ham, van R.C.H.J.

    2008-01-01

    Compound identification and annotation in (untargeted) metabolomics experiments based on accurate mass require the highest possible accuracy of the mass determination. Experimental LC/TOF-MS platforms equipped with a time-to-digital converter (TDC) give the best mass estimate for those mass signals

  10. Breaking worse: The emergence of krokodil and excessive injuries among people who inject drugs in Eurasia

    Grund, Jean-Paul; Latypov, Alisher; Harris, Magdalena

    2013-01-01

    Background: Krokodil, a homemade injectable opioid, gained its moniker from the excessive harms associated with its use, such as ulcerations, amputations and discolored scale-like skin. While a relatively new phenomenon, krokodil use is prevalent in Russia and Ukraine, with at least 100,000 and around 20,000 people respectively estimated to have injected the drug in 2011. In this paper we review the existing information on the production and use of krokodil, within the con...

  11. Are National HFC Inventory Reports Accurate?

    Lunt, M. F.; Rigby, M. L.; Ganesan, A.; Manning, A.; O'Doherty, S.; Prinn, R. G.; Saito, T.; Harth, C. M.; Muhle, J.; Weiss, R. F.; Salameh, P.; Arnold, T.; Yokouchi, Y.; Krummel, P. B.; Steele, P.; Fraser, P. J.; Li, S.; Park, S.; Kim, J.; Reimann, S.; Vollmer, M. K.; Lunder, C. R.; Hermansen, O.; Schmidbauer, N.; Young, D.; Simmonds, P. G.

    2014-12-01

    Hydrofluorocarbons (HFCs) were introduced as replacements for ozone depleting chlorinated gases due to their negligible ozone depletion potential. As a result, these potent greenhouse gases are now rapidly increasing in atmospheric mole fraction. However, at present, less than 50% of HFC emissions, as inferred from models combined with atmospheric measurements (top-down methods), can be accounted for by the annual national reports to the United Nations Framework Convention on Climate Change (UNFCCC). There are at least two possible reasons for the discrepancy. Firstly, significant emissions could be originating from countries not required to report to the UNFCCC ("non-Annex 1" countries). Secondly, emissions reports themselves may be subject to inaccuracies. For example the HFC emission factors used in the 'bottom-up' calculation of emissions tend to be technology-specific (refrigeration, air conditioning etc.), but not tuned to the properties of individual HFCs. To provide a new top-down perspective, we inferred emissions using high frequency HFC measurements from the Advanced Global Atmospheric Gases Experiment (AGAGE) and the National Institute for Environmental Studies (NIES) networks. Global and regional emissions information was inferred from these measurements using a coupled Eulerian and Lagrangian system, based on NCAR's MOZART model and the UK Met Office NAME model. Uncertainties in this measurement and modelling framework were investigated using a hierarchical Bayesian inverse method. Global and regional emissions estimates for five of the major HFCs (HFC-134a, HFC-125, HFC-143a, HFC-32, HFC-152a) from 2004-2012 are presented. It was found that, when aggregated, the top-down estimates from Annex 1 countries agreed remarkably well with the reported emissions, suggesting the non-Annex 1 emissions make up the difference with the top-down global estimate. However, when these HFC species are viewed individually we find that emissions of HFC-134a are over

  12. Towards a more accurate concept of fuels

    Full text: The introduction of LEU in Atucha and the approval of CARA show an advancement in the fuels of the Argentine power stations, which stimulates and indicates a direction to follow. In the first case, the use of enriched-U fuel relaxes an important restriction related to neutron economy; this means that it is possible to design less penalized fuels using more Zry. The second case allows a decrease in the linear power of the rods, enabling better performance of the fuel in normal and also in accident conditions. In this work we wish to emphasize this last point, trying to find a design in which the surface power of the rod is diminished. Hence, in accident conditions owing to loss of coolant, the cladding tube will not reach temperatures that produce oxidation, with the corresponding H2 formation, nor plasticity sufficient to form blisters that would obstruct reflooding, nor hydration that would produce embrittlement and rupture of the cladding tube, with the corresponding dispersion of radioactive material. This work is oriented to finding rod designs with quasi-rectangular geometry to lower the surface power of the rods, in order to obtain a lower central temperature of the rod. Thus, critical temperatures will not be reached in case of loss of coolant. This design is becoming a reality after PPFAE's efforts in the fabrication of cladding tubes with different circumferential values, rectangular in particular. This geometry, with an appropriate pellet design, can minimize pellet-cladding interaction and, through accurate width selection, non-rectified pellets could be used. This means an important economy in pellet production, as well as an advance in the future fabrication of fuels in glove boxes and hot cells. The sequence to determine critical geometrical parameters is described and some rod dispositions are explored

  13. Accurate orbit propagation with planetary close encounters

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both from the point of view of the dynamical stability of the formulation and of the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).

  14. Towards Accurate Application Characterization for Exascale (APEX)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  15. Fast, accurate standardless XRF analysis with IQ+

    Full text: Due to both chemical and physical effects, the most accurate XRF data are derived from calibrations set up using in-type standards, necessitating some prior knowledge of the samples being analysed. Whilst this is often the case for routine samples, particularly in production control, for completely unknown samples the identification and availability of in-type standards can be problematic. Under these circumstances standardless analysis can offer a viable solution. Successful analysis of completely unknown samples requires a complete chemical overview of the specimen together with the flexibility of a fundamental parameters (FP) algorithm to handle wide-ranging compositions. Although FP algorithms are improving all the time, most still require set-up samples to define the spectrometer response to a particular element. Whilst such materials may be referred to as standards, the emphasis in this kind of analysis is that only a single calibration point is required per element and that the standard chosen does not have to be in-type. The high sensitivities of modern XRF spectrometers, together with recent developments in detector counting electronics that possess a large dynamic range and high-speed data-processing capacity, bring significant advances to fast, standardless analysis. Illustrated with a tantalite-columbite heavy-mineral concentrate grading use case, this paper will present the philosophy behind the semi-quantitative IQ+ software and the required hardware. This combination can give a rapid scan-based overview and quantification of the sample in less than two minutes, together with the ability to define channels for specific elements of interest where higher accuracy and lower limits of quantification are required. The accuracy, precision and limitations of standardless analysis will be assessed using certified reference materials of widely differing chemical and physical composition. 
Copyright (2002) Australian X-ray Analytical Association Inc

  16. Permanent demand excess as business strategy: an analysis of the Brazilian higher-education market

    Rodrigo Menon Simões Moita

    2015-03-01

    Full Text Available Many Higher Education Institutions (HEIs) establish tuition below the equilibrium price to generate permanent excess demand. This paper first adapts Becker's (1991) theory to understand why HEIs price this way. The fact that students are both consumers and inputs in the education production process gives rise to a market equilibrium where some firms have excess demand and charge high prices, while others charge low prices and have empty seats. Second, the paper analyzes this equilibrium empirically. We estimated the demand for undergraduate courses in Business Administration in the State of São Paulo. The results show that tuition, the quality of incoming students and the percentage of lecturers holding doctorate degrees are the determining factors of students' choice. Since student quality determines the demand for a HEI, we calculate the value to a HEI of attracting better students, that is, the total revenue that each HEI gives up to guarantee excess demand. Regarding this "investment" in selectivity, 39 HEIs in São Paulo give up a combined R$ 5 million (or US$ 3.14 million) in revenue per year per freshman class, which corresponds to 7.6% of the revenue coming from a freshman class.

  17. Characterizing the stellar photospheres and near-infrared excesses in accreting T Tauri systems

    McClure, M K; Espaillat, C; Hartmann, L; Hernandez, J; Ingleby, L; Luhman, K L; D'Alessio, P; Sargent, B

    2013-01-01

    Using NASA IRTF SpeX data from 0.8 to 4.5 μm, we determine self-consistently the stellar properties and excess emission above the photosphere for a sample of classical T Tauri stars (CTTS) in the Taurus molecular cloud with varying degrees of accretion. This process uses a combination of techniques from the recent literature as well as observations of weak-line T Tauri stars (WTTS) to account for the differences in surface gravity and chromospheric activity between the TTS and dwarfs, which are typically used as photospheric templates for CTTS. Our improved veiling and extinction estimates for our targets allow us to extract flux-calibrated spectra of the excess in the near-infrared. We find that we are able to produce an acceptable parametric fit to the near-infrared excesses using a combination of up to three blackbodies. In half of our sample, two blackbodies at temperatures of 8000 K and 1600 K suffice. These temperatures and the corresponding solid angles are consistent with emission from the accreti...

  18. Determination of ethyl glucuronide in hair to assess excessive alcohol consumption in a student population.

    Oppolzer, David; Barroso, Mário; Gallardo, Eugenia

    2016-03-01

    Hair analysis for ethyl glucuronide (EtG) was used to evaluate the pattern of alcohol consumption amongst the Portuguese university student population. A total of 975 samples were analysed. For data interpretation, the 2014 guidelines from the Society of Hair Testing (SoHT) for the use of alcohol markers in hair for the assessment of both abstinence and chronic excessive alcohol consumption were considered. EtG concentrations were significantly higher in the male population. The effect of hair products and cosmetics was evaluated by analysis of variance (ANOVA), and significantly lower concentrations were obtained when conditioner or hair mask was used or when hair was dyed. Based on the analytical data and the information obtained in the questionnaires from the participants, receiver operating characteristic (ROC) curves were constructed in order to determine the ideal cut-offs for our study population. Optimal cut-off values were estimated at 7.3 pg/mg for control of abstinence or rare occasional drinking and 29.8 pg/mg for excessive consumption. These values are very close to those suggested by the SoHT, proving their adequacy for the studied population. Overall, the obtained EtG concentrations demonstrate that participants are usually well aware of their consumption pattern, correlating with the self-reported consumed alcohol quantity, consumption habits and excessive consumption close to the time of hair sampling. PMID:26537927
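    The ROC-based cut-off selection described above can be sketched as follows. The data here are simulated EtG-like concentrations, not the study's measurements, and Youden's J is assumed as the optimality criterion (the record does not specify which criterion was used).

    ```python
    # Sketch: choose a classification cut-off from a ROC curve via Youden's J
    # statistic, on synthetic hair-EtG concentrations (pg/mg).
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(42)
    # Simulated concentrations: non-excessive vs excessive drinkers
    neg = rng.lognormal(mean=1.0, sigma=0.8, size=300)
    pos = rng.lognormal(mean=3.6, sigma=0.6, size=100)
    etg = np.concatenate([neg, pos])
    labels = np.concatenate([np.zeros(300), np.ones(100)])

    fpr, tpr, thresholds = roc_curve(labels, etg)
    best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
    print(f"optimal cut-off: {thresholds[best]:.1f} pg/mg")
    ```

    On real data the same procedure would be run once against the abstinence/occasional-drinking labels and once against the excessive-consumption labels to obtain the two cut-offs reported.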

  19. Financial Instability - a Result of Excess Liquidity or Credit Cycles?

    Heebøll-Christensen, Christian

    … financial crisis. Consistent with monetarist theory, the results suggest a stable money supply-demand relation in the period in question. However, the implied excess liquidity only produced financially destabilizing effects after the year 2000. Meanwhile, the results also point to persistent cycles of real house prices and leverage, which appear to have been driven by real credit shocks, in accordance with post-Keynesian theories of financial instability. Importantly, however, these mechanisms of credit growth and excess liquidity are found to be closely related. In regard to the global financial crisis, a prolonged credit cycle starting in the mid-1990s, possibly initiated by subprime mortgage innovations, appears to have created a long-run housing bubble. Further fuelled by expansionary monetary policy and excess liquidity, the bubble accelerated in the period following the dot-com crash, until it …

  20. The Presence of Excess Oxygen in Burning Metallic Materials

    Wilson, D. Bruce; Steinberg, Theodore A.; Stoltzfus, Joel M.; Fries, Joseph (Technical Monitor)

    2000-01-01

    Early work on the burning of iron rods under the conditions of the ASTM/NASA flammability test showed that there was excess oxygen, that is, oxygen above the stoichiometric requirement for iron(III) oxide, present in the molten product during burning. Since that work, this phenomenon has been confirmed for burning under microgravity conditions and has been observed for a wide range of metals under burning conditions of a single micro-drop at ambient pressure and in 20-second microgravity tests under pressurized oxygen-enriched conditions. This paper reviews these experimental observations and discusses a possible thermodynamic analysis for the metals iron, aluminum, and cobalt. The excess oxygen in the burning molten iron oxide is represented as combined to form a series of ferrite ions. For aluminum the excess oxygen is represented as a bridging species, and a similar explanation is postulated for the cobalt system.

  1. Testing ATLAS diboson excess with dark matter searches at LHC

    The ATLAS collaboration has recently reported a 2.6σ excess in the search for a heavy resonance decaying into a pair of weak gauge bosons. Only fully hadronic final states are looked for in the analysis. If the observed excess really originates from gauge-boson decays, other decay modes of the gauge bosons would inevitably leave a trace in other exotic searches. In this paper, we propose using the decay of the Z boson into a pair of neutrinos to test the excess. This decay leads to very large missing energy and can be probed with conventional dark matter searches at the LHC. We discuss the current constraints from the dark matter searches and the prospects. We find that optimizing these searches may give a very robust probe of the resonance, even with the currently available data of the 8 TeV LHC.

  2. Spitzer Surveys of IR Excesses of White Dwarfs

    Chu, Y -H; Bilíkovà, J; Riddle, A; Su, K Y -L

    2010-01-01

    IR excesses of white dwarfs (WDs) can be used to diagnose the presence of low-mass companions, planets, and circumstellar dust. Using different combinations of wavelengths and WD temperatures, circumstellar dust at different radial distances can be surveyed. The Spitzer Space Telescope has been used to search for IR excesses of white dwarfs. Two types of circumstellar dust disks have been found: (1) small disks around cool WDs, and (2) large disks around hot WDs with Teff near 100,000 K. The small dust disks are within the Roche limit, and are commonly accepted to have originated from tidally crushed asteroids. The large dust disks, at tens of AU from the central WDs, have been suggested to be produced by increased collisions among Kuiper Belt-like objects. In this paper, we discuss Spitzer IRAC surveys of small dust disks around cool WDs, a MIPS survey of large dust disks around hot WDs, and an archival Spitzer survey of IR excesses of WDs.

  4. Internet Addiction and Excessive Social Networks Use: What About Facebook?

    Guedes, Eduardo; Sancassiani, Federica; Carta, Mauro Giovani; Campos, Carlos; Machado, Sergio; King, Anna Lucia Spear; Nardi, Antonio Egidio

    2016-01-01

    Facebook is notably the most widely known and used social network worldwide. It has been described as a valuable tool for leisure and communication between people all over the world. However, healthy and conscious Facebook use is contrasted by excessive use and lack of control, creating an addiction that severely impacts the everyday life of many users, mainly youths. While Facebook use seems to be related to the need to belong, to affiliate with others, and to self-presentation, the onset of excessive Facebook use and addiction could be associated with reward and gratification mechanisms as well as some personality traits. Studies from several countries indicate different Facebook addiction prevalence rates, mainly due to the use of a wide range of evaluation instruments and to the lack of a clear and valid definition of this construct. Further investigations are needed to establish whether excessive Facebook use can be considered a specific online addiction disorder or an Internet addiction subtype. PMID:27418940

  5. The 750 GeV diphoton excess and SUSY

    Heinemeyer, S

    2016-01-01

    The LHC experiments ATLAS and CMS have reported an excess in the diphoton spectrum at \\sim 750 GeV. At the same time the motivation for Supersymmetry (SUSY) remains unbowed. Consequently, we review briefly the proposals to explain this excess in SUSY, focusing on "pure" (N)MSSM solutions. We then review in more detail a proposal to realize this excess within the NMSSM. In this particular scenario a Higgs boson with mass around 750 GeV decays to two light pseudo-scalar Higgs bosons. Via mixing with the pion these pseudo-scalars decay into a pair of highly collimated photons, which are identified as one photon, thus resulting in the observed signal.

  6. Assessment of excess lung cancer risk due to the indoor radon exposure in some cities of Jordan

    Indoor radon concentration levels were measured in the cities of Amman, Zarka, and Salt using CR-39 etch track detectors. The average indoor concentration levels in these cities were found to be 41.3 Bq m^-3, 30.7 Bq m^-3, and 51.2 Bq m^-3, respectively. Based on these data, excess lung cancer risk was assessed using the risk factors recommended by UNSCEAR. The lifetime excess lung cancer risk due to lifetime exposure at age 70 was found to vary from 33 to 55 per million person-years (MPY) when the lower value of the risk factors is used; if the upper value is used, the estimated excess lung cancer risk at age 70 varies from 74 to 125 per MPY. (author)
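
    A minimal numeric sketch of the risk estimate above: the excess risk scales linearly with the mean indoor radon concentration. The risk factor below is not the UNSCEAR value itself but is inferred from the numbers reported in the abstract (about 33 per MPY at 30.7 Bq m^-3), so it is illustrative only.

```python
# Lifetime excess lung cancer risk per million person-years (MPY),
# modeled as linear in the mean indoor radon concentration.
# The factor is inferred from the reported figures, not taken from UNSCEAR.
RISK_FACTOR_LOW = 1.075  # cases per MPY per (Bq m^-3), illustrative

def excess_lung_cancer_risk(concentration_bq_m3, risk_factor=RISK_FACTOR_LOW):
    """Excess lifetime lung cancer cases per MPY at the given concentration."""
    return concentration_bq_m3 * risk_factor

# Mean concentrations from the three surveyed cities (Bq m^-3).
for conc in [41.3, 30.7, 51.2]:
    print(conc, round(excess_lung_cancer_risk(conc)))
```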

  7. Excessive recreational computer use and food consumption behaviour among adolescents

    Mao Yuping

    2010-08-01

    Full Text Available Abstract Introduction Using the 2005 California Health Interview Survey (CHIS) data, we explore the association between excessive recreational computer use and specific food consumption behaviors among California's adolescents aged 12-17. Method The adolescent component of CHIS 2005 measured the respondents' average number of hours spent viewing TV on a weekday, viewing TV on a weekend day, playing with a computer on a weekday, and playing with a computer on a weekend day. We recode these four continuous variables into four variables of "excessive media use," defining more than three hours of using a medium per day as "excessive." These four variables are then used in logistic regressions to predict different food consumption behaviors on the previous day: having fast food, eating sugary food more than once, drinking sugary drinks more than once, and eating more than five servings of fruits and vegetables. We use the following covariates in the logistic regressions: age, gender, race/ethnicity, parental education, household poverty status, whether born in the U.S., and whether living with two parents. Results Having fast food on the previous day is associated with excessive weekday TV viewing (O.R. = 1.38, p Conclusion Excessive recreational computer use independently predicts undesirable eating behaviors that could lead to overweight and obesity. Preventive measures ranging from parental/youth counseling to content regulation might address the potential undesirable influence of excessive computer use on eating behaviors among children and adolescents.
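
    The recoding step described above can be sketched as follows. The three-hours-per-day threshold comes from the abstract; the 2x2 counts and the unadjusted odds-ratio computation are invented stand-ins for the study's covariate-adjusted logistic regressions.

```python
# Recoding sketch: continuous hours-per-day become a binary "excessive"
# flag (more than three hours per day, as in the abstract).
def is_excessive(hours_per_day, threshold=3.0):
    return hours_per_day > threshold

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio of the outcome given the exposure."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical counts: fast food yes/no by excessive weekday TV yes/no.
print(round(odds_ratio(120, 380, 200, 870), 2))  # ~1.37
```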

  8. Origin of the Lyman excess in early-type stars

    Cesaroni, R.; Sánchez-Monge, Á.; Beltrán, M. T.; Molinari, S.; Olmi, L.; Treviño-Morales, S. P.

    2016-04-01

    Context. Ionized regions around early-type stars are believed to be well-known objects, but until recently, our knowledge of the relation between the free-free radio emission and the IR emission has been observationally hindered by the limited angular resolution in the far-IR. The advent of Herschel has now made it possible to obtain a more precise comparison between the two regimes, and it has been found that about a third of the young H ii regions emit more Lyman continuum photons than expected, thus presenting a Lyman excess. Aims: With the present study we wish to distinguish between two scenarios that have been proposed to explain the existence of the Lyman excess: (i) underestimation of the bolometric luminosity, or (ii) additional emission of Lyman-continuum photons from an accretion shock. Methods: We observed an outflow (SiO) and an infall (HCO+) tracer toward a complete sample of 200 H ii regions, 67 of which present the Lyman excess. Our goal was to search for any systematic difference between sources with Lyman excess and those without. Results: While the outflow tracer does not reveal any significant difference between the two subsamples of H ii regions, the infall tracer indicates that the Lyman-excess sources are more associated with infall signposts than the other objects. Conclusions: Our findings indicate that the most plausible explanation for the Lyman excess is that in addition to the Lyman continuum emission from the early-type star, UV photons are emitted from accretion shocks in the stellar neighborhood. This result suggests that high-mass stars and/or stellar clusters containing young massive stars may continue to accrete for a long time, even after the development of a compact H ii region. Based on observations carried out with the IRAM 30 m Telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).

  9. Fetal Programming of Obesity: Maternal Obesity and Excessive Weight Gain

    Seray Kabaran

    2014-10-01

    Full Text Available The prevalence of obesity is an increasing health problem throughout the world. Maternal pre-pregnancy weight, maternal nutrition and maternal weight gain are among the factors that can cause childhood obesity. Both maternal obesity and excessive weight gain increase the risks of excessive fetal weight gain and high birth weight. Rapid weight gain during the fetal period leads to changes in the newborn's body composition. Specifically, an increase in the body fat ratio in the early periods is associated with an increased risk of obesity in later periods. It has been reported that over-nutrition during the fetal period could cause excessive food intake during the postpartum period as a result of metabolic programming. By influencing fetal metabolism and tissue development, maternal obesity and excessive weight gain change the amounts of nutrients and metabolites that pass to the fetus, thus causing excessive fetal weight gain, which in turn increases the risk of obesity. Fetal over-nutrition and excessive weight gain cause permanent metabolic and physiologic changes in developing organs. While the mechanisms that affect these organs are not fully understood, it is thought that the changes may occur as a result of changes in fetal energy metabolism, appetite control, neuroendocrine functions, adipose tissue mass, epigenetic mechanisms and gene expression. In this review article, the effects of maternal body weight and weight gain on fetal development, newborn birth weight and the risk of obesity are evaluated, and potential mechanisms that can explain the effects of fetal over-nutrition on the risk of obesity are discussed [TAF Prev Med Bull 2014; 13(5): 427-434]

  10. Millimeter and submillimeter excess emission in M 33 revealed by Planck and LABOCA

    Hermelo, I.; Relaño, M.; Lisenfeld, U.; Verley, S.; Kramer, C.; Ruiz-Lara, T.; Boquien, M.; Xilouris, E. M.; Albrecht, M.

    2016-05-01

    Context. Previous studies have shown the existence of an excess of emission at submillimeter (submm) and millimeter (mm) wavelengths in the spectral energy distribution (SED) of many low-metallicity galaxies. The so-called "submm excess", whose origin remains unknown, challenges our understanding of the dust properties in low-metallicity environments. Aims: The goal of the present study is to model separately the emission from the star forming (SF) component and the emission from the diffuse interstellar medium (ISM) in the nearby spiral galaxy M 33 in order to determine whether both components can be well fitted using radiation transfer models or whether there is an excess of submm emission associated with one or both of them. Methods: We decomposed the observed SED of M 33 into its SF and diffuse components. Mid-infrared (MIR) and far-infrared (FIR) fluxes were extracted from Spitzer and Herschel data. At submm and mm wavelengths, we used ground-based observations from APEX to measure the emission from the SF component and data from the Planck space telescope to estimate the diffuse emission. Both components were separately fitted using radiation transfer models based on standard dust properties (i.e., emissivity index β = 2) and a realistic geometry. The large number of previous studies helped us to estimate the thermal radio emission and to constrain an important part of the input parameters of the models. Both modeled SEDs were combined to build the global SED of M 33. In addition, the radiation field necessary to power the dust emission in our modeling was compared with observations from GALEX, Sloan, and Spitzer. Results: Our modeling is able to reproduce the observations at MIR and FIR wavelengths, but we found a strong excess of emission at submm and mm wavelengths where the model expectations severely underestimate the LABOCA and Planck fluxes. 
We also found that the ultraviolet (UV) radiation escaping the galaxy is 70% higher than the model predictions.

  11. ATLAS on-Z Excess Through Vector-Like Quarks

    Endo, Motoi

    2016-01-01

    We investigate the possibility that the excess observed in the leptonic-$Z +$jets $+\slashed{E}_T$ ATLAS SUSY search is due to pair production of a vector-like quark $U$ decaying to first-generation quarks and a $Z$ boson. We find that the excess can be explained within the 2$\sigma$ (up to 1.4$\sigma$) level while evading the constraints from the other LHC searches. The preferred ranges of the mass and branching ratio are $610$–… GeV and $0.3$-$0.45$, respectively.

  12. Prevalence of excessive screen time and associated factors in adolescents

    Joana Marcela Sales de Lucena; Luanna Alexandra Cheng; Thaísa Leite Mafaldo Cavalcante; Vanessa Araújo da Silva; José Cazuza de Farias Júnior

    2015-01-01

    Objective: To determine the prevalence of excessive screen time and to analyze associated factors among adolescents. Methods: This was a cross-sectional school-based epidemiological study of 2874 high school adolescents aged 14-19 years (57.8% female) from public and private schools in the city of João Pessoa, PB, Northeast Brazil. Excessive screen time was defined as watching television and playing video games or using the computer for more than 2 h/day. The associated factors analyze...

  13. Interpreting 750 GeV diphoton excess in plain NMSSM

    Badziak, Marcin; Olechowski, Marek; Pokorski, Stefan; Sakurai, Kazuki

    2016-09-01

    The NMSSM has enough ingredients to explain the diphoton excess at 750 GeV: a singlet-like (pseudo)scalar a, and higgsinos as heavy vector-like fermions. We consider the production of the 750 GeV singlet-like pseudo scalar a from a decay of the doublet-like pseudo scalar A, and the subsequent decay of a into two photons via a higgsino loop. We demonstrate that this cascade decay of the NMSSM Higgs bosons can explain the diphoton excess at 750 GeV.

  14. Specification tests of asset pricing models using excess returns

    Kan, Raymond; Robotti, Cesare

    2006-01-01

    We discuss the impact of different formulations of asset pricing models on the outcome of specification tests that are performed using excess returns. It is generally believed that when only excess returns are used for testing asset pricing models, the mean of the stochastic discount factor (SDF) does not matter. We show that the mean of the candidate SDF is only irrelevant when the model is correct. When the model is misspecified, the mean of the SDF can be a very important determinant of th...

  15. Cholesterol homeostasis: How do cells sense sterol excess?

    Howe, Vicky; Sharpe, Laura J; Alexopoulos, Stephanie J; Kunze, Sarah V; Chua, Ngee Kiat; Li, Dianfan; Brown, Andrew J

    2016-09-01

    Cholesterol is vital in mammals, but toxic in excess. Consequently, elaborate molecular mechanisms have evolved to maintain this sterol within narrow limits. How cells sense excess cholesterol is an intriguing area of research. Cells sense cholesterol, and other related sterols such as oxysterols or cholesterol synthesis intermediates, and respond to changing levels through several elegant mechanisms of feedback regulation. Cholesterol sensing involves both direct binding of sterols to the homeostatic machinery located in the endoplasmic reticulum (ER), and indirect effects elicited by sterol-dependent alteration of the physical properties of membranes. Here, we examine the mechanisms employed by cells to maintain cholesterol homeostasis. PMID:26993747

  16. The relation between excess control and cost of capital

    Yves Bozec; Claude Laurin; Iwan Meier

    2014-01-01

    Purpose – The purpose of this study is to investigate the relationship between dominant shareholders, whose voting rights exceed cash flow rights (excess control), and firms’ cost of capital, including both equity capital and debt. Design/methodology/approach – This research is conducted in Canada over a four-year period from 2002 to 2005 and uses panel data of 155 S&P/TSX firms. The weighted average cost of capital is regressed on excess control using fixed-effect regressions in a two-stage ...

  17. Enhancing excess sludge aerobic digestion with low intensity ultrasound

    DING Wen-chuan; LI Dong-xue; ZENG Xiao-lan; LONG Teng-rui

    2006-01-01

    In order to enhance the efficiency of aerobic digestion, excess sludge was irradiated with low-intensity ultrasound at a frequency of 28 kHz and an acoustic intensity of 0.53 W/cm^2. The results show that sludge stabilization without ultrasonic treatment is achieved after 17 d of digestion, whereas the digestion time of the ultrasonic groups can be cut by 3-7 d. For the same digestion time, the total volatile suspended solids removal rate in the ultrasonic groups is higher than in the control group. The kinetics of aerobic digestion of excess sludge with ultrasound can also be described by a first-order reaction.
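
    The first-order kinetics mentioned in the last sentence can be sketched as VSS(t) = VSS0 · exp(−k·t); the rate constants below are hypothetical, chosen only to illustrate how a faster ultrasonic digestion shows up as a larger k and hence a shorter time to a target VSS reduction.

```python
import math

def vss_remaining(vss0, k_per_day, t_days):
    """Volatile suspended solids remaining after t days, first-order decay."""
    return vss0 * math.exp(-k_per_day * t_days)

def time_to_fraction(k_per_day, fraction):
    """Days needed to reduce VSS to the given fraction of its initial value."""
    return -math.log(fraction) / k_per_day

k_control, k_ultrasound = 0.05, 0.07  # 1/day, hypothetical rate constants
print(time_to_fraction(k_control, 0.6), time_to_fraction(k_ultrasound, 0.6))
```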

  18. Experimental and theoretical excess molar enthalpies of ternary and binary mixtures containing 2-Methoxy-2-Methylpropane, 1-propanol, heptane

    Highlights: • Experimental enthalpies for the ternary system MTBE + 1-propanol + heptane were measured. • No experimental ternary values were found in the currently available literature. • Experimental enthalpies for the binary system 1-propanol + heptane were measured. • Excess molar enthalpies are positive over the whole range of composition. • The ternary contribution is also positive, and the representation is asymmetric. -- Abstract: Excess molar enthalpies, at the temperature of 298.15 K and atmospheric pressure, have been measured for the ternary system {x1 2-Methoxy-2-Methylpropane (MTBE) + x2 1-propanol + (1 − x1 − x2) heptane}, over the whole composition range. Experimental excess molar enthalpy data for the involved binary mixture {x 1-propanol + (1 − x) heptane} at 298.15 K and atmospheric pressure are also reported. We are not aware of any previous experimental measurement of excess enthalpy in the literature for the ternary system presented in this study. Values of the excess molar enthalpies were measured using a Calvet microcalorimeter. The ternary contribution to the excess enthalpy was correlated with the equation due to Morris et al. (1975) [15], and the equation proposed by Myers–Scott (1963) [14] was used to fit the experimental binary data measured in this work. Additionally, the experimental results are compared with the estimates obtained by applying the group contribution model of UNIFAC, in the versions of Larsen et al. (1987) [16] and Gmehling et al. (1993) [17]. Several empirical expressions for estimating ternary properties from binary results were also tested
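
    As a sketch of the binary-fitting step, the following evaluates a Redlich–Kister-type polynomial, a common simpler alternative to the Myers–Scott form actually used in the paper; the coefficients are invented for illustration, not fitted values from the study.

```python
# Redlich-Kister-type representation of a binary excess molar enthalpy:
#   H_E(x) = x * (1 - x) * sum_k A_k * (2x - 1)^k
# It vanishes at both pure-component limits by construction.
def excess_enthalpy(x, coeffs):
    """H^E (J/mol) at mole fraction x of component 1."""
    z = 2.0 * x - 1.0
    return x * (1.0 - x) * sum(a * z**k for k, a in enumerate(coeffs))

A = [2500.0, -400.0, 150.0]  # J/mol, hypothetical coefficients
print(excess_enthalpy(0.5, A))  # equimolar value; 0.25 * A[0] = 625.0
```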

  19. Distributed Formation State Estimation Algorithms Under Resource and Multi-Tasking Constraints Project

    National Aeronautics and Space Administration — Recent work has developed a number of architectures and algorithms for accurately estimating spacecraft and formation states. The estimation accuracy achievable...

  20. Excess cardiovascular mortality associated with cold spells in the Czech Republic

    Kyncl Jan

    2009-01-01

    Full Text Available Abstract Background The association between cardiovascular mortality and winter cold spells was evaluated in the population of the Czech Republic over the 21-year period 1986–2006. No comprehensive study on cold-related mortality in central Europe had been carried out despite the fact that cold air invasions are more frequent and severe in this region than in western and southern Europe. Methods Cold spells were defined as periods of days on which the air temperature does not exceed -3.5°C. Days on which mortality was affected by epidemics of influenza/acute respiratory infections were identified and omitted from the analysis. Excess cardiovascular mortality was determined after the long-term changes and the seasonal cycle in mortality had been removed. Excess mortality during and after cold spells was examined in individual age groups and genders. Results Cold spells were associated with positive mean excess cardiovascular mortality in all age groups (25–59, 60–69, 70–79 and 80+ years) and in both men and women. The relative mortality effects were most pronounced and most direct in middle-aged men (25–59 years), which contrasts with the majority of studies on cold-related mortality in other regions. The estimated excess mortality during the severe cold spells in January 1987 (+274 cardiovascular deaths) is comparable to that attributed to the most severe heat wave in this region, in 1994. Conclusion The results show that cold stress has a considerable impact on mortality in central Europe, representing a public health threat of an importance similar to heat waves. The elevated mortality risks in men aged 25–59 years may be related to occupational exposure of the large numbers of men working outdoors in winter. Early warnings and preventive measures based on weather forecasts and targeted at susceptible parts of the population may help mitigate the effects of cold spells and save lives.
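
    The excess-mortality computation described in the Methods can be sketched as observed-minus-expected daily deaths summed over a spell, once a baseline (long-term trend plus seasonal cycle) is available; the daily counts below are invented for illustration.

```python
# Excess mortality over a cold spell: sum of daily observed deaths minus
# the baseline-expected deaths (trend and seasonality already removed).
def excess_mortality(observed, expected):
    """Total excess deaths over a period, given daily observed/expected counts."""
    return sum(o - e for o, e in zip(observed, expected))

observed_daily = [310, 325, 340, 330]  # hypothetical cold-spell days
expected_daily = [280, 282, 281, 283]  # hypothetical baseline
print(excess_mortality(observed_daily, expected_daily))  # 179
```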

  1. A 45-year-old man with excessive daytime somnolence, and witnessed apnea at altitude

    Welsh CH

    2011-04-01

    Full Text Available A sleepy man without sleep apnea at 1609 m (5280 feet) had disturbed sleep at his home altitude of 3200 m (10500 feet). In addition to common disruptors of sleep such as psychophysiologic insomnia, restless legs syndrome, alcohol, and excessive caffeine use, central sleep apnea with periodic breathing can be a significant cause of disturbed sleep at altitude. In symptomatic patients living at altitude, a sleep study at their home altitude should be considered to accurately diagnose the presence and magnitude of sleep-disordered breathing, as sleep studies performed at lower altitudes may miss this diagnosis. Treatment options differ from those for obstructive apnea. Supplemental oxygen is considered by many to be first-line therapy.

  2. Excess molar volumes for CO2-CH4-N2 mixtures

    Seitz, J.C. [Oak Ridge National Lab., TN (United States)]|[Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States). Dept. of Geological Sciences; Blencoe, J.G.; Joyce, D.B. [Oak Ridge National Lab., TN (United States); Bodnar, R.J. [Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States). Dept. of Geological Sciences

    1992-04-01

    Vibrating-tube densimetry experiments are being performed to determine the excess molar volumes of single-phase CO2-CH4-N2 gas mixtures at pressures as high as 3500 bars and temperatures up to 500 °C. In our initial experiments, we determined the P-V-T properties of: (1) CO2-CH4, CO2-N2, CH4-N2, and CO2-CH4-N2 mixtures at 1000 bars, 50 °C; and (2) CO2-CH4 mixtures from 100 to 1000 bars at 100 °C. Excess molar volumes in the binary subsystems are very accurately represented by two-parameter Margules equations. Experimentally determined excess molar volumes are in fair to poor agreement with predictions from published equations of state. Geometric projection techniques based on binary system data yield calculated excess molar volumes for CO2-CH4-N2 mixtures that are in good agreement with our experimental data. 7 refs., 8 figs.
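
    The two-parameter Margules representation of a binary excess molar volume mentioned above has the closed form V_E = x1·x2·(A21·x1 + A12·x2); a minimal sketch with hypothetical parameter values (not fitted values from the study):

```python
# Two-parameter Margules form for the excess molar volume of a binary
# mixture; it vanishes at both pure-component endpoints by construction.
def margules_excess_volume(x1, a12, a21):
    """Excess molar volume (cm^3/mol) at mole fraction x1 of component 1."""
    x2 = 1.0 - x1
    return x1 * x2 * (a21 * x1 + a12 * x2)

A12, A21 = 1.8, 2.4  # cm^3/mol, hypothetical parameters
print(margules_excess_volume(0.5, A12, A21))  # equimolar value, 0.525
```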

  3. Determination of accurate electron chiral asymmetries in fenchone and camphor in the VUV range: sensitivity to isomerism and enantiomeric purity.

    Nahon, Laurent; Nag, Lipsa; Garcia, Gustavo A; Myrgorodska, Iuliia; Meierhenrich, Uwe; Beaulieu, Samuel; Wanie, Vincent; Blanchet, Valérie; Géneaux, Romain; Powis, Ivan

    2016-05-14

    Photoelectron circular dichroism (PECD) manifests itself as an intense forward/backward asymmetry in the angular distribution of photoelectrons produced from randomly-oriented enantiomers by photoionization with circularly-polarized light (CPL). As a sensitive probe of both photoionization dynamics and of the chiral molecular potential, PECD attracts much interest, especially with the recent performance of related experiments with visible and VUV laser sources. Here we report, by use of quasi-perfect CPL VUV synchrotron radiation and a double imaging photoelectron/photoion coincidence (i(2)PEPICO) spectrometer, new and very accurate values of the corresponding asymmetries for showcase chiral isomers: camphor and fenchone. These data have additionally been normalized to the absolute enantiopurity of the sample as measured by a chromatographic technique. They can therefore be used as benchmarking data for new PECD experiments, as well as for theoretical models. In particular we found, especially for the outermost orbital of both molecules, good agreement with CMS-Xα PECD modeling over the whole VUV range. We also report a spectacular sensitivity of PECD to isomerism for slow electrons, showing large and opposite asymmetries when comparing R-camphor to R-fenchone (respectively -10% and +16% around 10 eV). In the course of this study, we could also assess the analytical potential of PECD. Indeed, the accuracy of the data we provide is such that a limited departure from perfect enantiopurity in the sample we purchased could be detected and estimated in excellent agreement with the analysis performed in parallel via a chromatographic technique, establishing a new standard of accuracy, in the ±1% range, for enantiomeric excess measurement via PECD. The i(2)PEPICO technique allows correlating PECD measurements to specific parent ion masses, which would allow its application to the analysis of complex mixtures. PMID:27095534
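
    The enantiomeric-excess normalization mentioned above can be sketched as follows; it assumes a measured PECD asymmetry scales linearly with ee, and the numbers are illustrative, not values from the paper.

```python
# Enantiomeric excess ee = (R - S) / (R + S), and rescaling of a measured
# PECD asymmetry to 100% enantiopurity under a linear-scaling assumption.
def enantiomeric_excess(r_amount, s_amount):
    return (r_amount - s_amount) / (r_amount + s_amount)

def enantiopure_asymmetry(g_measured, ee):
    """Scale a measured PECD asymmetry to a fully enantiopure sample."""
    return g_measured / ee

ee = enantiomeric_excess(99.0, 1.0)      # 0.98 for a 99:1 sample
print(enantiopure_asymmetry(0.157, ee))  # illustrative asymmetry, ~0.16
```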

  4. 78 FR 73817 - Information Collection; Federal Excess Personal Property (FEPP) and Firefighter Property (FFP...

    2013-12-09

    ... Forest Service Information Collection; Federal Excess Personal Property (FEPP) and Firefighter Property... currently approved information collection, Federal Excess Personal Property (FEPP) and Firefighter Property... Friday. SUPPLEMENTARY INFORMATION: Title: Federal Excess Personal Property (FEPP) and...

  5. Temperature of maximum density and excess thermodynamics of aqueous mixtures of methanol.

    González-Salgado, D; Zemánková, K; Noya, E G; Lomba, E

    2016-05-14

    In this work, we present a study of representative excess thermodynamic properties of aqueous mixtures of methanol over the complete concentration range, based on extensive computer simulation calculations. In addition to testing various existing united-atom model potentials, we have developed a new force field which accurately reproduces the excess thermodynamics of this system. Moreover, we have paid particular attention to the behavior of the temperature of maximum density (TMD) in dilute methanol mixtures. The presence of a temperature of maximum density is one of the essential anomalies exhibited by water. This anomalous behavior is modified in a non-monotonous fashion by the presence of fully miscible solutes that partly disrupt the hydrogen-bond network of water, such as methanol (and other short-chain alcohols). In order to obtain a better insight into the phenomenology of the changes in the TMD of water induced by small amounts of methanol, we have performed a new series of experimental measurements and computer simulations using various force fields. We observe that none of the force fields tested capture the non-monotonous concentration dependence of the TMD for highly diluted methanol solutions. PMID:27179493

  7. A Multi-Scale Approach to Directional Field Estimation

    Bazen, Asker M.; Bouman, Niek J.; Veldhuis, Raymond N. J.

    2004-01-01

    This paper proposes a robust method for directional field estimation from fingerprint images that combines estimates at multiple scales. The method is able to provide accurate estimates in scratchy regions, while at the same time maintaining correct estimates around singular points. Compared to other methods, the penalty for detecting false singular points is much smaller, because this does not deteriorate the directional field estimate.
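
    A minimal sketch of orientation estimation from averaged squared gradients, the standard building block underlying directional-field estimators of this kind (the paper's multi-scale combination is not reproduced here): doubling the gradient angle makes opposite gradients reinforce rather than cancel, and the dominant ridge orientation is perpendicular to the averaged gradient direction.

```python
import math

def dominant_orientation(gradients):
    """Ridge orientation (radians) from a list of (gx, gy) gradient pairs."""
    gxx = sum(gx * gx for gx, gy in gradients)
    gyy = sum(gy * gy for gx, gy in gradients)
    gxy = sum(gx * gy for gx, gy in gradients)
    # Averaged squared-gradient direction, then rotate 90 degrees to the ridge.
    return 0.5 * math.atan2(2.0 * gxy, gxx - gyy) + math.pi / 2.0

# Gradients all perpendicular to a 45-degree ridge flow (some noisy scale).
grads = [(1.0, -1.0), (-1.0, 1.0), (2.0, -2.0)]
print(dominant_orientation(grads))  # pi/4, i.e. 45 degrees
```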

  8. Advances in Derivative-Free State Estimation for Nonlinear Systems

    Nørgaard, Magnus; Poulsen, Niels Kjølstad; Ravn, Ole

    In this paper we show that it involves considerable advantages to use polynomial approximations obtained with an interpolation formula for the derivation of state estimators for nonlinear systems. The estimators become more accurate than estimators based on Taylor approximations, and yet the implementation is simpler. Besides providing a tool for the derivation of state estimators, the paper also unifies recent developments in derivative-free state estimation.

  9. Estimating Influenza Hospitalizations among Children

    Grijalva, Carlos G.; Craig, Allen S.; DUPONT, William D.; Bridges, Carolyn B.; Schrag, Stephanie J.; Iwane, Marika K.; Schaffner, William; Edwards, Kathryn M.; Griffin, Marie R.

    2006-01-01

    Although influenza causes more hospitalizations and deaths among American children than any other vaccine-preventable disease, deriving accurate population-based estimates of disease impact is challenging. Using 2 independent surveillance systems, we performed a capture-recapture analysis to estimate influenza-associated hospitalizations in children in Davidson County, Tennessee, during the 2003–2004 influenza season. The New Vaccine Surveillance Network (NVSN) enrolled children hospitalized ...
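
    A two-source capture-recapture estimate of the kind described above can be sketched with the Lincoln–Petersen estimator (with Chapman's small-sample correction); the counts below are invented for illustration, not the Davidson County data.

```python
# Chapman-corrected Lincoln-Petersen estimator: given case counts from two
# independent surveillance systems and the overlap identified in both,
# estimate the total number of hospitalizations, including those missed.
def capture_recapture_estimate(n_source1, n_source2, n_both):
    """Estimated total cases from two overlapping surveillance sources."""
    return (n_source1 + 1) * (n_source2 + 1) / (n_both + 1) - 1

# Hypothetical: 60 cases in system A, 45 in system B, 30 captured by both.
print(capture_recapture_estimate(60, 45, 30))
```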

  10. LHC diboson excesses as an evidence for a heavy WW resonance

    Arbuzov, B A

    2015-01-01

    Recently reported diboson excesses at the LHC are interpreted as being connected with a heavy $WW$ resonance with weak isotopic spin 2. The resonance appears due to the would-be anomalous triple interaction of the weak bosons, which is defined by the well-known coupling constant $\lambda$. We obtain estimates for the effect, which qualitatively agree with ATLAS data. Effects are predicted in inclusive production of $W^+W^+, W^+ (Z,\gamma), W^+ W^-, (Z,\gamma) (Z,\gamma), W^- (Z,\gamma), W^-W^-$ resonances with $M_R \simeq 2\,TeV$, which could be reliably checked at the upgraded LHC with $\sqrt{s}\,=\,13\,TeV$. In the framework of an approach to the spontaneous generation of the triple anomalous interaction, its coupling constant is estimated to be $\lambda = -\,0.017\pm 0.005$, in agreement with existing restrictions.

  11. Solubility of chiral species as function of the enantiomeric excess.

    Coquerel, Gerard

    2015-06-01

    The solubility of racemizable and nonracemizable chiral species is discussed in terms of enantiomeric excess, the nature of the solvent, and the solid phases in equilibrium with a saturated solution. Stable and metastable equilibria are contemplated through extensive use of phase diagrams. PMID:25918978

  12. An extended rational thermodynamics model for surface excess fluxes

    Sagis, L.M.C.

    2012-01-01

    In this paper, we derive constitutive equations for the surface excess fluxes in multiphase systems, in the context of an extended rational thermodynamics formalism. This formalism allows us to derive Maxwell–Cattaneo type constitutive laws for the surface extra stress tensor, the surface thermal en
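The Maxwell–Cattaneo structure named in this abstract replaces an instantaneous (Fourier-type) flux law with a relaxation equation, so disturbances propagate at finite speed. As a sketch, the generic single-relaxation-time form for a surface heat flux reads (this is the textbook version, not necessarily the paper's exact surface equations; $\tau$, $\lambda_s$, and $\nabla_s$ denote the relaxation time, surface thermal conductivity, and surface gradient operator):

```latex
% Generic Maxwell-Cattaneo law for the surface heat flux q_s:
\tau \, \frac{\partial \mathbf{q}_s}{\partial t} + \mathbf{q}_s
  = -\lambda_s \, \nabla_s T
% For tau -> 0 this reduces to the usual Fourier law q_s = -lambda_s grad_s T.
```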

  13. Gamma-ray excess and the minimal dark matter model

    Duerr, Michael [DESY Hamburg (Germany); Fileviez Perez, Pavel; Smirnov, Juri [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany). Particle and Astroparticle Physics Div.

    2015-10-15

    We point out that the gamma-ray excesses in the galactic center and in the dwarf galaxy Reticulum II can both be well explained within the simplest dark matter model. We find that the corresponding region of parameter space will be tested by direct and indirect dark matter searches in the near future.

  14. Understanding Excessive School Absenteeism as School Refusal Behavior

    Dube, Shanta R.; Orpinas, Pamela

    2009-01-01

    Understanding excessive absenteeism is important to ameliorating the negative outcomes associated with the behavior. The present study examined behavioral reinforcement profiles of school refusal behavior: negative reinforcement (avoidance) and positive reinforcement (gaining parental attention or receiving tangible benefits from not attending…

  15. Effect of Excess Gravitational Force on Cultured Myotubes in Vitro

    Shigehiro Hashimoto

    2013-06-01

    The effect of an excess gravitational force on cultured myoblasts has been studied in an experimental system with centrifugal force in vitro. Mouse myoblasts (C2C12) were seeded on a culture dish of 35 mm diameter and cultured in Dulbecco's Modified Eagle's Medium until the sub-confluent condition. To apply the excess gravitational force to the cultured cells, the dish was set in a conventional centrifugal machine. A constant gravitational force was applied to the cultured cells for three hours. The gravitational force was varied (6 G, 10 G, 100 G, 500 G, and 800 G) by controlling the rotational speed of the rotor in the centrifugal machine. Cell morphology was observed with a phase-contrast microscope for eight days. The experimental results show that the myotubes thicken day by day after exposure to the excess gravitational force field, and that a higher excess gravitational force produces thicker myotubes. The microscopic study shows that myotubes thicken by fusing with each other.

  16. Excess post hypoxic oxygen consumption in Atlantic cod (Gadus morhua)

    Plambech, M.; Deurs, Mikael van; Steffensen, J.F.; Tirsgaard, B.; Behrens, Jane

    2013-01-01

    Atlantic cod Gadus morhua experienced oxygen deficit (DO2) when exposed to oxygen levels below their critical level (c. 73% of pcrit) and subsequent excess post-hypoxic oxygen consumption (CEPHO) upon return to normoxic conditions, indicative of an oxygen debt. The mean ± s.e. CEPHO:DO2 was 6...

  17. Missing Out: Excessive Absenteeism Adversely Affects Elementary Reading Scores

    Hockert, Christine; Harrington, Sonja; Vaughn, Debra; Kelly, Kirk; Gooden, John

    2005-01-01

    This study was designed to answer the question "Does excessive absenteeism affect student academic achievement?" During the 2002-2003 academic year, 188 students attending grades 3 through 5 at an urban Tennessee elementary school with a high poverty level participated in the study. Demographic data were gathered to provide descriptive statistics…

  18. Excessive Alcohol Use and Risks to Women's Health

    ... drink . 5 Excessive drinking may disrupt the menstrual cycle and increase the risk of infertility. 6,7 ... PDF-264KB]. Bethesda, MD: National Institutes on Alcohol Abuse and Alcoholism; June ... Alcohol-related sexual assault: A common problem among college students . J ...

  19. Gamma-ray excess and the minimal dark matter model

    We point out that the gamma-ray excesses in the galactic center and in the dwarf galaxy Reticulum II can both be well explained within the simplest dark matter model. We find that the corresponding region of parameter space will be tested by direct and indirect dark matter searches in the near future.

  20. Supplemental/Replacement: An Alternative Approach to Excess Costs.

    Hartman, William T.

    1990-01-01

    This article proposes a new operational definition of excess cost in determining state and federal funding for special education. The new approach is based on programs and services rather than accounting calculations of the difference between special education cost per student and regular education cost per student. (Author/DB)