WorldWideScience

Sample records for estimating blowout rates

  1. Guidelines for estimating blowout rates and duration for environmental risk analysis purposes; Retningslinjer for beregning av utblaasningsrater og -varighet til bruk ved analyse av miljoerisiko

    Energy Technology Data Exchange (ETDEWEB)

    Nilsen, Thomas

    2007-01-15

    Risk assessment in relation to possible blowouts of oil and condensate is the main topic of this analysis. The estimated risk is evaluated against criteria for acceptable risk and is part of the foundation for dimensioning oil spill preparedness. The report aims to contribute to the standardisation of the terminology, methodology and documentation for estimation of blowout rates, and thus to simplify communication of the analysis results and strengthen the decision makers' trust in them.

  2. Well blowout rates in California Oil and Gas District 4--Update and Trends

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Preston D.; Benson, Sally M.

    2009-10-01

    Well blowouts are one type of event in hydrocarbon exploration and production that generates health, safety, environmental and financial risk. Well blowouts are variously defined as 'uncontrolled flow of well fluids and/or formation fluids from the wellbore' or 'uncontrolled flow of reservoir fluids into the wellbore'. Theoretically this is irrespective of flux rate and so would include low fluxes, often termed 'leakage'. In practice, such low-flux events are not considered well blowouts. Rather, the term well blowout applies to higher fluxes that rise to attention more acutely, typically in the order of seconds to days after the event commences. It is not unusual for insurance claims for well blowouts to exceed US$10 million. This does not imply that all blowouts are this costly, as it is likely claims are filed only for the most catastrophic events. Still, insuring against the risk of loss of well control is the costliest in the industry. The risk of well blowouts was recently quantified from an assembled database of 102 events occurring in California Oil and Gas District 4 during the period 1991 to 2005, inclusive. This article reviews those findings, updates them to a certain extent and compares them with other well blowout risk study results. It also provides an improved perspective on some of the findings. In short, this update finds that blowout rates have remained constant from 2005 to 2008 within the limits of resolution and that the decline in blowout rates from 1991 to 2005 was likely due to improved industry practice.

  3. Bottom hole blowout preventer

    Energy Technology Data Exchange (ETDEWEB)

    Lineham, D.H.

    1991-04-24

    An automatically controlled ball-valve type bottom-hole blowout preventer is provided for use in drilling oil or gas wells. The blowout preventer of the invention operates under normal drilling conditions in a fully open position with an unrestricted bore. This condition is maintained by a combination of spring and mud flow pressure acting against the upper surfaces of the valve. In the event of a well kick or blowout, pressures from gas or fluid volumes acting against the lower surfaces of the valve force it into the fully closed position. A system of ports and check valves within the blowout preventer forces hydraulic fluid from one chamber to another. The metering effect of these ports determines the rate of closure of the valve, thereby allowing normal running and pulling of the drill string or tubing, without interference to pipe fill-up or drainage, from valve closure. The blowout preventer is placed in a subassembly that is an integral part of the drill string and can be incorporated in a string in any location. 3 figs.

  4. Well blowout rates and consequences in California Oil and Gas District 4 from 1991 to 2005: Implications for geological storage of carbon dioxide

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Preston D.; Benson, Sally M.

    2008-05-15

    Well blowout rates in oil fields undergoing thermally enhanced recovery (via steam injection) in California Oil and Gas District 4 from 1991 to 2005 were on the order of 1 per 1,000 well construction operations, 1 per 10,000 active wells per year, and 1 per 100,000 shut-in/idle and plugged/abandoned wells per year. This allows some initial inferences about leakage of CO2 via wells, which is considered perhaps the greatest leakage risk for geological storage of CO2. During the study period, 9% of the oil produced in the United States was from District 4, and 59% of this production was via thermally enhanced recovery. There was only one possible blowout from an unknown or poorly located well, despite over a century of well drilling and production activities in the district. The blowout rate declined dramatically during the study period, most likely as a result of increasing experience, improved technology, and/or changes in safety culture. If so, this decline indicates the blowout rate in CO2-storage fields can be significantly minimized both initially and with increasing experience over time. Comparable studies should be conducted in other areas. These studies would be particularly valuable in regions with CO2-enhanced oil recovery (EOR) and natural gas storage.
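
    The per-operation and per-well-year figures above reduce to simple event-count-over-exposure arithmetic once a blowout database and well inventory have been tabulated. A minimal sketch of that calculation, using invented counts and exposures rather than the District 4 data, with an approximate Poisson interval on each rate:

```python
import math

def rate_with_poisson_ci(events, exposure, z=1.96):
    """Blowout rate per unit exposure with an approximate 95% Poisson interval.

    events   -- number of blowouts observed (count)
    exposure -- e.g. well construction operations, or active well-years
    """
    rate = events / exposure
    # Normal approximation to the Poisson count; adequate when events >> 1.
    half_width = z * math.sqrt(events) / exposure
    return rate, max(rate - half_width, 0.0), rate + half_width

# Hypothetical counts and exposures, NOT the District 4 values.
for label, events, exposure in [
    ("per well construction operation", 40, 40_000),
    ("per active well-year",            30, 300_000),
    ("per shut-in/idle well-year",       5, 500_000),
]:
    r, lo, hi = rate_with_poisson_ci(events, exposure)
    print(f"{label}: {r:.2e} (95% CI {lo:.2e}-{hi:.2e})")
```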

  5. Offshore Blowouts, Causes and Trends

    Energy Technology Data Exchange (ETDEWEB)

    Holand, P.

    1996-02-01

    The main objective of this doctoral thesis was to establish an improved design basis for offshore installations with respect to blowout risk analyses. The following sub-objectives are defined: (1) establish an offshore blowout database suitable for risk analyses; (2) compare the blowout risk related to loss of lives with the total offshore risk and with risk in other industries; (3) analyse blowouts with respect to parameters that are important for describing and quantifying the blowout risk that has been experienced, in order to answer questions such as during which operations blowouts have occurred, their direct causes, and their frequency of occurrence; (4) analyse blowouts with respect to trends. The research strategy applied includes elements from both the survey strategy and the case study strategy. The data are systematized in the form of a new database developed from the MARINTEK database. Most blowouts in the analysed period occurred during drilling operations. Shallow gas blowouts were more frequent than deep blowouts, and workover blowouts occurred more often than deep development drilling blowouts. Relatively few blowouts occurred during completion, wireline and normal production activities. No significant trend in blowout occurrences as a function of time could be observed, except for completion blowouts, which showed a significantly decreasing trend. There were, however, trends in some parameters that are important for risk analyses: for example, the ignition probability has decreased and diverter systems have improved. Only 3.5% of the fatalities occurred because of blowouts. 106 refs., 51 figs., 55 tabs.

  6. Three years of morphologic changes at a bowl blowout, Cape Cod, USA

    Science.gov (United States)

    Smith, Alex; Gares, Paul A.; Wasklewicz, Thad; Hesp, Patrick A.; Walker, Ian J.

    2017-10-01

    This study presents measurements of blowout topography obtained with annual terrestrial laser surveys carried out over a three-year period at a single, large bowl blowout located in the Provincelands Dunes section of Cape Cod National Seashore, in Massachusetts. The study blowout was selected because its axis is aligned with northwest winds that dominate the region, and because it was seemingly interacting with a smaller saucer blowout that had recently formed on the southern rim of the primary feature. Assuming that blowouts enlarge both horizontally and vertically in response to the wind regime, the objectives of the study were to determine both the amount of horizontal growth that the blowout experiences annually and the spatial patterns of vertical change that occur within the blowout. Changes to the blowout lobe surrounding the feature were also determined for areas with sparse enough vegetation cover to allow laser returns from the sand surface. The results show that the blowout consistently expanded outward during the three years, with the greatest expansion occurring at its southeast corner, opposite the prevailing winds. The most significant occurrence was the removal, in the first year, of the ridge that separated the two blowouts, resulting in a major horizontal shift of the southern rim of the new combined blowout. This displacement then continued at a lesser rate in subsequent years. The rim also shifted horizontally along the northwest to northeast sections of the blowout. Significant vertical loss occurred along the main axis of the blowout with the greatest loss concentrated along the southeast rim. On the lobe, there were large areas of deposition immediately downwind of the high erosion zones inside the blowout. However, there were also small erosion areas on the lobe, extending downwind from eroding sections of the rim. This study shows that: 1. blowouts can experience significant areal and volumetric changes in short periods of time; 2

  7. Blowouts are key to the survival of blowout penstemon

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This newspaper article summarizes ongoing conservation and management of blowout penstemon on federal land. The occurrence of this unique plant has never been...

  8. Morphological response of a large-scale coastal blowout to a strong magnitude transport event

    Science.gov (United States)

    Delgado-Fernandez, Irene; Jackson, Derek; Smith, Alexander; Smyth, Thomas

    2017-04-01

    ...strong magnitude transport event. This allowed, for the first time, examination of the morphological response as a direct result of a high-energy wind event passing through a large-scale blowout. Results indicate strong steering and acceleration of the wind along the blowout basin and up the south wall, opposite to the incident regional winds. These accelerated flows generated very strong transport rates of up to 3 g/s along the basin, and moderately strong transport rates of up to 1.5 g/s up the steep north wall. The coupling of high-frequency wind events and transport response, together with topographic changes defined by TLS data, allows, for the first time, the morphological evolution of a coastal blowout landform to be connected with the localised driving processes.

  9. Review of flow rate estimates of the Deepwater Horizon oil spill

    Science.gov (United States)

    McNutt, Marcia K.; Camilli, Rich; Crone, Timothy J.; Guthrie, George D.; Hsieh, Paul A.; Ryerson, Thomas B.; Savas, Omer; Shaffer, Frank

    2012-01-01

    The unprecedented nature of the Deepwater Horizon oil spill required the application of research methods to estimate the rate at which oil was escaping from the well in the deep sea, its disposition after it entered the ocean, and total reservoir depletion. Here, we review what advances were made in scientific understanding of quantification of flow rates during deep sea oil well blowouts. We assess the degree to which a consensus was reached on the flow rate of the well by comparing in situ observations of the leaking well with a time-dependent flow rate model derived from pressure readings taken after the Macondo well was shut in for the well integrity test. Model simulations also proved valuable for predicting the effect of partial deployment of the blowout preventer rams on flow rate. Taken together, the scientific analyses support flow rates in the range of ~50,000–70,000 barrels/d, perhaps modestly decreasing over the duration of the oil spill, for a total release of ~5.0 million barrels of oil, not accounting for BP's collection effort. By quantifying the amount of oil at different locations (wellhead, ocean surface, and atmosphere), we conclude that just over 2 million barrels of oil (after accounting for containment) and all of the released methane remained in the deep sea. By better understanding the fate of the hydrocarbons, the total discharge can be partitioned into separate components that pose threats to deep sea vs. coastal ecosystems, allowing responders in future events to scale their actions accordingly.
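
    As a rough consistency check on the figures quoted above, the total release divided by the flow duration should land inside the quoted rate range. The roughly 87-day duration (20 April to 15 July 2010) is an assumption added here, not a number from the abstract:

```python
# Back-of-the-envelope check: does ~5.0 million barrels over the spill
# duration fall inside the ~50,000-70,000 bbl/d range quoted above?
total_release_bbl = 5.0e6      # total release, not counting collection
flow_duration_days = 87        # assumed: 20 April - 15 July 2010
avg_rate = total_release_bbl / flow_duration_days
print(f"implied average flow rate ~= {avg_rate:,.0f} bbl/d")  # ~57,000 bbl/d
```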

  10. Flame Propagation and Blowout in Hydrocarbon Jets: Experiments to Understand the Stability and Structure

    Science.gov (United States)

    2012-07-29

    Snippet (fragmentary): Wilson and Kevin M. Lyons, "On Diluted-Fuel Combustion Issues in Burning Biogas Surrogates", ASME-JERT (Dec. 2009) ... blowout velocities. For each flow configuration, the coflow velocity is divided by the liftoff and blowout velocities to produce a dimensionless ratio ... the flow rate of the methane and the nitrogen were regulated with separate rotameters such that the amounts of methane and nitrogen issuing from the fuel pipe ...

  11. Numerical simulations of the Macondo well blowout reveal strong control of oil flow by reservoir permeability and exsolution of gas.

    Science.gov (United States)

    Oldenburg, Curtis M; Freifeld, Barry M; Pruess, Karsten; Pan, Lehua; Finsterle, Stefan; Moridis, George J

    2012-12-11

    In response to the urgent need for estimates of the oil and gas flow rate from the Macondo well MC252-1 blowout, we assembled a small team and carried out oil and gas flow simulations using the TOUGH2 codes over two weeks in mid-2010. The conceptual model included the oil reservoir and the well with a top boundary condition located at the bottom of the blowout preventer. We developed a fluid properties module (Eoil) applicable to a simple two-phase and two-component oil-gas system. The flow of oil and gas was simulated using T2Well, a coupled reservoir-wellbore flow model, along with iTOUGH2 for sensitivity analysis and uncertainty quantification. The most likely oil flow rate estimated from simulations based on the data available in early June 2010 was about 100,000 bbl/d (barrels per day) with a corresponding gas flow rate of 300 MMscf/d (million standard cubic feet per day) assuming the well was open to the reservoir over 30 m of thickness. A Monte Carlo analysis of reservoir and fluid properties provided an uncertainty distribution with a long tail extending down to 60,000 bbl/d of oil (170 MMscf/d of gas). The flow rate was most strongly sensitive to reservoir permeability. Conceptual model uncertainty was also significant, particularly with regard to the length of the well that was open to the reservoir. For fluid-entry interval length of 1.5 m, the oil flow rate was about 56,000 bbl/d. Sensitivity analyses showed that flow rate was not very sensitive to pressure-drop across the blowout preventer due to the interplay between gas exsolution and oil flow rate.
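
    The abstract's central finding, that the simulated flow rate is most sensitive to reservoir permeability, can be illustrated with a much simpler model than the TOUGH2/T2Well system the authors actually used. The sketch below pushes a lognormal permeability distribution through a single-phase radial Darcy inflow formula; every parameter value is an assumption chosen for illustration, and the formula ignores wellbore hydraulics, gas exsolution and the two-phase effects captured in the real study:

```python
import numpy as np

rng = np.random.default_rng(0)

def darcy_radial_rate(k_m2, h=30.0, dp=1.0e7, mu=1.0e-3, re=500.0, rw=0.1):
    """Steady single-phase radial inflow (m^3/s): Q = 2*pi*k*h*dp / (mu*ln(re/rw))."""
    return 2.0 * np.pi * k_m2 * h * dp / (mu * np.log(re / rw))

# Lognormal permeability centred on ~500 mD (1 mD ~= 9.87e-16 m^2); assumed values.
k_samples = rng.lognormal(mean=np.log(500e-3 * 9.87e-13), sigma=0.5, size=100_000)

q_bbl_per_day = darcy_radial_rate(k_samples) * 6.2898 * 86_400  # m^3/s -> bbl/d
print("P10 / P50 / P90 oil rate (bbl/d):",
      np.percentile(q_bbl_per_day, [10, 50, 90]).round(-2))
```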

  12. Impact on water surface due to deepwater gas blowouts.

    Science.gov (United States)

    Premathilake, Lakshitha T; Yapa, Poojitha D; Nissanka, Indrajith D; Kumarage, Pubudu

    2016-11-15

    This paper presents a study on the impact of underwater gas blowouts near the ocean surface, which is highly relevant for assessing Health, Safety, and Environmental risks. In this analysis the gas flux near the surface, the reduction of bulk density, and the gas surfacing area are studied for different scenarios. The simulations include a matrix of scenarios for different release depths, release rates, and initial bubble size distributions. The simulations are carried out using the MEGADEEP model for a location in the East China Sea. Significant changes in bulk density and gas surface flux near the surface are observed under different release conditions, which can pose a potential threat to cleanup and rescue operations. Furthermore, the effect of hydrate formation on gas surfacing is studied for much greater release depths. These outcomes are important for prior risk assessments and contingency planning for underwater gas blowouts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Convex weighting criteria for speaking rate estimation

    Science.gov (United States)

    Jiao, Yishan; Berisha, Visar; Tu, Ming; Liss, Julie

    2015-01-01

    Speaking rate estimation directly from the speech waveform is a long-standing problem in speech signal processing. In this paper, we pose the speaking rate estimation problem as that of estimating a temporal density function whose integral over a given interval yields the speaking rate within that interval. In contrast to many existing methods, we avoid the more difficult task of detecting individual phonemes within the speech signal and we avoid heuristics such as thresholding the temporal envelope to estimate the number of vowels. Rather, the proposed method aims to learn an optimal weighting function that can be directly applied to time-frequency features in a speech signal to yield a temporal density function. We propose two convex cost functions for learning the weighting functions and an adaptation strategy to customize the approach to a particular speaker using minimal training. The algorithms are evaluated on the TIMIT corpus, on a dysarthric speech corpus, and on the ICSI Switchboard spontaneous speech corpus. Results show that the proposed methods outperform three competing methods on both healthy and dysarthric speech. In addition, for spontaneous speech rate estimation, the results show a high correlation between the estimated speaking rate and ground truth values. PMID:26167516
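
    The core idea, a weighting applied to time-frequency features whose weighted sum forms a rate density, can be sketched as follows. The weight vector here is an untrained stand-in (the paper learns it by minimising a convex cost), the framing settings are arbitrary, and treating the integrated density as a syllable count divided by duration is this sketch's interpretation rather than a detail taken from the abstract:

```python
import numpy as np

def speaking_rate_from_density(signal, fs, weights, frame_len=400, hop=160):
    """Toy version of the density-based idea: weight time-frequency features,
    clip to a non-negative density, integrate, and divide by the duration."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] for i in range(n_frames)])
    feats = np.log1p(np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))).T
    density = np.maximum(weights @ feats, 0.0)     # one density value per frame
    syllable_count = density.sum() * (hop / fs)    # integrate density over time
    return syllable_count / (len(signal) / fs)     # "syllables" per second

rng = np.random.default_rng(1)
audio = rng.standard_normal(16_000)                # 1 s of noise as a placeholder
w = rng.standard_normal(400 // 2 + 1)              # untrained stand-in weights
print(round(speaking_rate_from_density(audio, 16_000, w), 2))
```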

  14. Adaptive Algorithm for Chirp-Rate Estimation

    Directory of Open Access Journals (Sweden)

    Igor Djurović

    2009-01-01

    Chirp-rate, as a second derivative of signal phase, is an important feature of nonstationary signals in numerous applications such as radar, sonar, and communications. In this paper, an adaptive algorithm for the chirp-rate estimation is proposed. It is based on the confidence intervals rule and the cubic-phase function. The window width is adaptively selected to achieve good tradeoff between bias and variance of the chirp-rate estimate. The proposed algorithm is verified by simulations and the results show that it outperforms the standard algorithm with fixed window width.
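
    A bare-bones version of the underlying estimator, without the adaptive window-width selection that is the paper's actual contribution, is sketched below: the cubic phase function is evaluated at the centre of a synthetic linear-FM signal and the chirp rate is read off as the location of its peak. The window length, search grid and chirp parameters are arbitrary demo choices:

```python
import numpy as np

def cubic_phase_function(z, n, omegas, half_window):
    """|CPF(n, Omega)| = |sum_m z[n+m] z[n-m] exp(-j Omega m^2)| over a window."""
    m = np.arange(-half_window, half_window + 1)
    prod = z[n + m] * z[n - m]
    return np.abs(np.exp(-1j * np.outer(omegas, m**2)) @ prod)

# Synthetic linear-FM signal: phase = a*t + 0.5*b*t^2, so the chirp rate is b.
N, a, b = 1024, 0.2, 2.0e-4
t = np.arange(N, dtype=float)
z = np.exp(1j * (a * t + 0.5 * b * t**2))

omegas = np.linspace(0.0, 1.0e-3, 2001)
cpf = cubic_phase_function(z, n=N // 2, omegas=omegas, half_window=200)
print("estimated chirp rate:", omegas[np.argmax(cpf)], "true:", b)
```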

  15. Resting heart rate estimation using PIR sensors

    Science.gov (United States)

    Kapu, Hemanth; Saraswat, Kavisha; Ozturk, Yusuf; Cetin, A. Enis

    2017-09-01

    In this paper, we describe a non-invasive and non-contact system for estimating resting heart rate (RHR) using a pyroelectric infrared (PIR) sensor. This infrared system monitors and records the chest motion of a subject using the analog output signal of the PIR sensor. The analog output signal represents the composite motion due to the inhale-exhale process, with a magnitude much larger than the minute vibrations of the heartbeat. Since the acceleration of the heart activity is much faster than breathing, the second derivative of the PIR sensor signal monitoring the chest of the subject is used to estimate the resting heart rate. Experimental results indicate that this ambient sensor can measure resting heart rate with a chi-square significance level of α = 0.05 compared to an industry standard PPG sensor. This new system provides a low-cost and effective way to estimate the resting heart rate, which is an important biological marker.
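
    The processing chain described above (take the second derivative of the PIR output to suppress the slow respiration component, then count the remaining periodic peaks) can be roughed out as below. The sampling rate, threshold and peak-spacing limit are assumptions for the demo; a real implementation would follow the authors' method rather than this outline:

```python
import numpy as np

def resting_heart_rate_bpm(pir_signal, fs, min_rr_s=0.4):
    """Rough RHR estimate: 2nd derivative of the PIR trace, then peak counting.

    min_rr_s is the shortest allowed beat-to-beat interval (0.4 s = 150 bpm).
    """
    d2 = np.diff(pir_signal, n=2)                 # emphasises the fast cardiac motion
    d2 = d2 - d2.mean()
    thresh = 0.5 * d2.std()
    peaks, last = [], -np.inf
    for i in range(1, len(d2) - 1):               # local maxima above the threshold
        if d2[i] > thresh and d2[i] >= d2[i - 1] and d2[i] > d2[i + 1]:
            if (i - last) / fs >= min_rr_s:
                peaks.append(i)
                last = i
    return 60.0 * len(peaks) / (len(pir_signal) / fs)

# Synthetic stand-in: slow breathing plus a 1.2 Hz (72 bpm) cardiac ripple.
fs = 50.0
t = np.arange(0, 60, 1 / fs)
pir = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
print(f"estimated RHR ~= {resting_heart_rate_bpm(pir, fs):.0f} bpm")
```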

  16. Flame Color as a Lean Blowout Predictor

    Directory of Open Access Journals (Sweden)

    Rajendra R. Chaudhari

    2013-03-01

    The study characterizes the behavior of a premixed swirl-stabilized dump plane combustor flame near its lean blow-out (LBO) limit in terms of CH* chemiluminescence intensity and observable flame color variations for a wide range of equivalence ratios, flow rates and degrees of premixing (characterized by the premixing length, Lfuel). LPG and pure methane are used as fuel. We propose a novel LBO prediction strategy based solely on the flame color. It is observed that as the flame approaches LBO, its color changes from reddish to blue. This observation is found to be valid for different levels of fuel-air premixing, achieved by changing the available mixing length of the air and the fuel upstream of the dump plane, although the flame dynamics were significantly different. Based on this observation, the ratio of the intensities of the red and blue components of the flame, as captured by a color CCD camera, was used as a metric for detecting the proximity of the flame to LBO. Tests were carried out for a wide range of air flow rates and using LPG and CH4 as fuel. For all the operating conditions and both fuels tested, this ratio was found to monotonically decrease as LBO was approached. Moreover, the value of this ratio was within a small range close to LBO for all the cases investigated. This makes the ratio suitable as a metric for LBO detection at all levels of premixing.
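
    The proposed metric is essentially a per-frame ratio of mean red to mean blue channel intensity from the colour camera. A minimal sketch of that computation over a stream of frames, with a synthetic frame generator standing in for real combustor images and an arbitrary threshold, might look like this:

```python
import numpy as np

def red_blue_ratio(frame_rgb, eps=1e-6):
    """Mean red intensity divided by mean blue intensity for one RGB frame."""
    red = frame_rgb[..., 0].astype(float).mean()
    blue = frame_rgb[..., 2].astype(float).mean()
    return red / (blue + eps)

def lbo_proximity_flag(frames, threshold=1.0):
    """Flag frames whose R/B ratio has dropped below a (tunable) LBO threshold."""
    ratios = np.array([red_blue_ratio(f) for f in frames])
    return ratios, ratios < threshold

# Synthetic frames drifting from reddish toward bluish as 'LBO' is approached.
rng = np.random.default_rng(2)
frames = [np.clip(rng.normal([120 - 8 * i, 40, 60 + 8 * i], 10, size=(64, 64, 3)),
                  0, 255).astype(np.uint8) for i in range(10)]
ratios, near_lbo = lbo_proximity_flag(frames)
print(ratios.round(2), near_lbo)
```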

  17. Numerical simulation of water and sand blowouts when penetrating through shallow water flow formations in deep water drilling

    Science.gov (United States)

    Ren, Shaoran; Liu, Yanmin; Gong, Zhiwu; Yuan, Yujie; Yu, Lu; Wang, Yanyong; Xu, Yan; Deng, Junyu

    2018-02-01

    In this study, we applied a two-phase flow model to simulate water and sand blowout processes when penetrating shallow water flow (SWF) formations during deepwater drilling. We define `sand' as a pseudo-component with high density and viscosity, which can begin to flow with water when a critical pressure difference is attained. We calculated the water and sand blowout rates and analyzed the influencing factors from them, including overpressure of the SWF formation, as well as its zone size, porosity and permeability, and drilling speed (penetration rate). The obtained data can be used for the quantitative assessment of the potential severity of SWF hazards. The results indicate that overpressure of the SWF formation and its zone size have significant effects on SWF blowout. A 10% increase in the SWF formation overpressure can result in a more than 90% increase in the cumulative water blowout and a 150% increase in the sand blowout when a typical SWF sediment is drilled. Along with the conventional methods of well flow and pressure control, chemical plugging, and the application of multi-layer casing, water and sand blowouts can be effectively reduced by increasing the penetration rate. As such, increasing the penetration rate can be a useful measure for controlling SWF hazards during deepwater drilling.

  18. 16 CFR 1507.6 - Burnout and blowout.

    Science.gov (United States)

    2010-01-01

    16 CFR Part 1507 (Fireworks Devices), § 1507.6 Burnout and blowout: The pyrotechnic chamber in fireworks devices shall be constructed in a manner to allow functioning in a normal manner without burnout or blowout.

  19. Data Fusion for Improved Respiration Rate Estimation

    OpenAIRE

    Malhotra, Atul; Nemati, Shamim; Clifford, Gari D.

    2010-01-01

    We present an application of a modified Kalman filter (KF) framework for data fusion to the estimation of respiratory rate from multiple physiological sources, which is robust to background noise. A novel index of the underlying signal quality of respiratory signals is presented and then used to modify the noise covariance matrix of the KF, which discounts the effect of noisy data. The signal quality index, together with the KF innovation sequence, is also used to weight multiple independent estimates of the respiratory rate.
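
    A stripped-down version of the fusion idea, a scalar Kalman filter whose measurement noise is inflated as the respiratory signal quality index drops, so that poor-quality sources are discounted, is sketched below. The random-walk state model, noise values and the exact form of the quality weighting are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def fuse_respiration_rates(measurements, qualities, q_process=0.05, r_base=1.0):
    """Fuse per-source respiration-rate readings (breaths/min) with a scalar KF.

    measurements -- array (n_steps, n_sources) of rate estimates
    qualities    -- array (n_steps, n_sources), signal quality in (0, 1]
    Measurement noise for each source is scaled by 1/quality, so noisy
    sources contribute less to the fused estimate.
    """
    x, p = measurements[0].mean(), 4.0          # initial state and variance
    fused = []
    for z_row, q_row in zip(measurements, qualities):
        p += q_process                          # random-walk prediction step
        for z, q in zip(z_row, q_row):          # sequential scalar updates
            r = r_base / max(q, 1e-3)
            k = p / (p + r)                     # Kalman gain
            x += k * (z - x)
            p *= (1 - k)
        fused.append(x)
    return np.array(fused)

rng = np.random.default_rng(3)
true_rate = 15 + np.cumsum(rng.normal(0, 0.05, 100))        # slowly drifting truth
src1 = true_rate + rng.normal(0, 0.5, 100)                  # good sensor
src2 = true_rate + rng.normal(0, 3.0, 100)                  # noisy sensor
q = np.column_stack([np.full(100, 0.9), np.full(100, 0.2)]) # quality indices
est = fuse_respiration_rates(np.column_stack([src1, src2]), q)
print(est[-5:].round(2))
```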

  20. Towards universal hybrid star formation rate estimators

    Science.gov (United States)

    Boquien, M.; Kennicutt, R.; Calzetti, D.; Dale, D.; Galametz, M.; Sauvage, M.; Croxall, K.; Draine, B.; Kirkpatrick, A.; Kumari, N.; Hunt, L.; De Looze, I.; Pellegrini, E.; Relaño, M.; Smith, J.-D.; Tabatabaei, F.

    2016-06-01

    Context. To compute the star formation rate (SFR) of galaxies from the rest-frame ultraviolet (UV), it is essential to take the obscuration by dust into account. To do so, one of the most popular methods consists in combining the UV with the emission from the dust itself in the infrared (IR). Yet, different studies have derived different estimators, showing that no such hybrid estimator is truly universal. Aims: In this paper we aim at understanding and quantifying what physical processes fundamentally drive the variations between different hybrid estimators. In so doing, we aim at deriving new universal UV+IR hybrid estimators to correct the UV for dust attenuation at local and global scales, taking the intrinsic physical properties of galaxies into account. Methods: We use the CIGALE code to model the spatially resolved far-UV to far-IR spectral energy distributions of eight nearby star-forming galaxies drawn from the KINGFISH sample. This allows us to determine their local physical properties, and in particular their UV attenuation, average SFR, average specific SFR (sSFR), and their stellar mass. We then examine how hybrid estimators depend on said properties. Results: We find that hybrid UV+IR estimators strongly depend on the stellar mass surface density (in particular at 70 μm and 100 μm) and on the sSFR (in particular at 24 μm and the total infrared). Consequently, the IR scaling coefficients for UV obscuration can vary by almost an order of magnitude: from 1.55 to 13.45 at 24 μm for instance. This result contrasts with other groups who found relatively constant coefficients with small deviations. We exploit these variations to construct a new class of adaptative hybrid estimators based on observed UV to near-IR colours and near-IR luminosity densities per unit area. We find that they can reliably be extended to entire galaxies. Conclusions: The new estimators provide better estimates of attenuation-corrected UV emission than classical hybrid estimators

  1. Surgical timing of the orbital "blowout" fracture

    DEFF Research Database (Denmark)

    Damgaard, Olaf Ehlers; Larsen, Christian Grønhøj; Felding, Ulrik Ascanius

    2016-01-01

    Objective: The orbital blowout fracture is a common facial injury, carrying with it a risk of visual impairment and undesirable cosmetic results unless treated properly. Optimal timing of the surgical treatment is still a matter of debate. We set out to determine whether a meta-analysis would bring...

  2. Gastric blow-out: a complication after bariatric surgery [Gastric blow-out: komplikation efter fedmekirurgi]

    DEFF Research Database (Denmark)

    Torrens, Ayoe Sabrina; Born, Pernille Wolder; Naver, Lars

    2009-01-01

    Laparoscopic gastric bypass is the most common type of surgery for morbid obesity in Denmark. The most frequent late complications after gastric bypass are ulcer, internal hernia and stenosis. Two cases of stenosis of the biliopancreatic limb with gastric blow-out are described. Urgent diagnosis...

  3. Estimating Glomerular Filtration Rate in Older People

    Directory of Open Access Journals (Sweden)

    Sabrina Garasto

    2014-01-01

    We aimed at reviewing age-related changes in kidney structure and function, methods for estimating kidney function, and the impact of reduced kidney function on geriatric outcomes, as well as the reliability and applicability of equations for estimating the glomerular filtration rate (eGFR) in older patients. Chronic kidney disease (CKD) is associated with different comorbidities and adverse outcomes such as disability and premature death in older populations. Creatinine clearance and other methods for estimating kidney function are not easy to apply in older subjects. Thus, an accurate and reliable method for calculating eGFR would be highly desirable for early detection and management of CKD in this vulnerable population. Equations based on serum creatinine, age, race, and gender have been widely used. However, these equations have their own limitations, and no equation seems better than the others in older people. New equations specifically developed for use in older populations, especially those based on serum cystatin C, hold promise. However, further studies are needed before they can be accepted as the reference method to estimate kidney function in older patients in the clinical setting.
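
    As an example of the creatinine-based equations the review discusses, one commonly cited form is the 2009 CKD-EPI creatinine equation; the coefficients below are reproduced as commonly published and should be checked against the original reference before any real use:

```python
def ckd_epi_2009_egfr(scr_mg_dl, age_years, female, black):
    """CKD-EPI 2009 creatinine equation, eGFR in mL/min/1.73 m^2 (illustrative).

    scr_mg_dl -- serum creatinine in mg/dL
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: an 80-year-old non-black woman with serum creatinine of 1.1 mg/dL.
print(round(ckd_epi_2009_egfr(1.1, 80, female=True, black=False), 1))
```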

  4. Revisiting the Estimation of Dinosaur Growth Rates

    Science.gov (United States)

    Myhrvold, Nathan P.

    2013-01-01

    Previous growth-rate studies covering 14 dinosaur taxa, as represented by 31 data sets, are critically examined and reanalyzed by using improved statistical techniques. The examination reveals that some previously reported results cannot be replicated by using the methods originally reported; results from new methods are in many cases different, in both the quantitative rates and the qualitative nature of the growth, from results in the prior literature. Asymptotic growth curves, which have been hypothesized to be ubiquitous, are shown to provide best fits for only four of the 14 taxa. Possible reasons for non-asymptotic growth patterns are discussed; they include systematic errors in the age-estimation process and, more likely, a bias toward younger ages among the specimens analyzed. Analysis of the data sets finds that only three taxa include specimens that could be considered skeletally mature (i.e., having attained 90% of maximum body size predicted by asymptotic curve fits), and eleven taxa are quite immature, with the largest specimen having attained less than 62% of predicted asymptotic size. The three taxa that include skeletally mature specimens are included in the four taxa that are best fit by asymptotic curves. The totality of results presented here suggests that previous estimates of both maximum dinosaur growth rates and maximum dinosaur sizes have little statistical support. Suggestions for future research are presented. PMID:24358133

  5. Rate matrix estimation from site frequency data.

    Science.gov (United States)

    Burden, Conrad J; Tang, Yurong

    2017-02-01

    A procedure is described for estimating evolutionary rate matrices from observed site frequency data. The procedure assumes (1) that the data are obtained from a constant size population evolving according to a stationary Wright-Fisher or decoupled Moran model; (2) that the data consist of a multiple alignment of a moderate number of sequenced genomes drawn randomly from the population; and (3) that within the genome a large number of independent, neutral sites evolving with a common mutation rate matrix can be identified. No restrictions are imposed on the scaled rate matrix other than that the off-diagonal elements are positive, their sum is ≪1, and that the rows of the matrix sum to zero. In particular the rate matrix is not assumed to be reversible. The key to the method is an approximate stationary solution to the diffusion limit, forward Kolmogorov equation for neutral evolution in the limit of low mutation rates. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Femoral blowout in a case of Carcinoma Penis

    Directory of Open Access Journals (Sweden)

    Nikhil Panse

    2012-01-01

    There is considerable literature on the potential for a femoral blowout in the case of fungating inguinal lymph nodes in penile carcinoma. However, reported cases of actual femoral blowout are sparse in the literature. We encountered one such case of femoral blowout caused by fungating inguinal lymph nodes in a patient with carcinoma of the penis. Emergency palliative resection of the fungating nodes, ligation of the femoral vein, and emergency flap cover in the form of a perforator-based anterolateral thigh flap were performed. We believe that patients with a potential for femoral blowout should undergo resection and suitable coverage to prevent fatal hemorrhage.

  7. Estimation of social discount rate for Lithuania

    Directory of Open Access Journals (Sweden)

    Vilma Kazlauskiene

    2016-09-01

    Purpose of the article: The paper analyses the problems involved in estimating the social discount rate (SDR). The SDR is a critical parameter of cost-benefit analysis, which allows the present value of the costs and benefits of public sector investment projects to be calculated. An incorrect choice of the SDR can lead to the realisation of an ineffective public project or, conversely, to a cost-effective project being rejected. The relevance of this problem is underlined by ongoing discussions and differing viewpoints among researchers on the most appropriate approach for determining the SDR, and by the absence of a methodologically grounded SDR at the national level in Lithuania. Methodology/methods: The research is performed through analysis and systematization of the scientific and methodological literature, time series analysis and regression analysis. Scientific aim: The aim of the article is to calculate the SDR based on the statistical data of Lithuania. Findings: The analysis of methods of SDR determination, as well as of studies performed by foreign researchers, indicates that the social rate of time preference (SRTP) approach is the most appropriate. The SDR calculated by the SRTP approach best reflects the main purpose of public investment projects, i.e. to enhance the social benefit for society. The analysis of SDR determination practice in foreign countries shows that the SDR level should not be universal for all states. Each country should calculate the SDR based on its own data and apply it to the assessment of public projects. Conclusions: The calculated SDR for Lithuania using the SRTP approach varies between 3.5% and 4.3%. Although this is lower than the 5% suggested by the European Commission, the rate is based on the statistical data of Lithuania and should be used for the assessment of national public projects. Application of a well-founded SDR yields a more accurate and reliable cost-benefit analysis of public projects.

  8. Emissions due to the natural gas storage well-casing blowout at Aliso Canyon/SS-25

    Science.gov (United States)

    Herndon, Scott; Daube, Conner; Jervis, Dylan; Yacovitch, Tara; Roscioli, Joseph; Curry, Jason; Nelson, David; Knighton, Berk

    2017-04-01

    The pronounced increase in unconventional gas production in North America over the last fifteen years has intensified interest in understanding emissions and leaks in the supply chain from well pad to end use. Los Angeles, California is home to 19 million consumers of natural gas in both industrial and domestic end use. The well blowout at Aliso Canyon Natural Gas Storage Facility in the greater Los Angeles area was quantified using the tracer flux ratio method (TFR). Over 400 tracer plume transects were collected, each lasting 15-300 seconds, using instrumentation aboard a mobile platform on 25 days between December 21, 2015 and March 9, 2016. The leak rate from October 23rd to February 11th has been estimated here using a combination of this work (TFR) and the flight mass balance (FMB) data [Conley et al., 2016]. This estimate relies on the TFR data as the most specific SS-25 emission dataset. Scaling the FMB dataset, the leak rate is projected from Oct 23rd to December 21st. Adding up the emissions inferred and measured suggests a total leak burden of 86,022 ± 8,393 metric tons of methane. This work quantified the emissions during the "bottom kill" procedure which halted the primary emission leak. The ethane to methane enhancement ratio observed downwind of the leak site is consistent with the content of ethane in the natural gas at this site and provides definitive evidence that the methane emission rate quantified via tracer flux ratio is not due to a nearby landfill or other potential biogenic sources. Additionally, the TFR approach employed here assesses only the leaks due to the SS-25 well blowout and excludes other possible emissions at the facility.
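
    The tracer flux ratio method itself reduces to a simple proportionality: a tracer gas is released at a known rate near the source, and the methane emission rate is the tracer release rate scaled by the ratio of the downwind methane enhancement to the tracer enhancement (in molar mixing ratio), converted to mass with the molar-mass ratio. A schematic calculation with invented plume numbers:

```python
# Tracer flux ratio (TFR) in its simplest form; all numbers below are invented.
M_CH4, M_C2H2 = 16.04, 26.04            # g/mol (acetylene as an example tracer)

tracer_release_kg_h = 5.0               # known tracer release rate at the site
delta_ch4_ppb = 4_000.0                 # methane enhancement above background
delta_tracer_ppb = 25.0                 # tracer enhancement in the same plume

ch4_emission_kg_h = (tracer_release_kg_h
                     * (delta_ch4_ppb / delta_tracer_ppb)   # molar ratio
                     * (M_CH4 / M_C2H2))                    # convert to mass
print(f"inferred CH4 emission ~= {ch4_emission_kg_h:,.0f} kg/h")
```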

  9. Simulated data supporting inbreeding rate estimates from incomplete pedigrees

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data release includes: (1) The data from simulations used to illustrate the behavior of inbreeding rate estimators. Estimating inbreeding rates is particularly...

  10. Spatial-temporal evolution of aeolian blowout dunes at Cape Cod

    Science.gov (United States)

    Abhar, Kimia C.; Walker, Ian J.; Hesp, Patrick A.; Gares, Paul A.

    2015-05-01

    This paper explores historical evolution of blowouts at Cape Cod National Seashore (CCNS), USA - a site that hosts one of the world's highest densities of active and stabilized blowouts. The Spatial-Temporal Analysis of Moving Polygons (STAMP) method is applied to a multi-decadal dataset of aerial photography and LiDAR to extract patterns of two-dimensional movement and morphometric changes in erosional deflation basins and depositional lobes. Blowout development in CCNS is characterized by several geometric (overlap) and movement (proximity) responses, including: i) generation and disappearance, ii) extension and contraction, iii) union or division, iv) clustering and v) divergence by stabilization. Other possible movement events include migration, amalgamation and proximal stabilization, but they were not observed in this study. Generation events were more frequent than disappearance events; the former were highest between 1985 and 1994, while the latter were highest between 2000 and 2005. High rates of areal change in erosional basins occurred between 1998 and 2000 (+3932 m² a⁻¹), the lowest rate (+333 m² a⁻¹) between 2005 and 2009, and the maximum rate (+4589 m² a⁻¹) between 2009 and 2011. Union events occurred mostly in recent years (2000-2012), while only one division was observed earlier (1985-1994). Net areal changes of lobes showed gradual growth from a period of contraction (-1119 m² a⁻¹) between 1998 and 2000 to rapid extension (+2030 m² a⁻¹) by 2010, which is roughly concurrent with rapid growth of erosional basins between 2005 and 2009. Blowouts extended radially in this multi-modal wind regime and, despite odd shapes initially, they became simpler in form (more circular) and larger over time. Net extension of erosional basins was toward ESE (109°) while depositional lobes extended SSE (147°). Lobes were aligned with the strongest (winter) sand drift vector although their magnitude of areal extension was only 33% that of the basins. These

  11. Angular-Rate Estimation Using Quaternion Measurements

    Science.gov (United States)

    Azor, Ruth; Bar-Itzhack, Y.; Deutschmann, Julie K.; Harman, Richard R.

    1998-01-01

    In most spacecraft (SC) there is a need to know the SC angular rate. Precise angular rate is required for attitude determination, and a coarse rate is needed for attitude control damping. Classically, angular rate information is obtained from gyro measurements. These days, there is a tendency to build smaller, lighter and cheaper SC, therefore the inclination now is to do away with gyros and use other means and methods to determine the angular rate. The latter is also needed even in gyro equipped satellites when performing high rate maneuvers whose angular-rate is out of range of the on board gyros or in case of gyro failure. There are several ways to obtain the angular rate in a gyro-less SC. When the attitude is known, one can differentiate the attitude in whatever parameters it is given and use the kinematics equation that connects the derivative of the attitude with the satellite angular-rate and compute the latter. Since SC usually utilize vector measurements for attitude determination, the differentiation of the attitude introduces a considerable noise component in the computed angular-rate vector.
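
    The kinematic relation referred to above can be written, in the Hamilton convention with a body-to-inertial unit quaternion q, as dq/dt = (1/2) q ⊗ ω, so the body rate follows as ω = 2 vec(q* ⊗ dq/dt). A small sketch using a finite-difference derivative is given below; the convention and the sample data are assumptions of this example, not taken from the paper:

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def body_rate_from_quats(q_prev, q_next, dt):
    """omega = 2 * vec(conj(q) x dq/dt), with q taken at the interval midpoint."""
    q_dot = (q_next - q_prev) / dt
    q_mid = (q_prev + q_next) / 2.0
    q_mid /= np.linalg.norm(q_mid)
    q_conj = q_mid * np.array([1.0, -1.0, -1.0, -1.0])
    return 2.0 * quat_mult(q_conj, q_dot)[1:]

# Attitude samples for a constant spin of 0.1 rad/s about the body z axis.
dt, w_true = 0.1, 0.1
angles = w_true * dt * np.arange(3)
quats = np.stack([np.array([np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)]) for a in angles])
print(body_rate_from_quats(quats[0], quats[2], 2 * dt))   # ~ [0, 0, 0.1]
```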

  12. Estimation of Warfighter Resting Metabolic Rate

    Science.gov (United States)

    2005-04-14

    Snippet (fragmentary): ...intake and activity (Aschoff and Pohl, 1970). In a constant environment during the follicular phase of the menstrual cycle, women show a circadian... Abbreviations: DuBois Formula; BMI, Body Mass Index; VO2max, Maximum Rate of Oxygen Uptake (l/min); RMR, Resting Metabolic Rate (kcal/h); RMRs, Standard Resting Metabolic Rate... 60% of VO2max the preceding day) and that there was an almost linear dose-response relationship with no evidence of a threshold. Thompson and Manore...

  13. Sibling rivalry : Estimating cannibalization rates for innovations

    NARCIS (Netherlands)

    van Heerde, H.J.; Srinivasan, S.; Dekimpe, M.G.

    2012-01-01

    To evaluate the success of a new product, managers need to determine how much of its new demand is due to cannibalizing the company’s other products, rather than drawing from competition or generating primary demand. A new model allows managers to estimate cannibalization effects and to calculate

  14. Inline Repair of Blowouts During Laser Welding

    Science.gov (United States)

    Hansen, K. S.; Olsen, F. O.; Kristiansen, M.; Madsen, O.

    In a current laser welding production process of components of stainless steel, a butt joint configuration may lead to failures in the form of blowouts, causing an unacceptable welding quality. A study to improve the laser welding process was performed with the aim of solving the problem by designing a suitable pattern of multiple small laser spots rather than a single spot in the process zone. The blowouts in the process are provoked by introducing small amounts of zinc powder in the butt joint. When the laser heats up the zinc, this rapidly evaporates and expands, leaving the melt pool to be blown away locally. Multiple spot pattern designs are tested. Spot patterns are produced by applying diffractive optics to a beam from a single mode fiber laser. Results from welding while applying spot patterns both with and without trailing spots are presented. Data showing the power ratio between a trailing spot and two main spots as a function of spot distance is also presented. The results of the study show that applying multiple spots in the welding process may improve the process stability when welding materials with small impurities in the form of zinc particles.

  15. Blowout recovery operations : the capping operation

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.; Badick, M. [Safety BOSS, Calgary, AB (Canada)

    2001-07-01

    Capping is generally the final work done at a wellhead. A properly planned capping operation is one of the easiest aspects of a recovery process that should be completed in one or two days. The objective is to establish a safe and reliable casing attachment and seal that is designed to suit the activities that follow. The movement of equipment on and off the wellhead is among the higher risk objectives of a recovery operation, so it is crucial that the job be done right the first time. This paper focused on blowout recovery operations and covered the operations from the point where a sound pipe or a reliable casing flange has been established to the point of installing a conventional wellhead, a diverter system blowout preventer (BOP) stack suitable for shutting in the well, continuing to flow the well, or proceeding with a killing operation. The advantages and disadvantages of many capping options were discussed along with the procedures for installing a wellhead or BOP. The choices for wellheads, diverter systems and BOP stack configurations depend on whether the well can be shut-in, killed or if it must be flowed while an offset well is drilled. The choices presented in this paper included, slip rams, casing bowls, and snubbing.

  16. Nonparametric conditional hazard rate estimation: A local linear approach

    NARCIS (Netherlands)

    Spierdijk, L.

    2008-01-01

    A new nonparametric estimator for the conditional hazard rate is proposed, which is defined as the ratio of local linear estimators for the conditional density and survivor function. The resulting hazard rate estimator is shown to be pointwise consistent and asymptotically normally distributed under

  17. Estimation of transition probabilities of credit ratings

    Science.gov (United States)

    Peng, Gan Chew; Hin, Pooi Ah

    2015-12-01

    The present research is based on the quarterly credit ratings of ten companies over 15 years, taken from the database of the Taiwan Economic Journal. The components of the vector m_i = (m_i1, m_i2, ..., m_i10) denote the credit ratings of the ten companies in the i-th quarter. The vector m_(i+1) in the next quarter is modelled as dependent on the vector m_i via a conditional distribution derived from a 20-dimensional power-normal mixture distribution. The transition probability P_kl(i, j) of getting m_(i+1,j) = l given that m_(i,j) = k is then computed from the conditional distribution. It is found that the variation of the transition probability P_kl(i, j) as i varies gives an indication of the possible transition of the credit rating of the j-th company in the near future.

  18. Simulated data supporting inbreeding rate estimates from incomplete pedigrees

    Science.gov (United States)

    Miller, Mark P.

    2017-01-01

    This data release includes:(1) The data from simulations used to illustrate the behavior of inbreeding rate estimators. Estimating inbreeding rates is particularly difficult for natural populations because parentage information for many individuals may be incomplete. Our analyses illustrate the behavior of a newly-described inbreeding rate estimator that outperforms previously described approaches in the scientific literature.(2) Python source code ("analytical expressions", "computer simulations", and "empricial data set") that can be used to analyze these data.

  19. Inaccuracy and Uncertainty in Estimates of College Student Suicide Rates.

    Science.gov (United States)

    Schwartz, Allan J.

    1980-01-01

    Inaccurate sample data and uncertain estimates are identified as obstacles to assessing the suicide rate among college students. A standardization of research and reporting services is recommended.

  1. Development of 3000 m Subsea Blowout Preventer Experimental Prototype

    Science.gov (United States)

    Cai, Baoping; Liu, Yonghong; Huang, Zhiqian; Ma, Yunpeng; Zhao, Yubin

    2017-12-01

    A subsea blowout preventer experimental prototype was developed to meet the requirement of training operators. The prototype consists of a hydraulic control system, an electronic control system and a small-sized blowout preventer stack. Both the hydraulic control system and the electronic control system are dual-mode redundant systems; each works independently and can be switched over in the event of a malfunction, which significantly improves the operational reliability of the equipment.

  2. On the estimation of spread rate for a biological population

    Science.gov (United States)

    Jim Clark; Lajos Horváth; Mark Lewis

    2001-01-01

    We propose a nonparametric estimator for the rate of spread of an introduced population. We prove that the limit distribution of the estimator is normal or stable, depending on the behavior of the moment generating function. We show that resampling methods can also be used to approximate the distribution of the estimators.

  3. Fluid Mechanics of Lean Blowout Precursors in Gas Turbine Combustors

    Directory of Open Access Journals (Sweden)

    T. M. Muruganandam

    2012-03-01

    Understanding of the lean blowout (LBO) phenomenon, along with sensing and control strategies, could enable gas turbine combustor designers to design combustors with wider operability regimes. Sensing of precursor events (temporary extinction-reignition events) based on chemiluminescence emissions from the combustor, assessing the proximity to LBO and using those data for control of LBO has already been achieved. This work describes the fluid mechanic details of the precursor dynamics and the blowout process based on detailed analysis of near-blowout flame behavior, using simultaneous chemiluminescence and droplet scatter observations. The droplet scatter method represents the regions of cold reactants and thus helps track unburnt mixtures. During a precursor event, it was observed that the flow pattern changes significantly, with a large region of unburnt mixture in the combustor, which subsequently vanishes when a double/single helical vortex structure brings the hot products back to the inlet of the combustor. This helical pattern is shown to be characteristic of the next stable mode of the flame in the longer combustor, stabilized by the double helical vortex breakdown (VBD) mode. It is proposed that random heat release fluctuations near blowout cause the VBD-based stabilization to shift VBD modes, causing the observed precursor dynamics in the combustor. A complete description of the evolution of the flame near the blowout limit is presented. The description is consistent with all the earlier observations by the authors about precursor and blowout events.

  4. Estimating forest conversion rates with annual forest inventory data

    Science.gov (United States)

    Paul C. Van Deusen; Francis A. Roesch

    2009-01-01

    The rate of land-use conversion from forest to nonforest or natural forest to forest plantation is of interest for forest certification purposes and also as part of the process of assessing forest sustainability. Conversion rates can be estimated from remeasured inventory plots in general, but the emphasis here is on annual inventory data. A new estimator is proposed...

  5. Estimating fractional rate of NDF degradation from in vivo digestibility

    DEFF Research Database (Denmark)

    Weisbjerg, Martin Riis; Søegaard, Karen; Lund, Peter

    2007-01-01

    Fractional rate of degradation (kd) of potential degradable NDF (dNDF) was estimated based on in situ degradation profiles.

  6. Prolonged decay of molecular rate estimates for metazoan mitochondrial DNA

    Directory of Open Access Journals (Sweden)

    Martyna Molak

    2015-03-01

    Evolutionary timescales can be estimated from genetic data using the molecular clock, often calibrated by fossil or geological evidence. However, estimates of molecular rates in mitochondrial DNA appear to scale negatively with the age of the clock calibration. Although such a pattern has been observed in a limited range of data sets, it has not been studied on a large scale in metazoans. In addition, there is uncertainty over the temporal extent of the time-dependent pattern in rate estimates. Here we present a meta-analysis of 239 rate estimates from metazoans, representing a range of timescales and taxonomic groups. We found evidence of time-dependent rates in both coding and non-coding mitochondrial markers, in every group of animals that we studied. The negative relationship between the estimated rate and time persisted across a much wider range of calibration times than previously suggested. This indicates that, over long time frames, purifying selection gives way to mutational saturation as the main driver of time-dependent biases in rate estimates. The results of our study stress the importance of accounting for time-dependent biases in estimating mitochondrial rates regardless of the timescale over which they are inferred.

  7. FRAGS: estimation of coding sequence substitution rates from fragmentary data

    Directory of Open Access Journals (Sweden)

    Seoighe Cathal

    2004-01-01

    Background: Rates of substitution in protein-coding sequences can provide important insights into evolutionary processes that are of biomedical and theoretical interest. Increased availability of coding sequence data has enabled researchers to estimate more accurately the coding sequence divergence of pairs of organisms. However the use of different data sources, alignment protocols and methods to estimate substitution rates leads to widely varying estimates of key parameters that define the coding sequence divergence of orthologous genes. Although complete genome sequence data are not available for all organisms, fragmentary sequence data can provide accurate estimates of substitution rates provided that an appropriate and consistent methodology is used and that differences in the estimates obtainable from different data sources are taken into account. Results: We have developed FRAGS, an application framework that uses existing, freely available software components to construct in-frame alignments and estimate coding substitution rates from fragmentary sequence data. Coding sequence substitution estimates for human and chimpanzee sequences, generated by FRAGS, reveal that methodological differences can give rise to significantly different estimates of important substitution parameters. The estimated substitution rates were also used to infer upper-bounds on the amount of sequencing error in the datasets that we have analysed. Conclusion: We have developed a system that performs robust estimation of substitution rates for orthologous sequences from a pair of organisms. Our system can be used when fragmentary genomic or transcript data is available from one of the organisms and the other is a completely sequenced genome within the Ensembl database. As well as estimating substitution statistics our system enables the user to manage and query alignment and substitution data.

  8. Angular Rate Estimation Using a Distributed Set of Accelerometers

    Directory of Open Access Journals (Sweden)

    Sung Kyung Hong

    2011-11-01

    A distributed set of accelerometers based on the minimum number of 12 accelerometers allows for computation of the magnitude of angular rate without using the integration operation. However, it is not easy to extract the magnitude of angular rate in the presence of the accelerometer noises, and even worse, it is difficult to determine the direction of a rotation because the angular rate is present in its quadratic form within the inertial measurement system equations. In this paper, an extended Kalman filter scheme to correctly estimate both the direction and magnitude of the angular rate through fusion of the angular acceleration and quadratic form of the angular rate is proposed. We also provide observability analysis for the general distributed accelerometers-based inertial measurement unit, and show that the angular rate can be correctly estimated by general nonlinear state estimators such as an extended Kalman filter, except under certain extreme conditions.

  9. Simplifying cardiovascular risk estimation using resting heart rate.

    LENUS (Irish Health Repository)

    Cooney, Marie Therese

    2010-09-01

    Elevated resting heart rate (RHR) is a known, independent cardiovascular (CV) risk factor, but is not included in risk estimation systems, including Systematic COronary Risk Evaluation (SCORE). We aimed to derive risk estimation systems including RHR as an extra variable and assess the value of this addition.

  10. Estimating individual rates of discount: A meta-analysis

    NARCIS (Netherlands)

    Percoco, M.; Nijkamp, P.

    2009-01-01

    In this article, we present the results from a meta-analysis conducted over 44 experimental and field studies, which report individual discount rate estimates. We find in our research that the experimental design of a study has a decisive impact on these estimates, and conclude that meta-analysis,

  11. A speech rate estimator using hidden markov models - biomed 2010.

    Science.gov (United States)

    Mujumdar, Monali V; Kubichek, Robert F

    2010-01-01

    Measures such as speaking rate and articulation rate are commonly used in the assessment of various speech disorders. Such measures are also used to adapt automatic speech recognition systems to avoid performance degradation encountered with high speech rates. This paper illustrates a hidden Markov model-based method to estimate the number of syllables, which can further be used to estimate speaking rate or articulation rate. The method uses the Viterbi state sequence for syllable estimation. The algorithm was tested on two datasets: the TIMIT dataset achieved an error of 14.6% and a correlation of 0.81, while the Switchboard data yielded an error of 21.2% and a correlation of 0.88.

  12. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
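
    As a quick illustration of the three catch-rate estimators compared above, the sketch below computes the ratio-of-means (ROM) estimate, the mean-of-ratios (MOR) estimate, and the MOR estimate excluding short trips from hypothetical interview data, then scales the chosen rate by an effort estimate to get total catch. The numbers are placeholders, not data from the Idaho fisheries.

    import numpy as np

    catch = np.array([0, 2, 1, 0, 3, 1])                # fish per interviewed trip (hypothetical)
    hours = np.array([0.4, 3.0, 2.0, 0.5, 4.0, 1.5])    # trip length in hours (hypothetical)

    rom = catch.sum() / hours.sum()                     # ratio-of-means estimator
    mor = np.mean(catch / hours)                        # mean-of-ratios estimator
    keep = hours > 0.5                                  # MOR excluding short (<= 0.5 h) trips
    mor_trimmed = np.mean(catch[keep] / hours[keep])

    total_effort_hours = 1200.0                         # separately estimated angler effort (hypothetical)
    print(rom, mor, mor_trimmed, rom * total_effort_hours)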

  13. Estimating treatment rates for mental disorders in Australia.

    Science.gov (United States)

    Whiteford, Harvey A; Buckingham, William J; Harris, Meredith G; Burgess, Philip M; Pirkis, Jane E; Barendregt, Jan J; Hall, Wayne D

    2014-02-01

    To estimate the percentage of Australians with a mental disorder who received treatment for that disorder each year between 2006-07 and 2009-10. We used: (1) epidemiological survey data to estimate the number of Australians with a mental disorder in any year; (2) a combination of administrative data on people receiving mental health care from the Commonwealth and State and Territories and epidemiological data to estimate the number receiving treatment; and (3) uncertainty modelling to estimate the effects of sampling error and assumptions on these estimates. The estimated population treatment rate for mental disorders in Australia increased from 37% in 2006-07 to 46% in 2009-10. The model estimate for 2006-07 (37%) was very similar to the estimated treatment rate in the 2007 National Survey of Mental Health and Wellbeing (35%), the only data available for external comparison. The uncertainty modelling suggested that the increased treatment rates over subsequent years could not be explained by sampling error or uncertainty in assumptions. The introduction of the Commonwealth's Better Access initiative in November 2006 has been the driver for the increase in the proportion of Australians with mental disorders who received treatment for those disorders over the period from 2006-07 to 2009-10. WHAT IS KNOWN ABOUT THE TOPIC? Untreated mental disorders incur major economic costs and personal suffering. Governments need timely estimates of treatment rates to assess the effects of policy changes aimed at improving access to mental health services. WHAT DOES THIS PAPER ADD? Drawing upon a combination of epidemiological and administrative data sources, the present study estimated that the population treatment rate for mental disorders in Australia increased significantly from 37% in 2006-07 to 46% in 2009-10. WHAT ARE THE IMPLICATIONS FOR PRACTITIONERS? Increased access to services is not sufficient to ensure good outcomes for those with mental disorders. It is also important

  14. Moment estimation of customer loss rates from transactional data

    Directory of Open Access Journals (Sweden)

    D. J. Daley

    1998-01-01

    Full Text Available Moment estimators are proposed for the arrival and customer loss rates of a many-server queueing system with a Poisson arrival process with customer loss via balking or reneging. These estimators are based on the lengths {Sj1} of the initial inter-departure intervals of the busy periods j=1,…,M observed in a dataset consisting of service starting and finishing times and encompassing both busy and idle periods of the process, and whether those busy periods are of length 1 or >1. The estimators are compared with maximum likelihood and parametric model-based estimators found previously.

  15. Estimating Soil Loss Rates For Soil Conservation Planning in ...

    African Journals Online (AJOL)

    The rate of soil erosion is pervasive in the highlands of Ethiopia. Soil conservation is thus crucial in these areas to tackle the prevailing soil erosion. This area is mainly in the steeper slope banks of tributaries where steep lands are cultivated or overgrazed. The objective of this study is to estimate the rate of soil loss in ...

  16. Estimated Interest Rate Rules: Do they Determine Determinacy Properties?

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2011-01-01

    I demonstrate that econometric estimations of nominal interest rate rules may tell little, if anything, about an economy's determinacy properties. In particular, correct inference about the interest-rate response to inflation provides no information about determinacy. Instead, it could reveal...

  17. Estimating stutter rates for Y-STR alleles

    DEFF Research Database (Denmark)

    Andersen, Mikkel Meyer; Olofsson, Jill Katharina; Mogensen, Helle Smidt

    2011-01-01

    Stutter peaks are artefacts that arise during PCR amplification of short tandem repeats. Stutter peaks are especially important in forensic case work with DNA mixtures. The aim of the study was primarily to estimate the stutter rates of the AmpFlSTR Yfiler kit. We found that the stutter rates...

  18. Stability Control of Vehicle Emergency Braking with Tire Blowout

    Directory of Open Access Journals (Sweden)

    Qingzhang Chen

    2014-01-01

    Full Text Available To maintain stability and slow the vehicle to a safe speed after tire failure, an emergency automatic braking system with independent intellectual property is developed. After the system receives a tire-blowout signal, the automatic braking mode is determined according to the position of the failed tire and the motion state of the vehicle, and a control strategy is designed to counteract the additional yaw torque caused by the blowout and to decelerate the vehicle to a safe speed along the expected trajectory. A simulation test system is also designed, and the test results show that with the emergency braking system described in the paper the vehicle can be quickly stabilized and kept on its original track after a tire blowout.

  19. Estimation of Saturation Flow Rates at Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Chang-qiao Shao

    2012-01-01

    Full Text Available The saturation flow rate is a fundamental parameter for measuring intersection capacity and timing traffic signals. However, traditional methods, which mainly use the average value of observed queue discharge headways to estimate the saturation headway, may underestimate the saturation flow rate. The goal of this paper is to study the stochastic nature of queue discharge headways and to develop a more accurate estimation method for the saturation headway and saturation flow rate. Based on the surveyed data, the characteristics of queue discharge headways and the estimation method for the saturation flow rate are studied. It is found that the average value of queue discharge headways is greater than the median value and that the skewness of the headways is positive. Normal distribution tests were conducted before and after a log transformation of the headways. The goodness-of-fit test showed that at some surveyed sites the queue discharge headways can be fitted by a normal distribution, while at other sites they can be fitted by a lognormal distribution. According to these characteristics, the median value of queue discharge headways is suggested for estimating the saturation headway, and a new method of estimating saturation flow rates is developed.
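
    A minimal numerical illustration of the paper's central point: with right-skewed discharge headways, the mean headway exceeds the median, so a mean-based saturation flow estimate (3600 divided by the saturation headway in seconds) comes out lower than a median-based one. The headway values below are hypothetical.

    import numpy as np

    headways = np.array([2.4, 2.1, 1.9, 2.0, 2.2, 3.5, 1.8, 2.0, 2.6, 4.1])  # seconds, hypothetical

    s_mean = 3600.0 / headways.mean()        # traditional mean-based estimate (pulled down by skew)
    s_median = 3600.0 / np.median(headways)  # median-based estimate suggested above
    print(f"mean-based: {s_mean:.0f} veh/h, median-based: {s_median:.0f} veh/h")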

  20. Vehicle Yaw Rate Estimation Using a Virtual Sensor

    Directory of Open Access Journals (Sweden)

    Mümin Tolga Emirler

    2013-01-01

    Full Text Available Road vehicle yaw stability control systems like electronic stability program (ESP) are important active safety systems used for maintaining lateral stability of the vehicle. Vehicle yaw rate is the key parameter that needs to be known by a yaw stability control system. In this paper, yaw rate is estimated using a virtual sensor which contains kinematic relations and a velocity-scheduled Kalman filter. Kinematic estimation is carried out using wheel speeds, dynamic tire radius, and front wheel steering angle. In addition, a velocity-scheduled Kalman filter utilizing the linearized single-track model of the road vehicle is used in the dynamic estimation part of the virtual sensor. The designed virtual sensor is successfully tested offline using a validated, high-degree-of-freedom, high-fidelity vehicle model and using hardware-in-the-loop simulations. Moreover, actual road testing is carried out and the estimated yaw rate from the virtual sensor is compared with the actual yaw rate obtained from a commercial yaw rate sensor to demonstrate the effectiveness of the virtual yaw rate sensor in practical use.
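
    As a hedged sketch of the kinematic part of such a virtual sensor only: under a no-slip assumption, yaw rate can be approximated from the speed difference between the wheels on one axle divided by the track width. The Kalman-filter (dynamic) part of the estimator is not reproduced here, and the numbers are hypothetical.

    def kinematic_yaw_rate(v_left: float, v_right: float, track_width: float) -> float:
        """Wheel speeds in m/s, track width in m; returns an approximate yaw rate in rad/s."""
        return (v_right - v_left) / track_width

    # Outer (right) wheel slightly faster in a left turn -> positive yaw rate
    print(kinematic_yaw_rate(v_left=19.6, v_right=20.4, track_width=1.55))  # ~0.52 rad/s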

  1. Respiratory rate estimation during triage of children in hospitals.

    Science.gov (United States)

    Shah, Syed Ahmar; Fleming, Susannah; Thompson, Matthew; Tarassenko, Lionel

    2015-01-01

    Accurate assessment of a child's health is critical for appropriate allocation of medical resources and timely delivery of healthcare in Emergency Departments. The accurate measurement of vital signs is a key step in the determination of the severity of illness and respiratory rate is currently the most difficult vital sign to measure accurately. Several previous studies have attempted to extract respiratory rate from photoplethysmogram (PPG) recordings. However, the majority have been conducted in controlled settings using PPG recordings from healthy subjects. In many studies, manual selection of clean sections of PPG recordings was undertaken before assessing the accuracy of the signal processing algorithms developed. Such selection procedures are not appropriate in clinical settings. A major limitation of AR modelling, previously applied to respiratory rate estimation, is an appropriate selection of model order. This study developed a novel algorithm that automatically estimates respiratory rate from a median spectrum constructed applying multiple AR models to processed PPG segments acquired with pulse oximetry using a finger probe. Good-quality sections were identified using a dynamic template-matching technique to assess PPG signal quality. The algorithm was validated on 205 children presenting to the Emergency Department at the John Radcliffe Hospital, Oxford, UK, with reference respiratory rates up to 50 breaths per minute estimated by paediatric nurses. At the time of writing, the authors are not aware of any other study that has validated respiratory rate estimation using data collected from over 200 children in hospitals during routine triage.

  2. A Bayesian approach to estimating the prehepatic insulin secretion rate

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Højbjerre, Malene

    We consider a stochastic differential equation model that combines both insulin and C-peptide concentrations in plasma to estimate the prehepatic insulin secretion rate. Previously this model has been analysed in an iterative deterministic set-up, where the time courses of insulin and C-peptide subsequently are used as known forcing functions. In this work we adopt a Bayesian graphical model to describe the unified model simultaneously. We develop a model that also accounts for both measurement error and process variability. The parameters are estimated by a Bayesian approach where efficient posterior sampling is made available through the use of Markov chain Monte Carlo methods. Hereby the ill-posed estimation problem inherited in the coupled differential equation model is regularized by the use of prior knowledge. The method is demonstrated on experimental data for the estimation of the prehepatic insulin secretion rate.

  3. The rise and fall of methanotrophy following a deepwater oil-well blowout

    Science.gov (United States)

    Crespo-Medina, M.; Meile, C. D.; Hunter, K. S.; Diercks, A.-R.; Asper, V. L.; Orphan, V. J.; Tavormina, P. L.; Nigro, L. M.; Battles, J. J.; Chanton, J. P.; Shiller, A. M.; Joung, D.-J.; Amon, R. M. W.; Bracco, A.; Montoya, J. P.; Villareal, T. A.; Wood, A. M.; Joye, S. B.

    2014-06-01

    The blowout of the Macondo oil well in the Gulf of Mexico in April 2010 injected up to 500,000 tonnes of natural gas, mainly methane, into the deep sea. Most of the methane released was thought to have been consumed by marine microbes between July and August 2010. Here, we report spatially extensive measurements of methane concentrations and oxidation rates in the nine months following the spill. We show that although gas-rich deepwater plumes were a short-lived feature, water column concentrations of methane remained above background levels throughout the rest of the year. Rates of microbial methane oxidation peaked in the deepwater plumes in May and early June, coincident with a rapid rise in the abundance of known and new methane-oxidizing microbes. At this time, rates of methane oxidation reached up to 5,900 nmol l-1 d-1--the highest rates documented in the global pelagic ocean before the blowout. Rates of methane oxidation fell to less than 50 nmol l-1 d-1 in late June, and continued to decline throughout the remainder of the year. We suggest the precipitous drop in methane consumption in late June, despite the persistence of methane in the water column, underscores the important role that physiological and environmental factors play in constraining the activity of methane-oxidizing bacteria in the Gulf of Mexico.

  4. Traditional waveform based spike sorting yields biased rate code estimates.

    Science.gov (United States)

    Ventura, Valérie

    2009-04-28

    Much of neuroscience has to do with relating neural activity and behavior or environment. One common measure of this relationship is the firing rates of neurons as functions of behavioral or environmental parameters, often called tuning functions and receptive fields. Firing rates are estimated from the spike trains of neurons recorded by electrodes implanted in the brain. Individual neurons' spike trains are not typically readily available, because the signal collected at an electrode is often a mixture of activities from different neurons and noise. Extracting individual neurons' spike trains from voltage signals, which is known as spike sorting, is one of the most important data analysis problems in neuroscience, because it has to be undertaken prior to any analysis of neurophysiological data in which more than one neuron is believed to be recorded on a single electrode. All current spike-sorting methods consist of clustering the characteristic spike waveforms of neurons. The sequence of first spike sorting based on waveforms, then estimating tuning functions, has long been the accepted way to proceed. Here, we argue that the covariates that modulate tuning functions also contain information about spike identities, and that if tuning information is ignored for spike sorting, the resulting tuning function estimates are biased and inconsistent, unless spikes can be classified with perfect accuracy. This means, for example, that the commonly used peristimulus time histogram is a biased estimate of the firing rate of a neuron that is not perfectly isolated. We further argue that the correct conceptual way to view the problem is to note that spike sorting provides information about rate estimation and vice versa, so that the two relationships should be considered simultaneously rather than sequentially. Indeed we show that when spike sorting and tuning-curve estimation are performed in parallel, unbiased estimates of tuning curves can be recovered even from
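
    The bias argument can be made concrete with a toy simulation (not the author's analysis): two neurons with different tuning are recorded on one electrode, a fixed fraction of spikes is misassigned during sorting, and the resulting rate estimate for one unit becomes a mixture of the two tuning curves. All parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    stim = np.linspace(0, 1, 11)              # stimulus values
    rate_a = 5 + 20 * stim                    # true tuning of neuron A (spikes per 1-s trial)
    rate_b = 25 - 20 * stim                   # true tuning of neuron B
    trials, misassign = 200, 0.15             # 15% of each neuron's spikes get the wrong label

    est_a = []
    for ra, rb in zip(rate_a, rate_b):
        spikes_a = rng.poisson(ra, trials)
        spikes_b = rng.poisson(rb, trials)
        # spikes labelled "unit A" after imperfect sorting
        labelled_a = rng.binomial(spikes_a, 1 - misassign) + rng.binomial(spikes_b, misassign)
        est_a.append(labelled_a.mean())

    bias = np.array(est_a) - rate_a           # ~ misassign * (rate_b - rate_a); zero only where tunings cross
    print(np.round(bias, 1))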

  5. Bayes estimation of the mixture of hazard-rate model

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, K.K.; Krishna, Hare; Singh, Bhupendra

    1997-01-01

    Engineering systems are subject to continuous stresses and shocks which may (or may not) cause a change in the failure pattern of the system with unknown probability q( =1 - p), 0 < p < 1. Conceptualising a mixture of hazard-rate or failure-rate patterns representing a realistic situation, the failure time distribution is given in the corresponding case. Classical and Bayesian estimation of the parameters and reliability characteristics of this failure time distribution is the subject matter of the present study.

  6. Penetrometry and estimation of the flow rate of powder excipients.

    Science.gov (United States)

    Zatloukal, Z; Sklubalová, Z

    2007-03-01

    In this work, penetrometry with a sphere was employed to study the flow properties of non-consolidated pharmaceutical powder excipients: sodium chloride, sodium citrate, boric acid, and sorbitol. In order to estimate flow rate, the pressure of penetration in pascals was used. Penetrometry measurement with a sphere requires modification of the measurement container, in particular by decreasing the diameter of the container, to prevent undesirable movement of material in a direction opposite to that in which the sphere penetrates. Thus penetrometry by a sphere seems to be similar to indentation by the Brinell hardness tester. The pressure of penetration was determined from the depth of penetration by analogy with the Brinell hardness number and an equation for the interconversion of the two variables is presented. The penetration pressure allowed direct estimation of the flow rate only for those powder excipients with a size fraction in the range of 0.250-0.630 mm. Using the ratio of penetration pressure to bulk density, a polynomial quadratic equation was generated from which the flow rates for the group of all tested powders could be estimated. Finally, if the inverse ratio of bulk density and penetration pressure was used as an independent variable, the flow rate could be estimated by linear regression with the coefficient of determination r² = 0.9941. In conclusion, using sphere penetrometry, the flow properties of non-consolidated powder samples could be investigated by indentation. As a result, a linear regression in which the flow rate was directly proportional to the powder bulk density and inversely proportional to the penetration pressure could be best recommended for the estimation of the flow rate of powder excipients.
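
    The final regression form described above can be sketched as follows: flow rate regressed linearly on the ratio of bulk density to penetration pressure. The data points and units below are hypothetical placeholders, not the paper's measurements.

    import numpy as np

    bulk_density = np.array([0.62, 0.75, 0.88, 0.95, 1.10])      # g/cm^3 (hypothetical)
    penetration_pressure = np.array([950, 780, 620, 560, 430])   # Pa (hypothetical)
    flow_rate = np.array([3.1, 4.4, 6.0, 7.0, 9.8])              # g/s (hypothetical)

    x = bulk_density / penetration_pressure      # regressor: density over penetration pressure
    slope, intercept = np.polyfit(x, flow_rate, 1)
    r_squared = np.corrcoef(x, flow_rate)[0, 1] ** 2
    print(slope, intercept, round(r_squared, 4))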

  7. Haldane and the first estimates of the human mutation rate

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Genetics; Volume 83; Issue 3. Haldane and the first estimates of the human mutation rate. Michael W. Nachman. Commentary on J. Genet. Classic Volume 83 Issue 3 December 2004 pp 231-233. Fulltext. Click here to view fulltext PDF. Permanent link:

  8. Estimates of outcrossing rates in Moringa oleifera using Amplified ...

    African Journals Online (AJOL)

    The mating system in plant populations is influenced by genetic and environmental factors. Proper estimates of the outcrosing rates are often required for planning breeding programmes, conservation and management of tropical trees. However, although Moringa oleifera is adapted to a mixed mating system, the proportion ...

  9. Lidar method to estimate emission rates from extended sources

    Science.gov (United States)

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  10. Estimated Glomerular Filtration Rate and Risk of Survival in Acute ...

    African Journals Online (AJOL)

    Objective: To assess the risk of survival in acute stroke using the MDRD equation derived estimated glomerular filtration rate. Design: A prospective observational cross-sectional study. Setting: Medical wards of a tertiary care hospital. Subjects: Eighty three acute stroke patients had GFR calculated within 48 hours of ...

  11. A Maximum Information Rate Quaternion Filter for Spacecraft Attitude Estimation

    NARCIS (Netherlands)

    Reijneveld, J.; Maas, A.; Choukroun, D.; Kuiper, J.M.

    2011-01-01

    Building on previous works, this paper introduces a novel continuous-time stochastic optimal linear quaternion estimator under the assumptions of rate gyro measurements and of vector observations of the attitude. A quaternion observation model, whose observation matrix is rank degenerate, is reduced

  12. Angular-Rate Estimation Using Delayed Quaternion Measurements

    Science.gov (United States)

    Azor, R.; Bar-Itzhack, I. Y.; Harman, R. R.

    1999-01-01

    This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared: one that uses differentiated quaternion measurements to yield coarse rate measurements, which are then fed into two different estimators. In the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear part of the rotational dynamics equation of a body into a product of an angular-rate dependent matrix and the angular-rate vector itself. This non-unique decomposition enables the treatment of the nonlinear spacecraft (SC) dynamics model as a linear one and, thus, the application of a PseudoLinear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the gain matrix and thus eliminates the need to compute recursively the filter covariance matrix. The replacement of the rotational dynamics by a simple Markov model is also examined. In this paper special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results are presented.

  13. Haldane and the first estimates of the human mutation rate

    Indian Academy of Sciences (India)

    Unknown

    ...frequency of an allele could be measured and if the strength of selection could be estimated, it should be possible to calculate the mutation rate. Since most mutations in ... no formal training in biology (Provine 1971, p. 168). Among his many contributions to the field was the fact that he edited this journal for quite a few years; ...

  14. Estimating Ads’ Click through Rate with Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Chen Qiao-Hong

    2016-01-01

    Full Text Available With the development of the Internet, online advertising has spread across every corner of the world, and estimation of the ads' click-through rate (CTR) is an important method for improving online advertising revenue. Compared with linear models, nonlinear models can capture much more complex relationships among a large number of features, improving the accuracy of CTR estimation. The recurrent neural network (RNN) based on Long Short-Term Memory (LSTM) is an improved feedback neural network with a ring structure. The model overcomes the vanishing-gradient problem of the general RNN. Experiments show that the LSTM-based RNN outperforms the linear models and can effectively improve the estimation of the ads' click-through rate.

  15. Estimating infectivity rates and attack windows for two viruses.

    Science.gov (United States)

    Zhang, J; Noe, D A; Wu, J; Bailer, A J; Wright, S E

    2012-12-01

    Cells exist in an environment in which they are simultaneously exposed to a number of viral challenges. In some cases, infection by one virus may preclude infection by other viruses. Under the assumption of independent times until infection by two viruses, a procedure is presented to estimate the infectivity rates along with the time window during which a cell might be susceptible to infection by multiple viruses. A test for equal infectivity rates is proposed and interval estimates of parameters are derived. Additional hypothesis tests of potential interest are also presented. The operating characteristics of these tests and the estimation procedure are explored in simulation studies. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Estimation of Eruption Source Parameters from Plume Growth Rate

    Science.gov (United States)

    Pouget, Solene; Bursik, Marcus; Webley, Peter; Dehn, Jon; Pavalonis, Michael; Singh, Tarunraj; Singla, Puneet; Patra, Abani; Pitman, Bruce; Stefanescu, Ramona; Madankan, Reza; Morton, Donald; Jones, Matthew

    2013-04-01

    The eruption of Eyjafjallajokull, Iceland in April and May, 2010, brought to light the hazards of airborne volcanic ash and the importance of Volcanic Ash Transport and Dispersion models (VATD) to estimate the concentration of ash with time. These models require Eruption Source Parameters (ESP) as input, which typically include information about the plume height, the mass eruption rate, the duration of the eruption and the particle size distribution. However much of the time these ESP are unknown or poorly known a priori. We show that the mass eruption rate can be estimated from the downwind plume or umbrella cloud growth rate. A simple version of the continuity equation can be applied to the growth of either an umbrella cloud or the downwind plume. The continuity equation coupled with the momentum equation using only inertial and gravitational terms provides another model. Numerical modeling or scaling relationships can be used, as necessary, to provide values for unknown or unavailable parameters. Use of these models applied to data on plume geometry provided by satellite imagery allows for direct estimation of plume volumetric and mass growth with time. To test our methodology, we compared our results with five well-studied and well-characterized historical eruptions: Mount St. Helens, 1980; Pinatubo, 1991, Redoubt, 1990; Hekla, 2000 and Eyjafjallajokull, 2010. These tests show that the methodologies yield results comparable to or better than currently accepted methodologies of ESP estimation. We then applied the methodology to umbrella clouds produced by the eruptions of Okmok, 12 July 2008, and Sarychev Peak, 12 June 2009, and to the downwind plume produced by the eruptions of Hekla, 2000; Kliuchevsko'i, 1 October 1994; Kasatochi 7-8 August 2008 and Bezymianny, 1 September 2012. The new methods allow a fast, remote assessment of the mass eruption rate, even for remote volcanoes. They thus provide an additional path to estimation of the ESP and the forecasting
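
    The simplest version of the continuity argument can be sketched numerically: the observed area growth of an umbrella cloud between two satellite images, times an assumed cloud thickness, gives a volumetric growth rate; the paper's scaling relationships or numerical modelling would then be needed to convert this to a mass eruption rate. The numbers below are hypothetical and do not refer to any of the eruptions named above.

    area_t1_km2, area_t2_km2 = 800.0, 1400.0   # umbrella cloud area from two images (hypothetical)
    dt_s = 1800.0                              # 30 minutes between the images
    thickness_m = 1000.0                       # assumed cloud thickness (hypothetical)

    dA_dt = (area_t2_km2 - area_t1_km2) * 1e6 / dt_s   # area growth rate, m^2/s
    volume_rate = dA_dt * thickness_m                  # volumetric growth rate, m^3/s
    print(f"volumetric growth rate ~ {volume_rate:.2e} m^3/s")
    # Converting this to a mass eruption rate requires the additional scaling
    # relationships (or numerical modelling) described in the abstract.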

  17. Estimation of the rate of volcanism on Venus from reaction rate measurements

    Science.gov (United States)

    Fegley, Bruce, Jr.; Prinn, Ronald G.

    1989-01-01

    Laboratory rate data for the reaction between SO2 and calcite to form anhydrite are presented. If this reaction rate represents the SO2 reaction rate on Venus, then all SO2 in the Venusian atmosphere will disappear in 1.9 Myr unless volcanism replenishes the lost SO2. The required volcanism rate, which depends on the sulfur content of the erupted material, is in the range 0.4-11 cu km of magma erupted per year. The Venus surface composition at the Venera 13, 14, and Vega 2 landing sites implies a volcanism rate of about 1 cu km/yr. This geochemically estimated rate can be used to determine if either (or neither) of two discordant geophysically estimated rates is correct. It also suggests that Venus may be less volcanically active than the earth.

  18. Induced abortion in Tehran, Iran: estimated rates and correlates.

    Science.gov (United States)

    Erfani, Amir

    2011-09-01

    Abortion is severely restricted in Iran, and many women with an unwanted pregnancy resort to clandestine, unsafe abortions. Accurate information on abortion incidence is needed to assess the extent to which women experience unwanted pregnancies and to allocate resources for contraceptive services. Data for analysis came from 2,934 married women aged 15-49 who completed the 2009 Tehran Survey of Fertility. Estimated abortion rates and proportions of known pregnancies that end in abortion were calculated for all women and for demographic and socioeconomic subgroups, and descriptive data were used to examine women's contraceptive use and reasons for having an abortion. Annually, married women in Tehran have about 11,500 abortions. In the year before the survey, the estimated total abortion rate was 0.16 abortions per woman, and the annual general abortion rate was 5.5 abortions per 1,000 women; the general abortion rate peaked at 11.7 abortions among those aged 30-34. An estimated 8.7 of every 100 known pregnancies ended in abortion. The abortion rate was elevated among women who were employed or had high levels of income or education, as well as among those who reported a low level of religiosity, had two children or wanted no more. Fertility-related and socioeconomic reasons were cited by seven in 10 women who obtained an abortion. More than two-thirds of pregnancies that were terminated resulted from method failures among women who had used withdrawal, the pill or a condom. Estimated abortion rates and their correlates can help policymakers and program planners identify subgroups of women who are in particular need of services and counseling to prevent unwanted pregnancy.

  19. Diabetic nephropathy: glomerular filtration rate and estimated creatinine clearance

    OpenAIRE

    Guimarães, J; Bastos, M.; Melo, M.; Carvalheiro, M.

    2007-01-01

    OBJECTIVE: To assess, in diabetic nephropathy, the accuracy of estimated creatinine clearance (calculated with the Cockcroft-Gault formula) and of Tc99m-DTPA clearance for measuring the glomerular filtration rate (GFR). PATIENTS AND METHODS: We analysed the GFR measured by the Tc99m-DTPA method and estimated by the Cockcroft-Gault formula in 21 subjects with type 1 or type 2 diabetes. RESULTS: There was a strong positive correlation between the two methods but the Cockcroft-Gault formula un...
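
    For reference, the Cockcroft-Gault estimate of creatinine clearance mentioned above is the standard formula CrCl [mL/min] = (140 - age) x weight [kg] / (72 x serum creatinine [mg/dL]), multiplied by 0.85 for women; the example values in the sketch below are hypothetical.

    def cockcroft_gault(age_years: float, weight_kg: float, scr_mg_dl: float, female: bool) -> float:
        """Estimated creatinine clearance in mL/min (standard Cockcroft-Gault formula)."""
        crcl = (140 - age_years) * weight_kg / (72 * scr_mg_dl)
        return crcl * 0.85 if female else crcl

    print(round(cockcroft_gault(60, 70, 1.2, female=True), 1))  # ~55.1 mL/min for these example values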

  20. Application of a conversion factor to estimate the adenoma detection rate from the polyp detection rate.

    LENUS (Irish Health Repository)

    Francis, Dawn L

    2011-03-01

    The adenoma detection rate (ADR) is a quality benchmark for colonoscopy. Many practices find it difficult to determine the ADR because it requires a combination of endoscopic and histologic findings. It may be possible to apply a conversion factor to estimate the ADR from the polyp detection rate (PDR).
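
    The idea reduces to a single multiplication; the sketch below only makes the arithmetic explicit, and the conversion factor shown is a made-up placeholder rather than a value reported by the study.

    def estimated_adr(pdr: float, conversion_factor: float = 0.64) -> float:
        """pdr and the returned ADR estimate are proportions (0-1); the factor is a placeholder."""
        return pdr * conversion_factor

    print(estimated_adr(0.40))  # e.g. a PDR of 40% -> estimated ADR of ~26%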

  1. Rate of Penetration Optimization using Moving Horizon Estimation

    Directory of Open Access Journals (Sweden)

    Dan Sui

    2016-07-01

    Full Text Available Increasing drilling safety and reducing drilling operation costs, especially by improving drilling efficiency, are two important considerations in the oil and gas industry. The rate of penetration (ROP), also called drilling speed, is a critical drilling parameter for evaluating and improving drilling safety and efficiency. ROP estimation plays an important role in drilling optimization as well as in the interpretation of all stages of the well life cycle. In this paper, we use a moving horizon estimation (MHE) method to estimate ROP as well as other drilling parameters. In the MHE formulation the states are estimated by a forward simulation with a pre-estimating observer, and constraints on the states/outputs are included in the MHE problem. It is shown that the estimation error is input-to-state stable. Furthermore, ROP optimization (to achieve minimum drilling cost/drilling energy subject to efficient hole cleaning and downhole environmental stability) is presented. The performance of the methodology is demonstrated in one case study.

  2. Formation dynamics of subsurface hydrocarbon intrusions following the Deepwater Horizon blowout

    Science.gov (United States)

    Socolofsky, Scott A.; Adams, E. Eric; Sherwood, Christopher R.

    2011-01-01

    Hydrocarbons released following the Deepwater Horizon (DH) blowout were found in deep, subsurface horizontal intrusions, yet there has been little discussion about how these intrusions formed. We have combined measured (or estimated) observations from the DH release with empirical relationships developed from previous lab experiments to identify the mechanisms responsible for intrusion formation and to characterize the DH plume. Results indicate that the intrusions originate from a stratification-dominated multiphase plume characterized by multiple subsurface intrusions containing dissolved gas and oil along with small droplets of liquid oil. Unlike earlier lab measurements, where the potential density in ambient water decreased linearly with elevation, at the DH site it varied quadratically. We have modified our method for estimating intrusion elevation under these conditions and the resulting estimates agree with observations that the majority of the hydrocarbons were found between 800 and 1200 m.

  3. Glomerular filtration rate in cows estimated by a prediction formula.

    Science.gov (United States)

    Murayama, Isao; Miyano, Anna; Sato, Tsubasa; Iwama, Ryosuke; Satoh, Hiroshi; Ichijyo, Toshihiro; Sato, Shigeru; Furuhama, Kazuhisa

    2014-12-01

    To test the relevance of Jacobsson's equation for estimating bovine glomerular filtration rate (GFR), we prepared an integrated formula based on that equation using clinically healthy dairy (n=99) and beef (n=63) cows, and cows with reduced renal function (n=15). The isotonic, nonionic contrast medium iodixanol was used as a test tracer. The GFR values estimated from the integrated formula were consistent with those from the standard multisample method in each cow strain, and with the Holstein equation based on a single blood sample in Holstein dairy cows. The basal reference GFR value in healthy dairy cows was significantly higher than that in healthy beef cows, presumably due to a breed or physiological state difference. It is concluded that the application of Jacobsson's equation to estimate bovine GFR is valid and that it can be used in bovine practice. © 2014 Japanese Society of Animal Science.

  4. Fast Rate Estimation for RDO Mode Decision in HEVC

    Directory of Open Access Journals (Sweden)

    Maxim P. Sharabayko

    2014-12-01

    Full Text Available The recent H.265/HEVC video compression standard can provide roughly twice the compression efficiency of the current industrial standard, H.264/AVC. However, coding complexity has also increased. The main bottleneck of the compression process is the rate-distortion optimization (RDO) stage, as it involves numerous sequential syntax-based binary arithmetic coding (SBAC) loops. In this paper, we present an entropy-based rate estimation technique for RDO in H.265/HEVC compression, instead of the common SBAC-based approach. Our RDO implementation reduces RDO complexity at the cost of an average bit rate overhead of 1.54%. At the same time, eliminating the SBAC from the RDO estimation reduces block interdependencies, opening the way to a compression system that processes multiple blocks of a video frame in parallel.
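
    The general idea of replacing the arithmetic-coding loop with an entropy-based estimate can be sketched as follows: approximate the cost of each binary symbol as -log2 of its modelled probability and plug the total into the usual rate-distortion cost J = D + lambda * R. The probabilities and values below are illustrative only and are not the paper's context models or estimation tables.

    import math

    def estimated_bits(bins_with_probs):
        """bins_with_probs: iterable of (bin_value, probability_of_one) pairs."""
        total = 0.0
        for b, p1 in bins_with_probs:
            p = p1 if b == 1 else 1.0 - p1
            total += -math.log2(max(p, 1e-12))   # ideal code length of the symbol
        return total

    def rd_cost(distortion: float, bits: float, lam: float) -> float:
        return distortion + lam * bits

    bins = [(1, 0.7), (0, 0.7), (1, 0.9), (0, 0.4)]     # illustrative bins and probabilities
    R = estimated_bits(bins)
    print(round(R, 3), round(rd_cost(distortion=1250.0, bits=R, lam=18.5), 1))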

  5. Abundance Estimation of Hyperspectral Data with Low Compressive Sampling Rate

    Science.gov (United States)

    Wang, Zhongliang; Feng, Yan

    2017-12-01

    Hyperspectral data processing typically demands enormous computational resources in terms of storage, computation, and I/O throughputs. In this paper, a compressive sensing framework with low sampling rate is described for hyperspectral imagery. It is based on the widely used linear spectral mixture model. Abundance fractions can be calculated directly from compressively sensed data with no need to reconstruct original hyperspectral imagery. The proposed abundance estimation model is based on the sparsity of abundance fractions and an alternating direction method of multipliers is developed to solve this model. Experiments show that the proposed scheme has a high potential to unmix compressively sensed hyperspectral data with low sampling rate.

  6. The estimation of recombination rates from population genetic data

    OpenAIRE

    2007-01-01

    Genetic recombination is an important process that generates new combinations of genes on which natural selection can operate. As such, an understanding of recombination in the human genome will provide insight into the evolutionary processes that have shaped our genetic history. The aim of this thesis is to use samples of population genetic data to explore the patterns of variation in the rate of recombination in the human genome. To do this I introduce a novel means of estimating recombinat...

  7. Estimating Effective Subsidy Rates of Student Aid Programs

    OpenAIRE

    Stacey H. CHEN

    2008-01-01

    Every year millions of high school students and their parents in the US are asked to fill out complicated financial aid application forms. However, few studies have estimated the responsiveness of government financial aid schemes to changes in financial needs of the students. This paper identifies the effective subsidy rate (ESR) of student aid, as defined by the coefficient of financial needs in the regression of financial aid. The ESR measures the proportion of subsidy of student aid under ...

  8. Phylogenetic estimates of diversification rate are affected by molecular rate variation.

    Science.gov (United States)

    Duchêne, D A; Hua, X; Bromham, L

    2017-10-01

    Molecular phylogenies are increasingly being used to investigate the patterns and mechanisms of macroevolution. In particular, node heights in a phylogeny can be used to detect changes in rates of diversification over time. Such analyses rest on the assumption that node heights in a phylogeny represent the timing of diversification events, which in turn rests on the assumption that evolutionary time can be accurately predicted from DNA sequence divergence. But there are many influences on the rate of molecular evolution, which might also influence node heights in molecular phylogenies, and thus affect estimates of diversification rate. In particular, a growing number of studies have revealed an association between the net diversification rate estimated from phylogenies and the rate of molecular evolution. Such an association might, by influencing the relative position of node heights, systematically bias estimates of diversification time. We simulated the evolution of DNA sequences under several scenarios where rates of diversification and molecular evolution vary through time, including models where diversification and molecular evolutionary rates are linked. We show that commonly used methods, including metric-based, likelihood and Bayesian approaches, can have a low power to identify changes in diversification rate when molecular substitution rates vary. Furthermore, the association between the rates of speciation and molecular evolution rate can cause the signature of a slowdown or speedup in speciation rates to be lost or misidentified. These results suggest that the multiple sources of variation in molecular evolutionary rates need to be considered when inferring macroevolutionary processes from phylogenies. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  9. Redefinition and global estimation of basal ecosystem respiration rate

    DEFF Research Database (Denmark)

    Yuan, Wenping; Luo, Yiqi; Li, Xianglan

    2011-01-01

    Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR largely due to the lack of a functional description for BR. In this study, we redefined BR to be ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data, from 79 research sites located at latitudes ranging from ∼3°S to ∼70°N. Results showed that mean annual ER rate closely matches ER rate at mean annual temperature. Incorporation of site-specific BR into global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature can be considered as BR in empirical models. A strong correlation was found between the mean annual ER and mean annual gross primary production (GPP). Consequently, GPP, which is typically more accurately modeled, can be used to estimate BR. A light use efficiency GPP model (i.e., EC-LUE) was applied to estimate global GPP, BR and ER with input data from MERRA (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate resolution Imaging Spectroradiometer). The global ER was 103 Pg C yr−1, with the highest respiration rate over tropical forests and the lowest value in dry and high-latitude areas.

  10. Ancient hyaenas highlight the old problem of estimating evolutionary rates.

    Science.gov (United States)

    Shapiro, Beth; Ho, Simon Y W

    2014-02-01

    Phylogenetic analyses of ancient DNA data can provide a timeline for evolutionary change even in the absence of fossils. The power to infer the evolutionary rate is, however, highly dependent on the number and age of samples, the information content of the sequence data and the demographic history of the sampled population. In this issue of Molecular Ecology, Sheng et al. (2014) analysed mitochondrial DNA sequences isolated from a combination of ancient and present-day hyaenas, including three Pleistocene samples from China. Using an evolutionary rate inferred from the ages of the ancient sequences, they recalibrated the timing of hyaena diversification and suggest a much more recent evolutionary history than was believed previously. Their results highlight the importance of accurately estimating the evolutionary rate when inferring timescales of geographical and evolutionary diversification. © 2013 John Wiley & Sons Ltd.

  11. Estimating hydraulic properties of volcanic aquifers using constant-rate and variable-rate aquifer tests

    Science.gov (United States)

    Rotzoll, K.; El-Kadi, A. I.; Gingerich, S.B.

    2007-01-01

    In recent years the ground-water demand of the population of the island of Maui, Hawaii, has significantly increased. To ensure prudent management of the ground-water resources, an improved understanding of ground-water flow systems is needed. At present, large-scale estimations of aquifer properties are lacking for Maui. Seven analytical methods using constant-rate and variable-rate withdrawals for single wells provide an estimate of hydraulic conductivity and transmissivity for 103 wells in central Maui. Methods based on constant-rate tests, although not widely used on Maui, offer reasonable estimates. Step-drawdown tests, which are more abundantly used than other tests, provide similar estimates as constant-rate tests. A numerical model validates the suitability of analytical solutions for step-drawdown tests and additionally provides an estimate of storage parameters. The results show that hydraulic conductivity is log-normally distributed and that for dike-free volcanic rocks it ranges over several orders of magnitude from 1 to 2,500 m/d. The arithmetic mean, geometric mean, and median values of hydraulic conductivity are respectively 520, 280, and 370 m/d for basalt and 80, 50, and 30 m/d for sediment. A geostatistical approach using ordinary kriging yields a prediction of hydraulic conductivity on a larger scale. Overall, the results are in agreement with values published for other Hawaiian islands. ?? 2007 American Water Resources Association.
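
    The summary statistics quoted above (arithmetic mean, geometric mean and median of a roughly log-normally distributed hydraulic conductivity sample) can be reproduced with a few lines; the values below are hypothetical, not the Maui data.

    import numpy as np

    K = np.array([12, 85, 150, 320, 480, 700, 950, 1800, 2500], dtype=float)  # m/d, hypothetical

    arithmetic_mean = K.mean()
    geometric_mean = np.exp(np.log(K).mean())   # natural central value for log-normal data
    median = np.median(K)
    print(round(arithmetic_mean), round(geometric_mean), round(median))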

  12. Redefinition and global estimation of basal ecosystem respiration rate

    Science.gov (United States)

    Yuan, W.; Luo, Y.; Li, X.; Liu, S.; Yu, G.; Zhou, T.; Bahn, M.; Black, A.; Desai, A.R.; Cescatti, A.; Marcolla, B.; Jacobs, C.; Chen, J.; Aurela, M.; Bernhofer, C.; Gielen, B.; Bohrer, G.; Cook, D.R.; Dragoni, D.; Dunn, A.L.; Gianelle, D.; Grünwald, T.; Ibrom, A.; Leclerc, M.Y.; Lindroth, A.; Liu, H.; Marchesini, L.B.; Montagnani, L.; Pita, G.; Rodeghiero, M.; Rodrigues, A.; Starr, G.; Stoy, Paul C.

    2011-01-01

    Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR largely due to the lack of a functional description for BR. In this study, we redefined BR to be ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data, from 79 research sites located at latitudes ranging from ∼3°S to ∼70°N. Results showed that mean annual ER rate closely matches ER rate at mean annual temperature. Incorporation of site-specific BR into global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature can be considered as BR in empirical models. A strong correlation was found between the mean annual ER and mean annual gross primary production (GPP). Consequently, GPP, which is typically more accurately modeled, can be used to estimate BR. A light use efficiency GPP model (i.e., EC-LUE) was applied to estimate global GPP, BR and ER with input data from MERRA (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate resolution Imaging Spectroradiometer). The global ER was 103 Pg C yr −1, with the highest respiration rate over tropical forests and the lowest value in dry and high-latitude areas.

  13. Can we estimate bacterial growth rates from ribosomal RNA content?

    Energy Technology Data Exchange (ETDEWEB)

    Kemp, P.F.

    1995-12-31

    Several studies have demonstrated a strong relationship between the quantity of RNA in bacterial cells and their growth rate under laboratory conditions. It may be possible to use this relationship to provide information on the activity of natural bacterial communities, and in particular on growth rate. However, if this approach is to provide reliably interpretable information, the relationship between RNA content and growth rate must be well-understood. In particular, a requisite of such applications is that the relationship must be universal among bacteria, or alternately that the relationship can be determined and measured for specific bacterial taxa. The RNA-growth rate relationship has not been used to evaluate bacterial growth in field studies, although RNA content has been measured in single cells and in bulk extracts of field samples taken from coastal environments. These measurements have been treated as probable indicators of bacterial activity, but have not yet been interpreted as estimators of growth rate. The primary obstacle to such interpretations is a lack of information on biological and environmental factors that affect the RNA-growth rate relationship. In this paper, the available data on the RNA-growth rate relationship in bacteria will be reviewed, including hypotheses regarding the regulation of RNA synthesis and degradation as a function of growth rate and environmental factors; i.e. the basic mechanisms for maintaining RNA content in proportion to growth rate. An assessment of the published laboratory and field data, the current status of this research area, and some of the remaining questions will be presented.

  14. Estimation of evapotranspiration rate in irrigated lands using stable isotopes

    Science.gov (United States)

    Umirzakov, Gulomjon; Windhorst, David; Forkutsa, Irina; Brauer, Lutz; Frede, Hans-Georg

    2013-04-01

    Agriculture in the Aral Sea basin is the main consumer of water resources, and under current agricultural management practices inefficient water use causes huge losses of freshwater resources. There is great potential to save water and achieve more efficient water use in irrigated areas. Therefore, research is required to reveal the mechanisms of hydrological fluxes in irrigated areas. This paper focuses on the estimation of evapotranspiration, one of the crucial components in the water balance of irrigated lands. Our main objective is to estimate the rate of evapotranspiration on irrigated lands and to partition evapotranspiration into evaporation and transpiration using stable isotope measurements. Experiments were carried out in irrigated areas with two different soil types (sandy and sandy loam) in the Ferghana Valley (Uzbekistan). Soil samples were collected during the vegetation period. The soil water from these samples was extracted via a cryogenic extraction method and analyzed for the isotopic ratios of the water isotopes (2H and 18O) using a laser spectroscopy method (DLT 100, Los Gatos, USA). Evapotranspiration rates were estimated with the isotope mass balance method. The evapotranspiration results obtained with the isotope mass balance method are compared with results from the 1D Catchment Modelling Framework model applied in the same area over the same period.

  15. Estimation of blood flow rates in large microvascular networks.

    Science.gov (United States)

    Fry, Brendan C; Lee, Jack; Smith, Nicolas P; Secomb, Timothy W

    2012-08-01

    Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data, and provides a basis for deducing functional properties of microvessel networks. © 2012 John Wiley & Sons Ltd.
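
    The general structure of the problem (not the authors' algorithm) can be sketched as a stacked weighted least-squares system: satisfy the node mass-balance equations as closely as possible while staying near target flows derived from typical pressures and wall shear stresses. The small network, targets and weights below are hypothetical.

    import numpy as np

    A = np.array([[1.0, -1.0, -1.0, 0.0],      # node balance: q0 = q1 + q2
                  [0.0,  1.0,  0.0, -1.0]])    # node balance: q1 = q3
    b = np.zeros(2)

    q_target = np.array([8.0, 5.0, 3.0, 5.0])  # target flows (hypothetical units)
    w_conserve, w_target = 100.0, 1.0          # weight conservation much more heavily

    A_stack = np.vstack([w_conserve * A, w_target * np.eye(4)])
    b_stack = np.concatenate([w_conserve * b, w_target * q_target])
    q_hat, *_ = np.linalg.lstsq(A_stack, b_stack, rcond=None)
    print(np.round(q_hat, 2))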

  16. Calculating life tables by estimating Chiang's a from observed rates.

    Science.gov (United States)

    Schoen, R

    1978-11-01

    A simple, accurate method of life table construction is advanced based upon a new way to estimate Chiang's nax (the average number of years lived in the x to x + n age interval by those dying in the interval). The estimate for nax leads immediately to an expression for lx+n (the survivors to age x + n) in terms of lx and the known mortality rates for the interval x to x+n and the two adjacent intervals. The complete solution for the basic life table is given. The proposed method and five other easily applied methods are then compared against the standard provided by the U.S. life tables for 1969-1971. The results attest to the excellent performance and high degree of accuracy of the proposed method. Finally, extensions of the method to multiple decrement and associated single decrement life tables are briefly described.
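
    Once nax is available, the standard conversion from observed age-specific death rates to life-table survivors follows the usual relations nqx = n*nmx / (1 + (n - nax)*nmx) and l(x+n) = l(x)*(1 - nqx). The sketch below applies these relations with illustrative inputs; it does not reproduce the paper's specific nax estimator.

    def survivors(n_widths, nmx, nax, radix=100_000.0):
        """Life-table survivors l(x) from interval widths n, death rates nmx and nax."""
        lx = [radix]
        for n, m, a in zip(n_widths, nmx, nax):
            q = n * m / (1.0 + (n - a) * m)      # probability of dying in the interval
            lx.append(lx[-1] * (1.0 - q))
        return lx

    n_widths = [1, 4, 5, 5]                      # age-interval widths in years
    nmx = [0.007, 0.0004, 0.0002, 0.0003]        # observed age-specific death rates (illustrative)
    nax = [0.1, 1.6, 2.5, 2.5]                   # average years lived by those dying (illustrative)
    print([round(v) for v in survivors(n_widths, nmx, nax)])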

  17. Blowout Fracture after Descemet's Stripping Automated Endothelial Keratoplasty

    Directory of Open Access Journals (Sweden)

    Eri Tachibana

    2014-11-01

    Full Text Available We present the case of an 86-year-old woman who developed a blowout fracture after Descemet's stripping automated endothelial keratoplasty (DSAEK). Sixteen months after DSAEK, she suffered a blow to her left eye caused by a fall. Computed tomography confirmed the presence of a blowout fracture of the inferior wall of the left orbit with soft tissue prolapsing into the orbit. The patient complained of no abnormal symptoms, and her operated cornea was intact and clear. There were no abnormal findings in either the anterior or posterior segment. This case highlights that the DSAEK technique provides adequate tectonic stability of the globe throughout the traumatic event in contrast to penetrating keratoplasty, which can lead to devastating vision damage after trauma.

  18. Use of hazardous event frequency to evaluate safety integrity level of subsea blowout preventer

    Directory of Open Access Journals (Sweden)

    Soyeon Chung

    2016-05-01

    Full Text Available Generally, the Safety Integrity Level (SIL) of a subsea Blowout Preventer (BOP) is evaluated by determining the Probability of Failure on Demand (PFD), a low demand mode evaluation indicator. However, some SIL results are above the PFD's effective area despite the subsea BOP's demand rate being within the PFD's effective range. Determining a Hazardous Event Frequency (HEF) that can cover all demand rates could be useful when establishing the effective BOP SIL. This study focused on subsea BOP functions that follow guideline 070 of Norwegian Oil and Gas. Events that control subsea well kicks are defined. The HEF of each BOP function is analyzed and compared with the PFD by investigating the frequency for each event and the demand rate for the components. In addition, risk control options related to PFD and HEF improvements are compared, and the effectiveness of HEF as a SIL verification for subsea BOP is assessed.
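
    As a rough, hedged numerical illustration of the two indicators being compared (not values from guideline 070 or any BOP reliability dataset): the average probability of failure on demand of a periodically proof-tested low-demand function is often approximated as lambda_DU * tau / 2, and for a single protection layer in the low-demand regime the hazardous event frequency is roughly the demand rate times that probability.

    lambda_du = 2.0e-6      # dangerous undetected failure rate, per hour (hypothetical)
    tau = 8760.0            # proof-test interval, hours (one year)
    demand_rate = 0.5       # well-control demands per year (hypothetical)

    pfd_avg = lambda_du * tau / 2.0              # common low-demand approximation
    hef_per_year = demand_rate * pfd_avg         # rough single-layer, low-demand approximation
    print(f"PFD_avg = {pfd_avg:.2e}, HEF = {hef_per_year:.2e} per year")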

  19. Management of diplopia in patients with blowout fractures

    Directory of Open Access Journals (Sweden)

    Osman Melih Ceylan

    2011-01-01

    Full Text Available Purpose: To report the management outcomes of diplopia in patients with blowout fracture. Materials and Methods: Data for 39 patients with diplopia due to orbital blowout fracture were analyzed retrospectively. The inferior wall alone was involved in 22 (56.4%) patients, medial wall alone was involved in 14 (35.8%) patients, and the medial and inferior walls were involved in three (7.6%) patients. Each fracture was reconstructed with a Medpore® implant. Strabismus surgery or prism correction was performed in required patients for the management of persistent diplopia. Mean postoperative follow up was 6.5 months. Results: Twenty-three (58.9%) patients with diplopia underwent surgical repair of blowout fracture. Diplopia was eliminated in 17 (73.9%) patients following orbital wall surgery. Of the 23 patients, three (7.6%) patients required prism glasses and another three (7.6%) patients required strabismus surgery for persistent diplopia. In four (10.2%) patients, strabismus surgery was performed without fracture repair. Twelve patients (30.7%) with negative forced duction test results were followed up without surgery. Conclusions: In our study, diplopia resolved in 30.7% of patients without surgery and 69.2% of patients with diplopia required surgical intervention. Primary gaze diplopia was eliminated in 73.9% of patients through orbital wall repair. The most frequently employed secondary surgery was adjustable inferior rectus recession and <17.8% of patients required additional strabismus surgery.

  20. Dune field reactivation from blowouts: Sevier Desert, UT, USA

    Science.gov (United States)

    Barchyn, Thomas E.; Hugenholtz, Chris H.

    2013-12-01

    Dune field reactivation (a shift from vegetated to unvegetated state) has important economic, social, and environmental implications. In some settings reactivation is desired to preserve environmental values, but in arid regions reactivation is typically a form of land degradation. Little is known about reactivation due to a lack of published records, making modeling and prediction difficult. Here we detail dune reactivations from blowout expansion in the Sevier Desert, Utah, USA. We use historical aerial photographs and satellite imagery to track the transition from stable, vegetated dunes to actively migrating sediment in 3 locations. We outline a reactivation sequence: (i) disturbance breaches vegetation and exposes sediment, then (ii) creates a blowout with a deposition apron that (iii) advances downwind with a slipface or as a sand sheet. Most deposition aprons are not colonized by vegetation and are actively migrating. To explore causes we examine local sand flux, climate data, and stream flow. Based on available data the best explanation we can provide is that some combination of anthropogenic disturbance and climate may be responsible for the reactivations. Together, these examples provide a rare glimpse of dune field reactivation from blowouts, revealing the timescales, behaviour, and morphodynamics of devegetating dune fields.

  1. Increasing fMRI sampling rate improves Granger causality estimates.

    Directory of Open Access Journals (Sweden)

    Fa-Hsuan Lin

    Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC-interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
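
    The sampling-rate effect is easy to reproduce on synthetic data. The sketch below (a toy example, not the InI data) builds a pair of coupled series with a 0.3-s lag and runs a standard Granger test at 0.1-s and 2-s sampling; the coupling strength, noise level and lag are assumptions.

# Toy illustration: a driver series x causes y with a ~0.3 s delay.
# Granger tests on 0.1 s samples detect the link; the same data
# downsampled to 2 s largely lose it.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 3000                      # 300 s at 10 Hz
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(3, n):         # y depends on x three samples (0.3 s) back
    y[t] = 0.6 * x[t - 3] + 0.5 * rng.standard_normal()

fast = np.column_stack([y, x])   # test: does column 2 (x) cause column 1 (y)?
slow = fast[::20]                # downsample 0.1 s -> 2.0 s

res_fast = grangercausalitytests(fast, maxlag=5, verbose=False)
res_slow = grangercausalitytests(slow, maxlag=5, verbose=False)

p_fast = min(res_fast[lag][0]['ssr_ftest'][1] for lag in res_fast)
p_slow = min(res_slow[lag][0]['ssr_ftest'][1] for lag in res_slow)
print(f"min p-value at 0.1 s sampling: {p_fast:.3g}")
print(f"min p-value at 2.0 s sampling: {p_slow:.3g}")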

  2. Commercial Discount Rate Estimation for Efficiency Standards Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-04-13

    Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards is a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
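
    To make the role of the discount rate concrete, the sketch below discounts a constant stream of annual energy cost savings at a few candidate rates; the saving, lifetime and rates are illustrative assumptions, not DOE figures.

# How the chosen commercial discount rate changes the present value of a
# stream of annual energy cost savings (all numbers assumed).
def present_value(annual_saving, years, discount_rate):
    """Discount a constant annual saving back to year zero."""
    return sum(annual_saving / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))

annual_saving = 120.0   # $/year operating-cost saving (assumed)
lifetime = 15           # equipment lifetime in years (assumed)

for rate in (0.03, 0.05, 0.07):
    pv = present_value(annual_saving, lifetime, rate)
    print(f"discount rate {rate:.0%}: PV of savings = ${pv:,.0f}")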

  3. Automatic estimation of pressure-dependent rate coefficients

    KAUST Repository

    Allen, Joshua W.

    2012-01-01

    A general framework is presented for accurately and efficiently estimating the phenomenological pressure-dependent rate coefficients for reaction networks of arbitrary size and complexity using only high-pressure-limit information. Two aspects of this framework are discussed in detail. First, two methods of estimating the density of states of the species in the network are presented, including a new method based on characteristic functional group frequencies. Second, three methods of simplifying the full master equation model of the network to a single set of phenomenological rates are discussed, including a new method based on the reservoir state and pseudo-steady state approximations. Both sets of methods are evaluated in the context of the chemically-activated reaction of acetyl with oxygen. All three simplifications of the master equation are usually accurate, but each fails in certain situations, which are discussed. The new methods usually provide good accuracy at a computational cost appropriate for automated reaction mechanism generation. This journal is © the Owner Societies.

  4. [Estimating glomerular filtration rate based on serum cystatin C].

    Science.gov (United States)

    Lü, Rui-Xue; Li, Yi-Song; Huang, Heng-Jian; Peng, Zhi-Ying; Ying, Bin-Wu; An, Zhen-Mei

    2012-01-01

    To develop an estimating formula for glomerular filtration rate (GFR) based on serum cystatin C in patients with chronic kidney disease (CKD), clinical characteristics of 242 CKD patients were collected. The patients were randomly divided into a modeling group and a validation group. The rGFR obtained from the 99mTc-DTPA clearance rate was used as the reference value of GFR. Serum cystatin C (s-cystatin C) was measured by a latex-enhanced immunoturbidimetric method. Preliminary linear regression analysis followed by multiple linear regression was performed to investigate the association between s-cystatin C and rGFR. The validity of the estimation formula was tested in the validation group in comparison with the Hoek formula and the Orebro formula. After reciprocal transformation, s-cystatin C showed a linear correlation with rGFR, with a correlation coefficient of 0.773. The multiple correlation coefficient, determination coefficient, adjusted R square and standard error of the estimation model were 0.863, 0.745, 0.742, and 0.207, respectively. The residual P-P probability plot analysis showed that the model residuals fitted a normal distribution with homogeneity of variance. The formula was: eGFR = 67/s-cystatin C + 3. No significant difference was found between the distributions of eGFR and rGFR. The 30% and 50% accuracies of our formula were no less than those obtained from the Hoek and Orebro formulas. The new formula also had acceptable bias and high precision. Bland-Altman analysis and ROC curve analysis showed good applicability of the new formula. The GFR prediction formula we established has good predictive performance compared with other formulae and could be used to estimate GFR in CKD patients.
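
    Written out as code, the reported equation is a one-liner; the sketch below applies it to a few example cystatin C values (the values, and the assumption that cystatin C is expressed in mg/L, are illustrative).

# Direct transcription of the reported formula: eGFR = 67 / s-cystatin C + 3
# (eGFR in mL/min/1.73 m^2; cystatin C assumed to be in mg/L).
def egfr_cystatin(cys_c_mg_per_l):
    """eGFR estimate from serum cystatin C using the study's formula."""
    return 67.0 / cys_c_mg_per_l + 3.0

for cys_c in (0.8, 1.2, 2.0):   # example cystatin C values (assumed)
    print(f"cystatin C {cys_c} mg/L -> eGFR ~ {egfr_cystatin(cys_c):.0f} mL/min/1.73 m^2")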

  5. Diversity, disparity, and evolutionary rate estimation for unresolved Yule trees.

    Science.gov (United States)

    Crawford, Forrest W; Suchard, Marc A

    2013-05-01

    The branching structure of biological evolution confers statistical dependencies on phenotypic trait values in related organisms. For this reason, comparative macroevolutionary studies usually begin with an inferred phylogeny that describes the evolutionary relationships of the organisms of interest. The probability of the observed trait data can be computed by assuming a model for trait evolution, such as Brownian motion, over the branches of this fixed tree. However, the phylogenetic tree itself contributes statistical uncertainty to estimates of rates of phenotypic evolution, and many comparative evolutionary biologists regard the tree as a nuisance parameter. In this article, we present a framework for analytically integrating over unknown phylogenetic trees in comparative evolutionary studies by assuming that the tree arises from a continuous-time Markov branching model called the Yule process. To do this, we derive a closed-form expression for the distribution of phylogenetic diversity (PD), which is the sum of branch lengths connecting the species in a clade. We then present a generalization of PD which is equivalent to the expected trait disparity in a set of taxa whose evolutionary relationships are generated by a Yule process and whose traits evolve by Brownian motion. We find expressions for the distribution of expected trait disparity under a Yule tree. Given one or more observations of trait disparity in a clade, we perform fast likelihood-based estimation of the Brownian variance for unresolved clades. Our method does not require simulation or a fixed phylogenetic tree. We conclude with a brief example illustrating Brownian rate estimation for 12 families in the mammalian order Carnivora, in which the phylogenetic tree for each family is unresolved.

  6. Heart rate and estimated energy expenditure during ballroom dancing.

    Science.gov (United States)

    Blanksby, B A; Reidy, P W

    1988-01-01

    Ten competitive ballroom dance couples performed simulated competitive sequences of Modern and Latin American dance. Heart rate was telemetered during the dance sequences and related to direct measures of oxygen uptake and heart rate obtained while walking on a treadmill. Linear regression was employed to estimate gross and net energy expenditures of the dance sequences. A multivariate analysis of variance with repeated measures on the dance factor was applied to the data to test for interaction and main effects on the sex and dance factors. Overall mean heart rate values for the Modern dance sequence were 170 beats.min-1 and 173 beats.min-1 for males and females respectively. During the Latin American sequence mean overall heart rate for males was 168 beats.min-1 and 177 beats.min-1 for females. Predicted mean gross values of oxygen consumption for the males were 42.8 +/- 5.7 ml.kg-1 min-1 and 42.8 +/- 6.9 ml.kg-1 min-1 for the Modern and Latin American sequences respectively. Corresponding gross estimates of oxygen consumption for the females were 34.7 +/- 3.8 ml.kg-1 min-1 and 36.1 +/- 4.1 ml.kg-1 min-1. Males were estimated to expend 54.1 +/- 8.1 kJ.min-1 of energy during the Modern sequence and 54.0 +/- 9.6 kJ.min-1 during the Latin American sequence, while predicted energy expenditure for females was 34.7 +/- 3.8 kJ.min-1 and 36.1 +/- 4.1 kJ.min-1 for Modern and Latin American dance respectively. The results suggested that both males and females were dancing at greater than 80% of their maximum oxygen consumption. A significant difference between males and females was observed for predicted gross and net values of oxygen consumption (in L.min-1 and ml.kg-1 min-1). PMID:3167503
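
    A minimal sketch of the estimation chain, with invented numbers rather than the study's data: regress treadmill VO2 on heart rate for an individual, predict dance VO2 from the telemetered dance heart rate, and convert to energy expenditure using a calorific equivalent of roughly 20.9 kJ per litre of oxygen (an assumed constant).

# Indirect estimation of dance energy expenditure from heart rate.
import numpy as np

hr_treadmill  = np.array([100, 120, 140, 160, 180])        # beats/min (assumed)
vo2_treadmill = np.array([15.0, 22.0, 29.0, 36.0, 43.0])   # mL/kg/min (assumed)

# Individual HR-VO2 calibration line from the treadmill test.
slope, intercept = np.polyfit(hr_treadmill, vo2_treadmill, 1)

dance_hr = 170.0                                  # mean dance heart rate
vo2_dance = slope * dance_hr + intercept          # predicted VO2, mL/kg/min

body_mass = 70.0                                  # kg (assumed)
litres_o2_per_min = vo2_dance * body_mass / 1000.0
energy_kj_per_min = litres_o2_per_min * 20.9      # ~20.9 kJ per litre O2

print(f"estimated VO2 during dance: {vo2_dance:.1f} mL/kg/min")
print(f"estimated gross energy expenditure: {energy_kj_per_min:.1f} kJ/min")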

  7. 76 FR 49666 - Safety Zone; East Coast Drag Boat Bucksport Blowout Boat Race, Waccamaw River, Bucksport, SC

    Science.gov (United States)

    2011-08-11

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; East Coast Drag Boat Bucksport Blowout Boat... East Coast Drag Boat Bucksport Blowout in Bucksport, South Carolina. The East Coast Drag Boat Bucksport Blowout will consist of a series of high-speed boat races. The event is scheduled to take place on...

  8. Bioavailability of contaminants estimated from uptake rates into soil invertebrates.

    Science.gov (United States)

    van Straalen, N M; Donker, M H; Vijver, M G; van Gestel, C A M

    2005-08-01

    It is often argued that the concentration of a pollutant inside an organism is a good indicator of its bioavailability; however, we show that the rate of uptake, not the concentration itself, is the superior predictor. In a study on zinc accumulation and toxicity to isopods (Porcellio scaber), the dietary EC(50) for the effect on body growth was rather constant and reproducible, while the internal EC(50) varied depending on the accumulation history of the animals. From the data, a critical value for zinc accumulation in P. scaber was estimated as 53 microg/g/wk. We review toxicokinetic models applicable to time-series measurements of concentrations in invertebrates. The initial slope of the uptake curve is proposed as an indicator of bioavailability. To apply the dynamic concept of bioavailability in risk assessment, a set of representative organisms should be chosen and standardized protocols developed for exposure assays by which suspect soils can be evaluated.
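
    The "initial slope as bioavailability indicator" idea can be sketched with a one-compartment uptake model fitted to a body-burden time series; the data, model form and rate constants below are invented for illustration.

# Fit a one-compartment uptake curve and report its initial slope.
import numpy as np
from scipy.optimize import curve_fit

t_days = np.array([0, 2, 4, 7, 14, 21, 28], dtype=float)
c_body = np.array([5, 11, 16, 22, 30, 34, 36], dtype=float)  # ug/g (assumed)

def uptake_model(t, c0, a, ke):
    """One-compartment uptake: C(t) = c0 + (a/ke) * (1 - exp(-ke*t))."""
    return c0 + (a / ke) * (1.0 - np.exp(-ke * t))

(c0, a, ke), _ = curve_fit(uptake_model, t_days, c_body, p0=(5.0, 3.0, 0.1))

# 'a' is the initial slope of the uptake curve (ug/g per day), the dynamic
# bioavailability indicator proposed in the abstract.
print(f"initial uptake rate ~ {a:.2f} ug/g/day, elimination rate ~ {ke:.3f}/day")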

  9. Groundwater recharge rate and zone structure estimation using PSOLVER algorithm.

    Science.gov (United States)

    Ayvaz, M Tamer; Elçi, Alper

    2014-01-01

    The quantification of groundwater recharge is an important but challenging task in groundwater flow modeling because recharge varies spatially and temporally. The goal of this study is to present an innovative methodology to estimate groundwater recharge rates and zone structures for regional groundwater flow models. Here, the unknown recharge field is partitioned into a number of zones using Voronoi Tessellation (VT). The identified zone structure with the recharge rates is associated through a simulation-optimization model that couples MODFLOW-2000 and the hybrid PSOLVER optimization algorithm. Applicability of this procedure is tested on a previously developed groundwater flow model of the Tahtalı Watershed. Successive zone structure solutions are obtained in an additive manner and penalty functions are used in the procedure to obtain realistic and plausible solutions. One of these functions constrains the optimization by forcing the sum of recharge rates for the grid cells that coincide with the Tahtalı Watershed area to be equal to the areal recharge rate determined in the previous modeling by a separate precipitation-runoff model. As a result, a six-zone structure is selected as the best zone structure that represents the areal recharge distribution. Comparison to results of a previous model for the same study area reveals that the proposed procedure significantly improves model performance with respect to calibration statistics. The proposed identification procedure can be thought of as an effective way to determine the recharge zone structure for groundwater flow models, in particular for situations where tangible information about groundwater recharge distribution does not exist. © 2013, National Ground Water Association.

  10. Estimating paleoproductivity from organic carbon accumulation rates - problem or panacea

    Energy Technology Data Exchange (ETDEWEB)

    Arthur, M.A.; Leinen, M.S.; Cwienk, D.

    1987-05-01

    Organic carbon accumulation rates (OCAR; g/cm²/ky), calculated as the product of the weight fraction of organic carbon (OC), dry bulk density (DBD; g/cc), and sedimentation rate (SR; cm/ky), are the most uniform method of expressing variations in OC flux and/or preservation in marine environments. Several factors influence OC preservation; these include rate of OC production (flux from surface waters or other sources), water depth (WD), rates of OC decomposition in the water column, and SR (i.e., residence time of OC at the sediment/water interface). Understanding the patterns and variations of paleoproductivity (PP) that resulted in accumulation of relatively OC-rich strata depends upon differentiating between factors that enhance OC preservation and OC flux. Assuming that OC preservation is not a function of dissolved oxygen (DO) at levels above about 0.5 ml/l, Mueller and Suess in 1979 related OCAR to PP in modern marine settings by correcting for the effects of OC preservation as a function of SR. Zahn et al in 1986 improved on the Mueller-Suess equation, using a more comprehensive data set and taking into account the progressive diminution of the OC flux during transit through the water column. They have tested the efficacy of these PP equations as applied to several independently derived data sets for which OCAR could be calculated for the last 12 ky in cores from a variety of depths and a range of productivity in the Pacific and Atlantic Ocean basins. They also attempted to construct an equation that best predicted PP by fitting all available data in their data set for the modern ocean using stepwise multiple linear regression analysis of OC, DBD, SR, and WD data to predict PP and by using several methods of estimating productivity at each locale. Inclusion of larger data sets than previously used reduced the ability of any equation to predict PP.
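
    The bookkeeping behind OCAR is a simple product; the sketch below (with assumed example values, and a hypothetical helper name) just writes it out.

# OCAR = weight-fraction organic carbon x dry bulk density x sedimentation rate.
def ocar(oc_fraction, dbd_g_per_cc, sr_cm_per_ky):
    """Organic carbon accumulation rate in g/cm^2/ky."""
    return oc_fraction * dbd_g_per_cc * sr_cm_per_ky

# Example: 2 wt% OC, DBD 0.8 g/cc, SR 5 cm/ky  ->  0.08 g/cm^2/ky
print(f"{ocar(oc_fraction=0.02, dbd_g_per_cc=0.8, sr_cm_per_ky=5.0):.3f} g/cm^2/ky")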

  11. Gambling disorder: estimated prevalence rates and risk factors in Macao.

    Science.gov (United States)

    Wu, Anise M S; Lai, Mark H C; Tong, Kwok-Kit

    2014-12-01

    An excessive, problematic gambling pattern has been regarded as a mental disorder in the Diagnostic and Statistical Manual for Mental Disorders (DSM) for more than 3 decades (American Psychiatric Association [APA], 1980). In this study, its latest prevalence in Macao (one of very few cities with legalized gambling in China and the Far East) was estimated with 2 major changes in the diagnostic criteria, suggested by the 5th edition of DSM (APA, 2013): (a) removing the "Illegal Act" criterion, and (b) lowering the threshold for diagnosis. A random, representative sample of 1,018 Macao residents was surveyed with a phone poll design in January 2013. After the 2 changes were adopted, the present study showed that the estimated prevalence rate of gambling disorder was 2.1% of the Macao adult population. Moreover, the present findings also provided empirical support to the application of these 2 recommended changes when assessing symptoms of gambling disorder among Chinese community adults. Personal risk factors of gambling disorder, namely being male, having low education, a preference for casino gambling, as well as high materialism, were identified.

  12. Estimation of mortality rates in stage-structured population

    CERN Document Server

    Wood, Simon N

    1991-01-01

    The stated aims of the Lecture Notes in Biomathematics allow for work that is "unfinished or tentative". This volume is offered in that spirit. The problem addressed is one of the classics of statistical ecology, the estimation of mortality rates from stage-frequency data, but in tackling it we found ourselves making use of ideas and techniques very different from those we expected to use, and in which we had no previous experience. Specifically we drifted towards consideration of some rather specific curve and surface fitting and smoothing techniques. We think we have made some progress (otherwise why publish?), but are acutely aware of the conceptual and statistical clumsiness of parts of the work. Readers with sufficient expertise to be offended should regard the monograph as a challenge to do better. The central theme in this book is a somewhat complex algorithm for mortality estimation (detailed at the end of Chapter 4). Because of its complexity, the job of implementing the method is intimidating. Any r...

  13. [Medpor plus titanium mesh implant in the repair of orbital blowout fractures].

    Science.gov (United States)

    Han, Xiao-hui; Zhang, Jia-yu; Cai, Jian-qiu; Shi, Ming-guang

    2011-05-10

    To study the efficacy of porous polyethylene (Medpor) plus titanium mesh sheets in the repair of orbital blowout fractures, a total of 20 patients underwent open surgical reduction with the combined use of Medpor and titanium mesh. They were followed up for an average period of 14.5 months (range: 9-18). There was no infection or extrusion of the Medpor or titanium mesh during the follow-up period. There was no instance of decreased visual acuity postoperatively, and all cases of enophthalmos were corrected. The postoperative protrusion degree of the two eyes was almost identical, differing by less than 2 mm. The movement of the eyeballs was satisfactory in all directions. Diplopia disappeared in 18 cases, for a cure rate of 90%; 1 case improved and 1 case persisted. A Medpor plus titanium mesh implant is a safe and effective treatment for the repair of orbital blowout fractures.

  14. The predictive factors of diplopia and extraocular movement limitations in isolated pure blow-out fracture.

    Science.gov (United States)

    Kasaee, Abolfazl; Mirmohammadsadeghi, Arash; Kazemnezhad, Fatemeh; Eshraghi, Bahram; Akbari, Mohammad Reza

    2017-03-01

    To evaluate the predictive factors for development of diplopia and extraocular muscle movement (EOM) limitations in patients with isolated pure blow-out fracture, one hundred thirty-two patients with isolated pure blow-out fracture were included. The diagnosis was made with computed tomography scan. Possible predictive factors were analyzed with logistic regression. The cases that underwent surgery were assigned to the surgical group, and the other cases were assigned to the non-surgical group. Receiver operating characteristic (ROC) curve analysis was used in the surgical group to evaluate the power of the time interval from trauma to surgery to predict persistence of diplopia and EOM limitation at 6 months postoperatively. At the first visit, 45 of 60 cases (75%) in the surgical group and 15 of 72 cases (20.8%) in the nonsurgical group had diplopia. After 6 months of follow-up, 7 cases (11.7%) in the surgical group and 1 case (1.4%) in the nonsurgical group had persistent diplopia. Type of fracture was significantly associated with first-visit diplopia (P = 0.01) and EOM limitations (P = 0.06). In the surgical group, type of fracture (P = 0.02 for both) and the time interval from trauma to surgery (P = 0.006 and 0.004, respectively) were significantly associated with 1-month diplopia and EOM limitations. Only the time interval from trauma to surgery (P = 0.04) was significantly associated with 3-month EOM limitation. In the ROC curve analysis, the risk of 6-month postoperative diplopia and EOM limitation was reduced if surgery was done before 4.5 days (sensitivity = 87.5%, specificity = 61.3%) and 7.5 days (sensitivity = 87.5%, specificity = 66.9%), respectively. In the early postoperative period, a higher rate of diplopia was observed in patients with combined inferior and medial wall fractures and longer time intervals from trauma to surgery. The best time for blow-out fracture surgery was within 4.5 days after the trauma.
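
    The cut-off selection step is a standard ROC exercise; the sketch below runs it on invented data, using days from trauma to surgery as the score and persistent diplopia as the outcome, and picks a threshold by Youden's index (the study's actual selection criterion is not stated in the abstract).

# ROC-based choice of a surgical-delay cut-off on synthetic data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

days_to_surgery = np.array([2, 3, 3, 4, 5, 5, 6, 7, 8, 10, 12, 14], dtype=float)
persistent_diplopia = np.array([0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1])  # 1 = persistent

fpr, tpr, thresholds = roc_curve(persistent_diplopia, days_to_surgery)
youden = tpr - fpr
best = int(np.argmax(youden))

print(f"AUC = {roc_auc_score(persistent_diplopia, days_to_surgery):.2f}")
print(f"best cut-off ~ {thresholds[best]:.1f} days "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")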

  15. Complete inferior rectus muscle transection secondary to orbital blowout fracture.

    Science.gov (United States)

    Carrere, Jonathan M; Lewis, Kyle T

    2018-01-05

    Complete extraocular muscle transection is uncommon in the setting of blunt trauma. We report the case of a 53-year-old man who developed diplopia after hitting his face directly on a concrete slab in a fall. On examination, he had a right hypertropia with a complete infraduction deficit. A CT scan of the face showed an orbital floor blowout fracture with complete inferior rectus transection. On surgical exploration, the distal and proximal ends of the muscle were identified and sutured together, and the floor fracture was repaired. At his post-operative visits, the patient had a persistent infraduction deficit but subjectively improved diplopia.

  16. Estimating Examination Failure Rates and Reliability Prior to Administration.

    Science.gov (United States)

    McIntosh, Vergil M.

    Using estimates of item ease and item discrimination, procedures are provided for computing estimates of the reliability and percentage of failing scores for tests assembled from these items. Two assumptions are made: that the average item coefficient will be approximately equal to the average of the estimated coefficients and that the score…

  17. Dynamic Changes of Typical Blowouts Based on High-Resolution Data: A Case Study in Hulunbuir Sandy Land, China

    Directory of Open Access Journals (Sweden)

    Yi Yang

    2017-01-01

    Blowouts are an important ground indication of wind-sand activity in the Hulunbuir grassland. They include two basic geomorphologic units, erosion depression and sand deposition, and three typical morphological types: saucer type, trough type, and compound type. In this study, the dynamic changes of typical blowouts within the past decade were analyzed via multiperiod high-resolution remote sensing images. RTK was used to repeatedly measure the blowouts to obtain high-precision 3D terrain data in 2010, 2011, and 2012. Short-term dynamic changes in 3D blowout morphology were carefully analyzed to discover the following. (1) From 2002 to 2012, the depressions of typical blowouts exhibited downwind extension and lateral expansion trends as they continuously grew in size. Regarding the sand deposition zones, those of the saucer blowout grew continuously, while those of the trough and compound blowouts fluctuated between growth and contraction. (2) The erosion depression of saucer blowouts eroded downward and spread horizontally; that of trough blowouts first accumulated then eroded, but also spread horizontally. The erosion depression of compound blowouts exhibited horizontal spreading accompanied by bottom accumulation. The sand deposition zones of all three types of blowouts exhibited decreasing length with increasing width and height.

  18. Estimated glomerular filtration rate in the nephrotic syndrome.

    Science.gov (United States)

    Hofstra, Julia M; Willems, Johannes L; Wetzels, Jack F M

    2011-02-01

    Plasma creatinine concentration and creatinine-based equations are most commonly used as markers of glomerular filtration rate (GFR). The abbreviated MDRD formula is considered the best available formula. Altered renal handling of creatinine, which may occur in the nephrotic syndrome, will invalidate creatinine-based formulas. We have evaluated the abbreviated MDRD formula in a large cohort of patients with proteinuria. Data on a cohort of patients with glomerular diseases were available from a large database. We have studied the relationship between estimated GFR (MDRD formula), and plasma cystatin C (CysC) and plasma beta-2-microglobulin (β2m) as markers of GFR. The final analysis included 142 patients (93 M/49 F), median age 48 years (±15), plasma creatinine 101 μmol/L (42-368), plasma albumin 28.0 g/L (10.0-47.0), proteinuria 6.4 g/day (0.03-37.9), eGFR-MDRD4 64 mL/min/1.73 m2 (15-165), β2m 3.43 mg/L (0.7-13.8) and CysC 1.14 mg/L (0.56-4.00). As expected, we observed a hyperbolic relationship between eGFR and both β2m and CysC. In multivariable analysis, plasma albumin concentration proved to be the most important predictor of the relationship between eGFR and both CysC and β2m. In the presence of hypoalbuminaemia, eGFR was ~30-40% higher at equal levels of plasma CysC or β2m. Conclusions were similar when using the recently developed CKD-EPI formula. Plasma albumin concentration did not affect the relationship between eGFR estimated by the six-variable original MDRD formula and β2m. Our data point to discrepancies between eGFR using the six-variable MDRD formula and eGFR using the abbreviated MDRD formula as well as the CKD-EPI formula in patients with hypoalbuminaemia. One should be aware of possible limitations of creatinine-based eGFR formulas in patients with a nephrotic syndrome.
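
    For reference, a sketch of the abbreviated (4-variable) MDRD calculation discussed above; the coefficient set used here (186, -1.154, -0.203, 0.742 for women) is one commonly cited form and is an assumption on my part, since the abstract does not spell the equation out.

# Abbreviated MDRD eGFR, with creatinine converted from umol/L to mg/dL.
def egfr_mdrd4(creatinine_umol_l, age_years, female):
    """Abbreviated MDRD eGFR in mL/min/1.73 m^2 (assumed coefficient set)."""
    scr_mg_dl = creatinine_umol_l / 88.4
    egfr = 186.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    return egfr

# Example using the cohort's median creatinine (101 umol/L) and age (48 years).
print(f"{egfr_mdrd4(101, 48, female=False):.0f} mL/min/1.73 m^2")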

  19. Standardizing estimates of the Plasmodium falciparum parasite rate

    Directory of Open Access Journals (Sweden)

    Smith David L

    2007-09-01

    Background: The Plasmodium falciparum parasite rate (PfPR) is a commonly reported index of malaria transmission intensity. PfPR rises after birth to a plateau before declining in older children and adults. Studies of populations with different age ranges generally report average PfPR, so age is an important source of heterogeneity in reported PfPR data. This confounds simple comparisons of PfPR surveys conducted at different times or places. Methods: Several algorithms for standardizing PfPR were developed using 21 studies that stratify PfPR by age in detail. An additional 121 studies were found that recorded PfPR from the same population over at least two different age ranges; these paired estimates were used to evaluate the algorithms. The best algorithm was judged to be the one that described most of the variance when converting the PfPR pairs from one age range to another. Results: The analysis suggests that the relationship between PfPR and age is predictable across the observed range of malaria endemicity. PfPR reaches a peak after about two years and remains fairly constant in older children until age ten before declining throughout adolescence and adulthood. The PfPR pairs were poorly correlated; using one to predict the other would explain only 5% of the total variance. By contrast, the PfPR predicted by the best algorithm explained 72% of the variance. Conclusion: The PfPR in older children is useful for standardization because it has good biological, epidemiological and statistical properties. It is also historically consistent with the classical categories of hypoendemic, mesoendemic and hyperendemic malaria. This algorithm provides a reliable method for standardizing PfPR for the purposes of comparing studies and mapping malaria endemicity. The scripts for doing so are freely available to all.

  20. Estimating mental fatigue based on electroencephalogram and heart rate variability

    Science.gov (United States)

    Zhang, Chong; Yu, Xiaolin

    2010-01-01

    The effects of a long-term mental arithmetic task on psychology are investigated by subjective self-report measures and an action performance test. Based on electroencephalogram (EEG) and heart rate variability (HRV), the impacts of prolonged cognitive activity on the central nervous system and autonomic nervous system are observed and analyzed. Wavelet packet parameters of EEG and power spectral indices of HRV are combined to estimate the change in mental fatigue. Wavelet packet parameters of EEG that change significantly are then extracted as features of brain activity in different mental fatigue states, and a support vector machine (SVM) algorithm is applied to differentiate two mental fatigue states. The experimental results show that the long-term mental arithmetic task induces mental fatigue. The wavelet packet parameters of EEG and power spectral indices of HRV are strongly correlated with mental fatigue. The predominant activity of the subjects' autonomic nervous system turns from parasympathetic to sympathetic activity after the task. Moreover, the slow waves of EEG increase, while the fast waves of EEG and the degree of disorder of the brain decrease compared with the pre-task state. The SVM algorithm can effectively differentiate the two mental fatigue states, achieving a maximum classification accuracy of 91%, and could be a promising tool for the evaluation of mental fatigue. Fatigue, especially mental fatigue, is a common phenomenon in modern life and a persistent occupational hazard for professionals. Mental fatigue is usually accompanied by a sense of weariness, reduced alertness, and reduced mental performance, which can lead to accidents, decrease workplace productivity and harm health. Therefore, the evaluation of mental fatigue is important for occupational risk protection, productivity, and occupational health.
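
    The classification step reduces to a standard two-class SVM on feature vectors; the sketch below uses random stand-ins for the wavelet-packet and HRV features, so it shows the workflow only and does not reproduce the reported 91% accuracy.

# Two-class SVM on synthetic "pre-task" vs "post-task" feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_per_class, n_features = 40, 6
pre  = rng.normal(0.0, 1.0, size=(n_per_class, n_features))
post = rng.normal(0.8, 1.0, size=(n_per_class, n_features))   # shifted = "fatigued"

X = np.vstack([pre, post])
y = np.array([0] * n_per_class + [1] * n_per_class)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.0%}")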

  1. Hazard rate estimation in nonparametric regression with censored data

    OpenAIRE

    VAN KEILEGOM, Ingrid; VERAVERBEKE, Noel

    2001-01-01

    Consider a regression model in which the responses are subject to random right censoring. In this model, Beran studied the nonparametric estimation of the conditional cumulative hazard function and the corresponding cumulative distribution function. The main idea is to use smoothing in the covariates. Here we study asymptotic properties of the corresponding hazard function estimator obtained by convolution smoothing of Beran's cumulative hazard estimator. We establish asymptotic expressions ...
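
    To get a feel for convolution smoothing of a cumulative hazard estimator, the sketch below smooths Nelson-Aalen increments with a Gaussian kernel on an invented right-censored sample; this is the unconditional case, without Beran's covariate smoothing.

# Kernel-smoothed hazard rate from Nelson-Aalen increments (toy data).
import numpy as np

times  = np.array([2.0, 3.0, 3.5, 5.0, 6.0, 7.5, 8.0, 9.0, 11.0, 12.0])
events = np.array([1,   1,   0,   1,   1,   0,   1,   1,   0,    1])  # 1=event, 0=censored

order = np.argsort(times)
times, events = times[order], events[order]
n = len(times)
at_risk = n - np.arange(n)              # subjects still at risk at each time
increments = events / at_risk           # Nelson-Aalen jumps dH(t_i)

def hazard(t, bandwidth=2.0):
    """Kernel-smoothed hazard: Gaussian kernels weighted by the dH jumps."""
    k = np.exp(-0.5 * ((t - times) / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return float(np.sum(k * increments))

for t in (3.0, 6.0, 9.0):
    print(f"hazard estimate at t={t}: {hazard(t):.3f}")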

  2. Interaction of hydraulic and buckling mechanisms in blowout fractures.

    Science.gov (United States)

    Nagasao, Tomohisa; Miyamoto, Junpei; Jiang, Hua; Tamaki, Tamotsu; Kaneko, Tsuyoshi

    2010-04-01

    The etiology of blowout fractures is generally attributed to 2 mechanisms--increase in the pressure of the orbital contents (the hydraulic mechanism) and direct transmission of impacts on the orbital walls (the buckling mechanism). The present study aims to elucidate whether or not an interaction exists between these 2 mechanisms. We performed a simulation experiment using 10 Computer-Aided-Design skull models. We applied destructive energy to the orbits of the 10 models in 3 different ways. First, to simulate pure hydraulic mechanism, energy was applied solely on the internal walls of the orbit. Second, to simulate pure buckling mechanism, energy was applied solely on the inferior rim of the orbit. Third, to simulate the combined effect of the hydraulic and buckling mechanisms, energy was applied both on the internal wall of the orbit and inferior rim of the orbit. After applying the energy, we calculated the areas of the regions where fracture occurred in the models. Thereafter, we compared the areas among the 3 energy application patterns. When the hydraulic and buckling mechanisms work simultaneously, fracture occurs on wider areas of the orbital walls than when each of these mechanisms works separately. The hydraulic and buckling mechanisms interact, enhancing each other's effect. This information should be taken into consideration when we examine patients in whom blowout fracture is suspected.

  3. Light rare earth element depletion during Deepwater Horizon blowout methanotrophy.

    Science.gov (United States)

    Shiller, A M; Chan, E W; Joung, D J; Redmond, M C; Kessler, J D

    2017-09-04

    Rare earth elements have generally not been thought to have a biological role. However, recent work has demonstrated that the light REEs (LREEs: La, Ce, Pr, and Nd) are essential for at least some methanotrophs, being co-factors in the XoxF type of methanol dehydrogenase (MDH). We show here that dissolved LREEs were significantly removed in a submerged plume of methane-rich water during the Deepwater Horizon (DWH) well blowout. Furthermore, incubation experiments conducted with naturally methane-enriched waters from hydrocarbon seeps in the vicinity of the DWH wellhead also showed LREE removal concurrent with methane consumption. Metagenomic sequencing of incubation samples revealed that LREE-containing MDHs were present. Our field and laboratory observations provide further insight into the biochemical pathways of methanotrophy during the DWH blowout. Additionally, our results are the first observations of direct biological alteration of REE distributions in oceanic systems. In view of the ubiquity of LREE-containing MDHs in oceanic systems, our results suggest that biological uptake of LREEs is an overlooked aspect of the oceanic geochemistry of this group of elements previously thought to be biologically inactive and an unresolved factor in the flux of methane, a potent greenhouse gas, from the ocean.

  4. Fundamentals and the Equilibrium of Real Exchange Rate of an Emerging Economy: Estimating the Exchange Rate Misalignment in Malaysia

    National Research Council Canada - National Science Library

    Jauhari Dahalan; Mohammed Umar; Hussin Abdullah

    2016-01-01

    .... Based on the suggestion of the weak exogeneity and unit vector analysis, the study estimates the equilibrium and sustainable equilibrium real exchange rate based on the behavioural equilibrium exchange rate (BEER...

  5. Estimated glomerular filtration rate at initiation of hemodialysis in a ...

    African Journals Online (AJOL)

    Patients with acute kidney injury (AKI) or acute-on-chronic kidney disease were excluded. GFR was estimated using CKD-EPI formula. Early dialysis was defined as dialysing at an estimated GFR of >10ml/min. Results: A total of 78 patients initiated haemodialysis during the period of review. Mean age was 45±18 years ...

  6. Simple Repair of a Blow-Out Fracture by the Modified Caldwell-Luc Approach.

    Science.gov (United States)

    Park, Min Woo; Kim, Soung Min; Amponsah, Emmanuel Kofi; Lee, Suk Keun

    2015-06-01

    Here we report a patient with a blow-out fracture of the orbital floor that was treated by an intraoral transmaxillary approach. This 38-year-old man suffered a sudden blow to the periorbital area, which caused prolapse of the orbital contents into the maxillary sinus. The modified Caldwell-Luc approach was used to repair the orbital blow-out fracture, and the maxillary sinus was packed with Frazin gauze for 7 days to prevent recurrence of the prolapse. This was an easy and minimally invasive technique for the management of a blow-out fracture of the orbital floor.

  7. Estimates of rate of passage, intake and apparent digestibility of ...

    African Journals Online (AJOL)

    using the n-alkane markers, gave good estimates of dry matter intake, e.g. for fresh ryegrass the measured intake was 8.86±0.23 kg and the estimated intakes from the C31:C32 ratio, 7.9±1.9 kg and from the C32:C33 ratio,. 8.3±1.4 kg. However, the effect of the higher recovery of the dosed marker needs further investigation.

  8. Estimation of alga growth stage and lipid content growth rate

    Science.gov (United States)

    Embaye, Tsegereda N. (Inventor); Trent, Jonathan D. (Inventor)

    2012-01-01

    Method and system for estimating a growth stage of an alga in an ambient fluid. Measured light beam absorption or reflection values through or from the alga and through an ambient fluid, in each of two or more wavelength sub-ranges, are compared with reference light beam absorption values for corresponding wavelength sub-ranges for in each alga growth stage to determine (1) which alga growth stage, if any, is more likely and (2) whether estimated lipid content of the alga is increasing or has peaked. Alga growth is preferably terminated when lipid content has approximately reached a maximum value.

  9. Redefinition and global estimation of basal ecosystem respiration rate

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Wenping [College of Global Change and Earth System Science, Beijing Normal University, Beijing, China; Luo, Yiqi [Department of Botany and Microbiology, University of Oklahoma, Norman, Oklahoma, USA; Li, Xianglan [College of Global Change and Earth System Science, Beijing Normal University, Beijing, China; Liu, Shuguang; Yu, Guirui [Key Laboratory of Ecosystem Network Observation and Modeling, Synthesis Research Center of Chinese Ecosystem Research Network, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, China; Zhou, Tao [State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Beijing, China; Bahn, Michael [Institute of Ecology, University of Innsbruck, Innsbruck, Austria; Black, Andy [Faculty of Land and Food Systems, University of British Columbia, Vancouver, B. C., Canada; Desai, Ankur R. [Atmospheric and Oceanic Sciences Department, Center for Climatic Research, Nelson Institute for Environmental Studies, University of Wisconsin-Madison, Madison, Wisconsin, USA; Cescatti, Alessandro [Institute for Environment and Sustainability, Joint Research Centre, European Commission, Ispra, Italy; Marcolla, Barbara [Sustainable Agro-ecosystems and Bioresources Department, Fondazione Edmund Mach-IASMA Research and Innovation Centre, San Michele all' Adige, Italy; Jacobs, Cor [Alterra, Earth System Science-Climate Change, Wageningen University, Wageningen, Netherlands; Chen, Jiquan [Department of Earth, Ecological, and Environmental Sciences, University of Toledo, Toledo, Ohio, USA; Aurela, Mika [Climate and Global Change Research, Finnish Meteorological Institute, Helsinki, Finland; Bernhofer, Christian [Chair of Meteorology, Institute of Hydrology and Meteorology, Technische Universität Dresden, Dresden, Germany; Gielen, Bert [Department of Biology, University of Antwerp, Wilrijk, Belgium; Bohrer, Gil [Department of Civil, Environmental, and Geodetic Engineering, Ohio State University, Columbus, Ohio, USA; Cook, David R. [Climate Research Section, Environmental Science Division, Argonne National Laboratory, Argonne, Illinois, USA; Dragoni, Danilo [Department of Geography, Indiana University, Bloomington, Indiana, USA; Dunn, Allison L. [Department of Physical and Earth Sciences, Worcester State College, Worcester, Massachusetts, USA; Gianelle, Damiano [Sustainable Agro-ecosystems and Bioresources Department, Fondazione Edmund Mach-IASMA Research and Innovation Centre, San Michele all' Adige, Italy; Grünwald, Thomas [Chair of Meteorology, Institute of Hydrology and Meteorology, Technische Universität Dresden, Dresden, Germany; Ibrom, Andreas [Risø DTU National Laboratory for Sustainable Energy, Biosystems Division, Technical University of Denmark, Roskilde, Denmark; Leclerc, Monique Y. 
[Department of Crop and Soil Sciences, College of Agricultural and Environmental Sciences, University of Georgia, Griffin, Georgia, USA; Lindroth, Anders [Geobiosphere Science Centre, Physical Geography and Ecosystems Analysis, Lund University, Lund, Sweden; Liu, Heping [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman, Washington, USA; Marchesini, Luca Belelli [Department for Innovation in Biological, Agro-Food and Forest Systems, University of Tuscia, Viterbo, Italy; Montagnani, Leonardo; Pita, Gabriel [Department of Mechanical Engineering, Instituto Superior Técnico, Lisbon, Portugal; Rodeghiero, Mirco [Sustainable Agro-ecosystems and Bioresources Department, Fondazione Edmund Mach-IASMA Research and Innovation Centre, San Michele all' Adige, Italy; Rodrigues, Abel [Unidade de Silvicultura e Produtos Florestais, Instituto Nacional dos Recursos Biológicos, Oeiras, Portugal; Starr, Gregory [Department of Biological Sciences, University of Alabama, Tuscaloosa, Alabama, USA; Stoy, Paul C. [Department of Land Resources and Environmental Sciences, Montana State University, Bozeman, Montana, USA

    2011-10-13

    Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR largely due to the lack of a functional description for BR. In this study, we redefined BR to be ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data, from 79 research sites located at latitudes ranging from ~3°S to ~70°N. Results showed that mean annual ER rate closely matches ER rate at mean annual temperature. Incorporation of site-specific BR into global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual

  10. Estimation of Cessation Rates among Danish Users of Benzodiazepines

    DEFF Research Database (Denmark)

    Støvring, Henrik; Gasse, Christiane

    Background: Widespread and longterm use of benzodiazepines constitute a public health problem. Health care authorities hence advice that use should not exceed three months, in particular for the elderly and patients with a past diagnosis of drug addiction. Objectives: Estimate the shape of cessat...

  11. Convergence rates of Laplace-transform based estimators

    NARCIS (Netherlands)

    A.V. den Boer (Arnoud); M.R.H. Mandjes (Michel)

    2017-01-01

    This paper considers the problem of estimating probabilities of the form ℙ(Y ≤ w), for a given value of w, in the situation that a sample of i.i.d. observations X1,..., Xn of X is available, and where we explicitly know a functional relation between the Laplace transforms of the

  12. Combining heart rate and accelerometer data to estimate physical fitness

    NARCIS (Netherlands)

    Tönis, Thijs; Vollenbroek-Hutten, Miriam Marie Rosé; Hermens, Hermanus J.

    2012-01-01

    Monitoring changes in physical fitness is relevant in many conditions and groups of patients, but its estimation demands substantial effort from the person, personnel and equipment. Besides that, present (sub) maximal exercise tests give a momentary fitness score, which depends on many (external)

  13. Bias-Variance Tradeoffs in Recombination Rate Estimation.

    Science.gov (United States)

    Stone, Eric A; Singh, Nadia D

    2016-02-01

    In 2013, we and coauthors published a paper characterizing rates of recombination within the 2.1-megabase garnet-scalloped (g-sd) region of the Drosophila melanogaster X chromosome. To extract the signal of recombination in our high-throughput sequence data, we adopted a nonparametric smoothing procedure, reducing variance at the cost of biasing individual recombination rates. In doing so, we sacrificed accuracy to gain precision-precision that allowed us to detect recombination rate heterogeneity. Negotiating the bias-variance tradeoff enabled us to resolve significant variation in the frequency of crossing over across the garnet-scalloped region. Copyright © 2016 by the Genetics Society of America.

  14. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate the risk. This study introduces two kinds of distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example in CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, they have a fatal defect: they cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds shown by Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution better imitates the distribution of bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation confirms that it can fit the curve of recovery rates of loans and bonds. Using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is therefore optimal in credit risk management.
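
    A quick way to see the point is to fit both estimators to a synthetic bimodal recovery-rate sample and compare the fitted densities; everything below (the sample, cluster means, default KDE bandwidth) is an assumption for illustration.

# Beta fit vs Gaussian kernel density on a bimodal recovery-rate sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic bimodal recovery rates: a low-recovery and a high-recovery cluster.
recoveries = np.clip(np.concatenate([rng.normal(0.15, 0.07, 300),
                                     rng.normal(0.80, 0.08, 300)]), 0.001, 0.999)

a, b, loc, scale = stats.beta.fit(recoveries, floc=0, fscale=1)  # Beta on [0, 1]
kde = stats.gaussian_kde(recoveries)

print("x      beta pdf   kde pdf")
for x in np.linspace(0.01, 0.99, 5):
    print(f"{x:.2f}   {stats.beta.pdf(x, a, b):8.3f}  {kde(x)[0]:8.3f}")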

  15. Loss Rate Estimation in Marine Corps Officer Manpower Models.

    Science.gov (United States)

    1985-09-01

    Formulas and mathematical risk calculations relating to the James-Stein and empirical Bayes estimators. Cross validation procedures were utilized to ...

  16. Clinical use of estimated glomerular filtration rate for evaluation of kidney function

    DEFF Research Database (Denmark)

    Broberg, Bo; Lindhardt, Morten; Rossing, Peter

    2013-01-01

    Estimating glomerular filtration rate by the Modification of Diet in Renal Disease or Chronic Kidney Disease Epidemiology Collaboration formulas gives a reasonable estimate of kidney function for, e.g., classification of chronic kidney disease. Additionally, the estimated glomerular filtration rate...

  17. Accuracy Rates of Ancestry Estimation by Forensic Anthropologists Using Identified Forensic Cases.

    Science.gov (United States)

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2017-07-01

    A common task in forensic anthropology involves the estimation of the ancestry of a decedent by comparing their skeletal morphology and measurements to skeletons of individuals from known geographic groups. However, the accuracy rates of ancestry estimation methods in actual forensic casework have rarely been studied. This article uses 99 forensic cases with identified skeletal remains to develop accuracy rates for ancestry estimations conducted by forensic anthropologists. The overall rate of correct ancestry estimation from these cases is 90.9%, which is comparable to most research-derived rates and those reported by individual practitioners. Statistical tests showed no significant difference in accuracy rates depending on examiner education level or on the estimated or identified ancestry. More recent cases showed a significantly higher accuracy rate. The incorporation of metric analyses into the ancestry estimate in these cases led to a higher accuracy rate. © 2017 American Academy of Forensic Sciences.

  18. Modeling the key factors that could influence the diffusion of CO2 from a wellbore blowout in the Ordos Basin, China.

    Science.gov (United States)

    Li, Qi; Shi, Hui; Yang, Duoxing; Wei, Xiaochen

    2017-02-01

    Carbon dioxide (CO2) blowout from a wellbore is regarded as a potential environmental risk of a CO2 capture and storage (CCS) project. In this paper, an assumed blowout of a wellbore was examined for China's Shenhua CCS demonstration project. The significant factors that influenced the diffusion of CO2 were identified by using a response surface method with the Box-Behnken experiment design. The numerical simulations showed that the mass emission rate of CO2 from the source and the ambient wind speed have significant influence on the area of interest (the area with CO2 concentration above 30,000 ppm). There is a strong positive correlation between the mass emission rate and the area of interest, but there is a strong negative correlation between the ambient wind speed and the area of interest. Several other variables have very little influence on the area of interest, e.g., the temperature of CO2, ambient temperature, relative humidity, and stability class values. Due to the weather conditions at the Shenhua CCS demonstration site at the time of the modeled CO2 blowout, the largest diffusion distance of CO2 in the downwind direction did not exceed 200 m along the centerline. When the ambient wind speed is in the range of 0.1-2.0 m/s and the mass emission rate is in the range of 60-120 kg/s, the diffusion of CO2 is at the most dangerous level (i.e., almost all Grade Four marks in the risk matrix). Therefore, if the injection of CO2 takes place in a region that has relatively low perennial wind speed, special attention should be paid to the formulation of pre-planned emergency measures in case there is a leakage accident. The proposed risk matrix that classifies and grades blowout risks can be used as a reference for the development of appropriate regulations. This work may offer some indicators in developing risk profiles and emergency responses for CO2 blowouts.
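
    The response-surface step amounts to fitting a low-order polynomial to a designed set of runs; the sketch below fits a full quadratic in the two dominant factors using coded levels and an invented response, just to show the mechanics (the real study used a Box-Behnken design over more factors and a dispersion model for the response).

# Quadratic response surface for "area of interest" vs two coded factors.
import numpy as np

# Coded levels (-1, 0, +1) for emission rate (E) and wind speed (W),
# with an assumed response: area of interest in km^2.
E = np.array([-1, -1,  1,  1, -1, 1,  0, 0, 0])
W = np.array([-1,  1, -1,  1,  0, 0, -1, 1, 0])
area = np.array([3.0, 1.2, 7.5, 3.9, 2.0, 5.5, 5.0, 2.4, 3.6])  # km^2 (assumed)

# Design matrix for a full quadratic model: 1, E, W, E*W, E^2, W^2
X = np.column_stack([np.ones_like(E), E, W, E * W, E**2, W**2])
coef, *_ = np.linalg.lstsq(X, area, rcond=None)

names = ["intercept", "emission", "wind", "emission*wind", "emission^2", "wind^2"]
for name, c in zip(names, coef):
    print(f"{name:>14s}: {c:+.2f}")   # emission effect positive, wind effect negative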

  19. Estimating the effects of Exchange and Interest Rates on Stock ...

    African Journals Online (AJOL)

    The interest rate also showed a negative relationship but insignificant at the chosen 5% level of significance. This study recommended that policy makers should put in place measures that will ensure a stable macroeconomic environment since an unstable macroeconomic environment can deter investors and make them ...

  20. Estimated glomerular filtration rate in the nephrotic syndrome

    NARCIS (Netherlands)

    Hofstra, J.M.; Willems, J.L.; Wetzels, J.F.M.

    2011-01-01

    BACKGROUND: Plasma creatinine concentration and creatinine-based equations are most commonly used as markers of glomerular filtration rate (GFR). The abbreviated MDRD formula is considered the best available formula. Altered renal handling of creatinine, which may occur in the nephrotic syndrome,

  1. Evaluation of methods for estimating the metabolic rate according to ...

    African Journals Online (AJOL)

    A study was made in North Eastern Zimbabwe during the hot season to evaluate ISO standard methods for heat stress risk determination in manual forestry work. The differences between the metabolic rate assessments by three methods of ISO 8996 were evaluated, as well as the effects of these variations in the ...

  2. Estimated migration rates under scenarios of global climate change.

    Science.gov (United States)

    Jay R. Malcolm; Adam Markham; Ronald P. Neilson; Michael. Oaraci

    2002-01-01

    Greenhouse-induced warming and resulting shifts in climatic zones may exceed the migration capabilities of some species. We used fourteen combinations of General Circulation Models (GCMs) and Global Vegetation Models (GVMs) to investigate possible migration rates required under CO2-doubled climatic forcing.

  3. [Estimation of glomerular filtration rate using weight/creatinine formula].

    Science.gov (United States)

    Fernández-Fresnedo, Gema; Martín de Francisco, Angel Luis; Rodrigo Calabia, Emilio; Ruiz San Millán, Juan Carlos; Sanz de Castro, Saturnino; Arias Rodríguez, Manuel

    2003-04-12

    The correct management of patients with chronic renal disease depends on an early diagnosis. The aim of this study was to evaluate the usefulness, in daily clinical practice, of the weight/creatinine formula as an indirect measurement of glomerular filtration. A total of 1,025 ambulatory patients were referred to the Nephrology Laboratory for basic blood and urine analysis. Creatinine clearance was calculated with the standard formula. A good correlation was observed between the creatinine clearance adjusted for body surface area and that estimated by the weight/creatinine formula, especially when creatinine levels were between 1.5-3 mg/dl and patients were older than 60 years. The mean difference between the two methods was 6.3 (14.5) ml/min for males and 2.4 (10.5) ml/min for females. The weight/creatinine formula had a sensitivity of 91% and a specificity of 80% for detecting a clearance below 50 ml/min. The weight/creatinine formula underestimates the clearance for normal creatinine values but fits quite well for creatinine levels between 1.5-3 mg/dl, mainly in patients older than 60 years. Although the estimation of clearance through this formula can be inaccurate, in most cases this is clinically irrelevant. Moreover, such a simple formula could avoid potential mistakes that arise when renal function is evaluated from serum creatinine alone.

  4. Rate of convergence of k-step Newton estimators to efficient likelihood estimators

    Science.gov (United States)

    Steve Verrill

    2007-01-01

    We make use of Cramer conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...

  5. Applying Hyperspectral Imaging to Heart Rate Estimation for Adaptive Automation

    Science.gov (United States)

    2013-03-01

    ...intrusive and not practical for day-to-day operations. Several imaging techniques have been developed to remotely measure heart rate; however, each ...

  6. Three-dimensional æolian dynamics within a bowl blowout during offshore winds: Greenwich Dunes, Prince Edward Island, Canada

    Science.gov (United States)

    Hesp, Patrick A.; Walker, Ian J.

    2012-01-01

    This paper examines the æolian dynamics of a deep bowl blowout within the foredune of the Greenwich Dunes, on the northeastern shore of Prince Edward Island, Canada. Masts of cup anemometers and sonic anemometers were utilized to measure flow velocities and directions during a strong regional ESE (offshore) wind event. The flow across the blowout immediately separated at the upwind rim crest, and within the blowout was strongly reversed. High, negative vertical flows occurred down the downwind (but seaward) vertical scarp which projected into the separation envelope and topographically forced flow back into the blowout. A pronounced, accelerated jet flow existed near the surface across the blowout basin, and the flow exhibited a complex, anti-clockwise structure with the near-surface flow following the contours around the blowout basin and lower slopes. Significant æolian sediment transport occurred across the whole bowl basin and sediment was delivered by saltation and suspension out of the blowout to the east. This study demonstrates that strong offshore winds produce pronounced topographically forced flow steering, separation, reversal, and more complex three-dimensional motions within a bowl blowout, and that such winds within a bowl blowout play a notable role in transporting sediment within and beyond deep topographic hollows in the foredune.

  7. Prediction of glomerular filtration rate in cancer patients by an equation for Japanese estimated glomerular filtration rate.

    Science.gov (United States)

    Funakoshi, Yohei; Fujiwara, Yutaka; Kiyota, Naomi; Mukohara, Toru; Shimada, Takanobu; Toyoda, Masanori; Imamura, Yoshinori; Chayahara, Naoko; Umezu, Michio; Otsuki, Naoki; Nibu, Ken-ichi; Minami, Hironobu

    2013-03-01

    Assessment of renal function is important for safe cancer chemotherapy, and eligibility criteria for clinical trials often include creatinine clearance. However, creatinine clearance overestimates glomerular filtration rate, and various new formulae have been proposed to estimate glomerular filtration rate. Because these were developed mostly in patients with chronic kidney disease, we evaluated their validity in cancer patients without kidney disease. Glomerular filtration rate was measured by inulin clearance in 45 Japanese cancer patients, and compared with creatinine clearance measured by 24-h urine collection as well as that estimated by the Cockcroft-Gault formula, Japanese estimated glomerular filtration rate developed in chronic kidney disease patients, the Modification of Diet in Renal Disease study equation and the Chronic Kidney Disease Epidemiology Collaboration equation. The Modification of Diet in Renal Disease study and Chronic Kidney Disease Epidemiology Collaboration equations were adjusted for the Japanese population by multiplying by 0.808 and 0.813, respectively. The mean inulin clearance was 79.2 ± 18.7 ml/min/1.73 m(2). Bias values to estimate glomerular filtration rate for Japanese estimated glomerular filtration rate, the Cockcroft-Gault formula, creatinine clearance measured by 24-h urine collection, the 0.808 × Modification of Diet in Renal Disease study equation and the 0.813 × Chronic Kidney Disease Epidemiology Collaboration equation were 0.94, 9.75, 29.67, 5.26 and -0.92 ml/min/1.73 m(2), respectively. Precision (root-mean square error) was 14.7, 22.4, 39.8, 16.0 and 14.1 ml/min, respectively. Of the scatter plots of inulin clearance versus each estimation formula, the Japanese estimated glomerular filtration rate correlated most accurately with actual measured inulin clearance. The Japanese estimated glomerular filtration rate and the 0.813 × Chronic Kidney Disease Epidemiology Collaboration equation estimated glomerular
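
    A sketch of two of the estimators compared above. The Cockcroft-Gault formula is the standard one; the MDRD form shown uses the commonly cited 175-based equation with the 0.808 Japanese coefficient quoted in the abstract, so treat the exact constants as assumptions rather than the paper's implementation:

      def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
          """Creatinine clearance (ml/min) by the Cockcroft-Gault formula."""
          crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
          return crcl * 0.85 if female else crcl

      def mdrd_japanese(age, scr_mg_dl, female):
          """0.808 x MDRD study equation (ml/min/1.73 m^2); the 175-based constants
          are the commonly used ones, assumed here rather than taken from the paper."""
          egfr = 175.0 * scr_mg_dl**-1.154 * age**-0.203
          if female:
              egfr *= 0.742
          return 0.808 * egfr

      print(cockcroft_gault(age=65, weight_kg=60, scr_mg_dl=0.9, female=True))
      print(mdrd_japanese(age=65, scr_mg_dl=0.9, female=True))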

  8. Current methods for estimating the rate of photorespiration in leaves.

    Science.gov (United States)

    Busch, F A

    2013-07-01

    Photorespiration is a process that competes with photosynthesis, in which Rubisco oxygenates, instead of carboxylates, its substrate ribulose 1,5-bisphosphate. The photorespiratory metabolism associated with the recovery of 3-phosphoglycerate is energetically costly and results in the release of previously fixed CO2. The ability to quantify photorespiration is gaining importance as a tool to help improve plant productivity in order to meet the increasing global food demand. In recent years, substantial progress has been made in the methods used to measure photorespiration. Current techniques are able to measure multiple aspects of photorespiration at different points along the photorespiratory C2 cycle. Six different methods used to estimate photorespiration are reviewed, and their advantages and disadvantages discussed. © 2012 German Botanical Society and The Royal Botanical Society of the Netherlands.

  9. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
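
    A small sketch of the co-prime equivalent-time sampling idea described above: P pulses, an ADC running P times slower than the desired rate, and an inter-pulse interval (in desired-rate periods) chosen co-prime with P so the slow samples interleave onto the fine grid. The numbers are illustrative:

      from math import gcd

      P = 5          # ADC runs P times slower than the desired sampling rate
      N = 7          # inter-pulse interval in units of the desired sampling period
      assert gcd(N, P) == 1, "interval must be co-prime with P"

      # Phase (mod P) of the slow ADC grid relative to each transmitted pulse.
      phases = sorted((-k * N) % P for k in range(P))
      print(phases)   # -> [0, 1, 2, 3, 4]: the P pulses jointly cover every fine-grid offset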

  10. A rapid method to estimate Westergren sedimentation rates

    Science.gov (United States)

    Alexy, Tamas; Pais, Eszter; Meiselman, Herbert J.

    2009-01-01

    The erythrocyte sedimentation rate (ESR) is a nonspecific but simple and inexpensive test that was introduced into medical practice in 1897. Although it is commonly utilized in the diagnosis and follow-up of various clinical conditions, ESR has several limitations including the required 60 min settling time for the test. Herein we introduce a novel use for a commercially available computerized tube viscometer that allows the accurate prediction of human Westergren ESR rates in as little as 4 min. Owing to an initial pressure gradient, blood moves between two vertical tubes through a horizontal small-bore tube and the top of the red blood cell (RBC) column in each vertical tube is monitored continuously with an accuracy of 0.083 mm. Using data from the final minute of a blood viscosity measurement, a sedimentation index (SI) was calculated and correlated with results from the conventional Westergren ESR test. To date, samples from 119 human subjects have been studied and our results indicate a strong correlation between SI and ESR values (R2=0.92). In addition, we found a close association between SI and RBC aggregation indices as determined by an automated RBC aggregometer (R2=0.71). Determining SI on human blood is rapid, requires no special training and has minimal biohazard risk, thus allowing physicians to rapidly screen for individuals with elevated ESR and to monitor therapeutic responses. PMID:19791973

  11. Infrared imaging based hyperventilation monitoring through respiration rate estimation

    Science.gov (United States)

    Basu, Anushree; Routray, Aurobinda; Mukherjee, Rashmi; Shit, Suprosanna

    2016-07-01

    A change in the skin temperature is used as an indicator of physical illness which can be detected through infrared thermography. Thermograms or thermal images can be used as an effective diagnostic tool for monitoring and diagnosis of various diseases. This paper describes an infrared thermography based approach for detecting hyperventilation caused by stress and anxiety in human beings by computing their respiration rates. The work employs computer vision techniques for tracking the region of interest from thermal video to compute the breath rate. Experiments have been performed on 30 subjects. Corner feature extraction using the Minimum Eigenvalue (Shi-Tomasi) algorithm and registration using the Kanade-Lucas-Tomasi algorithm have been used here. The thermal signature around the extracted region is detected and subsequently filtered through a band-pass filter to compute the respiration profile of an individual. If the respiration profile shows an unusual pattern and exceeds the threshold, we conclude that the person is stressed and tending to hyperventilate. Results obtained are compared with standard contact-based methods, which have shown significant correlations. It is envisaged that the thermal image based approach will not only help in detecting hyperventilation but can also assist in regular stress monitoring, as it is a non-invasive method.
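
    A rough sketch of the processing chain described above (corner-based ROI tracking, band-pass filtering of the thermal signal, and rate estimation). The input file name, ROI handling, and filter band are illustrative assumptions, not the authors' exact settings:

      import cv2
      import numpy as np
      from scipy.signal import butter, filtfilt

      cap = cv2.VideoCapture("thermal_video.avi")       # hypothetical input file
      fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

      ok, frame = cap.read()
      prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01,
                                    minDistance=5)      # Shi-Tomasi corners around the ROI

      signal = []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)  # KLT tracking
          pts = pts[status.flatten() == 1].reshape(-1, 1, 2)
          x, y, w, h = cv2.boundingRect(pts.astype(np.float32))
          signal.append(gray[y:y + h, x:x + w].mean())  # mean thermal intensity in the tracked ROI
          prev = gray

      # Band-pass around plausible breathing frequencies (assumed 0.1-0.8 Hz).
      b, a = butter(2, [0.1, 0.8], btype="band", fs=fps)
      resp = filtfilt(b, a, np.asarray(signal) - np.mean(signal))
      freqs = np.fft.rfftfreq(len(resp), d=1.0 / fps)
      rate_bpm = 60.0 * freqs[np.argmax(np.abs(np.fft.rfft(resp)))]
      print(f"estimated respiration rate: {rate_bpm:.1f} breaths/min")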

  12. Decision Tree Rating Scales for Workload Estimation: Theme and Variations

    Science.gov (United States)

    Wierwille, W. W.; Skipper, J. H.; Rieger, C. A.

    1984-01-01

    The modified Cooper-Harper (MCH) scale has been shown to be a sensitive indicator of workload in several different types of aircrew tasks. The MCH scale was examined to determine if certain variations of the scale might provide even greater sensitivity and to determine the reasons for the sensitivity of the scale. The MCH scale and five newly devised scales were studied in two different aircraft simulator experiments in which pilot loading was treated as an independent variable. Results indicate that while one of the new scales may be more sensitive in a given experiment, task dependency is a problem. The MCH scale exhibits consistent sensitivity and remains the scale recommended for general use. The results of the rating scale experiments are presented and the questionnaire results which were directed at obtaining a better understanding of the reasons for the relative sensitivity of the MCH scale and its variations are described.

  13. Gamma Kernel Estimators for Density and Hazard Rate of Right-Censored Data

    Directory of Open Access Journals (Sweden)

    T. Bouezmarni

    2011-01-01

    Full Text Available The nonparametric estimation for the density and hazard rate functions for right-censored data using the kernel smoothing techniques is considered. The “classical” fixed symmetric kernel type estimator of these functions performs well in the interior region, but it suffers from the problem of bias in the boundary region. Here, we propose new estimators based on the gamma kernels for the density and the hazard rate functions. The estimators are free of boundary bias and achieve the optimal rate of convergence in terms of integrated mean squared error. The mean integrated squared error, the asymptotic normality, and the law of iterated logarithm are studied. A comparison of gamma estimators with the local linear estimator for the density function and with the hazard rate estimator proposed by Müller and Wang (1994), which are free from boundary bias, is investigated by simulations.
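
    A minimal sketch of a gamma-kernel density estimator in the spirit described above; it uses the simple form with shape x/b + 1 (one of the variants in the literature), not necessarily the exact estimator of the paper, and the data are illustrative:

      import numpy as np
      from scipy.stats import gamma

      def gamma_kde(x_grid, data, b):
          """Gamma-kernel density estimate on [0, inf): average of gamma densities
          with shape x/b + 1 and scale b, evaluated at the observations."""
          x_grid = np.atleast_1d(x_grid)
          return np.array([gamma.pdf(data, a=x / b + 1.0, scale=b).mean() for x in x_grid])

      rng = np.random.default_rng(1)
      data = rng.exponential(scale=1.0, size=500)   # nonnegative data with mass near 0
      grid = np.linspace(0.0, 4.0, 81)
      print(gamma_kde(grid[:5], data, b=0.2))       # no boundary-bias build-up near x = 0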

  14. External validation of EPICON: a grouping system for estimating morbidity rates from electronic medical records.

    NARCIS (Netherlands)

    Biermans, M.C.J.; Elbers, G.H.; Verheij, R.A.; Veen, W.J. van der; Zielhuis, G.A.; Vries Robbé, P.F. de

    2008-01-01

    OBJECTIVE: To externally validate EPICON, a computerized system for grouping diagnoses from EMRs in general practice into episodes of care. These episodes can be used for estimating morbidity rates. DESIGN: Comparative observational study. MEASUREMENTS: Morbidity rates from an independent dataset,

  15. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time under a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on applying and comparing the Rao-Yu model and the dynamic model for estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was similar. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was better able than the Rao-Yu model to capture heterogeneity across areas, although this effect was reduced over time.

  16. Effective Tax Rates in Macroeconomics: Cross-Country Estimates of Tax Rates on Factor Incomes and Consumption

    OpenAIRE

    Enrique G. Mendoza; Assaf Razin; Linda L. Tesar

    1994-01-01

    This paper proposes a method for computing tax rates using national accounts and revenue statistics. Using this method we construct time-series of tax rates for large industrial countries. The method identifies the revenue raised by different taxes at the general government level and defines aggregate measures of the corresponding tax bases. This method yields estimates of effective tax rates on factor incomes and consumption consistent with the tax distortions faced by a representative agent...

  17. Estimating reaction rate constants: comparison between traditional curve fitting and curve resolution

    NARCIS (Netherlands)

    Bijlsma, S.; Boelens, H. F. M.; Hoefsloot, H. C. J.; Smilde, A. K.

    2000-01-01

    A traditional curve fitting (TCF) algorithm is compared with a classical curve resolution (CCR) approach for estimating reaction rate constants from spectral data obtained over time during a chemical reaction. In the TCF algorithm, reaction rate constants are estimated from the absorbance versus time data
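
    A minimal sketch of the traditional curve-fitting idea for a single wavelength, assuming simple first-order kinetics A(t) = A_inf + (A_0 - A_inf)·exp(-k·t); the paper itself treats full spectral data and more general reaction models:

      import numpy as np
      from scipy.optimize import curve_fit

      def first_order(t, a_inf, a0, k):
          # Absorbance decay toward a_inf with rate constant k (first-order kinetics assumed).
          return a_inf + (a0 - a_inf) * np.exp(-k * t)

      t = np.linspace(0, 120, 25)                        # time (s), illustrative
      true = first_order(t, 0.10, 0.85, 0.035)
      absorbance = true + np.random.default_rng(2).normal(0, 0.01, t.size)

      popt, pcov = curve_fit(first_order, t, absorbance, p0=[0.1, 0.8, 0.01])
      print(f"estimated rate constant k = {popt[2]:.4f} 1/s "
            f"(+/- {np.sqrt(pcov[2, 2]):.4f})")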

  18. Estimating spread rates of non-native species: the gypsy moth as a case study

    Science.gov (United States)

    Patrick Tobin; Andrew M. Liebhold; E. Anderson Roberts; Laura M. Blackburn

    2015-01-01

    Estimating rates of spread and generating projections of future range expansion for invasive alien species is a key process in the development of management guidelines and policy. Critical needs to estimate spread rates include the availability of surveys to characterize the spatial distribution of an invading species and the application of analytical methods to...

  19. Recognition and management of an orbital blowout fracture in an amateur boxer.

    Science.gov (United States)

    Karsteter, Page A; Yunker, Craig

    2006-08-01

    Case report. To identify key elements in the recognition and management of a patient with an orbital blowout fracture and make recommendations on diagnosis, treatment, referral, imaging, and return to sports. Orbital blowout fractures are uncommon but important injuries for physical therapists to recognize. Immediate management is essential in preventing complications. The mechanism of injury is a direct blow to the orbital rim or orbit. The patient presented to the athletic training room 15 minutes after completing a boxing match, reporting that his left eye had suddenly inflated after he blew his nose. We suspected an orbital blowout fracture and referred him immediately to the emergency department, where conventional radiographs were ordered. On follow-up the next day, after determining that the radiographs were normal, but still having a high index of suspicion for an orbital blowout fracture, we referred him to his primary care manager. The primary care manager ordered a computed tomography scan that revealed the fracture and referred the patient to ophthalmology. The patient was restricted from the remaining 4 weeks of the boxing season. He completed a rigorous Army physical fitness test 7 days postinjury and the Marine Corps Marathon 47 days postinjury. Orbital blowout fractures without double vision, extraocular muscle entrapment, or persistent numbness can be treated with time and protection. The patient can continue with normal fitness activities except contact or collision sports.

  20. A case of blowout fracture of the orbital floor in early childhood

    Directory of Open Access Journals (Sweden)

    Sugamata A

    2015-07-01

    Full Text Available Akira Sugamata, Naoki Yoshizawa, Department of Plastic and Reconstructive Surgery, Tokyo Medical University Hachioji Medical Center, Tokyo, Japan. Abstract: There are few reports of blowout fractures of the orbital floor in children younger than 5 years of age; in a search of the literature, we found only six reported cases which revealed the exact age, correct diagnosis, and treatment. We herein report the case of a 3-year-old boy with a blowout fracture of the orbital floor. Computed tomography showed a pure blowout fracture of the left orbital floor with a slight dislocation of the orbital contents. The patient was treated conservatively due to the absence of abnormal limitation of eye movement or enophthalmos. The patient did not develop any complications that necessitated later surgical intervention. Computed tomography at 6 months after the injury showed the regeneration of the orbital floor in the area of the fracture and no abnormalities in the left maxillary sinus. We herein present our case and the details of six other cases reported in the literature, and discuss their etiology, diagnosis, and treatment methods. Keywords: blowout fracture, orbital fracture, pediatric blowout fracture

  1. Change of the orbital volume ratio in pure blow-out fractures depending on fracture location.

    Science.gov (United States)

    Oh, Sang Ah; Aum, Jae Ho; Kang, Dong Hee; Gu, Ja Hea

    2013-07-01

    The purposes of this study were to observe bony orbital volume (OV) changes in pure blow-out fractures according to fracture location using a facial computed tomographic scan and to investigate whether the OV measurements can be used as a quantitative value for the evaluation of the surgical results of the acute blow-out fracture. Forty-five patients with unilateral pure blow-out fracture were divided into 3 groups: inferior (group I), inferior medial (group IM), and medial (group M) orbital wall fracture. The OV and the orbital volume ratio (OVR) were prospectively measured before and 6 months after surgery with the use of 3-dimensional computed tomographic scans, and the Hertel scale was measured with a Hertel exophthalmometer. The preoperative OVR increased to the greatest extent in group IM, and the mean preoperative OVR was 121.46. The mean preoperative OVR in group I was significantly higher than that of group M (P = 0.005). The OV and OVR revealed a statistically significant decrease after the surgery (P = 0.000). The Hertel scale improved from -1.04 mm before the surgery to -0.78 mm after the surgery, but no significant difference was observed (P = 0.051). The OVR was useful as a quantitative value to evaluate pure blow-out fractures, compared with that of the Hertel scale. Fracture location-associated OVR studies are needed to establish volume guidelines for blow-out fracture surgery.

  2. Orbital Cellulitis Following Orbital Blow-out Fracture.

    Science.gov (United States)

    Byeon, Je Yeon; Choi, Hwan Jun

    2017-10-01

    Orbital cellulitis and abscess have been described in the literature as complications that usually occur secondary to infection in the maxillary, ethmoidal, and frontal sinuses. If left untreated, they can lead to blindness, cavernous sinus thrombosis, meningitis, or cerebral abscess. Orbital fractures are a common sequela of blunt orbital trauma, but are only rarely associated with orbital cellulitis. The authors therefore present a rare case of orbital cellulitis after an orbital blow-out fracture. A 55-year-old Asian patient complained of severe orbital swelling and pain on the left side. These symptoms had started 2 days earlier and worsened within the 24 hours before hospital admission, resulting in visual disturbances such as diplopia and photophobia. A contrast-enhanced computed tomography scan showed considerable soft tissue swelling and abscess formation on the left side. The patient underwent surgical drainage under general anesthesia in the operating room. In this case, the postoperative period was uneventful and the rapid improvement of symptoms was remarkable. In conclusion, an orbital abscess is a surgical emergency in patients whose visual impairment or ocular symptoms cannot be controlled with medical therapy using antibiotics. In our case, orbital cellulitis occurred after blunt orbital trauma without predisposing sinusitis. Early diagnosis and prompt surgical drainage before severe loss of visual acuity can rescue or restore vision in cases of orbital cellulitis.

  3. Microcephaly Case Fatality Rate Associated with Zika Virus Infection in Brazil: Current Estimates.

    Science.gov (United States)

    Cunha, Antonio José Ledo Alves da; de Magalhães-Barbosa, Maria Clara; Lima-Setta, Fernanda; Medronho, Roberto de Andrade; Prata-Barbosa, Arnaldo

    2017-05-01

    Considering the currently confirmed cases of microcephaly and related deaths associated with Zika virus in Brazil, the estimated case fatality rate is 8.3% (95% confidence interval: 7.2-9.6). However, a third of the reported cases remain under investigation. If the confirmation rates of cases and deaths are the same in the future, the estimated case fatality rate will be as high as 10.5% (95% confidence interval: 9.5-11.7).
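
    A sketch of the underlying calculation: a case fatality rate with a Wilson 95% confidence interval. The counts below are placeholders for illustration, not the study's data (the abstract reports only the resulting percentages):

      from math import sqrt

      def case_fatality_rate(deaths, cases, z=1.96):
          """Point estimate and Wilson score 95% CI for deaths/cases."""
          p = deaths / cases
          denom = 1 + z**2 / cases
          centre = (p + z**2 / (2 * cases)) / denom
          half = z * sqrt(p * (1 - p) / cases + z**2 / (4 * cases**2)) / denom
          return p, centre - half, centre + half

      # Placeholder counts, NOT the study's data.
      print(case_fatality_rate(deaths=180, cases=2170))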

  4. Estimating Rates of Motor Vehicle Crashes Using Medical Encounter Data: A Feasibility Study

    Science.gov (United States)

    2015-11-05

    A feasibility study was undertaken to determine whether medical encounter data can be used to estimate rates of nonfatal motor vehicle crashes (MVCs) among U.S. Navy and Marine Corps personnel. ... Motor vehicle crash rates among junior enlisted personnel were higher in the Navy than in the Marine Corps (2.7% vs ...

  5. Child mortality estimation: consistency of under-five mortality rate estimates using full birth histories and summary birth histories.

    Directory of Open Access Journals (Sweden)

    Romesh Silva

    Full Text Available Given the lack of complete vital registration data in most developing countries, for many countries it is not possible to accurately estimate under-five mortality rates from vital registration systems. Heavy reliance is often placed on direct and indirect methods for analyzing data collected from birth histories to estimate under-five mortality rates. Yet few systematic comparisons of these methods have been undertaken. This paper investigates whether analysts should use both direct and indirect estimates from full birth histories, and under what circumstances indirect estimates derived from summary birth histories should be used. Using Demographic and Health Surveys data from West Africa, East Africa, Latin America, and South/Southeast Asia, I quantify the differences between direct and indirect estimates of under-five mortality rates, analyze data quality issues, note the relative effects of these issues, and test whether these issues explain the observed differences. I find that indirect estimates are generally consistent with direct estimates, after adjustment for fertility change and birth transference, but do not add substantial additional insight beyond direct estimates. However, choice of direct or indirect method was found to be important in terms of both the adjustment for data errors and the assumptions made about fertility. Although adjusted indirect estimates are generally consistent with adjusted direct estimates, some notable inconsistencies were observed for countries that had experienced either a political or economic crisis or stalled health transition in their recent past. This result suggests that when a population has experienced a smooth mortality decline or only short periods of excess mortality, both adjusted methods perform equally well. However, the observed inconsistencies identified suggest that the indirect method is particularly prone to bias resulting from violations of its strong assumptions about recent mortality

  6. Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach

    KAUST Repository

    Ballal, Tarig

    2014-01-01

    This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.

  7. Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs

    Science.gov (United States)

    Kar, Soummya; Moura, José M. F.

    2011-08-01

    The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a., large scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time scale algorithms--one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we characterize the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator also remains consistent.

  8. Montara blowout - what went wrong? what are the lessons for industry and regulators?

    Energy Technology Data Exchange (ETDEWEB)

    Cutler, Jane [NOPSA Canada (Canada)

    2010-07-01

    PTTEP owns and operates the Montara development project, which is situated in the Timor Sea. The field holds over 37 million barrels of oil reserves and is located about 650 km from Darwin in 77 m of water. The development is composed of 4 producing wells and 1 gas re-injection well; production was to be from an unmanned wellhead platform. Initial drilling on the H1 well started in January 2009, and a blowout occurred on this well on August 21, 2009. The aim of this paper is to present the causes of this accident and the implications for the industry, governments and regulators, including Australian regulators. It was shown that the blowout resulted from a systemic failure by PTTEP to manage the integrity of the well. This incident put the offshore oil and gas industry in the spotlight. This paper presents the blowout which occurred in the Montara development project, its causes, and its consequences.

  9. Development of an automatic subsea blowout preventer stack control system using PLC based SCADA.

    Science.gov (United States)

    Cai, Baoping; Liu, Yonghong; Liu, Zengkai; Wang, Fei; Tian, Xiaojie; Zhang, Yanzhen

    2012-01-01

    An extremely reliable remote control system for subsea blowout preventer stack is developed based on the off-the-shelf triple modular redundancy system. To meet a high reliability requirement, various redundancy techniques such as controller redundancy, bus redundancy and network redundancy are used to design the system hardware architecture. The control logic, human-machine interface graphical design and redundant databases are developed by using the off-the-shelf software. A series of experiments were performed in laboratory to test the subsea blowout preventer stack control system. The results showed that the tested subsea blowout preventer functions could be executed successfully. For the faults of programmable logic controllers, discrete input groups and analog input groups, the control system could give correct alarms in the human-machine interface. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Markov models and the ensemble Kalman filter for estimation of sorption rates.

    Energy Technology Data Exchange (ETDEWEB)

    Vugrin, Eric D.; McKenna, Sean Andrew (Sandia National Laboratories, Albuquerque, NM); Vugrin, Kay White

    2007-09-01

    Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero (the unbiased case). For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve the accuracy of the rate estimation by as much as an order of magnitude.
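
    A generic sketch of the ensemble Kalman filter analysis step used to pull an ensemble of rate estimates toward an observed concentration. The state layout, toy forward model, and observation operator are assumptions for illustration, not the report's transport model:

      import numpy as np

      def enkf_update(ensemble, obs, obs_op, obs_var, rng):
          """One EnKF analysis step. ensemble: (n_state, n_members) array;
          obs_op: row vector mapping the state to the observed concentration."""
          n_members = ensemble.shape[1]
          anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
          P = anomalies @ anomalies.T / (n_members - 1)      # sample covariance
          H = np.atleast_2d(obs_op)
          S = H @ P @ H.T + obs_var                          # innovation covariance
          K = P @ H.T / S                                    # Kalman gain (scalar observation)
          perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), n_members)
          return ensemble + K @ (perturbed - H @ ensemble)

      rng = np.random.default_rng(3)
      n = 50
      sorp = np.abs(rng.normal(0.5, 0.1, n))                 # perturbed sorption rates
      desorp = np.abs(rng.normal(0.2, 0.05, n))              # perturbed desorption rates
      conc = 1.0 - 0.8 * sorp + 0.5 * desorp + rng.normal(0, 0.02, n)  # toy forward model
      ens = np.vstack([sorp, desorp, conc])                  # state: [sorption, desorption, concentration]

      updated = enkf_update(ens, obs=0.75, obs_op=[0.0, 0.0, 1.0], obs_var=0.01, rng=rng)
      print(ens.mean(axis=1), updated.mean(axis=1))          # rates shift via their correlation with concentration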

  11. Estimation of the rate of energy production of rat mast cells in vitro

    DEFF Research Database (Denmark)

    Johansen, Torben

    1983-01-01

    Rat mast cells were treated with glycolytic and respiratory inhibitors. The rate of adenosine triphosphate depletion of cells incubated with both types of inhibitors and the rate of lactate produced in presence of antimycin A and glucose were used to estimate the rate of oxidative and glycolytic...

  12. Inverse problem of estimating transient heat transfer rate on external wall of forced convection pipe

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Wen-Lih; Yang, Yu-Ching; Chang, Win-Jin; Lee, Haw-Long [Clean Energy Center, Department of Mechanical Engineering, Kun Shan University, Yung-Kang City, Tainan 710-03 (China)

    2008-08-15

    In this study, a conjugate gradient method based inverse algorithm is applied to estimate the unknown space and time dependent heat transfer rate on the external wall of a pipe system using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown heat transfer rate; hence, the procedure is classified as function estimation in the inverse calculation. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the space and time dependent heat transfer rate can be obtained for the test case considered in this study. (author)

  13. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    Science.gov (United States)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate ( Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecast, and fisheries management. A catch curve-based method for estimating time-based Z and its change trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not need the assumption of constant Z throughout the time, but the Z values in n continuous years are assumed constant, and then the Z values in different n continuous years are estimated using the age-based CPUE data within these years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations of both Z and recruitment can affect the estimates of Z value and the trend of Z. The most appropriate value of n can be different given the effects of different factors. Therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as we demonstrated in this study. Further analyses suggested that selectivity and age estimation are also two factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z are still close to the true change rates. We also applied this approach to the Atlantic cod ( Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
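
    A compact sketch of the classic catch-curve calculation that underlies the approach: within a window of n consecutive years, Z is taken as the negative slope of log CPUE against age for the fully recruited ages. The windowing, ages, and data below are illustrative; the paper's estimator handles multiple cohorts and time-varying Z more carefully:

      import numpy as np

      def catch_curve_z(ages, cpue):
          """Z = negative slope of ln(CPUE) versus age (fully recruited ages only)."""
          slope, _ = np.polyfit(ages, np.log(cpue), 1)
          return -slope

      ages = np.arange(3, 10)                              # fully recruited ages (assumed)
      true_z = 0.6
      cpue = 50.0 * np.exp(-true_z * (ages - ages[0]))
      cpue *= np.random.default_rng(4).lognormal(0.0, 0.1, ages.size)  # observation noise
      print(f"estimated Z = {catch_curve_z(ages, cpue):.2f} per year")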

  14. The functional outcome of blow-out fractures managed surgically and conservatively

    DEFF Research Database (Denmark)

    Felding, Ulrik Ascanius; Rasmussen, Janne; Toft, Peter Bjerre

    2016-01-01

    The proportion of orbital blow-out fractures (BOFs) which are operated upon varies. The purpose of this study was to determine the treatment pattern of BOFs at our tertiary trauma centre and to evaluate the functional outcomes in patients according to whether they were managed surgically or conservatively.

  15. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  16. Estimation of glucose rate of appearance from cgs and subcutaneous insulin delivery in type 1 diabetes

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2017-08-31

    A method and system for providing estimates of the glucose rate of appearance from the intestine (GRA) using continuous glucose sensor (CGS) measurements taken from the subcutaneous tissue of a diabetes patient and the amount of insulin administered to the patient.

  17. Estimates of Radiation Dose Rates Near Large Diameter Sludge Containers in T Plant

    CERN Document Server

    Himes, D A

    2002-01-01

    Dose rates in T Plant canyon during the handling and storage of large diameter storage containers of K Basin sludge were estimated. A number of different geometries were considered from which most operational situations of interest can be constructed.

  18. Mesocosms adrift: a method to estimate fish egg and larvae mortality rates

    National Research Council Canada - National Science Library

    Houde, E.D; Gamble, J.C; Dorsey, S.E; Cowan, J.H

    1993-01-01

    Our objective was to develop a method to deploy mesocosms from a research vessel that would allow estimates of mortality rates of fish eggs and larvae to be obtained over periods of one to three days...

  19. Estimation of evaporation rates over the Arabian Sea from Satellite data

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, M.V.; RameshBabu, V.; Rao, L.V.G.; Sastry, J.S.

    Utilizing both the SAMIR brightness temperatures of Bhaskara 2 and GOSSTCOMP charts of NOAA satellite series, the evaporation rates over the Arabian Sea for June 1982 are estimated through the bulk aerodynamic method. The spatial distribution...
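
    A minimal sketch of the bulk aerodynamic formula, E = rho_a * C_E * U * (q_s - q_a); the transfer coefficient and the illustrative inputs are assumptions, not the values used with the SAMIR/GOSSTCOMP data:

      def evaporation_rate(wind_speed, q_sea, q_air, rho_air=1.2, c_e=1.3e-3):
          """Evaporative mass flux (kg m^-2 s^-1) from the bulk aerodynamic method."""
          return rho_air * c_e * wind_speed * (q_sea - q_air)

      # Specific humidities in kg/kg; wind speed in m/s (illustrative values).
      e = evaporation_rate(wind_speed=8.0, q_sea=0.022, q_air=0.016)
      print(f"{e * 86400:.2f} kg m^-2 per day (roughly mm/day of water)")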

  20. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin

    2017-01-01

    Precise estimates of disease transmission rates are critical for epidemiological simulation models. Most often these rates must be estimated from longitudinal field data, which are costly and time-consuming to conduct. Consequently, measures to reduce cost like increased sampling intervals...... the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most...
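
    A minimal sketch of the standard Poisson approach to a transmission rate from longitudinal counts, beta_hat = sum(new cases) / sum(S*I*dt/N). This is the textbook estimator, not necessarily the paper's exact formulation or either of its two new methods, and the herd data are hypothetical:

      import numpy as np

      # Hypothetical longitudinal herd data: susceptibles, infectious, new cases per interval.
      S  = np.array([95, 92, 88, 83, 77])
      I  = np.array([5, 8, 12, 17, 23])
      C  = np.array([3, 4, 5, 6, 7])        # new infections observed in each interval
      dt = 1.0                              # sampling interval (e.g. weeks)
      N  = 100

      exposure = S * I * dt / N             # expected-infection "offset" per interval
      beta_hat = C.sum() / exposure.sum()   # Poisson MLE of the transmission rate
      se = np.sqrt(C.sum()) / exposure.sum()
      print(f"beta = {beta_hat:.3f} per unit time (SE {se:.3f})")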

  1. On-line biomass estimation using a modified oxygen utilization rate

    Energy Technology Data Exchange (ETDEWEB)

    Petkov, S.B. [Minnesota Univ., Duluth, MN (United States). Dept. of Chemical Engineering; Davis, R.A. [Minnesota Univ., Duluth, MN (United States). Dept. of Chemical Engineering

    1996-06-01

    A procedure for estimating biomass during batch fermentation from on-line gas analysis is presented. First, the respiratory quotient was used to determine the fraction of the total oxygen utiliation rate required for cell maintenance and growth versus product synthesis. The modified oxygen utilization rate was then used to estimate biomass on-line by integrating the oxygen balance for cell synthesis-maintenance. The method is illustrated for the case of L-lysine synthesis by Corynebacterium glutamicum. (orig.)

  2. Exchange rate pass-through to various price indices: Empirical estimation using vector error correction models

    OpenAIRE

    Bachmann, Andreas

    2012-01-01

    The extent to which exchange rate fluctuations are passed through to domestic prices is of high relevance for open economies and for monetary authorities targeting price stability. Existing empirical studies estimating the exchange rate pass-through for Switzerland are based on either single equation estimation or on VAR models. However, these approaches feature some major drawbacks. The former cannot account for dynamic interactions between the time series and both methods disregard long-run...

  3. How badly are we doing? Estimating misclassification rates of shallow landslide susceptibility maps

    Science.gov (United States)

    Papritz, Andreas; von Ruette, Jonas; Lehmann, Peter; Rickli, Christian; Or, Dani

    2010-05-01

    An important motivation for continuing efforts in landslide susceptibility mapping is the need for reliable maps of landslide-prone areas. 'Reliable' means that a map should not systematically under- or overpredict landslide incidence and should provide a fair measure of its predictive power. Probabilistic susceptibility maps may be generated by various statistical methods (logistic regression, neural networks, classification trees, etc.). These methods must be 'trained' with data on past landslide occurrence and information about conditioning features (terrain, geology, land use, vegetation, soil, ...). Once trained, most approaches fit observed landslide incidences in training areas reasonably well. If probabilistic predictions are thresholded, the error rate (total percentage of misclassifications), the true positive rate (sensitivity) and false positive rate (1 - specificity) - which form the receiver operating characteristics curve (ROC) when plotted against each other for several thresholds - provide, seemingly, a favourable picture of the predictive power of the methods. However, these apparent misclassification rates underestimate the true rates. A remedy for this bias is cross-validation, which provides nearly unbiased estimates but suffers from large random variation. The large variance of cross-validation estimates can be mitigated by bootstrapping. Efron and Tibshirani [1] proposed the .632+ bootstrap for estimating the true error rate. Adler and Lausen [2] extended the method for estimating ROC curves. We use the .632+ bootstrap to estimate misclassification rates (ROC curve, bias score [3]) of landslide susceptibility maps, generated by logistic regression and a random forest classifier. The methods are trained with data on incidence of shallow landslides, released in a pre-alpine catchment in Switzerland during a heavy rainfall in summer 2005. Geomorphological terrain attributes and information on land use are used as conditioning features. Two event ...
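
    A sketch of the basic bootstrap error-rate idea behind the .632(+) estimator: combine the optimistic apparent (resubstitution) error with the out-of-bag bootstrap error. The .632+ variant adds a further correction for overfitting, omitted here, and the classifier and data are placeholders rather than the landslide dataset:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)
      X = rng.normal(size=(200, 4))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 200) > 0).astype(int)

      clf = LogisticRegression().fit(X, y)
      err_apparent = np.mean(clf.predict(X) != y)           # optimistic resubstitution error

      oob_errors = []
      for _ in range(200):                                  # bootstrap resamples
          idx = rng.integers(0, len(y), len(y))
          oob = np.setdiff1d(np.arange(len(y)), idx)        # cases left out of this resample
          if oob.size == 0:
              continue
          m = LogisticRegression().fit(X[idx], y[idx])
          oob_errors.append(np.mean(m.predict(X[oob]) != y[oob]))
      err_oob = np.mean(oob_errors)

      err_632 = 0.368 * err_apparent + 0.632 * err_oob      # Efron's .632 estimator
      print(err_apparent, err_oob, err_632)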

  4. Estimating blue whale skin isotopic incorporation rates and baleen growth rates: Implications for assessing diet and movement patterns in mysticetes.

    Science.gov (United States)

    Busquets-Vass, Geraldine; Newsome, Seth D; Calambokidis, John; Serra-Valente, Gabriela; Jacobsen, Jeff K; Aguíñiga-García, Sergio; Gendron, Diane

    2017-01-01

    Stable isotope analysis in mysticete skin and baleen plates has been repeatedly used to assess diet and movement patterns. Accurate interpretation of isotope data depends on understanding isotopic incorporation rates for metabolically active tissues and growth rates for metabolically inert tissues. The aim of this research was to estimate isotopic incorporation rates in blue whale skin and baleen growth rates by using natural gradients in baseline isotope values between oceanic regions. Nitrogen (δ15N) and carbon (δ13C) isotope values of blue whale skin and potential prey were analyzed from three foraging zones (Gulf of California, California Current System, and Costa Rica Dome) in the northeast Pacific from 1996-2015. We also measured δ15N and δ13C values along the lengths of baleen plates collected from six blue whales stranded in the 1980s and 2000s. Skin was separated into three strata: basale, externum, and sloughed skin. A mean (±SD) skin isotopic incorporation rate of 163±91 days was estimated by fitting a generalized additive model of the seasonal trend in δ15N values of skin strata collected in the Gulf of California and the California Current System. A mean (±SD) baleen growth rate of 15.5±2.2 cm y-1 was estimated by using seasonal oscillations in δ15N values from three whales. These oscillations also showed that individual whales have a high fidelity to distinct foraging zones in the northeast Pacific across years. The absence of oscillations in δ15N values of baleen sub-samples from three male whales suggests these individuals remained within a specific zone for several years prior to death. δ13C values of both whale tissues (skin and baleen) and potential prey were not distinct among foraging zones. Our results highlight the importance of considering tissue isotopic incorporation and growth rates when studying migratory mysticetes and provide new insights into the individual movement strategies of blue whales.

  5. Calculating site-specific evolutionary rates at the amino-acid or codon level yields similar rate estimates.

    Science.gov (United States)

    Sydykova, Dariya K; Wilke, Claus O

    2017-01-01

    Site-specific evolutionary rates can be estimated from codon sequences or from amino-acid sequences. For codon sequences, the most popular methods use some variation of the dN∕dS ratio. For amino-acid sequences, one widely-used method is called Rate4Site, and it assigns a relative conservation score to each site in an alignment. How site-wise dN∕dS values relate to Rate4Site scores is not known. Here we elucidate the relationship between these two rate measurements. We simulate sequences with known dN∕dS, using either dN∕dS models or mutation-selection models for simulation. We then infer Rate4Site scores on the simulated alignments, and we compare those scores to either true or inferred dN∕dS values on the same alignments. We find that Rate4Site scores generally correlate well with true dN∕dS, and the correlation strengths increase in alignments with greater sequence divergence and more taxa. Moreover, Rate4Site scores correlate very well with inferred (as opposed to true) dN∕dS values, even for small alignments with little divergence. Finally, we verify this relationship between Rate4Site and dN∕dS in a variety of empirical datasets. We conclude that codon-level and amino-acid-level analysis frameworks are directly comparable and yield very similar inferences.

  6. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    Science.gov (United States)

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  7. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  8. Constrained least squares methods for estimating reaction rate constants from spectroscopic data

    NARCIS (Netherlands)

    Bijlsma, S.; Boelens, H. F. M.; Hoefsloot, H. C. J.; Smilde, A. K.

    2002-01-01

    Model errors, experimental errors and instrumental noise influence the accuracy of reaction rate constant estimates obtained from spectral data recorded in time during a chemical reaction. In order to improve the accuracy, which can be divided into the precision and bias of reaction rate constant

  9. Neural estimation of kinetic rate constants from dynamic PET-scans

    DEFF Research Database (Denmark)

    Fog, Torben L.; Nielsen, Lars Hupfeldt; Hansen, Lars Kai

    1994-01-01

    A feedforward neural net is trained to invert a simple three compartment model describing the tracer kinetics involved in the metabolism of [18F]fluorodeoxyglucose in the human brain. The network can estimate rate constants from positron emission tomography sequences and is about 50 times faster than direct fitting of rate constants using the parametrized transients of the compartment model...

  10. Estimating morbidity rates from electronic medical records in general practice. Evaluation of a grouping system.

    NARCIS (Netherlands)

    Biermans, M.C.J.; Verheij, R.A.; Bakker, D.H. de; Zielhuis, G.A.; Robbe, P.F.

    2008-01-01

    OBJECTIVES: In this study, we evaluated the internal validity of EPICON, an application for grouping ICPC-coded diagnoses from electronic medical records into episodes of care. These episodes are used to estimate morbidity rates in general practice. METHODS: Morbidity rates based on EPICON were

  11. estimated glomerular filtration rate and risk of survival in acute stroke

    African Journals Online (AJOL)

    2014-03-03

  12. Estimating the spread rate of urea formaldehyde adhesive on birch (Betula pendula Roth) veneer using fluorescence

    Science.gov (United States)

    Toni Antikainen; Anti Rohumaa; Christopher G. Hunt; Mari Levirinne; Mark Hughes

    2015-01-01

    In plywood production, human operators find it difficult to precisely monitor the spread rate of adhesive in real-time. In this study, macroscopic fluorescence was used to estimate spread rate (SR) of urea formaldehyde adhesive on birch (Betula pendula Roth) veneer. This method could be an option when developing automated real-time SR measurement for...

  13. Estimating wildland fire rate of spread in a spatially nonuniform environment

    Science.gov (United States)

    Francis M Fujioka

    1985-01-01

    Estimating rate of fire spread is a key element in planning for effective fire control. Land managers use the Rothermel spread model, but the model assumptions are violated when fuel, weather, and topography are nonuniform. This paper compares three averaging techniques--arithmetic mean of spread rates, spread based on mean fuel conditions, and harmonic mean of spread...
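
    The arithmetic and harmonic means referred to above differ whenever spread rates vary along a path; a one-line comparison with illustrative rates:

      import statistics

      rates = [2.0, 6.0, 18.0]                      # local spread rates (e.g. m/min) along a path
      print(statistics.mean(rates))                 # arithmetic mean: 8.67
      print(statistics.harmonic_mean(rates))        # harmonic mean: 4.15, dominated by the slow segments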

  14. Use of Pyranometers to Estimate PV Module Degradation Rates in the Field

    Energy Technology Data Exchange (ETDEWEB)

    Vignola, Frank; Peterson, Josh; Kessler, Rich; Mavromatakis, Fotis; Dooraghi, Mike; Sengupta, Manajit

    2016-06-05

    This poster provides an overview of a methodology that uses relative measurements to estimate the degradation rates of PV modules in the field. The importance of calibration and cleaning is illustrated. The number of years of field measurements needed to measure degradation rates with data from the field is cut in half using relative comparisons.

  15. Use of Pyranometers to Estimate PV Module Degradation Rates in the Field: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Vignola, Frank; Peterson, Josh; Kessler, Rich; Mavromatakis, Fotis; Dooraghi, Mike; Sengupta, Manajit

    2016-08-01

    This paper describes a methodology that uses relative measurements to estimate the degradation rates of PV modules in the field. The importance of calibration and cleaning is illustrated. The number of years of field measurements needed to measure degradation rates with data from the field is cut in half using relative comparisons.

  16. Using Time-Structured Data to Estimate Evolutionary Rates of Double-Stranded DNA Viruses

    Science.gov (United States)

    Firth, Cadhla; Kitchen, Andrew; Shapiro, Beth; Suchard, Marc A.; Holmes, Edward C.; Rambaut, Andrew

    2010-01-01

    Double-stranded (ds) DNA viruses are often described as evolving through long-term codivergent associations with their hosts, a pattern that is expected to be associated with low rates of nucleotide substitution. However, the hypothesis of codivergence between dsDNA viruses and their hosts has rarely been rigorously tested, even though the vast majority of nucleotide substitution rate estimates for dsDNA viruses are based upon this assumption. It is therefore important to estimate the evolutionary rates of dsDNA viruses independent of the assumption of host-virus codivergence. Here, we explore the use of temporally structured sequence data within a Bayesian framework to estimate the evolutionary rates for seven human dsDNA viruses, including variola virus (VARV) (the causative agent of smallpox) and herpes simplex virus-1. Our analyses reveal that although the VARV genome is likely to evolve at a rate of approximately 1 × 10−5 substitutions/site/year and hence approaching that of many RNA viruses, the evolutionary rates of many other dsDNA viruses remain problematic to estimate. Synthetic data sets were constructed to inform our interpretation of the substitution rates estimated for these dsDNA viruses and the analysis of these demonstrated that given a sequence data set of appropriate length and sampling depth, it is possible to use time-structured analyses to estimate the substitution rates of many dsDNA viruses independently from the assumption of host-virus codivergence. Finally, the discovery that some dsDNA viruses may evolve at rates approaching those of RNA viruses has important implications for our understanding of the long-term evolutionary history and emergence potential of this major group of viruses. PMID:20363828

  17. Blow-out of nonpremixed turbulent jet flames at sub-atmospheric pressures

    KAUST Repository

    Wang, Qiang

    2016-12-09

    Blow-out limits of nonpremixed turbulent jet flames in quiescent air at sub-atmospheric pressures (50–100 kPa) were studied experimentally using propane fuel with nozzle diameters ranging 0.8–4 mm. Results showed that the fuel jet velocity at blow-out limit increased with increasing ambient pressure and nozzle diameter. A Damköhler (Da) number based model was adopted, defined as the ratio of characteristic mixing time and characteristic reaction time, to include the effect of pressure considering the variations in laminar burning velocity and thermal diffusivity with pressure. The critical lift-off height at blow-out, representing a characteristic length scale for mixing, had a linear relationship with the theoretically predicted stoichiometric location along the jet axis, which had a weak dependence on ambient pressure. The characteristic mixing time (critical lift-off height divided by jet velocity) adjusted to the characteristic reaction time such that the critical Damköhler at blow-out conditions maintained a constant value when varying the ambient pressure.
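
    The Damköhler criterion described above can be written down in a few lines. The sketch below is only a schematic of Da = (mixing time)/(reaction time), with tau_mix = lift-off height / jet velocity and tau_chem ~ thermal diffusivity / (laminar burning velocity)^2; all input values are placeholders, not the paper's data.

        def damkohler(h_liftoff_m, u_jet_m_s, alpha_m2_s, s_l_m_s):
            """Schematic Damkohler number: characteristic mixing time over reaction time."""
            tau_mix = h_liftoff_m / u_jet_m_s        # mixing time from critical lift-off height
            tau_chem = alpha_m2_s / s_l_m_s ** 2     # reaction time from flame properties
            return tau_mix / tau_chem

        # Illustrative propane-like values at two different ambient conditions.
        print(damkohler(h_liftoff_m=0.15, u_jet_m_s=60.0, alpha_m2_s=2.2e-5, s_l_m_s=0.40))
        print(damkohler(h_liftoff_m=0.12, u_jet_m_s=45.0, alpha_m2_s=3.0e-5, s_l_m_s=0.35))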

  18. Spontaneous Mutation Rate of Measles Virus: Direct Estimation Based on Mutations Conferring Monoclonal Antibody Resistance

    Science.gov (United States)

    Schrag, Stephanie J.; Rota, Paul A.; Bellini, William J.

    1999-01-01

    High mutation rates typical of RNA viruses often generate a unique viral population structure consisting of a large number of genetic microvariants. In the case of viral pathogens, this can result in rapid evolution of antiviral resistance or vaccine-escape mutants. We determined a direct estimate of the mutation rate of measles virus, the next likely target for global elimination following poliovirus. In a laboratory tissue culture system, we used the fluctuation test method of estimating mutation rate, which involves screening a large number of independent populations initiated by a small number of viruses each for the presence or absence of a particular single point mutation. The mutation we focused on, which can be screened for phenotypically, confers resistance to a monoclonal antibody (MAb 80-III-B2). The entire H gene of a subset of mutants was sequenced to verify that the resistance phenotype was associated with single point mutations. The epitope conferring MAb resistance was further characterized by Western blot analysis. Based on this approach, measles virus was estimated to have a mutation rate of 9 × 10−5 per base per replication and a genomic mutation rate of 1.43 per replication. The mutation rates we estimated for measles virus are comparable to recent in vitro estimates for both poliovirus and vesicular stomatitis virus. In the field, however, measles virus shows marked genetic stability. We briefly discuss the evolutionary implications of these results. PMID:9847306
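
    For readers unfamiliar with fluctuation analysis, the null-class (P0) estimator below shows the general form of such a calculation: if a fraction p0 of independent cultures contains no resistant mutants, the expected number of mutational events per culture is m = -ln(p0). This is a generic sketch with invented numbers, not the measles study's data or its exact estimator.

        import math

        def p0_mutation_rate(n_cultures, n_without_mutants, replications_per_culture):
            """Null-class estimator: mutation rate per target per replication event."""
            p0 = n_without_mutants / n_cultures
            m = -math.log(p0)                       # expected mutational events per culture
            return m / replications_per_culture

        # Hypothetical example: 37 of 100 cultures show no MAb-resistant mutants,
        # with roughly 1e4 replication events per culture.
        print(p0_mutation_rate(n_cultures=100, n_without_mutants=37, replications_per_culture=1.0e4))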

  19. New estimates of silicate weathering rates and their uncertainties in global rivers

    Science.gov (United States)

    Moon, Seulgi; Chamberlain, C. P.; Hilley, G. E.

    2014-06-01

    This study estimated the catchment- and global-scale weathering rates of silicate rocks from global rivers using global compilation datasets from the GEMS/Water and HYBAM. These datasets include both time-series of chemical concentrations of major elements and synchronous discharge. Using these datasets, we first examined the sources of uncertainties in catchment and global silicate weathering rates. Then, we proposed future sampling strategies and geochemical analyses to estimate accurate silicate weathering rates in global rivers and to reduce uncertainties in their estimates. For catchment silicate weathering rates, we considered uncertainties due to sampling frequency and variability in river discharge, concentration, and attribution of weathering to different chemical sources. Our results showed that uncertainties in catchment-scale silicate weathering rates were due mostly to the variations in discharge and cation fractions from silicate substrates. To calculate unbiased silicate weathering rates accounting for the variations from discharge and concentrations, we suggest that at least 10 and preferably ∼40 temporal chemical data points with synchronous discharge from each river are necessary. For the global silicate weathering rate, we examined uncertainties from infrequent sampling within an individual river, the extrapolation from limited rivers to a global flux, and the inverse model selections for source differentiation. For this weathering rate, we found that the main uncertainty came from the extrapolation to the global flux and the model configurations of source differentiation methods. This suggests that to reduce the uncertainties in the global silicate weathering rates, coverage of synchronous datasets of river chemistry and discharge to rivers from tectonically active regions and volcanic provinces must be extended, and catchment-specific silicate end-members for those rivers must be characterized. With current available synchronous datasets, we

  20. Estimation of Respiratory Rates Using the Built-in Microphone of a Smartphone or Headset.

    Science.gov (United States)

    Nam, Yunyoung; Reyes, Bersain A; Chon, Ki H

    2016-11-01

    This paper proposes accurate respiratory rate estimation using nasal breath sound recordings from a smartphone. Specifically, the proposed method detects nasal airflow using a built-in smartphone microphone or a headset microphone placed underneath the nose. In addition, we also examined if tracheal breath sounds recorded by the built-in microphone of a smartphone placed on the paralaryngeal space can also be used to estimate different respiratory rates ranging from as low as 6 breaths/min to as high as 90 breaths/min. The true breathing rates were measured using inductance plethysmography bands placed around the chest and the abdomen of the subject. Inspiration and expiration were detected by averaging the power of nasal breath sounds. We investigated the suitability of using the smartphone-acquired breath sounds for respiratory rate estimation using two different spectral analyses of the sound envelope signals: The Welch periodogram and the autoregressive spectrum. To evaluate the performance of the proposed methods, data were collected from ten healthy subjects. For the breathing range studied (6-90 breaths/min), experimental results showed that our approach achieves an excellent performance accuracy for the nasal sound as the median errors were less than 1% for all breathing ranges. The tracheal sound, however, resulted in poor estimates of the respiratory rates using either spectral method. For both nasal and tracheal sounds, significant estimation outliers resulted for high breathing rates when subjects had nasal congestion, which often resulted in the doubling of the respiratory rates. Finally, we show that respiratory rates from the nasal sound can be accurately estimated even if a smartphone's microphone is as far as 30 cm away from the nose.
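
    A minimal sketch of the kind of spectral approach described above (envelope of the breath sound, then a Welch periodogram, then the dominant low-frequency peak) is given below. The window lengths, sampling rate, and synthetic test signal are assumptions for illustration, not the paper's processing chain.

        import numpy as np
        from scipy.signal import welch

        def respiratory_rate_welch(sound, fs, env_win_s=0.1, f_max_hz=1.5):
            """Estimate breaths/min from the Welch periodogram of the sound-power envelope."""
            win = int(env_win_s * fs)
            envelope = np.convolve(sound ** 2, np.ones(win) / win, mode="same")
            envelope = envelope - envelope.mean()
            f, pxx = welch(envelope, fs=fs, nperseg=min(len(envelope), 30 * fs))
            band = (f > 0.05) & (f < f_max_hz)            # roughly 3-90 breaths/min
            return 60.0 * f[band][np.argmax(pxx[band])]

        # Synthetic check: noise amplitude-modulated at 0.3 Hz (18 breaths/min).
        fs = 2000
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(0)
        sound = (1 + 0.8 * np.sin(2 * np.pi * 0.3 * t)) * rng.normal(size=t.size)
        print(respiratory_rate_welch(sound, fs))          # close to 18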

  1. Estimation of Circadian Body Temperature Rhythm Based on Heart Rate in Healthy, Ambulatory Subjects.

    Science.gov (United States)

    Sim, Soo Young; Joo, Kwang Min; Kim, Han Byul; Jang, Seungjin; Kim, Beomoh; Hong, Seungbum; Kim, Sungwan; Park, Kwang Suk

    2017-03-01

    Core body temperature is a reliable marker for circadian rhythm. As characteristics of the circadian body temperature rhythm change during diverse health problems, such as sleep disorder and depression, body temperature monitoring is often used in clinical diagnosis and treatment. However, the use of current thermometers in circadian rhythm monitoring is impractical in daily life. As heart rate is a physiological signal relevant to thermoregulation, we investigated the feasibility of heart rate monitoring in estimating circadian body temperature rhythm. Various heart rate parameters and core body temperature were simultaneously acquired in 21 healthy, ambulatory subjects during their routine life. The performance of regression analysis and the extended Kalman filter on daily body temperature and circadian indicator (mesor, amplitude, and acrophase) estimation was evaluated. For daily body temperature estimation, mean R-R interval (RRI), mean heart rate (MHR), or normalized MHR provided a mean root mean square error of approximately 0.40 °C in both techniques. For mesor estimation, regression analysis showed better performance than the extended Kalman filter. However, the extended Kalman filter, combined with RRI or MHR, provided better accuracy in terms of amplitude and acrophase estimation. We suggest that this noninvasive and convenient method for estimating the circadian body temperature rhythm could reduce discomfort during body temperature monitoring in daily life. This, in turn, could facilitate more clinical studies based on circadian body temperature rhythm.
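
    For context, the three circadian indicators named above (mesor, amplitude, acrophase) can be obtained from a simple least-squares cosinor fit, sketched below with synthetic data; this is a generic illustration, not the regression/extended-Kalman-filter pipeline of the study.

        import numpy as np

        def cosinor_fit(t_hours, y, period_h=24.0):
            """Least-squares cosinor fit returning (mesor, amplitude, acrophase in hours)."""
            w = 2 * np.pi / period_h
            X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
            mesor, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
            amplitude = np.hypot(a, b)
            acrophase_h = (np.arctan2(b, a) / w) % period_h   # time of the fitted peak
            return mesor, amplitude, acrophase_h

        # Synthetic temperature-like series peaking around hour 17.
        t = np.arange(0, 72, 0.5)
        rng = np.random.default_rng(1)
        temp = 36.8 + 0.35 * np.cos(2 * np.pi * (t - 17) / 24) + rng.normal(0, 0.05, t.size)
        print(cosinor_fit(t, temp))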

  2. Respiratory rate estimation from the built-in cameras of smartphones and tablets.

    Science.gov (United States)

    Nam, Yunyoung; Lee, Jinseok; Chon, Ki H

    2014-04-01

    This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half VGA pixel region was found to be the best choice only for iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide better or equally good signal quality than the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min but their accuracy degraded concomitantly with increased respiratory rates

  3. Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation.

    Science.gov (United States)

    Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed; Pihl, Michael Johannes; Hansen, Kristoffer Lindskov; Stuart, Matthias Bo; Thomsen, Carsten; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-03-01

    Current clinical ultrasound (US) systems are limited to show blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented. The aim of this paper is to estimate precise flow rates and peak velocities derived from 3-D vector flow estimates. The emission sequence provides 3-D vector flow estimates at up to 1.145 frames/s in a plane, and was used to estimate 3-D vector flow in a cross-sectional image plane. The method is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom ( ∅=8 mm) connected to a flow pump capable of generating time varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow-rig compared with the expected 79.8 L/min, and to 2.68 ± 0.04 mL/stroke in the pulsating environment compared with the expected 2.57 ± 0.08 mL/stroke. Flow rates estimated in the common carotid artery of a healthy volunteer are compared with magnetic resonance imaging (MRI) measured flow rates using a 1-D through-plane velocity sequence. Mean flow rates were 333 ± 31 mL/min for the presented method and 346 ± 2 mL/min for the MRI measurements.
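
    The final step of turning a cross-sectional velocity estimate into a volume flow rate is simple to sketch: integrate the out-of-plane velocity over the lumen, Q = sum(v_z * dA). The code below does this for a synthetic parabolic profile in an 8 mm vessel; it is not the transverse-oscillation processing or the SARUS data described above.

        import numpy as np

        R = 4.0e-3                                   # vessel radius [m]
        v_peak = 0.5                                 # peak velocity [m/s] (hypothetical)
        dx = 0.1e-3                                  # grid spacing [m]
        x = np.arange(-R, R + dx, dx)
        X, Y = np.meshgrid(x, x)
        r2 = X ** 2 + Y ** 2
        vz = np.where(r2 <= R ** 2, v_peak * (1 - r2 / R ** 2), 0.0)   # parabolic profile

        q = np.sum(vz) * dx * dx                     # numerical surface integral [m^3/s]
        print(q * 6.0e7, "mL/min (numerical)")       # 1 m^3/s = 6e7 mL/min
        print(0.5 * v_peak * np.pi * R ** 2 * 6.0e7, "mL/min (analytical, Q = v_peak*pi*R^2/2)")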

  4. Variance Estimation of Change in Poverty Rates: an Application to the Turkish EU-SILC Survey

    Directory of Open Access Journals (Sweden)

    Oguz Alper Melike

    2015-06-01

    Full Text Available Interpreting changes between point estimates at different waves may be misleading if we do not take the sampling variation into account. It is therefore necessary to estimate the standard error of these changes in order to judge whether or not the observed changes are statistically significant. This involves the estimation of temporal correlations between cross-sectional estimates, because correlations play an important role in estimating the variance of a change in the cross-sectional estimates. Standard estimators for correlations cannot be used because of the rotation used in most panel surveys, such as the European Union Statistics on Income and Living Conditions (EU-SILC) surveys. Furthermore, as poverty indicators are complex functions of the data, they require special treatment when estimating their variance. For example, poverty rates depend on poverty thresholds which are estimated from medians. We propose using a multivariate linear regression approach to estimate correlations by taking into account the variability of the poverty threshold. We apply the approach proposed to the Turkish EU-SILC survey data.

  5. Detecting Identity by Descent and Estimating Genotype Error Rates in Sequence Data

    Science.gov (United States)

    Browning, Brian L.; Browning, Sharon R.

    2013-01-01

    Existing methods for identity by descent (IBD) segment detection were designed for SNP array data, not sequence data. Sequence data have a much higher density of genetic variants and a different allele frequency distribution, and can have higher genotype error rates. Consequently, best practices for IBD detection in SNP array data do not necessarily carry over to sequence data. We present a method, IBDseq, for detecting IBD segments in sequence data and a method, SEQERR, for estimating genotype error rates at low-frequency variants by using detected IBD. The IBDseq method estimates probabilities of genotypes observed with error for each pair of individuals under IBD and non-IBD models. The ratio of estimated probabilities under the two models gives a LOD score for IBD. We evaluate several IBD detection methods that are fast enough for application to sequence data (IBDseq, Beagle Refined IBD, PLINK, and GERMLINE) under multiple parameter settings, and we show that IBDseq achieves high power and accuracy for IBD detection in sequence data. The SEQERR method estimates genotype error rates by comparing observed and expected rates of pairs of homozygote and heterozygote genotypes at low-frequency variants in IBD segments. We demonstrate the accuracy of SEQERR in simulated data, and we apply the method to estimate genotype error rates in sequence data from the UK10K and 1000 Genomes projects. PMID:24207118

  6. A Bayesian framework to estimate diversification rates and their variation through time and space

    Directory of Open Access Journals (Sweden)

    Silvestro Daniele

    2011-10-01

    Full Text Available Abstract Background Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling.

  7. TR-BREATH: Time-Reversal Breathing Rate Estimation and Detection.

    Science.gov (United States)

    Chen, Chen; Han, Yi; Chen, Yan; Lai, Hung-Quoc; Zhang, Feng; Wang, Beibei; Liu, K J Ray

    2017-04-28

    In this paper, we introduce TR-BREATH, a time-reversal (TR) based contact-free breathing monitoring system. It is capable of breathing detection and multi-person breathing rate estimation within a short period of time using off-the-shelf WiFi devices. The proposed system exploits the channel state information (CSI) to capture the miniature variations in the environment caused by breathing. To magnify the CSI variations, TR-BREATH projects CSIs into the TR resonating strength (TRRS) feature space and analyzes the TRRS by the Root-MUSIC and affinity propagation algorithms. Extensive indoor experiments demonstrate a perfect detection rate of breathing. With only 10 seconds of measurement, a mean accuracy of 99% can be obtained for single-person breathing rate estimation under the non-line-of-sight (NLOS) scenario. Furthermore, it achieves a mean accuracy of 98.65% in breathing rate estimation for a dozen people under the line-of-sight (LOS) scenario and a mean accuracy of 98.07% in breathing rate estimation of 9 people under the NLOS scenario, both with 63 seconds of measurement. Moreover, TR-BREATH can estimate the number of people with an error around 1. We also demonstrate that TR-BREATH is robust against packet loss and motions. With the prevalence of WiFi, TR-BREATH can be applied for in-home and real-time breathing monitoring.

  8. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiaole, E-mail: zhangxiaole10@outlook.com [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany); Institute of Public Safety Research, Department of Engineering Physics, Tsinghua University, Beijing, 100084 (China); Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany)

    2017-03-05

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of utmost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurement. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. An accurate a priori ratio accelerates the analysis process, which then obtains satisfactory results with only a limited number of measurements; otherwise, more measurements are needed to generate reasonable estimations. The suppression of negative estimation effectively improves the performance, especially for the situation with poor a priori information, where it is more prone to the generation of negative values.

  9. Estimation of age-specific rates of reactivation and immune boosting of the varicella zoster virus

    Directory of Open Access Journals (Sweden)

    Isabella Marinelli

    2017-06-01

    Full Text Available Studies into the impact of vaccination against the varicella zoster virus (VZV) have increasingly focused on herpes zoster (HZ), which is believed to be increasing in vaccinated populations with decreasing infection pressure. This idea can be traced back to Hope-Simpson's hypothesis, in which a person's immune status determines the likelihood that he/she will develop HZ. Immunity decreases over time, and can be boosted by contact with a person experiencing varicella (exogenous boosting) or by a reactivation attempt of the virus (endogenous boosting). Here we use transmission models to estimate age-specific rates of reactivation and immune boosting, exogenous as well as endogenous, using zoster incidence data from the Netherlands (2002–2011, n = 7026). The boosting and reactivation rates are estimated with splines, enabling these quantities to be optimally informed by the data. The analyses show that models with high levels of exogenous boosting and estimated or zero endogenous boosting, constant rate of loss of immunity, and reactivation rate increasing with age (to more than 5% per year in the elderly) give the best fit to the data. Estimates of the rates of immune boosting and reactivation are strongly correlated. This has important implications as these parameters determine the fraction of the population with waned immunity. We conclude that independent evidence on rates of immune boosting and reactivation in persons with waned immunity is needed to robustly predict the impact of varicella vaccination on the incidence of HZ.

  10. Estimating the Backup Reaction Wheel Orientation Using Reaction Wheel Spin Rates Flight Telemetry from a Spacecraft

    Science.gov (United States)

    Rizvi, Farheen

    2013-01-01

    A report describes a model that estimates the orientation of the backup reaction wheel using the reaction wheel spin rates telemetry from a spacecraft. Attitude control via the reaction wheel assembly (RWA) onboard a spacecraft uses three reaction wheels (one wheel per axis) and a backup to accommodate any wheel degradation throughout the course of the mission. The spacecraft dynamics prediction depends upon the correct knowledge of the reaction wheel orientations. Thus, it is vital to determine the actual orientation of the reaction wheels such that the correct spacecraft dynamics can be predicted. The conservation of angular momentum is used to estimate the orientation of the backup reaction wheel from the prime and backup reaction wheel spin rates data. The method is applied in estimating the orientation of the backup wheel onboard the Cassini spacecraft. The flight telemetry from the March 2011 prime and backup RWA swap activity on Cassini is used to obtain the best estimate for the backup reaction wheel orientation.
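
    A hedged sketch of the angular-momentum bookkeeping summarized above: if the total reaction-wheel momentum is conserved across a prime/backup swap, the momentum change of the three prime wheels (whose spin axes are known) must be balanced by the backup wheel, which yields an estimate of the backup wheel's spin-axis direction. The wheel inertia, axes, and spin-rate changes below are invented for illustration and are not Cassini telemetry.

        import numpy as np

        I_w = 0.16                                     # wheel inertia [kg m^2] (hypothetical)
        prime_axes = np.array([[1.0, 0.0, 0.0],        # unit spin axes of the prime wheels
                               [0.0, 1.0, 0.0],        # (body frame, hypothetical)
                               [0.0, 0.0, 1.0]])
        d_omega_prime = np.array([-30.0, 12.0, 55.0])  # prime spin-rate changes over the swap [rad/s]
        d_omega_backup = 64.0                          # backup spin-rate change [rad/s]

        dh_prime = I_w * prime_axes.T @ d_omega_prime  # momentum change of the prime wheels
        axis_backup = -dh_prime / (I_w * d_omega_backup)
        axis_backup /= np.linalg.norm(axis_backup)     # normalize to a unit-vector estimate
        print(axis_backup)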

  11. Estimating blue whale skin isotopic incorporation rates and baleen growth rates: Implications for assessing diet and movement patterns in mysticetes.

    Directory of Open Access Journals (Sweden)

    Geraldine Busquets-Vass

    Full Text Available Stable isotope analysis in mysticete skin and baleen plates has been repeatedly used to assess diet and movement patterns. Accurate interpretation of isotope data depends on understanding isotopic incorporation rates for metabolically active tissues and growth rates for metabolically inert tissues. The aim of this research was to estimate isotopic incorporation rates in blue whale skin and baleen growth rates by using natural gradients in baseline isotope values between oceanic regions. Nitrogen (δ15N and carbon (δ13C isotope values of blue whale skin and potential prey were analyzed from three foraging zones (Gulf of California, California Current System, and Costa Rica Dome in the northeast Pacific from 1996-2015. We also measured δ15N and δ13C values along the lengths of baleen plates collected from six blue whales stranded in the 1980s and 2000s. Skin was separated into three strata: basale, externum, and sloughed skin. A mean (±SD skin isotopic incorporation rate of 163±91 days was estimated by fitting a generalized additive model of the seasonal trend in δ15N values of skin strata collected in the Gulf of California and the California Current System. A mean (±SD baleen growth rate of 15.5±2.2 cm y-1 was estimated by using seasonal oscillations in δ15N values from three whales. These oscillations also showed that individual whales have a high fidelity to distinct foraging zones in the northeast Pacific across years. The absence of oscillations in δ15N values of baleen sub-samples from three male whales suggests these individuals remained within a specific zone for several years prior to death. δ13C values of both whale tissues (skin and baleen and potential prey were not distinct among foraging zones. Our results highlight the importance of considering tissue isotopic incorporation and growth rates when studying migratory mysticetes and provide new insights into the individual movement strategies of blue whales.

  12. Estimating blue whale skin isotopic incorporation rates and baleen growth rates: Implications for assessing diet and movement patterns in mysticetes

    Science.gov (United States)

    Busquets-Vass, Geraldine; Newsome, Seth D.; Calambokidis, John; Serra-Valente, Gabriela; Jacobsen, Jeff K.; Aguíñiga-García, Sergio; Gendron, Diane

    2017-01-01

    Stable isotope analysis in mysticete skin and baleen plates has been repeatedly used to assess diet and movement patterns. Accurate interpretation of isotope data depends on understanding isotopic incorporation rates for metabolically active tissues and growth rates for metabolically inert tissues. The aim of this research was to estimate isotopic incorporation rates in blue whale skin and baleen growth rates by using natural gradients in baseline isotope values between oceanic regions. Nitrogen (δ15N) and carbon (δ13C) isotope values of blue whale skin and potential prey were analyzed from three foraging zones (Gulf of California, California Current System, and Costa Rica Dome) in the northeast Pacific from 1996–2015. We also measured δ15N and δ13C values along the lengths of baleen plates collected from six blue whales stranded in the 1980s and 2000s. Skin was separated into three strata: basale, externum, and sloughed skin. A mean (±SD) skin isotopic incorporation rate of 163±91 days was estimated by fitting a generalized additive model of the seasonal trend in δ15N values of skin strata collected in the Gulf of California and the California Current System. A mean (±SD) baleen growth rate of 15.5±2.2 cm y-1 was estimated by using seasonal oscillations in δ15N values from three whales. These oscillations also showed that individual whales have a high fidelity to distinct foraging zones in the northeast Pacific across years. The absence of oscillations in δ15N values of baleen sub-samples from three male whales suggests these individuals remained within a specific zone for several years prior to death. δ13C values of both whale tissues (skin and baleen) and potential prey were not distinct among foraging zones. Our results highlight the importance of considering tissue isotopic incorporation and growth rates when studying migratory mysticetes and provide new insights into the individual movement strategies of blue whales. PMID:28562625

  13. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.

  14. Fuzzy entropy based motion artifact detection and pulse rate estimation for fingertip photoplethysmography.

    Science.gov (United States)

    Paradkar, Neeraj; Chowdhury, Shubhajit Roy

    2014-01-01

    The paper presents a fingertip photoplethysmography (PPG) based technique to estimate the pulse rate of the subject. The PPG signal obtained from a pulse oximeter is used for the analysis. The input samples are corrupted with motion artifacts due to minor motion of the subjects. An entropy measure of the input samples is used to detect the motion artifacts and estimate the pulse rate. A three-step methodology is adopted to identify and classify signal peaks as true systolic peaks or artifacts. The CapnoBase database and the CSL Benchmark database are used to analyze the technique, and pulse rate estimation was performed with positive predictive value and sensitivity figures of 99.84% and 99.32%, respectively, for CapnoBase and 98.83% and 98.84%, respectively, for the CSL database.

  15. Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models

    Science.gov (United States)

    Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.

    2008-01-01

    Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ–Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
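
    As a compact reminder of the quantity at stake, the sketch below computes λ as the dominant eigenvalue of a small stage-structured projection matrix and then illustrates how re-estimating the vital rates from small samples inflates the spread of λ. The 3-stage matrix and binomial resampling scheme are invented for illustration and are not the study's data or simulation design.

        import numpy as np

        # Illustrative 3-stage projection matrix (seedling, juvenile, adult).
        A = np.array([
            [0.0, 0.0, 4.0],    # fecundity of adults
            [0.3, 0.5, 0.0],    # seedling survival/growth, juvenile stasis
            [0.0, 0.3, 0.9],    # juvenile-to-adult growth, adult survival
        ])
        lam = np.max(np.linalg.eigvals(A).real)        # dominant eigenvalue = lambda
        print(f"lambda = {lam:.3f}")

        # Re-estimate the transition probabilities from n individuals per stage
        # (independent binomial draws, purely for illustration) and recompute lambda.
        rng = np.random.default_rng(2)
        for n in (10, 50, 500):
            lams = []
            for _ in range(1000):
                B = A.copy()
                for (i, j) in [(1, 0), (1, 1), (2, 1), (2, 2)]:
                    B[i, j] = rng.binomial(n, A[i, j]) / n
                lams.append(np.max(np.linalg.eigvals(B).real))
            print(f"n = {n:4d}: mean lambda = {np.mean(lams):.3f}, SD = {np.std(lams):.3f}")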

  16. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high

  17. Order-of-Magnitude Estimate of Fast Neutron Recoil Rates in Proposed Neutrino Detector at SNS

    Energy Technology Data Exchange (ETDEWEB)

    Iverson, Erik B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2006-02-01

    Yuri Efremenko (UT-K) and Kate Scholberg (Duke) indicated, during discussions on 12 January 2006 with the SNS Neutronics Team, interest in a new type of neutrino detector to be placed within the proposed neutrino bunker at SNS, near beam-line 18, against the RTBT. The successful operation of this detector and its associated experiments would require fast-neutron recoil rates of approximately one event per day of operation or less. To this end, the author has attempted the following order-of-magnitude estimate of this recoil rate in order to judge whether or not a full calculation effort is needed or justified. For the purposes of this estimate, the author considers a one-dimensional slab geometry, in which fast and high-energy neutrons making up the general background in the target building are incident upon one side of an iron slab. This iron slab represents the neutrino bunker walls. If we assume that a significant fraction of the dose rate throughout the target building is due to fast or high-energy neutrons, we can estimate the flux of such neutrons based upon existing shielding calculations performed for radiation protection purposes. In general, the dose rates within the target building are controlled to be less than 0.25 mrem per hour. A variety of calculations have indicated that these dose rates have significant fast and high-energy neutron components. Thus we can estimate the fast neutron flux incident on the neutrino bunker, and thereby the fast neutron flux inside that bunker. Finally, we can estimate the neutron recoil rate within a nominal detector volume. Such an estimate is outlined in Table 1.

  18. Need for Estimating the Glomerular Filtration Rate to Assess Renal Function

    Directory of Open Access Journals (Sweden)

    José Antonio Chipi Cabrera

    2013-11-01

    Full Text Available Background: plasma creatinine alone is not useful for assessing renal function; patients with normal creatinine values can experience a significant decline in the glomerular filtration rate, hindering early detection of renal function impairment. Objective: to assess renal function by determining the plasma creatinine compared to the glomerular filtration rate estimated through the Cockcroft-Gault, MDRD-4 and CKD-Epi formulas. Methods: the database of a population-based epidemiological study conducted in the Isle of Youth since November 2004 was used. It involved 897 patients, 342 women and 555 men. Plasma creatinine was measured and the glomerular filtration rate was estimated by means of the 3 formulas. Renal function was considered normal when serum creatinine values were <123 µmol/l for women and <132 µmol/l for men and glomerular filtration rate > 60 ml/min. Results: plasma creatinine was stable in the four age groups, with a mean of 100.68 ± 38.01 µmol/l, while the glomerular filtration rate decreased with increasing age in the three formulas. The correlation coefficient between plasma creatinine values and the glomerular filtration rate for each formula indicated a linear relationship, with r = 0.639 for the CG formula (p = 0.000), 0.672 for MDRD-4 (p = 0.000) and 0.939 for the CKD-Epi formula (p = 0.000). Conclusions: the utility of the methods for estimating the glomerular filtration rate was demonstrated, leading to the detection of renal function impairment before the serum creatinine level increases.
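
    For orientation, two of the estimating equations named above are sketched below with their commonly published coefficients (treat these as assumptions to be verified against a clinical reference; the piecewise CKD-EPI equation is omitted). Creatinine is taken in µmol/l, as in the study, and converted to mg/dl using 1 mg/dl ≈ 88.4 µmol/l.

        def cockcroft_gault(scr_umol_l, age_y, weight_kg, female):
            """Cockcroft-Gault creatinine clearance in mL/min."""
            scr_mg_dl = scr_umol_l / 88.4
            crcl = (140 - age_y) * weight_kg / (72.0 * scr_mg_dl)
            return crcl * 0.85 if female else crcl

        def mdrd4(scr_umol_l, age_y, female, black=False):
            """Abbreviated (4-variable) MDRD eGFR in mL/min/1.73 m^2 (IDMS-traceable coefficients)."""
            scr_mg_dl = scr_umol_l / 88.4
            egfr = 175.0 * scr_mg_dl ** -1.154 * age_y ** -0.203
            if female:
                egfr *= 0.742
            if black:
                egfr *= 1.212
            return egfr

        # A 70-year-old, 68 kg woman with a "normal" creatinine of 100 umol/l can
        # still fall below the 60 mL/min threshold, which is the study's point.
        print(cockcroft_gault(100, 70, 68, female=True))
        print(mdrd4(100, 70, female=True))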

  19. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analyses of medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present, as often happens in practice, understanding covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of hazard rate. Estimation is carried out by an expectation-maximization algorithm on maximizing a penalized likelihood. For inferential purposes, we apply the Louis formula to obtain point-wise confidence intervals for noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.
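
    The two-component mixture structure itself is easy to write down: the population survival function is S(t) = π + (1 − π)·S_u(t), where π is the cured fraction and S_u the survival of the susceptible subpopulation. The sketch below uses a logistic cure fraction and a Weibull S_u as stand-ins for the paper's nonparametric (spline) components; all parameter values are illustrative.

        import numpy as np

        def population_survival(t, x, beta0=-0.5, beta1=1.0, shape=1.5, scale=3.0):
            """Mixture cure model survival: cured fraction plus susceptible survival."""
            pi_cured = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))   # cure probability given covariate x
            s_u = np.exp(-(t / scale) ** shape)                      # Weibull susceptible survival
            return pi_cured + (1.0 - pi_cured) * s_u

        t = np.linspace(0, 20, 5)
        print(population_survival(t, x=0.0))   # plateaus at the cured fraction as t grows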

  20. Corrigendum to "Dune field reactivation from blowouts: Sevier Desert, UT, USA" [Aeolian Res. 11 (2013) 75-84]

    Science.gov (United States)

    Barchyn, Thomas E.; Hugenholtz, Chris H.

    2016-06-01

    This corrigendum corrects an error made in the flux calculations in 'Dune field reactivation from blowouts: Sevier Desert, UT, USA'. The corrected data differ only slightly from the original publication and do not affect the conclusions of the paper.

  1. Non-linearities in estimators of trilinear gauge couplings that use total event rate

    CERN Document Server

    Terranova, F

    2000-01-01

    The total event rate of WW and single W production represents an important source of information on trilinear gauge couplings (TGC) at LEP 2. Present LEP analyses combine this information with the study of the four-fermion final-state angular distributions by means of an extended maximum likelihood (EML) estimator. In general, it can be shown that the inclusion of total event rate induces nonlinearities in the TGC estimators. In this letter, these biases are computed semi-analytically and calculations are carried out for typical LEP analyses at 161, 172 and 183 GeV. (9 refs).

  2. Using Final Kepler Catalog Completeness and Reliability Products in Exoplanet Occurrence Rate Estimates

    Science.gov (United States)

    Bryson, Steve; Burke, Christopher; Batalha, Natalie Marie; Thompson, Susan E.; Coughlin, Jeffrey; Christiansen, Jessie; Mullally, Fergal; Shabram, Megan; Kepler Team

    2018-01-01

    Burke et al. 2015 presented an exoplanet occurrence rate estimate based on the Q1-Q16 Kepler Planet Candidate catalog. That catalog featured uniform planet candidate vetting and analytic approximations to the detection completeness (the fraction of true planets that would be detected) for each target star. We present an extension of that occurrence rate work using the final DR25 Kepler Planet Candidate catalog products, which uses higher-accuracy detection completeness data for each target star, and adds estimates of vetting completeness (the fraction of detected true planets correctly identified as planet candidates) and vetting reliability (the fraction of planet candidates that are true planets). These completeness and reliability products are based on synthetic manipulations of Kepler data, including transit injection, data scrambling, and inversion. We describe how each component is incorporated into the occurrence rate estimate, and how they impact the occurrence rate estimate both individually and in combination. We discuss the strengths and weaknesses of the completeness and reliability products and how they impact our confidence in the occurrence rate values and their uncertainties. This work is an example of how the community can use the DR25 completeness and reliability products, which are publicly available at the NASA Exoplanet Archive (http://exoplanetarchive.ipac.caltech.edu) and the Mikulski Archive for Space Telescopes (http://archive.stsci.edu/kepler).
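
    The schematic below shows one simple way the three correction terms discussed above can enter an occurrence-rate estimate (an inverse-detection-efficiency style sum); the published analyses use full Poisson-likelihood models, and every number here is invented.

        import numpy as np

        n_stars = 100_000                                        # searched target stars (hypothetical)

        # Per-candidate quantities in some period/radius bin (hypothetical values):
        detection_completeness = np.array([0.40, 0.25, 0.60])    # P(detected | planet exists)
        vetting_completeness = np.array([0.90, 0.85, 0.95])      # P(labelled PC | detected)
        reliability = np.array([0.98, 0.70, 0.99])               # P(true planet | labelled PC)

        # Each candidate contributes reliability / (detection * vetting completeness)
        # "effective" planets; dividing by the number of stars gives planets per star.
        occurrence = np.sum(reliability / (detection_completeness * vetting_completeness)) / n_stars
        print(f"planets per star in this bin: {occurrence:.2e}")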

  3. Estimation of age-specific incidence rates from cross-sectional survey data.

    Science.gov (United States)

    Roy, Jason; Stewart, Walter F

    2010-02-28

    Age-specific disease incidence rates are typically estimated from longitudinal data, where disease-free subjects are followed over time and incident cases are observed. However, longitudinal studies have substantial cost and time requirements, not to mention other challenges such as loss to follow up. Alternatively, cross-sectional data can be used to estimate age-specific incidence rates in a more timely and cost-effective manner. Such studies rely on self-report of onset age. Self-reported onset age is subject to measurement error and bias. In this paper, we use a Bayesian bivariate smoothing approach to estimate age-specific incidence rates from cross-sectional survey data. Rates are modeled as a smooth function of age and lag (difference between age and onset age), with larger values of lag effectively down weighted, as they are assumed to be less reliable. We conduct an extensive simulation study to investigate the extent to which measurement error and bias in the reported onset age affects inference using the proposed methods. We use data from a national headache survey to estimate age- and gender-specific migraine incidence rates. (c) 2010 John Wiley & Sons, Ltd.

  4. Can we estimate precipitation rate during snowfall using a scanning terrestrial LiDAR?

    Science.gov (United States)

    LeWinter, A. L.; Bair, E. H.; Davis, R. E.; Finnegan, D. C.; Gutmann, E. D.; Dozier, J.

    2012-12-01

    Accurate snowfall measurements in windy areas have proven difficult. To examine a new approach, we have installed an automatic scanning terrestrial LiDAR at Mammoth Mountain, CA. With this LiDAR, we have demonstrated effective snow depth mapping over a small study area of several hundred m2. The LiDAR also produces dense point clouds by detecting falling and blowing hydrometeors during storms. Daily counts of airborne detections from the LiDAR show excellent agreement with automated and manual snow water equivalent measurements, suggesting that LiDAR observations have the potential to directly estimate precipitation rate. Thus, we suggest LiDAR scanners offer advantages over precipitation radars, which could lead to more accurate precipitation rate estimates. For instance, uncertainties in mass-diameter and mass-fall speed relationships used in precipitation radar, combined with low reflectivity of snow in the microwave spectrum, produce errors of up to 3X in snowfall rates measured by radar. Since snow has more backscatter in the near-infrared wavelengths used by LiDAR compared to the wavelengths used by radar, and the LiDAR detects individual hydrometeors, our approach has more potential for directly estimating precipitation rate. A key uncertainty is hydrometeor mass. At our study site, we have also installed a Multi Angle Snowflake Camera (MASC) to measure size, fallspeed, and mass of individual hydrometeors. By combining simultaneous MASC and LiDAR measurements, we can estimate precipitation density and rate.

  5. A Low Cost Approach to Simultaneous Orbit, Attitude, and Rate Estimation Using an Extended Kalman Filter

    Science.gov (United States)

    Deutschmann, Julie; Harman, Rick; Bar-Itzhack, Itzhack

    1998-01-01

    An innovative approach to autonomous attitude and trajectory estimation is available using only magnetic field data and rate data. The estimation is performed simultaneously using an Extended Kalman Filter, a well-known algorithm used extensively in onboard applications. The magnetic field is measured on a satellite by a magnetometer, an inexpensive and reliable sensor flown on virtually all satellites in low earth orbit. Rate data is provided by a gyro, which can be costly. This system has been developed and successfully tested in a post-processing mode using magnetometer and gyro data from 4 satellites supported by the Flight Dynamics Division at Goddard. In order for this system to be truly low cost, an alternative source for rate data must be utilized. An independent system that estimates spacecraft rate has been successfully developed and tested using only magnetometer data or a combination of magnetometer data and sun sensor data, which is less costly than a gyro. This system also uses an Extended Kalman Filter. Merging the two systems will provide an extremely low cost, autonomous approach to attitude and trajectory estimation. In this work we provide the theoretical background of the combined system. The measurement matrix is developed by combining the measurement matrix of the orbit and attitude estimation EKF with the measurement matrix of the rate estimation EKF, which is composed of a pseudo-measurement which makes the effective measurement a function of the angular velocity. A related step is the development of the noise covariance matrix for the original measurement combined with the new pseudo-measurement. In addition, the combination of the dynamics from the two systems is presented along with preliminary test results.

  6. Estimated trichloroethene transformation rates due to naturally occurring biodegradation in a fractured-rock aquifer

    Science.gov (United States)

    Chapelle, Francis H.; Lacombe, Pierre J.; Bradley, Paul M.

    2012-01-01

    Rates of trichloroethene (TCE) mass transformed by naturally occurring biodegradation processes in a fractured rock aquifer underlying a former Naval Air Warfare Center (NAWC) site in West Trenton, New Jersey, were estimated. The methodology included (1) dividing the site into eight elements of equal size and vertically integrating observed concentrations of two daughter products of TCE biodegradation–cis-dichloroethene (cis-DCE) and chloride–using water chemistry data from a network of 88 observation wells; (2) summing the molar mass of cis-DCE, the first biodegradation product of TCE, to provide a probable underestimate of reductive biodegradation of TCE, (3) summing the molar mass of chloride, the final product of chlorinated ethene degradation, to provide a probable overestimate of overall biodegradation. Finally, lower and higher estimates of aquifer porosities and groundwater residence times were used to estimate a range of overall transformation rates. The highest TCE transformation rates estimated using this procedure for the combined overburden and bedrock aquifers was 945 kg/yr, and the lowest was 37 kg/yr. However, hydrologic considerations suggest that approximately 100 to 500 kg/yr is the probable range for overall TCE transformation rates in this system. Estimated rates of TCE transformation were much higher in shallow overburden sediments (approximately 100 to 500 kg/yr) than in the deeper bedrock aquifer (approximately 20 to 0.15 kg/yr), which reflects the higher porosity and higher contaminant mass present in the overburden. By way of comparison, pump-and-treat operations at the NAWC site are estimated to have removed between 1,073 and 1,565 kg/yr of TCE between 1996 and 2009.
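
    The mass-balance logic above reduces to a back-of-the-envelope calculation: convert an aquifer-integrated daughter-product inventory to TCE-equivalent mass, then divide by a groundwater residence time. The sketch below uses invented volumes, concentrations, and residence time, not the NAWC site data.

        MW_CDCE = 96.94     # g/mol, cis-1,2-dichloroethene
        MW_TCE = 131.39     # g/mol, trichloroethene

        aquifer_volume_m3 = 5.0e5       # contaminated zone volume (hypothetical)
        porosity = 0.25                 # effective porosity (hypothetical)
        mean_cdce_mg_l = 2.0            # vertically integrated mean cis-DCE concentration (hypothetical)

        water_volume_l = aquifer_volume_m3 * porosity * 1000.0
        cdce_mass_kg = mean_cdce_mg_l * water_volume_l * 1e-6        # mg -> kg
        tce_equiv_kg = cdce_mass_kg * MW_TCE / MW_CDCE               # mole-for-mole conversion to TCE mass

        residence_time_yr = 5.0         # assumed groundwater residence time
        print(f"TCE transformed: roughly {tce_equiv_kg / residence_time_yr:.0f} kg/yr")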

  7. A new approach to estimate ice dynamic rates using satellite observations in East Antarctica

    Science.gov (United States)

    Kallenberg, Bianca; Tregoning, Paul; Fabian Hoffmann, Janosch; Hawkins, Rhys; Purcell, Anthony; Allgeyer, Sébastien

    2017-05-01

    Mass balance changes of the Antarctic ice sheet are of significant interest due to its sensitivity to climatic changes and its contribution to changes in global sea level. While regional climate models successfully estimate mass input due to snowfall, it remains difficult to estimate the amount of mass loss due to ice dynamic processes. It has often been assumed that changes in ice dynamic rates only need to be considered when assessing long-term ice sheet mass balance; however, 2 decades of satellite altimetry observations reveal that the Antarctic ice sheet changes unexpectedly and much more dynamically than previously expected. Despite available estimates on ice dynamic rates obtained from radar altimetry, information about ice sheet changes due to changes in the ice dynamics are still limited, especially in East Antarctica. Without understanding ice dynamic rates, it is not possible to properly assess changes in ice sheet mass balance and surface elevation or to develop ice sheet models. In this study we investigate the possibility of estimating ice sheet changes due to ice dynamic rates by removing modelled rates of surface mass balance, firn compaction, and bedrock uplift from satellite altimetry and gravity observations. With similar rates of ice discharge acquired from two different satellite missions we show that it is possible to obtain an approximation of the rate of change due to ice dynamics by combining altimetry and gravity observations. Thus, surface elevation changes due to surface mass balance, firn compaction, and ice dynamic rates can be modelled and correlated with observed elevation changes from satellite altimetry.
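
    The bookkeeping described above amounts to a residual calculation: subtract the modelled surface-mass-balance, firn-compaction, and bedrock-uplift contributions from the observed surface-elevation rate. The sketch below uses placeholder rates (m/yr), not the study's satellite or model values.

        dh_observed = -0.12     # satellite-altimetry surface-elevation rate
        dh_smb = -0.03          # modelled surface-mass-balance contribution
        dh_firn = -0.05         # modelled firn compaction
        dh_bedrock = 0.004      # modelled bedrock (glacial isostatic) uplift

        dh_dynamics = dh_observed - (dh_smb + dh_firn + dh_bedrock)
        print(f"ice-dynamic elevation change: {dh_dynamics:+.3f} m/yr")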

  8. Metabolic Rate Calibrates the Molecular Clock: Reconciling Molecular and Fossil Estimates of Evolutionary Divergence

    OpenAIRE

    Gillooly, James F.; Andrew P. Allen; West, Geoffrey B.; Brown, James H.

    2004-01-01

    Observations that rates of molecular evolution vary widely within and among lineages have cast doubts upon the existence of a single molecular clock. Differences in the timing of evolutionary events estimated from genetic and fossil evidence have raised further questions about the existence of molecular clocks and their use. Here we present a model of nucleotide substitution that combines new theory on metabolic rate with the now classic neutral theory of molecular evolution. The model quanti...

  9. A new approach to estimate ice dynamic rates using satellite observations in East Antarctica

    Directory of Open Access Journals (Sweden)

    B. Kallenberg

    2017-05-01

    Full Text Available Mass balance changes of the Antarctic ice sheet are of significant interest due to its sensitivity to climatic changes and its contribution to changes in global sea level. While regional climate models successfully estimate mass input due to snowfall, it remains difficult to estimate the amount of mass loss due to ice dynamic processes. It has often been assumed that changes in ice dynamic rates only need to be considered when assessing long-term ice sheet mass balance; however, 2 decades of satellite altimetry observations reveal that the Antarctic ice sheet changes unexpectedly and much more dynamically than previously expected. Despite available estimates on ice dynamic rates obtained from radar altimetry, information about ice sheet changes due to changes in the ice dynamics are still limited, especially in East Antarctica. Without understanding ice dynamic rates, it is not possible to properly assess changes in ice sheet mass balance and surface elevation or to develop ice sheet models. In this study we investigate the possibility of estimating ice sheet changes due to ice dynamic rates by removing modelled rates of surface mass balance, firn compaction, and bedrock uplift from satellite altimetry and gravity observations. With similar rates of ice discharge acquired from two different satellite missions we show that it is possible to obtain an approximation of the rate of change due to ice dynamics by combining altimetry and gravity observations. Thus, surface elevation changes due to surface mass balance, firn compaction, and ice dynamic rates can be modelled and correlated with observed elevation changes from satellite altimetry.

  10. Improving base rate estimation of alcohol misuse in the military: a preliminary report.

    Science.gov (United States)

    Sheppard, Sean C; Forsyth, John P; Earleywine, Mitch; Hickling, Edward J; Lehrbach, Melissa P

    2013-11-01

    Stigma associated with behavioral health problems in the military poses challenges to accurate base rate estimation. Recent work has highlighted the importance of anonymous assessment methods, yet no study to date has assessed the ability of anonymous self-report measures to mitigate the impact of stigma on honest reporting. This study used the unmatched count technique (UCT), a form of randomized response technique, to gain information about the accuracy of base rate estimates of alcohol misuse derived via anonymous assessment of Operation Enduring Freedom/Operation Iraqi Freedom active duty service members. A cross-sectional, convenience sample of 184 active-duty service members, recruited via online websites for military populations, provided data on two facets of alcohol misuse (drinking more than intended and feeling the need to reduce use) via traditional self-report and the UCT. The UCT revealed significantly higher rates relative to traditional anonymous assessment for both drinking more than intended (51.9% vs. 23.4%) and feeling the need to reduce use (39.3% vs. 18.2%). These data suggest that anonymity does not completely mitigate the impact of stigma on endorsing behavioral health concerns in the military. Our results, although preliminary, suggest that published rates of alcohol misuse in the military may underestimate the true rates of these concerns. The UCT has significant potential to improve base rate estimation of sensitive behaviors in the military.
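
    The unmatched count technique reduces to a difference in means: the treatment list adds the sensitive item to the control list, respondents report only how many items apply, and the difference in mean counts estimates the sensitive item's prevalence. The counts below are simulated for illustration, not the survey data summarized above.

        import numpy as np

        rng = np.random.default_rng(3)
        control_counts = rng.binomial(4, 0.4, size=90)                       # 4 innocuous items
        treatment_counts = rng.binomial(4, 0.4, size=94) + rng.binomial(1, 0.5, size=94)

        prevalence_hat = treatment_counts.mean() - control_counts.mean()
        se = np.sqrt(treatment_counts.var(ddof=1) / treatment_counts.size
                     + control_counts.var(ddof=1) / control_counts.size)
        print(f"estimated prevalence: {prevalence_hat:.2f} (SE {se:.2f})")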

  11. Diversity dynamics in Nymphalidae butterflies: effect of phylogenetic uncertainty on diversification rate shift estimates.

    Directory of Open Access Journals (Sweden)

    Carlos Peña

    Full Text Available The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.

  12. Diversity dynamics in Nymphalidae butterflies: effect of phylogenetic uncertainty on diversification rate shift estimates.

    Science.gov (United States)

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.

  13. A new balance formula to estimate new particle formation rate: reevaluating the effect of coagulation scavenging

    Science.gov (United States)

    Cai, Runlong; Jiang, Jingkun

    2017-10-01

    A new balance formula to estimate new particle formation rate is proposed. It is derived from the aerosol general dynamic equation in the discrete form and then converted into an approximately continuous form for analyzing data from new particle formation (NPF) field campaigns. The new formula corrects the underestimation of the coagulation scavenging effect that occurred in the previously used formulae. It also clarifies the criteria for determining the upper size bound in measured aerosol size distributions for estimating new particle formation rate. An NPF field campaign was carried out from 7 March to 7 April 2016 in urban Beijing, and a diethylene glycol scanning mobility particle spectrometer equipped with a miniature cylindrical differential mobility analyzer was used to measure aerosol size distributions down to ˜ 1 nm. Eleven typical NPF events were observed during this period. Measured aerosol size distributions from 1 nm to 10 µm were used to test the new formula and the formulae widely used in the literature. The previously used formulae that perform well in a relatively clean atmosphere in which nucleation intensity is not strong were found to underestimate the comparatively high new particle formation rate in urban Beijing because of their underestimation or neglect of the coagulation scavenging effect. The coagulation sink term is the governing component of the estimated formation rate in the observed NPF events in Beijing, and coagulation among newly formed particles contributes a large fraction to the coagulation sink term. Previously reported formation rates in Beijing and in other locations with intense NPF events might be underestimated because the coagulation scavenging effect was not fully considered; e.g., estimated formation rates of 1.5 nm particles in this campaign using the new formula are 1.3-4.3 times those estimated using the formula neglecting coagulation among particles in the nucleation mode.
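
    For orientation, the balance that such campaign analyses start from expresses the formation rate at the lower edge of a size window as the observed change in number concentration plus a coagulation scavenging loss plus a growth flux out of the window; the paper's contribution is a corrected treatment of the coagulation term. The sketch below shows only the generic textbook balance, not the corrected formula derived in the paper, and all numbers are hypothetical:

    ```python
    import numpy as np

    def formation_rate(dN_dt, N_window, coag_sink, growth_rate, window_width):
        """Generic balance for the apparent formation rate J at the lower edge
        of a size window [d_k, d_max]:
            J = dN/dt + CoagSink * N + (GR / window width) * N
        dN_dt        : observed change of number concentration (cm^-3 s^-1)
        N_window     : number concentration in the window (cm^-3)
        coag_sink    : coagulation sink of particles in the window (s^-1)
        growth_rate  : particle growth rate (nm s^-1)
        window_width : width of the size window (nm)
        """
        return dN_dt + coag_sink * N_window + (growth_rate / window_width) * N_window

    # Hypothetical numbers for illustration only
    J = formation_rate(dN_dt=2.0, N_window=1.0e4, coag_sink=1.0e-3,
                       growth_rate=3.0 / 3600.0, window_width=1.5)
    print(f"apparent formation rate: {J:.1f} cm^-3 s^-1")
    ```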

  14. A computational method for estimating the PCR duplication rate in DNA and RNA-seq experiments.

    Science.gov (United States)

    Bansal, Vikas

    2017-03-14

    PCR amplification is an important step in the preparation of DNA sequencing libraries prior to high-throughput sequencing. PCR amplification introduces redundant reads in the sequence data and estimating the PCR duplication rate is important to assess the frequency of such reads. Existing computational methods do not distinguish PCR duplicates from "natural" read duplicates that represent independent DNA fragments and therefore, over-estimate the PCR duplication rate for DNA-seq and RNA-seq experiments. In this paper, we present a computational method to estimate the average PCR duplication rate of high-throughput sequence datasets that accounts for natural read duplicates by leveraging heterozygous variants in an individual genome. Analysis of simulated data and exome sequence data from the 1000 Genomes project demonstrated that our method can accurately estimate the PCR duplication rate on paired-end as well as single-end read datasets which contain a high proportion of natural read duplicates. Further, analysis of exome datasets prepared using the Nextera library preparation method indicated that 45-50% of read duplicates correspond to natural read duplicates likely due to fragmentation bias. Finally, analysis of RNA-seq datasets from individuals in the 1000 Genomes project demonstrated that 70-95% of read duplicates observed in such datasets correspond to natural duplicates sampled from genes with high expression and identified outlier samples with a 2-fold greater PCR duplication rate than other samples. The method described here is a useful tool for estimating the PCR duplication rate of high-throughput sequence datasets and for assessing the fraction of read duplicates that correspond to natural read duplicates. An implementation of the method is available at https://github.com/vibansal/PCRduplicates .

  15. Estimating Heart Rate, Energy Expenditure, and Physical Performance With a Wrist Photoplethysmographic Device During Running.

    Science.gov (United States)

    Parak, Jakub; Uuskoski, Maria; Machek, Jan; Korhonen, Ilkka

    2017-07-25

    Wearable sensors enable long-term monitoring of health and wellbeing indicators. An objective evaluation of sensors' accuracy is important, especially for their use in health care. The aim of this study was to use a wrist-worn optical heart rate (OHR) device to estimate heart rate (HR), energy expenditure (EE), and maximal oxygen intake capacity (VO2Max) during running and to evaluate the accuracy of the estimated parameters (HR, EE, and VO2Max) against golden reference methods. A total of 24 healthy volunteers, of whom 11 were female, with a mean age of 36.2 years (SD 8.2 years) participated in a submaximal self-paced outdoor running test and maximal voluntary exercise test in a sports laboratory. OHR was monitored with a PulseOn wrist-worn photoplethysmographic device and the running speed with a phone GPS sensor. A physiological model based on HR, running speed, and personal characteristics (age, gender, weight, and height) was used to estimate EE during the maximal voluntary exercise test and VO2Max during the submaximal outdoor running test. ECG-based HR and respiratory gas analysis based estimates were used as golden references. OHR was able to measure HR during running with a 1.9% mean absolute percentage error (MAPE). VO2Max estimated during the submaximal outdoor running test was closely similar to the sports laboratory estimate (MAPE 5.2%). The energy expenditure estimate (n=23) was quite accurate when HR was above the aerobic threshold (MAPE 6.7%), but MAPE increased to 16.5% during a lighter intensity of exercise. The results suggest that wrist-worn OHR may accurately estimate HR during running up to maximal HR. When combined with physiological modeling, wrist-worn OHR may be used for an estimation of EE, especially during higher intensity running, and VO2Max, even during submaximal self-paced outdoor recreational running.
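
    The accuracy figures above are mean absolute percentage errors (MAPE) against the reference devices; the metric itself is straightforward to reproduce. A minimal sketch with synthetic heart-rate values (not the study's data):

    ```python
    import numpy as np

    def mape(estimate, reference):
        """Mean absolute percentage error between an estimate and a reference signal."""
        estimate = np.asarray(estimate, dtype=float)
        reference = np.asarray(reference, dtype=float)
        return 100.0 * np.mean(np.abs(estimate - reference) / reference)

    # Synthetic example: optical HR vs. ECG HR in beats per minute
    ecg_hr = np.array([152.0, 155.0, 158.0, 161.0, 160.0])
    ohr_hr = np.array([150.0, 154.0, 161.0, 158.0, 163.0])
    print(f"MAPE: {mape(ohr_hr, ecg_hr):.1f}%")
    ```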

  16. A Cross-Wavelet Transform Aided Rule Based Approach for Early Prediction of Lean Blow-out in Swirl-Stabilized Dump Combustor

    Directory of Open Access Journals (Sweden)

    Debangshu Dey

    2015-03-01

    Full Text Available Lean or ultralean combustion is one of the popular strategies to achieve very low emission levels. However, it is extremely susceptible to lean blow-out (LBO). The present work explores a Cross-wavelet transform (XWT) aided rule based scheme for early prediction of lean blowout. XWT can be considered as an advancement of wavelet analysis which gives correlation between two waveforms in time-frequency space. In the present scheme a swirl-stabilized dump combustor is used as a laboratory-scale model of a generic gas turbine combustor with LPG as fuel. Various time series data of CH chemiluminescence signal are recorded for different flame conditions by varying equivalence ratio, flow rate and level of air-fuel premixing. Some features are extracted from the cross-wavelet spectrum of the recorded waveforms and a reference wave. The extracted features are observed to classify the flame condition into three major classes: near LBO, moderate and healthy. Moreover, a Rough Set based technique is also applied on the extracted features to generate a rule base so that it can be fed to a real time controller or expert system to take necessary control action to prevent LBO. Results show that the proposed methodology performs with an acceptable degree of accuracy.
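
    The cross-wavelet spectrum that feeds the rule base is the product of one signal's continuous wavelet coefficients with the complex conjugate of the other's. A rough illustration of that spectrum using the PyWavelets package and synthetic signals (a generic XWT computation, not the authors' feature extraction):

    ```python
    import numpy as np
    import pywt

    def cross_wavelet(x, y, scales, wavelet="cmor1.5-1.0", dt=1.0):
        """Cross-wavelet spectrum W_xy = W_x * conj(W_y) of two equally sampled signals."""
        wx, freqs = pywt.cwt(x, scales, wavelet, sampling_period=dt)
        wy, _ = pywt.cwt(y, scales, wavelet, sampling_period=dt)
        return wx * np.conj(wy), freqs

    # Synthetic chemiluminescence-like signal vs. a clean reference sine wave
    fs = 1000.0
    t = np.arange(0, 1.0, 1.0 / fs)
    signal = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)
    reference = np.sin(2 * np.pi * 120 * t)

    scales = np.arange(1, 64)
    wxy, freqs = cross_wavelet(signal, reference, scales, dt=1.0 / fs)
    power = np.abs(wxy)  # cross-wavelet power in time-frequency space
    print(power.shape, freqs[:3])
    ```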

  17. Reconstruction of orbital floor blow-out fractures with autogenous iliac crest bone: a retrospective study including maxillofacial and ophthalmology perspectives.

    Science.gov (United States)

    O'Connell, John Edward; Hartnett, Claire; Hickey-Dwyer, Marie; Kearns, Gerard J

    2015-03-01

    This is a 10-year retrospective study of patients with an isolated unilateral orbital floor fracture reconstructed with an autogenous iliac crest bone graft. The following inclusion criteria applied: isolated orbital floor fracture without involvement of the orbital rim or other craniofacial injuries, pre-/post-operative ophthalmological/orthoptic follow-up, pre-operative CT. Variables recorded were patient age and gender, aetiology of injury, time to surgery, follow-up period, surgical morbidity, diplopia pre- and post-operatively (Hess test), eyelid position, visual acuity, and the presence of en-/or exophthalmos (Hertel exophthalmometer). Twenty patients met the inclusion criteria. The mean age was 29 years. The mean follow-up period was 26 months. No patient experienced significant donor site morbidity. There were no episodes of post-operative infection or graft extrusion. Three patients had diplopia in extremes of vision post-operatively, but no interference with activities of daily living. One patient had post-operative enophthalmos. Isolated orbital blow-out fractures may be safely and predictably reconstructed using autogenous iliac crest bone. The rate of complications in the group of patients studied was low. The value of pre- and post-operative ophthalmology consultation cannot be overestimated, and should be considered the standard of care in all patients with orbitozygomatic fractures, in particular those with blow-out fractures. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  18. A real-time maximum-likelihood heart-rate estimator for wearable textile sensors.

    Science.gov (United States)

    Cheng, Mu-Huo; Chen, Li-Chung; Hung, Ying-Che; Yang, Chang Ming

    2008-01-01

    This paper presents a real-time maximum-likelihood heart-rate estimator for ECG data measured via wearable textile sensors. The ECG signals measured from wearable dry electrodes are notorious for their susceptibility to interference from respiration or the motion of the wearer, such that the signal quality may degrade dramatically. To overcome these obstacles, in the proposed heart-rate estimator we first employ the subspace approach to remove the wandering baseline, then use a simple nonlinear absolute operation to reduce the high-frequency noise contamination, and finally apply the maximum likelihood estimation technique for estimating the interval of R-R peaks. A parameter derived from the byproduct of maximum likelihood estimation is also proposed as an indicator for signal quality. To achieve the goal of real-time operation, we develop a simple adaptive algorithm from the numerical power method to realize the subspace filter and apply the fast-Fourier transform (FFT) technique for realization of the correlation technique such that the whole estimator can be implemented in an FPGA system. Experiments are performed to demonstrate the viability of the proposed system.
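
    A much simplified baseline for the R-R interval search, using a plain FFT-based autocorrelation after detrending and rectification rather than the paper's subspace filter and maximum-likelihood step, could look like the sketch below; the sampling rate and the physiological search range are assumptions:

    ```python
    import numpy as np

    def estimate_heart_rate(ecg, fs, hr_range=(40.0, 200.0)):
        """Estimate heart rate from an ECG segment by locating the dominant
        lag of the autocorrelation of the detrended, rectified signal."""
        x = np.abs(ecg - np.mean(ecg))              # crude detrend + rectification
        n = 1 << int(np.ceil(np.log2(2 * x.size)))  # zero-padded FFT length
        spec = np.fft.rfft(x, n)
        acf = np.fft.irfft(spec * np.conj(spec))[: x.size]
        lo = int(fs * 60.0 / hr_range[1])           # shortest plausible R-R lag
        hi = int(fs * 60.0 / hr_range[0])           # longest plausible R-R lag
        lag = lo + int(np.argmax(acf[lo:hi]))
        return 60.0 * fs / lag

    # Synthetic "ECG": impulses every 0.8 s (75 bpm) plus noise, sampled at 250 Hz
    fs = 250.0
    sig = np.zeros(int(10 * fs))
    sig[::int(0.8 * fs)] = 1.0
    sig += 0.05 * np.random.randn(sig.size)
    print(f"estimated HR: {estimate_heart_rate(sig, fs):.1f} bpm")
    ```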

  19. Estimating rates of local species extinction, colonization and turnover in animal communities

    Science.gov (United States)

    Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.

    1998-01-01

    Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have been used recently to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time; rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts as well as on basic questions about community dynamics.

  20. Estimation of construction and demolition waste using waste generation rates in Chennai, India.

    Science.gov (United States)

    Ram, V G; Kalidindi, Satyanarayana N

    2017-06-01

    A large amount of construction and demolition waste is being generated owing to rapid urbanisation in Indian cities. A reliable estimate of construction and demolition waste generation is essential to create awareness about this stream of solid waste among the government bodies in India. However, the required data to estimate construction and demolition waste generation in India are unavailable or not explicitly documented. This study proposed an approach to estimate construction and demolition waste generation using waste generation rates and demonstrated it by estimating construction and demolition waste generation in Chennai city. The demolition waste generation rates of primary materials were determined through regression analysis using waste generation data from 45 case studies. Materials, such as wood, electrical wires, doors, windows and reinforcement steel, were found to be salvaged and sold on the secondary market. Concrete and masonry debris were dumped in either landfills or unauthorised places. The total quantity of construction and demolition debris generated in Chennai city in 2013 was estimated to be 1.14 million tonnes. The proportion of masonry debris was found to be 76% of the total quantity of demolition debris. Construction and demolition debris forms about 36% of the total solid waste generated in Chennai city. A gross underestimation of construction and demolition waste generation in some earlier studies in India has also been shown. The methodology proposed could be utilised by government bodies, policymakers and researchers to generate reliable estimates of construction and demolition waste in other developing countries facing similar challenges of limited data availability.
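
    The estimation approach reduces to multiplying activity quantities (for example, demolished floor area) by per-material waste generation rates and summing over material streams. A toy sketch with made-up rates and areas (not the coefficients derived in the paper):

    ```python
    # Waste estimate: sum over material streams of (activity quantity x generation rate).
    # All rates (t per m^2 of demolished floor area) and areas below are hypothetical.
    demolition_rates_t_per_m2 = {
        "masonry": 0.60,
        "concrete": 0.30,
        "wood": 0.02,
        "steel": 0.01,
    }
    demolished_area_m2 = 1_500_000  # hypothetical annual demolished floor area

    waste_t = {m: rate * demolished_area_m2
               for m, rate in demolition_rates_t_per_m2.items()}
    total_t = sum(waste_t.values())

    for material, tonnes in waste_t.items():
        print(f"{material:>8}: {tonnes / 1e6:.2f} Mt ({100 * tonnes / total_t:.0f}%)")
    print(f"   total: {total_t / 1e6:.2f} Mt")
    ```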

  1. ESTIMATING SPECIATION AND EXTINCTION RATES FROM DIVERSITY DATA AND THE FOSSIL RECORD

    NARCIS (Netherlands)

    Etienne, Rampal S.; Apol, M. Emile F.

    Understanding the processes that underlie biodiversity requires insight into the evolutionary history of the taxa involved. Accurate estimation of speciation, extinction, and diversification rates is a prerequisite for gaining this insight. Here, we develop a stochastic birth-death model of

  2. Estimates of Annual Soil Loss Rates in the State of São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    Grasiela de Oliveira Rodrigues Medeiros

    Full Text Available ABSTRACT: Soil is a natural resource that has been affected by human pressures beyond its renewal capacity. For this reason, large agricultural areas that were productive have been abandoned due to soil degradation, mainly caused by the erosion process. The objective of this study was to apply the Universal Soil Loss Equation to generate more recent estimates of soil loss rates for the state of São Paulo using a database with information from medium resolution (30 m). The results showed that many areas of the state have high (critical) levels of soil degradation due to the predominance of consolidated human activities, especially in growing sugarcane and pasture use. The average estimated rate of soil loss is 30 Mg ha-1 yr-1 and 59 % of the area of the state (except for water bodies and urban areas) had estimated rates above 12 Mg ha-1 yr-1, considered as the average tolerance limit in the literature. The average rates of soil loss in areas with annual agricultural crops, semi-perennial agricultural crops (sugarcane), and permanent agricultural crops were 118, 78, and 38 Mg ha-1 yr-1 respectively. The state of São Paulo requires attention to conservation of soil resources, since most soils led to estimates beyond the tolerance limit.
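
    The Universal Soil Loss Equation itself is the product of a rainfall erosivity factor, a soil erodibility factor, a combined slope length and steepness factor, a cover-management factor, and a support-practice factor: A = R · K · LS · C · P. A one-cell sketch with illustrative factor values (not the study's database):

    ```python
    def usle_soil_loss(R, K, LS, C, P):
        """Universal Soil Loss Equation: average annual soil loss A (Mg ha^-1 yr^-1).
        R  : rainfall-runoff erosivity (MJ mm ha^-1 h^-1 yr^-1)
        K  : soil erodibility (Mg h MJ^-1 mm^-1)
        LS : combined slope length and steepness factor (dimensionless)
        C  : cover-management factor (dimensionless)
        P  : support-practice factor (dimensionless)
        """
        return R * K * LS * C * P

    # Illustrative values for a hypothetical sugarcane plot (not from the paper)
    A = usle_soil_loss(R=7000.0, K=0.025, LS=1.2, C=0.30, P=1.0)
    print(f"estimated soil loss: {A:.0f} Mg ha^-1 yr^-1")  # ~63 Mg/ha/yr
    ```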

  3. Estimation of shutdown heat generation rates in GHARR-1 due to ...

    African Journals Online (AJOL)

    Fission products decay power and residual fission power generated after shutdown of Ghana Research Reactor-1 (GHARR-1) by reactivity insertion accident were estimated by solution of the decay and residual heat equations. A Matlab program code was developed to simulate the heat generation rates by fission product ...

  4. [Quadratic formula: a new pediatric advance for glomerular filtration rate estimation].

    Science.gov (United States)

    Chehade, Hassib; Cachat, Françoise; Parvex, Paloma; Girardin, Eric

    2014-01-15

    A new formula for glomerular filtration rate estimation in pediatric population from 2 to 18 years has been developed by the University Unit of Pediatric Nephrology. This Quadratic formula, accessible online, allows pediatricians to adjust drug dosage and/or follow-up renal function more precisely and in an easy manner.

  5. National, regional, and worldwide estimates of stillbirth rates in 2015, with trends from 2000

    DEFF Research Database (Denmark)

    Blencowe, Hannah; Cousens, Simon; Jassir, Fiorella Bianchi

    2016-01-01

    BACKGROUND: Previous estimates have highlighted a large global burden of stillbirths, with an absence of reliable data from regions where most stillbirths occur. The Every Newborn Action Plan (ENAP) targets national stillbirth rates (SBRs) of 12 or fewer stillbirths per 1000 births by 2030. We es...

  6. Cross-contamination in the kitchen: estimation of transfer rates for cutting boards, hands and knives

    NARCIS (Netherlands)

    Asselt, van E.D.; Jong, de A.E.I.; Jonge, de R.; Nauta, M.J.

    2008-01-01

    Aims: To quantify cross-contamination in the home from chicken to ready-to-eat salad. Methods and Results: Based on laboratory scenarios performed by de Jong et al. (2008), transfer rates were estimated for Campylobacter jejuni and Lactobacillus casei as a tracer organism. This study showed that

  7. An expert system for estimating production rates and costs for hardwood group-selection harvests

    Science.gov (United States)

    Chris B. LeDoux; B. Gopalakrishnan; R. S. Pabba

    2003-01-01

    As forest managers shift their focus from stands to entire ecosystems alternative harvesting methods such as group selection are being used increasingly. Results of several field time and motion studies and simulation runs were incorporated into an expert system for estimating production rates and costs associated with harvests of group-selection units of various size...

  8. Relative efficiency of non-parametric error rate estimators in multi ...

    African Journals Online (AJOL)

    parametric error rate estimators in 2-, 3- and 5-group linear discriminant analysis. The simulation design took into account the number of variables (4, 6, 10, 18) together with the size sample n so that: n/p = 1.5, 2.5 and 5. Three values of the ...

  9. Sampling-based correlation estimation for distributed source coding under rate and complexity constraints.

    Science.gov (United States)

    Cheung, Ngai-Man; Wang, Huisheng; Ortega, Antonio

    2008-11-01

    In many practical distributed source coding (DSC) applications, correlation information has to be estimated at the encoder in order to determine the encoding rate. Coding efficiency depends strongly on the accuracy of this correlation estimation. While error in estimation is inevitable, the impact of estimation error on compression efficiency has not been sufficiently studied for the DSC problem. In this paper,we study correlation estimation subject to rate and complexity constraints, and its impact on coding efficiency in a DSC framework for practical distributed image and video applications. We focus on, in particular, applications where binary correlation models are exploited for Slepian-Wolf coding and sampling techniques are used to estimate the correlation, while extensions to other correlation models would also be briefly discussed. In the first part of this paper, we investigate the compression of binary data. We first propose a model to characterize the relationship between the number of samples used in estimation and the coding rate penalty, in the case of encoding of a single binary source. The model is then extended to scenarios where multiple binary sources are compressed, and based on the model we propose an algorithm to determine the number of samples allocated to different sources so that the overall rate penalty can be minimized, subject to a constraint on the total number of samples. The second part of this paper studies compression of continuous valued data. We propose a model-based estimation for the particular but important situations where binary bit-planes are extracted from a continuous-valued input source, and each bit-plane is compressed using DSC. The proposed model-based method first estimates the source and correlation noise models using continuous valued samples, and then uses the models to derive the bit-plane statistics analytically. We also extend the model-based estimation to the cases when bit-planes are extracted based on the

  10. A double-observer method to estimate detection rate during aerial waterfowl surveys

    Science.gov (United States)

    Koneff, M.D.; Royle, J. Andrew; Otto, M.C.; Wortham, J.S.; Bidwell, J.K.

    2008-01-01

    We evaluated double-observer methods for aerial surveys as a means to adjust counts of waterfowl for incomplete detection. We conducted our study in eastern Canada and the northeast United States utilizing 3 aerial-survey crews flying 3 different types of fixed-wing aircraft. We reconciled counts of front- and rear-seat observers immediately following an observation by the rear-seat observer (i.e., on-the-fly reconciliation). We evaluated 6 a priori models containing a combination of several factors thought to influence detection probability including observer, seat position, aircraft type, and group size. We analyzed data for American black ducks (Anas rubripes) and mallards (A. platyrhynchos), which are among the most abundant duck species in this region. The best-supported model for both black ducks and mallards included observer effects. Sample sizes of black ducks were sufficient to estimate observer-specific detection rates for each crew. Estimated detection rates for black ducks were 0.62 (SE = 0.10), 0.63 (SE = 0.06), and 0.74 (SE = 0.07) for pilot-observers, 0.61 (SE = 0.08), 0.62 (SE = 0.06), and 0.81 (SE = 0.07) for other front-seat observers, and 0.43 (SE = 0.05), 0.58 (SE = 0.06), and 0.73 (SE = 0.04) for rear-seat observers. For mallards, sample sizes were adequate to generate stable maximum-likelihood estimates of observer-specific detection rates for only one aerial crew. Estimated observer-specific detection rates for that crew were 0.84 (SE = 0.04) for the pilot-observer, 0.74 (SE = 0.05) for the other front-seat observer, and 0.47 (SE = 0.03) for the rear-seat observer. Estimated observer detection rates were confounded by the position of the seat occupied by an observer, because observers did not switch seats, and by land-cover because vegetation and landform varied among crew areas. Double-observer methods with on-the-fly reconciliation, although not without challenges, offer one viable option to account for detection bias in aerial waterfowl

  11. On the Recombination Rate Estimation in the Presence of Population Substructure.

    Directory of Open Access Journals (Sweden)

    Julian Hecker

    Full Text Available As recombination events are not uniformly distributed along the human genome, the estimation of fine-scale recombination maps, e.g. HapMap Project, has been one of the major research endeavors over the last couple of years. For simulation studies, these estimates provide realistic reference scenarios to design future studies and to develop novel methodology. To achieve a feasible framework for the estimation of such recombination maps, existing methodology uses sample probabilities for a two-locus model with recombination, with recent advances allowing for computationally fast implementations. In this work, we extend the existing theoretical framework for the recombination rate estimation to the presence of population substructure. We show under which assumptions the existing methodology can still be applied. We illustrate our extension of the methodology by an extensive simulation study.

  12. Using ²¹⁰Pb measurements to estimate sedimentation rates on river floodplains.

    Science.gov (United States)

    Du, P; Walling, D E

    2012-01-01

    Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10-10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by
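
    One of the simpler models referred to, the constant initial concentration (CIC) approach, estimates the sedimentation rate from the slope of ln(excess ²¹⁰Pb) against depth using the ²¹⁰Pb decay constant. A rough sketch with a synthetic profile (CIC only; the comparison in the paper also covers other models):

    ```python
    import numpy as np

    PB210_LAMBDA = np.log(2) / 22.3  # 210Pb decay constant, yr^-1 (half-life ~22.3 yr)

    def cic_sedimentation_rate(depth_cm, excess_pb210):
        """Constant-initial-concentration (CIC) model: with a steady sedimentation
        rate s, ln(excess 210Pb) falls linearly with depth and s = -lambda / slope."""
        slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
        return -PB210_LAMBDA / slope  # cm yr^-1

    # Synthetic core: 0.5 cm/yr burial, 200 Bq/kg excess activity at the surface
    depths = np.arange(0.0, 30.0, 2.0)
    activity = 200.0 * np.exp(-PB210_LAMBDA * depths / 0.5)
    print(f"estimated sedimentation rate: {cic_sedimentation_rate(depths, activity):.2f} cm/yr")
    ```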

  13. A Bayesian Hierarchical Modeling Scheme for Estimating Erosion Rates Under Current Climate Conditions

    Science.gov (United States)

    Lowman, L.; Barros, A. P.

    2014-12-01

    Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large aerial extents and periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over a 14-year period between 1998-2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5S and 20S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-seconds) digital elevation map (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations and 1-2 orders of magnitude larger than most millennial and million year timescale estimates from thermochronology and cosmogenic nuclides.
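
    The deterministic core of such a hierarchy is the stream power erosion law, commonly written E = K · A^m · S^n with drainage area A, channel slope S, erodibility K, and exponents m and n; the Bayesian layers place priors on those parameters. A bare-bones sketch of the deterministic relation only (parameter values are illustrative, not the study's posteriors):

    ```python
    import numpy as np

    def stream_power_erosion(area_m2, slope, K=1e-6, m=0.5, n=1.0):
        """Stream power erosion law E = K * A^m * S^n.
        area_m2 : upstream drainage area (m^2)
        slope   : channel gradient (m/m)
        K, m, n : erodibility coefficient and exponents (purely illustrative here)."""
        return K * np.power(area_m2, m) * np.power(slope, n)

    # Example: a sub-basin with 50 km^2 drainage area and a 5% channel slope
    E = stream_power_erosion(area_m2=50e6, slope=0.05)
    print(f"relative erosion rate: {E:.3e} (units depend on the calibration of K)")
    ```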

  14. Estimation of the prevalence and rate of acute transfusion reactions occurring in Windhoek, Namibia

    Science.gov (United States)

    Meza, Benjamin P.L.; Lohrke, Britta; Wilkinson, Robert; Pitman, John P.; Shiraishi, Ray W.; Bock, Naomi; Lowrance, David W.; Kuehnert, Matthew J.; Mataranyika, Mary; Basavaraju, Sridhar V.

    2014-01-01

    Background Acute transfusion reactions are probably common in sub-Saharan Africa, but transfusion reaction surveillance systems have not been widely established. In 2008, the Blood Transfusion Service of Namibia implemented a national acute transfusion reaction surveillance system, but substantial under-reporting was suspected. We estimated the actual prevalence and rate of acute transfusion reactions occurring in Windhoek, Namibia. Methods The percentage of transfusion events resulting in a reported acute transfusion reaction was calculated. Actual percentage and rates of acute transfusion reactions per 1,000 transfused units were estimated by reviewing patients’ records from six hospitals, which transfuse >99% of all blood in Windhoek. Patients’ records for 1,162 transfusion events occurring between 1st January – 31st December 2011 were randomly selected. Clinical and demographic information were abstracted and Centers for Disease Control and Prevention National Healthcare Safety Network criteria were applied to categorize acute transfusion reactions. Results From January 1 – December 31, 2011, there were 3,697 transfusion events (involving 10,338 blood units) in the selected hospitals. Eight (0.2%) acute transfusion reactions were reported to the surveillance system. Of the 1,162 transfusion events selected, medical records for 785 transfusion events were analysed, and 28 acute transfusion reactions were detected, of which only one had also been reported to the surveillance system. An estimated 3.4% (95% confidence interval [CI]: 2.3–4.4) of transfusion events in Windhoek resulted in an acute transfusion reaction, with an estimated rate of 11.5 (95% CI: 7.6–14.5) acute transfusion reactions per 1,000 transfused units. Conclusion The estimated actual rate of acute transfusion reactions is higher than the rate reported to the national haemovigilance system. Improved surveillance and interventions to reduce transfusion-related morbidity and mortality are needed.

  15. Do complex population histories drive higher estimates of substitution rate in phylogenetic reconstructions?

    Science.gov (United States)

    Ramakrishnan, Uma; Hadly, Elizabeth A

    2009-11-01

    Our curiosity about biodiversity compels us to reconstruct the evolutionary past of species. Molecular evolutionary theory now allows parameterization of mathematically sophisticated and detailed models of DNA evolution, which have resulted in a wealth of phylogenetic histories. But reconstructing how species and population histories have played out is critically dependent on the assumptions we make, such as the clock-like accumulation of genetic differences over time and the rate of accumulation of such differences. An important stumbling block in the reconstruction of evolutionary history has been the discordance in estimates of substitution rate between phylogenetic and pedigree-based studies. Ancient genetic data recovered directly from the past are intermediate in time scale between phylogenetics-based and pedigree-based calibrations of substitution rate. Recent analyses of such ancient genetic data suggest that substitution rates are closer to the higher, pedigree-based estimates. In this issue, Navascués & Emerson (2009) model genetic data from contemporary and ancient populations that deviate from a simple demographic history (including changes in population size and structure) using serial coalescent simulations. Furthermore, they show that when these data are used for calibration, we are likely to arrive at upwardly biased estimates of mutation rate.

  16. Estimation of sedimentation rate in the Middle and South Adriatic Sea using 137Cs.

    Science.gov (United States)

    Petrinec, Branko; Franic, Zdenko; Ilijanic, Nikolina; Miko, Slobodan; Strok, Marko; Smodis, Borut

    2012-08-01

    (137)Cs activity concentrations were studied in the sediment profiles collected at five locations in the Middle and South Adriatic. In the sediment profiles collected from the South Adriatic Pit, the deepest part of the Adriatic Sea, two (137)Cs peaks were identified. The peak in the deeper layer was attributed to the period of intensive atmospheric nuclear weapon tests (early 1960s), and the other to the Chernobyl nuclear accident (1986). Those peaks could be used to estimate sedimentation rates by relating them to the respective time periods. Grain-size analysis showed no changes in vertical distribution through the depth of the sediment profile, and these results indicate uniform sedimentation, as is expected in deeper marine environments. It was not possible to identify respective peaks on more shallow locations due to disturbance of the seabed either by trawlers (locations Palagruža and Jabuka) or by river sediment (location Albania). The highest sedimentation rates were found in Albania (∼4 mm y(-1)) and Jabuka (3.1 mm y(-1)). For Palagruža, the sedimentation rate was estimated to be 1.8 mm y(-1), similar to the South Adriatic Pit where the sedimentation rate was estimated to be 1.8±0.5 mm y(-1). Low sedimentation rates found for the Middle and South Adriatic Sea are consistent with previously reported results for the rest of the Mediterranean.
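
    Dating with the ¹³⁷Cs peaks amounts to dividing the burial depth of an identified peak by the time elapsed since the event it records (the 1963 fallout maximum or the 1986 Chernobyl accident). A trivial sketch (core depths and sampling year are hypothetical):

    ```python
    def cs137_sedimentation_rate(peak_depth_mm, peak_year, sampling_year):
        """Average sedimentation rate (mm/yr) from the burial depth of a dated 137Cs peak."""
        return peak_depth_mm / (sampling_year - peak_year)

    # Hypothetical core sampled in 2010 with peaks at 43 mm (Chernobyl) and 85 mm (1963 maximum)
    print(f"{cs137_sedimentation_rate(43.0, 1986, 2010):.1f} mm/yr since 1986")
    print(f"{cs137_sedimentation_rate(85.0, 1963, 2010):.1f} mm/yr since 1963")
    ```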

  17. Comparing facility-level methane emission rate estimates at natural gas gathering and boosting stations

    Directory of Open Access Journals (Sweden)

    Timothy L. Vaughn

    2017-11-01

    Full Text Available Coordinated dual-tracer, aircraft-based, and direct component-level measurements were made at midstream natural gas gathering and boosting stations in the Fayetteville shale (Arkansas, USA). On-site component-level measurements were combined with engineering estimates to generate comprehensive facility-level methane emission rate estimates (“study on-site estimates” (SOE)) comparable to tracer and aircraft measurements. Combustion slip (unburned fuel entrained in compressor engine exhaust), which was calculated based on 111 recent measurements of representative compressor engines, accounts for an estimated 75% of cumulative SOEs at gathering stations included in comparisons. Measured methane emissions from regenerator vents on glycol dehydrator units were substantially larger than predicted by modelling software; the contribution of dehydrator regenerator vents to the cumulative SOE would increase from 1% to 10% if based on direct measurements. Concurrent measurements at 14 normally-operating facilities show relative agreement between tracer and SOE, but indicate that tracer measurements estimate lower emissions (regression of tracer to SOE = 0.91 (95% CI = 0.83–0.99), R2 = 0.89). Tracer and SOE 95% confidence intervals overlap at 11/14 facilities. Contemporaneous measurements at six facilities suggest that aircraft measurements estimate higher emissions than SOE. Aircraft and study on-site estimate 95% confidence intervals overlap at 3/6 facilities. The average facility level emission rate (FLER) estimated by tracer measurements in this study is 17–73% higher than a prior national study by Marchese et al.

  18. Cuff-Free Blood Pressure Estimation Using Pulse Transit Time and Heart Rate.

    Science.gov (United States)

    Wang, Ruiping; Jia, Wenyan; Mao, Zhi-Hong; Sclabassi, Robert J; Sun, Mingui

    2014-10-01

    It has been reported that the pulse transit time (PTT), the interval between the peak of the R-wave in electrocardiogram (ECG) and the fingertip photoplethysmogram (PPG), is related to arterial stiffness, and can be used to estimate the systolic blood pressure (SBP) and diastolic blood pressure (DBP). This phenomenon has been used as the basis to design portable systems for continuous cuff-less blood pressure measurement, benefiting numerous people with heart conditions. However, the PTT-based blood pressure estimation may not be sufficiently accurate because the regulation of blood pressure within the human body is a complex, multivariate physiological process. Considering the negative feedback mechanism in the blood pressure control, we introduce the heart rate (HR) and the blood pressure estimate in the previous step to obtain the current estimate. We validate this method using a clinical database. Our results show that the PTT, HR and previous estimate reduce the estimation error significantly when compared to the conventional PTT estimation approach (p<0.05).
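
    In its simplest form this kind of estimator is a regression of blood pressure on PTT, HR, and the previous estimate, calibrated per subject. A generic least-squares sketch (the linear functional form and the synthetic data are assumptions for illustration, not the model or database used in the paper):

    ```python
    import numpy as np

    def fit_bp_model(ptt_ms, hr_bpm, prev_sbp, sbp_ref):
        """Least-squares fit of SBP ~ a*PTT + b*HR + c*SBP_prev + d."""
        X = np.column_stack([ptt_ms, hr_bpm, prev_sbp, np.ones(len(ptt_ms))])
        coef, *_ = np.linalg.lstsq(X, sbp_ref, rcond=None)
        return coef

    def predict_sbp(coef, ptt_ms, hr_bpm, prev_sbp):
        return coef[0] * ptt_ms + coef[1] * hr_bpm + coef[2] * prev_sbp + coef[3]

    # Synthetic calibration data (purely illustrative)
    rng = np.random.default_rng(0)
    ptt = rng.uniform(180, 260, 50)   # ms
    hr = rng.uniform(55, 95, 50)      # bpm
    prev = rng.uniform(105, 135, 50)  # mmHg
    sbp = 190 - 0.35 * ptt + 0.15 * hr + 0.2 * prev + rng.normal(0, 2, 50)

    coef = fit_bp_model(ptt, hr, prev, sbp)
    print("predicted SBP:", round(predict_sbp(coef, 220.0, 70.0, 120.0), 1), "mmHg")
    ```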

  19. Fast and Robust Real-Time Estimation of Respiratory Rate from Photoplethysmography

    Directory of Open Access Journals (Sweden)

    Hodam Kim

    2016-09-01

    Full Text Available Respiratory rate (RR is a useful vital sign that can not only provide auxiliary information on physiological changes within the human body, but also indicate early symptoms of various diseases. Recently, methods for the estimation of RR from photoplethysmography (PPG have attracted increased interest, because PPG can be readily recorded using wearable sensors such as smart watches and smart bands. In the present study, we propose a new method for the fast and robust real-time estimation of RR using an adaptive infinite impulse response (IIR notch filter, which has not yet been applied to the PPG-based estimation of RR. In our offline simulation study, the performance of the proposed method was compared to that of recently developed RR estimation methods called an adaptive lattice-type RR estimator and a Smart Fusion. The results of the simulation study show that the proposed method could not only estimate RR more quickly and more accurately than the conventional methods, but also is most suitable for online RR monitoring systems, as it does not use any overlapping moving windows that require increased computational costs. In order to demonstrate the practical applicability of the proposed method, an online RR estimation system was implemented.

  20. Propargyl Recombination: Estimation of the High Temperature, Low Pressure Rate Constant from Flame Measurements

    DEFF Research Database (Denmark)

    Rasmussen, Christian Lund; Skjøth-Rasmussen, Martin Skov; Jensen, Anker

    2005-01-01

    The most important cyclization reaction in hydrocarbon flames is probably recombination of propargyl radicals. This reaction may, depending on reaction conditions, form benzene, phenyl or fulvene, as well as a range of linear products. A number of rate measurements have been reported for C3H3 + C3H3 at temperatures below 1000 K, while data at high temperature and low pressure can only be obtained from flames. In the present work, an estimate of the rate constant for the reaction at 1400 +/- 50 K and 20 Torr is obtained from analysis of the fuel-rich acetylene flame of Westmoreland, Howard, and Longwell. Based on an accurate modeling of the flame structure, in particular the concentration profile of propargyl, we estimate the rate constant by fitting the kinetic modeling predictions to the measured benzene and phenyl profiles. The best agreement is obtained with k = 1.3 x 10^12 cm^3/mol...

  1. Something from nothing: Estimating consumption rates using propensity scores, with application to emissions reduction policies.

    Science.gov (United States)

    Bardsley, Nicholas; Büchs, Milena; Schnepf, Sylke V

    2017-01-01

    Consumption surveys often record zero purchases of a good because of a short observation window. Measures of distribution are then precluded and only mean consumption rates can be inferred. We show that Propensity Score Matching can be applied to recover the distribution of consumption rates. We demonstrate the method using the UK National Travel Survey, in which c.40% of motorist households purchase no fuel. Estimated consumption rates are plausible judging by households' annual mileages, and highly skewed. We apply the same approach to estimate CO2 emissions and outcomes of a carbon cap or tax. Reliance on means apparently distorts analysis of such policies because of skewness of the underlying distributions. The regressiveness of a simple tax or cap is overstated, and redistributive features of a revenue-neutral policy are understated.
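
    The core idea is to match households that recorded zero purchases to observably similar households that did record purchases and borrow the latter's purchase rates, with the propensity score (the probability of recording a purchase given covariates) collapsing the matching to one dimension. A rough scikit-learn sketch on entirely synthetic covariates (not the National Travel Survey variables):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(1)
    n = 2000
    X = rng.normal(size=(n, 3))  # synthetic household covariates
    purchase_prob = 1 / (1 + np.exp(-(0.4 * X[:, 0] - 0.3 * X[:, 1])))
    purchased = rng.random(n) < purchase_prob  # recorded a fuel purchase?
    fuel_rate = np.where(purchased,
                         np.exp(1.0 + 0.5 * X[:, 0] + rng.normal(0, 0.3, n)), np.nan)

    # 1) Propensity score: probability of recording a purchase given covariates
    ps = LogisticRegression().fit(X, purchased).predict_proba(X)[:, 1]

    # 2) Match each zero-purchase household to the nearest purchaser on the score
    nn = NearestNeighbors(n_neighbors=1).fit(ps[purchased].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[~purchased].reshape(-1, 1))
    imputed = fuel_rate[purchased][idx.ravel()]

    # 3) Recover the full distribution of consumption rates, not just its mean
    all_rates = np.concatenate([fuel_rate[purchased], imputed])
    print("median / 90th percentile rate:",
          np.round(np.median(all_rates), 2), "/", np.round(np.percentile(all_rates, 90), 2))
    ```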

  2. Systematic Angle Random Walk Estimation of the Constant Rate Biased Ring Laser Gyro

    Directory of Open Access Journals (Sweden)

    Guohu Feng

    2013-02-01

    Full Text Available An accurate account of the angle random walk (ARW) coefficients of gyros in the constant rate biased ring laser gyro (RLG) inertial navigation system (INS) is very important in practical engineering applications. However, no reported experimental work has dealt with the issue of characterizing the ARW of the constant rate biased RLG in the INS. To avoid the need for high-cost precision calibration tables and complex measuring set-ups, the objective of this study is to present a cost-effective experimental approach to characterize the ARW of the gyros in the constant rate biased RLG INS. In the system, turntable dynamics and other external noises would inevitably contaminate the measured RLG data, leading to the question of isolation of such disturbances. A practical observation model of the gyros in the constant rate biased RLG INS was discussed, and an experimental method based on the fast orthogonal search (FOS) for the practical observation model to separate ARW error from the RLG measured data was proposed. Validity of the FOS-based method was checked by estimating the ARW coefficients of the mechanically dithered RLG under stationary and turntable rotation conditions. By utilizing the FOS-based method, the average ARW coefficient of the constant rate biased RLG in the postulated system is estimated. The experimental results show that the FOS-based method can achieve high denoising ability and estimates the ARW coefficients of the constant rate biased RLG in the postulated system accurately. The FOS-based method does not need a high-cost precision calibration table or a complex measuring set-up, and the statistical results of the tests provide a reference for engineering applications of the constant rate biased RLG INS.
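
    As a point of comparison, the ARW coefficient of a gyro is also commonly read off the -1/2-slope segment of an Allan deviation curve. The sketch below shows that conventional route on synthetic white rate noise, not the FOS-based separation proposed in the paper:

    ```python
    import numpy as np

    def allan_deviation(rate, fs, cluster_sizes):
        """Non-overlapping Allan deviation of a gyro rate signal.
        rate          : measured angular rate samples (deg/s)
        fs            : sampling frequency (Hz)
        cluster_sizes : numbers of samples averaged per cluster"""
        rate = np.asarray(rate, dtype=float)
        taus, adev = [], []
        for m in cluster_sizes:
            k = rate.size // m
            if k < 3:
                continue
            means = rate[: k * m].reshape(k, m).mean(axis=1)
            taus.append(m / fs)
            adev.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
        return np.array(taus), np.array(adev)

    # Synthetic gyro output: pure white noise, so the whole curve has slope -1/2
    fs = 100.0
    rate = 0.01 * np.random.randn(int(3600 * fs))  # deg/s
    taus, adev = allan_deviation(rate, fs, [1, 10, 100, 1000, 10000])
    # On the -1/2 slope, ARW (deg/sqrt(hr)) = sigma(tau) * sqrt(tau) * 60
    print(np.round(adev * np.sqrt(taus) * 60.0, 4))
    ```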

  3. Application on forced traction test in surgeries for orbital blowout fracture

    Directory of Open Access Journals (Sweden)

    Bao-Hong Han

    2014-05-01

    Full Text Available AIM: To discuss the application of the forced traction test in surgeries for orbital blowout fracture. METHODS: The clinical data of 28 patients who underwent reconstructive surgery for orbital fracture were retrospectively analyzed. All patients were treated with the forced traction test before, during, and after the operation. Eyeball movement and diplopia were examined and recorded pre-operatively and at 3 and 6mo after operation. RESULTS: Diplopia was improved in all 28 cases treated with the forced traction test, with a significant difference between pre-operative and post-operative diplopia at 3 and 6mo. CONCLUSION: The forced traction test not only has clinical significance in the diagnosis of orbital blowout fracture, it is also an effective method for improving diplopia before, during, and after operation.

  4. The use of pulsed high-speed liquid jet for putting out gas blow-out

    Directory of Open Access Journals (Sweden)

    A Semko

    2016-10-01

    Full Text Available An experimental analysis of extinguishing a gas blow-out with a high-velocity pulsed liquid jet generated by a powder pulse water cannon is presented. In the experiments the jet velocity ranged from 300 to 600 m/s, depending on the charge energy. The velocity of the jet head immediately ahead of the gas flame was determined with a contactless laser velocity meter, and the jet was photographed. Based on the preliminary test results, the hydrodynamic parameters of the powder pulse water cannon required to obtain a jet of a given velocity were calculated. It is shown that a high-velocity liquid jet moving through air produces a fine, fast water spray over a large cross-sectional area that effectively knocks down the gas blow-out at a distance of 5-20 m from the installation.

  5. Clinical characteristics and treatment of blow-out fracture accompanied by canalicular laceration.

    Science.gov (United States)

    Lee, Hwa; Ahn, Jaemoon; Lee, Tae Eun; Lee, Jong Mi; Shin, Hyungho; Chi, Mijung; Park, Minsoo; Baek, Sehyun

    2012-09-01

    Blow-out fracture and canalicular laceration can occur simultaneously as a result of the same trauma. Despite its importance, little research has been conducted to identify clinical characteristics or surgical techniques for repair of a blow-out fracture accompanied by canalicular laceration. The aim of this study was to evaluate the clinical characteristics, the surgical approach, and the outcomes. Thirty-four eyes of 34 patients who underwent simultaneous repair of canalicular laceration using silicone tube intubation and reconstruction of blow-out fracture were included. Medical records were retrospectively reviewed for patient demographics, nature of injury, affected canaliculus, location, and severity of blow-out fracture, associated facial bone fracture, ophthalmic diagnosis, length of follow-up period, and surgical outcome. Mean patient age was 40.0 years (range, 17-71 y). The mean follow-up was 7.3 months. Fist to the orbital area (10 patients, 29.4%) was the most common cause. There were 24 lower canalicular lacerations (70.6%), 6 upper canalicular lacerations (17.6%), and 4 upper and lower canalicular lacerations (11.8%). Isolated medial wall fractures were most common (area A4: 20/34, 58.8%). Fractures involving both the floor and medial wall and maxillo-ethmoidal strut (areas A1, A2, A3, and A4) were the second most common (6/34, 17.6%), and floor and medial wall with intact strut (areas A1, A2, and A4) were injured in 6 patients (17.6%). Pure inferior wall fractures were least frequent (areas A1 and A2: 2/34, 5.9%). The severity of the fracture was severe in most patients except for 1 linear fracture with tissue entrapment and 1 moderate medial wall fracture (32/34, 94.1%). There was lid laceration in 20 patients (58.8%). Nasal bone fracture (5/34, 14.7%) was the most common facial bone fracture. Tubes were removed at a mean of 3.3 months (range, 3-4 mo). In total, 31 patients (91.2%) achieved complete success in canalicular laceration and blow-out

  6. A single-blood-sample method using inulin for estimating feline glomerular filtration rate.

    Science.gov (United States)

    Katayama, M; Saito, J; Katayama, R; Yamagishi, N; Murayama, I; Miyano, A; Furuhama, K

    2013-01-01

    Application of a multisample method using inulin to estimate glomerular filtration rate (GFR) in cats is cumbersome. To establish a simplified procedure to estimate GFR in cats, a single-blood-sample method using inulin was compared with a conventional 3-sample method. Nine cats including 6 clinically healthy cats and 3 cats with spontaneous chronic kidney disease. Retrospective study. Inulin was administered as an intravenous bolus at 50 mg/kg to cats, and blood was collected at 60, 90, and 120 minutes later for the 3-sample method. Serum inulin concentrations were colorimetrically determined by an autoanalyzer method. The GFR in the single-blood-sample method was calculated from the dose injected, serum concentration, sampling time, and estimated volume of distribution on the basis of the data of the 3-sample method. An excellent correlation was observed (r = 0.99, P = .0001) between GFR values estimated by the single-blood-sample and 3-sample methods. The single-blood-sample method using inulin provides a practicable and ethical alternative for estimating glomerular filtration rate in cats. Copyright © 2012 by the American College of Veterinary Internal Medicine.

  7. Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability

    Science.gov (United States)

    Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko

    In this paper, we propose an algorithm to estimate sleep quality from heart rate variability using chaos analysis. Polysomnography (PSG) is the conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect by estimating sleep quality from multiple channels. However, recording requires a lot of time and a controlled measurement environment, and analyzing PSG data is laborious because the large volume of sensed data must be evaluated manually. Meanwhile, attention has turned to the mistakes and accidents people cause through loss of regular sleep and of homeostasis. A simple home system for checking one's own sleep is therefore needed, along with an estimation algorithm for such a system. We therefore propose an algorithm that estimates sleep quality based only on heart rate variability, which can be measured in an uncontrolled environment by a simple sensor such as a pressure sensor or an infrared sensor, by experimentally finding the relationship between chaos indices and sleep quality. A system including the estimation algorithm can inform a user of the patterns and quality of his or her daily sleep, so that the user can plan a daily schedule in advance, pay closer attention to the sleep results, and consult a doctor.

  8. Feasibility of pulse wave velocity estimation from low frame rate US sequences in vivo

    Science.gov (United States)

    Zontak, Maria; Bruce, Matthew; Hippke, Michelle; Schwartz, Alan; O'Donnell, Matthew

    2017-03-01

    The pulse wave velocity (PWV) is considered one of the most important clinical parameters to evaluate CV risk, vascular adaptation, etc. There has been substantial work attempting to measure the PWV in peripheral vessels using ultrasound (US). This paper presents a fully automatic algorithm for PWV estimation from the human carotid using US sequences acquired with a Logic E9 scanner (modified for RF data capture) and a 9L probe. Our algorithm samples the pressure wave in time by tracking wall displacements over the sequence, and estimates the PWV by calculating the temporal shift between two sampled waves at two distinct locations. Several recent studies have utilized similar ideas along with speckle tracking tools and high frame rate (above 1 kHz) sequences to estimate the PWV. To explore PWV estimation in a more typical clinical setting, we used focused-beam scanning, which yields relatively low frame rates and small fields of view (e.g., 200 Hz for 16.7 mm field of view). For our application, a 200 Hz frame rate is low. In particular, the sub-frame temporal accuracy required for PWV estimation between locations 16.7 mm apart ranges from 0.82 of a frame for 4m/s, to 0.33 for 10m/s. When the distance is further reduced (to 0.28 mm between two beams), the sub-frame precision is in parts per thousand (ppt) of the frame (5 ppt for 10m/s). As such, the contributions of our algorithm and this paper are: 1. Ability to work with low frame-rate (200 Hz) and decreased lateral field of view. 2. Fully automatic segmentation of the wall intima (using raw RF images). 3. Collaborative Speckle Tracking of 2D axial and lateral carotid wall motion. 4. Outlier robust PWV calculation from multiple votes using RANSAC. 5. Algorithm evaluation on volunteers of different ages and health conditions.
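
    The sub-frame figures quoted above follow directly from the wave transit time over the measurement aperture, Δt = d / PWV, expressed in frames at the 200 Hz frame rate; the quick check below lands close to the quoted values:

    ```python
    FRAME_RATE_HZ = 200.0

    def transit_time_in_frames(distance_m, pwv_m_s, frame_rate_hz=FRAME_RATE_HZ):
        """Wave transit time between two lateral positions, expressed in frames."""
        return distance_m / pwv_m_s * frame_rate_hz

    # Full 16.7 mm field of view
    print(round(transit_time_in_frames(0.0167, 4.0), 2))    # ~0.8 frame at 4 m/s
    print(round(transit_time_in_frames(0.0167, 10.0), 2))   # ~0.33 frame at 10 m/s
    # Adjacent beams 0.28 mm apart
    print(round(transit_time_in_frames(0.00028, 10.0), 4))  # ~0.006 frame (a few ppt) at 10 m/s
    ```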

  9. Blowout Jets: Hinode X-Ray Jets that Don't Fit the Standard Model

    Science.gov (United States)

    Moore, Ronald L.; Cirtain, Jonathan W.; Sterling, Alphonse C.; Falconer, David A.

    2010-01-01

    Nearly half of all H-alpha macrospicules in polar coronal holes appear to be miniature filament eruptions. This suggests that there is a large class of X-ray jets in which the jet-base magnetic arcade undergoes a blowout eruption as in a CME, instead of remaining static as in most solar X-ray jets, the standard jets that fit the model advocated by Shibata. Along with a cartoon depicting the standard model, we present a cartoon depicting the signatures expected of blowout jets in coronal X-ray images. From Hinode/XRT movies and STEREO/EUVI snapshots in polar coronal holes, we present examples of (1) X-ray jets that fit the standard model, and (2) X-ray jets that do not fit the standard model but do have features appropriate for blowout jets. These features are (1) a flare arcade inside the jet-base arcade in addition to the small flare arcade (bright point) outside that standard jets have, (2) a filament of cool (T is approximately 80,000K) plasma that erupts from the core of the jet-base arcade, and (3) an extra jet strand that should not be made by the reconnection for standard jets but could be made by reconnection between the ambient unipolar open field and the opposite-polarity leg of the filament-carrying flux-rope core field of the erupting jet-base arcade. We therefore infer that these non-standard jets are blowout jets, jets made by miniature versions of the sheared-core-arcade eruptions that make CMEs.

  10. Kinetic magnetic resonance imaging of orbital blowout fracture with restricted ocular movement

    Energy Technology Data Exchange (ETDEWEB)

    Totsuka, Nobuyoshi; Koide, Ryouhei; Inatomi, Makoto; Fukado, Yoshinao (Showa Univ., Tokyo (Japan). School of Medicine); Hisamatsu, Katsuji

    1992-03-01

    We analyzed the mechanism of gaze limitation in blowout fracture in 19 patients by means of kinetic magnetic resonance imaging (MRI). We could identify herniation of fat tissue and rectus muscles with connective tissue septa in 11 eyes. Depressed rectus muscles were surrounded by fat tissue. In no instance was the rectus muscle actually incarcerated. Entrapped connective tissue septa seemed to prevent movement of affected rectus muscle. We occasionally observed incarcerated connective tissue septa to restrict motility of the optic nerve. (author).

  11. Fine wakefield structure in the blowout regime of plasma wakefield accelerators

    Directory of Open Access Journals (Sweden)

    K. V. Lotov

    2003-06-01

    Full Text Available For simulations of plasma wakefield acceleration (PWFA) and similar problems, we developed the two-dimensional, fully electromagnetic, fully relativistic hybrid code LCODE. The code is very fast due to the explicit use of several simplifying assumptions (quasistatic approximation, ultrarelativistic beam, and symmetry). With LCODE, we perform high-resolution simulations of the blowout regime of PWFA and study the effect of temperature on the amplitude of the accelerating field spike.

  12. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the ex...... measurement is performed on a carotid bifurcation of a healthy individual. A 3-s acquisition during three heart cycles is captured. A consistent and repetitive vortex is observed in the carotid bulb during systoles....

  13. Oil Well Blowout 3D computational modeling: review of methodology and environmental requirements

    Directory of Open Access Journals (Sweden)

    Pedro Mello Paiva

    2016-12-01

    Full Text Available This literature review aims to present the different methodologies used in the three-dimensional modeling of the dispersion of hydrocarbons originating from an oil well blowout. It presents the concepts of coastal environmental sensitivity and vulnerability, their importance for prioritizing the most vulnerable areas in case of contingency, and the relevant legislation. We also discuss some limitations of the methodology currently used in environmental studies of oil drift, which simplifies the spill as a surface release, even in the well blowout scenario. Efforts to better understand oil and gas behavior in the water column, and to model the trajectory three-dimensionally, gained strength after the Deepwater Horizon spill in 2010 in the Gulf of Mexico. The data collected and the observations made during the accident were widely used to adjust the models, incorporating various factors related to hydrodynamic forcing and weathering processes to which the hydrocarbons are subjected during subsurface leaks. The difficulties are even greater in the case of blowouts in deep waters, where the uncertainties are larger still. The studies addressed different variables to adjust oil and gas dispersion models along the upward trajectory. Factors that exert strong influences include: speed of the subsurface currents; gas separation from the main plume; hydrate formation; dissolution of oil and gas droplets; variations in droplet diameter; intrusion of the droplets at intermediate depths; biodegradation; and appropriate parametrization of the density, salinity and temperature profiles through the water column.

  14. Stability and Blowout Behavior of Jet Flames in Oblique Air Flows

    Directory of Open Access Journals (Sweden)

    Jonathan N. Gomes

    2012-01-01

    Full Text Available The stability limits of a jet flame can play an important role in the design of burners and combustors. This study details an experiment conducted to determine the liftoff and blowout velocities of oblique-angle methane jet flames under various air coflow velocities. A nozzle was mounted on a telescoping boom to allow for an adjustable burner angle relative to a vertical coflow. Twenty-four flow configurations were established using six burner nozzle angles and four coflow velocities. Measurements of the fuel supply velocity during liftoff and blowout were compared against two parameters: nozzle angle and coflow velocity. The resulting correlations indicated that flames at more oblique angles have a greater upper stability limit and were more resistant to changes in coflow velocity. This behavior occurs due to a lower effective coflow velocity at angles more oblique to the coflow direction. Additionally, stability limits were determined for flames in crossflow and mild counterflow configurations, and a relationship between the liftoff and blowout velocities was observed. For flames in crossflow and counterflow, the stability limits are higher. Further studies may include more angle and coflow combinations, as well as the effect of diluents or different fuel types.

  15. Diplopia and ocular motility in orbital blow-out fractures: 10-year retrospective study.

    Science.gov (United States)

    Alhamdani, Faaiz; Durham, Justin; Greenwood, Mark; Corbett, Ian

    2015-09-01

    To investigate diplopia (binocular single vision [BSV] test) and ocular motility (uniocular field of fixation [UFOF] test) characteristics in blow-out fractures of the orbit and their value in fracture management. Patients with isolated blow-out fractures treated from 2000 to 2010 were included. BSV scores were stratified into three categories: low BSV category (0-60); moderate BSV category (61-80), and high BSV category (81-100). UFOF scores were also divided into three categories: low score (60-240), moderate score (241-270), and high score (271-365) categories. A total of 183 patients (106 surgically and 77 conservatively managed) met the inclusion criteria. There was no significant improvement in BSV postoperatively in surgically managed patients with preoperatively high BSV, whereas there was significant improvement (p blow-out fracture cases with BSV score >80% for correction of diplopia alone. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  16. Droplet and bubble formation of combined oil and gas releases in subsea blowouts.

    Science.gov (United States)

    Zhao, Lin; Boufadel, Michel C; King, Thomas; Robinson, Brian; Gao, Feng; Socolofsky, Scott A; Lee, Kenneth

    2017-07-15

    Underwater blowouts from gas and oil operations often involve the simultaneous release of oil and gas. The presence of gas bubbles in jets/plumes could greatly influence oil droplet formation. With the aim of understanding and quantifying droplet formation from the Deepwater Horizon (DWH) blowout, we developed a new formulation for gas-oil interaction within jets/plumes. We used the jet-droplet formation model VDROP-J with the new module, and the updated model was validated against laboratory and field experimental data. Application to the DWH revealed that, in the absence of dispersant, gas input resulted in a reduction of d50 by up to 1.5 mm, and the maximum impact occurred at intermediate gas fractions (30-50%). In the presence of dispersant, the reduction in d50 due to bubbles was small because surfactants already promote small sizes of both bubbles and droplets. The new development could greatly enhance prediction of, and response to, oil and gas blowouts. Copyright © 2017. Published by Elsevier Ltd.

  17. Prevalence of Diplopia and Extraocular Movement Limitation according to the Location of Isolated Pure Blowout Fractures

    Directory of Open Access Journals (Sweden)

    Min Seok Park

    2012-05-01

    Full Text Available Background Isolated pure blowout fractures are clinically important because they are the main cause of serious complications such as diplopia and limitation of extraocular movement. Many reports have described the incidence of blowout fractures associated with diplopia and limitation of extraocular movement; however, no studies have statistically analyzed this relationship. The purpose of this study was to demonstrate the correlation between the location of isolated pure blowout fractures and orbital symptoms such as diplopia and limitation of extraocular movement. Methods We enrolled a total of 354 patients who had been diagnosed with isolated pure blowout fractures, based on computed tomography, from June 2008 to November 2011. Medical records were reviewed, and the prevalence of extraocular movement limitations and diplopia were determined. Results There were 14 patients with extraocular movement limitation and 58 patients complained of diplopia. Extraocular movement limitation was associated with the following findings, in decreasing order of frequency: floor fracture (7.1%), extended fracture (3.6%), and medial wall (1.7%). However, there was no significant difference among the types of fractures (P=0.60). Diplopia was more commonly associated with floor fractures (21.4%) and extended type fractures (23.6%) than medial wall fractures (10.4%). The difference was statistically significant (Bonferroni-corrected chi-squared test, P<0.016). Conclusions Data indicate that extended type fractures and orbital floor fractures tend to cause diplopia more commonly than medial wall fractures. However, extraocular movement limitation was not found to be dependent on the location of the orbital wall fracture.

  18. Evaluation of Estimating Missed Answers in Conners Adult ADHD Rating Scale (Screening Version)

    Directory of Open Access Journals (Sweden)

    Vahid Abootalebi

    2010-08-01

    Full Text Available Objective: The Conners Adult ADHD Rating Scale (CAARS) is among the valid questionnaires for evaluating Attention-Deficit/Hyperactivity Disorder in adults. The aim of this paper is to evaluate the validity of the estimation of missed answers in scoring the screening version of the Conners questionnaire, and to extract its principal components. Method: This study was performed on 400 participants. Answer estimation was calculated for each question (assuming the answer was missed), and then a Kruskal-Wallis test was performed to evaluate the difference between the original answer and its estimation. In the next step, principal components of the questionnaire were extracted by means of Principal Component Analysis (PCA). Finally, the evaluation of differences in the whole group was provided using the Multiple Comparison Procedure (MCP). Results: Findings indicated that a significant difference existed between the original and estimated answers for some particular questions. However, the results of MCP showed that this estimation, when evaluated in the whole group, did not show a significant difference from the original value in either of the questionnaire subscales. The results of PCA revealed that there are eight principal components in the CAARS questionnaire. Conclusion: The obtained results emphasize the fact that this questionnaire is mainly designed for screening purposes, and this estimation does not change group results when a question is missed randomly. Notwithstanding this finding, more care should be taken when the missed question is a critical one.

  19. Rate variation and estimation of divergence times using strict and relaxed clocks

    Directory of Open Access Journals (Sweden)

    Yang Ziheng

    2011-09-01

    Full Text Available Abstract Background Understanding causes of biological diversity may be greatly enhanced by knowledge of divergence times. Strict and relaxed clock models are used in Bayesian estimation of divergence times. We examined whether: (i) strict clock models are generally more appropriate in shallow phylogenies where rate variation is expected to be low, and (ii) the likelihood ratio test of the clock (LRT) reliably informs which model is appropriate for dating divergence times. Strict and relaxed models were used to analyse sequences simulated under different levels of rate variation. Published shallow phylogenies (Black bass, Primate-sucking lice, Podarcis lizards, Gallotiinae lizards, and Caprinae mammals) were also analysed to determine natural levels of rate variation relative to the performance of the different models. Results Strict clock analyses performed well on data simulated under the independent rates model when the standard deviation of log rate on branches, σ, was low (≤0.1), but were inappropriate when σ>0.1 (95% of rates fall within 0.0082-0.0121 subs/site/Ma when σ = 0.1, for a mean rate of 0.01). The independent rates relaxed clock model performed well at all levels of rate variation, although posterior intervals on times were significantly wider than for the strict clock. The strict clock is therefore superior when rate variation is low. The performance of a correlated rates relaxed clock model was similar to the strict clock. Increased numbers of independent loci led to slightly narrower posteriors under the relaxed clock, while older root ages provided proportionately narrower posteriors. The LRT had low power for σ = 0.01-0.1, but high power for σ = 0.5-2.0. Posterior means of σ² were useful for assessing rate variation in published datasets. Estimates of natural levels of rate variation ranged from 0.05-3.38 for different partitions. Differences in divergence times between relaxed and strict clock analyses were greater in two

  20. A macroeconomic approach to estimating effective tax rates in South Africa

    Directory of Open Access Journals (Sweden)

    HA Amusa

    2015-07-01

    Full Text Available Using data contained in South Africa's national accounts and revenue statistics, this paper constructs time-series of effective tax rates for consumption, capital income, and labour income. The macroeconomic approach allows for a detailed breakdown of tax revenue accruing to general government and the corresponding aggregate tax bases. The methodology used also yields effective rate estimates that can be considered as being consistent with tax distortions faced by a representative economic agent within a general equilibrium framework. Correlation analysis reveals that savings (as a percentage of GDP) is negatively correlated with both capital income and labour income tax rates. Investment (as a percentage of GDP) is positively correlated with the capital income tax rate, an outcome suggestive of the direct relationship between volatile capital inflows into South Africa and capital tax revenue.

  1. Estimating Denitrification Rates in the East/Japan Sea Using Extended Optimum Multi-Parameter Analysis

    Directory of Open Access Journals (Sweden)

    Il-Nam Kim

    2015-01-01

    Full Text Available Denitrification rates in the East/Japan Sea (EJS) were examined with extended Optimum Multi-Parameter (eOMP) analysis. The potential denitrification locations expected from the eOMP analysis occurred only in the Ulleung Basin (UB) and near the Tatar Strait (TtS) of the Eastern Japan Basin (EJB). Estimated denitrification rates were ~0.3 - 3 and ~4 - 11 μmol N m-2 d-1 in the UB and in the EJB, respectively. These rates agree with previously published results. The rates were lower than those reported for other marginal seas. However, considering the rapid EJS response to climate change, we predict that denitrification may be enhanced in the near future.

  2. On the Biomass Specific Growth Rates Estimation for Anaerobic Digestion using Differential Algebraic Techniques

    Directory of Open Access Journals (Sweden)

    Sette Diop

    2009-10-01

    Full Text Available The paper deals with identifiability and observability of anaerobic digestion (AD) processes. In such processes, generally carried out in continuously stirred tank bioreactors, the organic matter is depolluted by microorganisms into biogas and compost in the absence of oxygen. The biogas is an additional energy source, which can replace fossil fuel sources. The differential algebraic approach to general observation problems has been applied to investigate the identification and observation of a simple AD model. The major discovery is that the biomass specific growth rate can be stably estimated from easily measured quantities: the dilution rate and the biogas flow rate. Next, if the yield coefficients are assumed known then, of course, the biomass concentration is observable. Unfortunately, even under this stronger assumption the substrate concentration is not observable. This concentration becomes observable if an additional model, say the Monod model, is assumed for the specific growth rate. Illustrative simulations are presented.

  3. Evaluation and Estimation of the Provincial Infant Mortality Rate in China's Sixth Census.

    Science.gov (United States)

    Hu, Song Bo; Wang, Fang; Yu, Chuan Hua

    2015-06-01

    To assess the data quality and estimate the provincial infant mortality rate (1q0) from China's sixth census. A log-quadratic model is applied to under-fifteen data. We analyze and compare the average relative errors (AREs) for 1q0 between the estimated and reported values using the leave-one-out cross-validation method. For the sixth census, the AREs are more than 100% for almost all provinces. The estimated average 1q0 level for 31 provinces is 12.3‰ for males and 10.7‰ for females. The data for the provincial 1q0 from China's sixth census have a serious data quality problem. The actual levels of 1q0 for each province are significantly higher than the reported values. Copyright © 2015 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.

  4. Joint sensor location/power rating optimization for temporally-correlated source estimation

    KAUST Repository

    Bushnaq, Osama M.

    2017-12-22

    Optimal sensor selection for scalar state parameter estimation in wireless sensor networks is studied in this paper. A subset of N candidate sensing locations is selected to measure a state parameter and send the observations to a fusion center via a wireless AWGN channel. In addition to selecting the optimal sensing locations, the sensor type to be placed at these locations is chosen from a pool of T sensor types, where different sensor types have different power ratings and costs. The sensor transmission power is limited by the amount of energy harvested at the sensing location and the type of the sensor. A Kalman filter is used to efficiently obtain the MMSE estimator at the fusion center. Sensors are selected such that the MMSE estimation error is minimized subject to a prescribed system budget. This goal is achieved using convex relaxation and greedy algorithm approaches.
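
    The record above couples a Kalman filter for the temporally correlated source with convex relaxation and greedy selection under a budget. Purely to illustrate the greedy, budget-constrained step, the sketch below selects sensors for a static scalar parameter with independent Gaussian measurements; the noise variances, costs, and budget are made-up values rather than anything from the paper.

    ```python
    import numpy as np

    # Greedy, budget-constrained sensor selection for a static scalar parameter
    # (a much-simplified stand-in for the paper's Kalman-filter-based formulation).
    rng = np.random.default_rng(0)
    prior_var = 1.0                         # prior variance of the unknown parameter
    noise_var = rng.uniform(0.2, 2.0, 12)   # effective noise variance of each candidate (location, type)
    cost = rng.uniform(1.0, 3.0, 12)        # cost of deploying each candidate
    budget = 6.0

    def posterior_var(selected):
        # For independent Gaussian measurements, information (inverse variance) adds up.
        info = 1.0 / prior_var + sum(1.0 / noise_var[i] for i in selected)
        return 1.0 / info

    selected, spent = [], 0.0
    while True:
        best, best_gain = None, 0.0
        for i in range(len(noise_var)):
            if i in selected or spent + cost[i] > budget:
                continue
            gain = (posterior_var(selected) - posterior_var(selected + [i])) / cost[i]
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        selected.append(best)
        spent += cost[best]

    print("selected candidates:", selected, "posterior variance:", round(posterior_var(selected), 3))
    ```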

  5. A comparative review of estimates of the proportion unchanged genes and the false discovery rate

    Directory of Open Access Journals (Sweden)

    Broberg Per

    2005-08-01

    Full Text Available Abstract Background In the analysis of microarray data one generally produces a vector of p-values that for each gene give the likelihood of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion unchanged and the false discovery rate (FDR and how to make inferences based on these concepts. Six published methods for estimating the proportion unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value is illustrated with examples. Five published estimates of the FDR and one new are presented and tested. Implementations in R code are available. Results A simulation model based on the distribution of real microarray data plus two real data sets were used to assess the methods. The proposed alternative methods for estimating the proportion unchanged fared very well, and gave evidence of low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating FDR showed a varying performance, and were sometimes misleading. The new method had a very low error. Conclusion The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options to choose a suitable method for any particular experiment. The article advocates the use of the conjoint information
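
    The record reviews several estimators of the proportion of unchanged genes and of the FDR. For orientation only, the sketch below implements two standard ingredients of that toolbox, a Storey-style estimate of the null proportion and Benjamini-Hochberg-style q-values, on simulated p-values; it is not a reimplementation of the article's own estimators.

    ```python
    import numpy as np

    def storey_pi0(pvals, lam=0.5):
        """Storey-style estimate of the proportion of unchanged (null) genes.

        Null p-values are uniform, so the density of p-values above `lam` is roughly pi0.
        """
        pvals = np.asarray(pvals)
        return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

    def q_values(pvals, pi0=1.0):
        """Benjamini-Hochberg style q-values, optionally scaled by an estimate of pi0."""
        pvals = np.asarray(pvals)
        m = len(pvals)
        order = np.argsort(pvals)
        ranked = pvals[order] * m * pi0 / np.arange(1, m + 1)
        q = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity from the top rank down
        out = np.empty(m)
        out[order] = np.minimum(q, 1.0)
        return out

    # Toy mixture: 90% unchanged genes (uniform p-values), 10% changed genes (small p-values).
    rng = np.random.default_rng(1)
    p = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 10.0, size=100)])
    pi0_hat = storey_pi0(p)
    q = q_values(p, pi0=pi0_hat)
    print(f"estimated pi0 = {pi0_hat:.2f}, genes with q < 0.05: {(q < 0.05).sum()}")
    ```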

  6. Recursive Estimation for Dynamical Systems with Different Delay Rates Sensor Network and Autocorrelated Process Noises

    Directory of Open Access Journals (Sweden)

    Jianxin Feng

    2014-01-01

    Full Text Available The recursive estimation problem is studied for a class of uncertain dynamical systems with different delay rates sensor network and autocorrelated process noises. The process noises are assumed to be autocorrelated across time and the autocorrelation property is described by the covariances between different time instants. The system model under consideration is subject to multiplicative noises or stochastic uncertainties. The sensor delay phenomenon occurs in a random way and each sensor in the sensor network has an individual delay rate which is characterized by a binary switching sequence obeying a conditional probability distribution. By using the orthogonal projection theorem and an innovation analysis approach, the desired recursive robust estimators including recursive robust filter, predictor, and smoother are obtained. Simulation results are provided to demonstrate the effectiveness of the proposed approaches.

  7. A physiological counterpoint to mechanistic estimates of "internal power" during cycling at different pedal rates

    DEFF Research Database (Denmark)

    Hansen, Ernst Albin; Jørgensen, Lars Vincents; Sjøgaard, Gisela

    2004-01-01

    metabolic variables and to perform a physiological evaluation of five different kinematic models for calculating IP in cycling. Results showed that IP was statistically different between the kinematic models applied. IP based on metabolic variables (IP(met)) was 15, 41, and 91 W at 61, 88, and 115 rpm......, respectively, being remarkably close to the kinematic estimate of one model (IP(Willems-COM): 14, 43, and 95 W) and reasonably close to another kinematic estimate (IP(Winter): 8, 29, and 81 W). For all kinematic models there was no significant effect of performing 3-D versus 2-D analyses. IP increased...... significantly with pedal rate - leg movements accounting for the largest fraction. Further, external power (EP) affected IP significantly such that IP was larger at moderate than at low EP at the majority of the pedal rates applied but on average this difference was only 8%....

  8. Swimmer detection and pose estimation for continuous stroke-rate determination

    Science.gov (United States)

    Zecha, Dan; Greif, Thomas; Lienhart, Rainer

    2012-02-01

    In this work we propose a novel approach to automatically detect a swimmer and estimate his/her pose continuously in order to derive an estimate of his/her stroke rate given that we observe the swimmer from the side. We divide a swimming cycle of each stroke into several intervals. Each interval represents a pose of the stroke. We use specifically trained object detectors to detect each pose of a stroke within a video and count the number of occurrences per time unit of the most distinctive poses (so-called key poses) of a stroke to continuously infer the stroke rate. We extensively evaluate the overall performance and the influence of the selected poses for all swimming styles on a data set consisting of a variety of swimmers.

  9. Estimation of heart rate from foot worn photoplethysmography sensors during fast bike exercise.

    Science.gov (United States)

    Jarchi, Delaram; Casson, Alexander J

    2016-08-01

    This paper presents a new method for estimating the average heart rate from a foot/ankle worn photoplethysmography (PPG) sensor during fast bike activity. Placing the PPG sensor on the lower half of the body allows more energy to be collected from energy harvesting in order to give a power autonomous sensor node, but comes at the cost of introducing significant motion interference into the PPG trace. We present a normalised least mean square adaptive filter and short-time Fourier transform based algorithm for estimating heart rate in the presence of this motion contamination. Results from 8 subjects show the new algorithm has an average error of 9 beats-per-minute when compared to an ECG gold standard.
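
    The abstract names a normalised least mean square (NLMS) adaptive filter followed by a short-time Fourier transform stage. The sketch below illustrates that general pipeline on synthetic data, assuming an accelerometer channel is available as the motion reference; the reference choice, filter length, step size, window length, and spectral band are assumptions for the example rather than the paper's settings.

    ```python
    import numpy as np

    def nlms_cancel(ppg, ref, mu=0.5, taps=16, eps=1e-6):
        """Normalised LMS filter: predict the motion artefact in `ppg` from `ref` and
        return the error signal, i.e. the artefact-reduced PPG."""
        w = np.zeros(taps)
        out = np.zeros(len(ppg))
        for n in range(taps, len(ppg)):
            x = ref[n - taps:n][::-1]
            e = ppg[n] - w @ x
            w += mu * e * x / (eps + x @ x)
            out[n] = e
        return out

    def average_hr_bpm(signal, fs, win_s=8.0, lo=0.7, hi=3.5):
        """Average heart rate from the dominant spectral peak of successive windows (a crude STFT)."""
        win = int(win_s * fs)
        rates = []
        for start in range(0, len(signal) - win, win // 2):
            seg = signal[start:start + win] - np.mean(signal[start:start + win])
            spec = np.abs(np.fft.rfft(seg * np.hanning(win)))
            freqs = np.fft.rfftfreq(win, 1.0 / fs)
            band = (freqs >= lo) & (freqs <= hi)      # plausible heart-rate band, 42-210 bpm
            rates.append(freqs[band][np.argmax(spec[band])] * 60.0)
        return float(np.mean(rates))

    # Synthetic example: 1.5 Hz (90 bpm) pulse plus pedalling interference seen by an accelerometer.
    fs = 100.0
    t = np.arange(0.0, 60.0, 1.0 / fs)
    accel = np.sin(2 * np.pi * 2.3 * t)
    ppg = (0.6 * np.sin(2 * np.pi * 1.5 * t) + 1.5 * accel
           + 0.1 * np.random.default_rng(2).standard_normal(len(t)))
    clean = nlms_cancel(ppg, accel)
    print(f"estimated average HR: {average_hr_bpm(clean, fs):.1f} bpm")
    ```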

  10. Convex Solution to a Joint Attitude and Spin-Rate Estimation Problem

    Science.gov (United States)

    Saunderson, James; Parrilo, Pablo A.; Willsky, Alan S.

    2016-01-01

    We consider the problem of jointly estimating the attitude and spin-rate of a spinning spacecraft. Psiaki (J. Astronautical Sci., 57(1-2):73--92, 2009) has formulated a family of optimization problems that generalize the classical least-squares attitude estimation problem, known as Wahba's problem, to the case of a spinning spacecraft. If the rotation axis is fixed and known, but the spin-rate is unknown (such as for nutation-damped spin-stabilized spacecraft) we show that Psiaki's problem can be reformulated exactly as a type of tractable convex optimization problem called a semidefinite optimization problem. This reformulation allows us to globally solve the problem using standard numerical routines for semidefinite optimization. It also provides a natural semidefinite relaxation-based approach to more complicated variations on the problem.

  11. An approach for estimating time-variable rates from geodetic time series

    Science.gov (United States)

    Didova, Olga; Gunter, Brian; Riva, Riccardo; Klees, Roland; Roese-Koerner, Lutz

    2016-11-01

    There has been considerable research in the literature focused on computing and forecasting sea-level changes in terms of constant trends or rates. The Antarctic ice sheet is one of the main contributors to sea-level change with highly uncertain rates of glacial thinning and accumulation. Geodetic observing systems such as the Gravity Recovery and Climate Experiment (GRACE) and the Global Positioning System (GPS) are routinely used to estimate these trends. In an effort to improve the accuracy and reliability of these trends, this study investigates a technique that allows the estimated rates, along with co-estimated seasonal components, to vary in time. For this, state space models are defined and then solved by a Kalman filter (KF). The reliable estimation of noise parameters is one of the main problems encountered when using a KF approach, which is solved by numerically optimizing likelihood. Since the optimization problem is non-convex, it is challenging to find an optimal solution. To address this issue, we limited the parameter search space using classical least-squares adjustment (LSA). In this context, we also tested the usage of inequality constraints by directly verifying whether they are supported by the data. The suggested technique for time-series analysis is expanded to classify and handle time-correlated observational noise within the state space framework. The performance of the method is demonstrated using GRACE and GPS data at the CAS1 station located in East Antarctica and compared to commonly used LSA. The results suggest that the outlined technique allows for more reliable trend estimates, as well as for more physically valuable interpretations, while validating independent observing systems.
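
    The study defines state space models that are solved with a Kalman filter, with the noise parameters obtained by numerically optimising the likelihood. The sketch below shows only the filtering step for a local linear trend model (a level plus a time-variable rate); the co-estimated seasonal terms, the noise-parameter optimisation, and the treatment of time-correlated observational noise are omitted, and the variances used are arbitrary illustrative values.

    ```python
    import numpy as np

    def local_linear_trend_filter(y, obs_var=1.0, level_var=0.01, rate_var=0.001):
        """Kalman filter for a local linear trend; returns the filtered time-variable rate."""
        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # level_{t+1} = level_t + rate_t
        H = np.array([[1.0, 0.0]])               # we observe the level plus noise
        Q = np.diag([level_var, rate_var])
        x = np.array([y[0], 0.0])                # initial state: first observation, zero rate
        P = np.eye(2) * 10.0                     # vague initial covariance
        rates = []
        for obs in y:
            x = F @ x                            # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + obs_var            # update
            K = P @ H.T / S
            x = x + (K * (obs - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            rates.append(x[1])
        return np.array(rates)

    # Toy series whose underlying rate changes halfway through.
    rng = np.random.default_rng(3)
    t = np.arange(200)
    truth = np.where(t < 100, 0.05 * t, 5.0 + 0.2 * (t - 100))
    y = truth + rng.standard_normal(len(t))
    est_rates = local_linear_trend_filter(y)
    print("filtered rate early vs late:", est_rates[80].round(2), est_rates[-1].round(2))
    ```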

  12. Non-contact dual pulse Doppler system based respiratory and heart rates estimation for CHF patients.

    Science.gov (United States)

    Tran, Vinh Phuc; Ali Al-Jumaily, Adel

    2015-01-01

    Long-term continuous patient monitoring is required in many health systems for monitoring and diagnostic purposes. Most monitoring systems have shortcomings related to their functionality or patient comfort. Non-contact continuous monitoring systems have been developed to address some of these shortcomings. One such system is non-contact physiological vital signs assessment for chronic heart failure (CHF) patients. This paper presents a novel automated estimation algorithm for non-contact physiological vital signs assessment for CHF patients based on a patented novel non-contact biomotion sensor. A database consisting of twenty CHF patients with New York Heart Association (NYHA) heart failure classification Class II & III, who underwent full polysomnography (PSG) analysis for the diagnosis of sleep apnea, disordered sleep, or both, was selected for the study. The patients' mean age is 68.89 years, with mean body weight of 86.87 kg, mean BMI of 28.83 (obesity) and mean recorded sleep duration of 7.78 hours. The proposed algorithm analyzes the non-contact biomotion signals and estimates the patients' respiratory and heart rates. The outputs of the algorithm are compared with gold-standard PSG recordings. Across all twenty patients' recordings, the respiratory rate estimation achieved a median accuracy of 92.4689% with a median error of ±1.2398 breaths per minute. The heart rate estimation achieved a median accuracy of 88.0654% with a median error of ±7.9338 beats per minute. Given the good performance of the proposed automated estimation algorithm, the patented non-contact biomotion sensor can be an excellent tool for long-term continuous sleep monitoring of CHF patients in the home environment in an ultra-convenient fashion.

  13. A variational technique to estimate snowfall rate from coincident radar, snowflake, and fall-speed observations

    Science.gov (United States)

    Cooper, Steven J.; Wood, Norman B.; L'Ecuyer, Tristan S.

    2017-07-01

    Estimates of snowfall rate as derived from radar reflectivities alone are non-unique. Different combinations of snowflake microphysical properties and particle fall speeds can conspire to produce nearly identical snowfall rates for given radar reflectivity signatures. Such ambiguities can result in retrieval uncertainties on the order of 100-200 % for individual events. Here, we use observations of particle size distribution (PSD), fall speed, and snowflake habit from the Multi-Angle Snowflake Camera (MASC) to constrain estimates of snowfall derived from Ka-band ARM zenith radar (KAZR) measurements at the Atmospheric Radiation Measurement (ARM) North Slope Alaska (NSA) Climate Research Facility site at Barrow. MASC measurements of microphysical properties with uncertainties are introduced into a modified form of the optimal-estimation CloudSat snowfall algorithm (2C-SNOW-PROFILE) via the a priori guess and variance terms. Use of the MASC fall speed, MASC PSD, and CloudSat snow particle model as base assumptions resulted in retrieved total accumulations with a -18 % difference relative to nearby National Weather Service (NWS) observations over five snow events. The average error was 36 % for the individual events. Use of different but reasonable combinations of retrieval assumptions resulted in estimated snowfall accumulations with differences ranging from -64 to +122 % for the same storm events. Retrieved snowfall rates were particularly sensitive to assumed fall speed and habit, suggesting that in situ measurements can help to constrain key snowfall retrieval uncertainties. More accurate knowledge of these properties dependent upon location and meteorological conditions should help refine and improve ground- and space-based radar estimates of snowfall.

  14. A variational technique to estimate snowfall rate from coincident radar, snowflake, and fall-speed observations

    Directory of Open Access Journals (Sweden)

    S. J. Cooper

    2017-07-01

    Full Text Available Estimates of snowfall rate as derived from radar reflectivities alone are non-unique. Different combinations of snowflake microphysical properties and particle fall speeds can conspire to produce nearly identical snowfall rates for given radar reflectivity signatures. Such ambiguities can result in retrieval uncertainties on the order of 100–200 % for individual events. Here, we use observations of particle size distribution (PSD), fall speed, and snowflake habit from the Multi-Angle Snowflake Camera (MASC) to constrain estimates of snowfall derived from Ka-band ARM zenith radar (KAZR) measurements at the Atmospheric Radiation Measurement (ARM) North Slope Alaska (NSA) Climate Research Facility site at Barrow. MASC measurements of microphysical properties with uncertainties are introduced into a modified form of the optimal-estimation CloudSat snowfall algorithm (2C-SNOW-PROFILE) via the a priori guess and variance terms. Use of the MASC fall speed, MASC PSD, and CloudSat snow particle model as base assumptions resulted in retrieved total accumulations with a −18 % difference relative to nearby National Weather Service (NWS) observations over five snow events. The average error was 36 % for the individual events. Use of different but reasonable combinations of retrieval assumptions resulted in estimated snowfall accumulations with differences ranging from −64 to +122 % for the same storm events. Retrieved snowfall rates were particularly sensitive to assumed fall speed and habit, suggesting that in situ measurements can help to constrain key snowfall retrieval uncertainties. More accurate knowledge of these properties dependent upon location and meteorological conditions should help refine and improve ground- and space-based radar estimates of snowfall.

  15. Estimation of the rate of egg contamination from Salmonella-infected chickens.

    Science.gov (United States)

    Arnold, M E; Martelli, F; McLaren, I; Davies, R H

    2014-02-01

    Salmonella enterica serovar Enteritidis (S. Enteritidis) is one of the most prevalent causes for human gastroenteritis and is by far the predominant Salmonella serovar among human cases, followed by Salmonella Typhimurium. Contaminated eggs produced by infected laying hens are thought to be the main source of human infection with S. Enteritidis throughout the world. Although previous studies have looked at the proportion of infected eggs from infected flocks, there is still uncertainty over the rate at which infected birds produce contaminated eggs. The aim of this study was to estimate the rate at which infected birds produce contaminated egg shells and egg contents. Data were collected from two studies, consisting of 15 and 20 flocks, respectively. Faecal and environmental sampling and testing of ovaries/caeca from laying hens were carried out in parallel with (i) for the first study, testing 300 individual eggs, contents and shells together and (ii) for the second study, testing 4000 eggs in pools of six, with shells and contents tested separately. Bayesian methods were used to estimate the within-flock prevalence of infection from the faecal and hen post-mortem data, and this was related to the proportion of positive eggs. Results indicated a linear relationship between the rate of contamination of egg contents and the prevalence of infected chickens, but a nonlinear (quadratic) relationship between infection prevalence and the rate of egg shell contamination, with egg shell contamination occurring at a much higher rate than that of egg contents. There was also a significant difference in the rate of egg contamination between serovars, with S. Enteritidis causing a higher rate of contamination of egg contents and a lower rate of contamination of egg shells compared to non-S. Enteritidis serovars. These results will be useful for risk assessments of human exposure to Salmonella-contaminated eggs. © 2013 Crown copyright. This article is published with the

  16. An estimation of finger-tapping rates and load capacities and the effects of various factors.

    Science.gov (United States)

    Ekşioğlu, Mahmut; İşeri, Ali

    2015-06-01

    The aim of this study was to estimate the finger-tapping rates and finger load capacities of eight fingers (excluding thumbs) for a healthy adult population and investigate the effects of various factors on tapping rate. Finger-tapping rate, the total number of finger taps per unit of time, can be used as a design parameter of various products and also as a psychomotor test for evaluating patients with neurologic problems. A 1-min tapping task was performed by 148 participants with maximum volitional tempo for each of eight fingers. For each of the tapping tasks, the participant with the corresponding finger tapped the associated key in the standard position on the home row of a conventional keyboard for touch typing. The index and middle fingers were the fastest fingers for both hands, and little fingers the slowest. All dominant-hand fingers, except little finger, had higher tapping rates than the fastest finger of the nondominant hand. Tapping rate decreased with age and smokers tapped faster than nonsmokers. Tapping duration and exercise had also significant effect on tapping rate. Normative data of tapping rates and load capacities of eight fingers were estimated for the adult population. In designs of psychomotor tests that require the use of tapping rate or finger load capacity data, the effects of finger, age, smoking, and tapping duration need to be taken into account. The findings can be used for ergonomic designs requiring finger-tapping capacity and also as a reference in psychomotor tests. © 2015, Human Factors and Ergonomics Society.

  17. Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora

    Directory of Open Access Journals (Sweden)

    Ryosuke Takahira

    2016-10-01

    Full Text Available One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubt regarding a correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. Thus, we conclude that the entropy rates of human languages are positive but approximately 20% smaller than without extrapolation. Although the entropy rate estimates depend on the script kind, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg’s hypothesis.
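
    The abstract does not spell out the stretched-exponential ansatz itself. Purely to illustrate the extrapolation step, the sketch below fits an assumed three-parameter form (an asymptotic entropy rate h plus a stretched-exponential decay) to synthetic per-character compression rates; the functional form, the data, and the parameter values are invented for the example and are not the article's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Assumed extrapolation form: the compression rate decays toward an asymptote h as the data
    # length n grows. This is a stand-in for the article's ansatz, which the abstract does not give.
    def ansatz(n, h, A, beta):
        return h + A * np.exp(-(n / 1e6) ** beta)

    # Synthetic "measurements": bits per character achieved on prefixes of size n.
    rng = np.random.default_rng(6)
    n = np.array([1e5, 3e5, 1e6, 3e6, 1e7, 3e7, 1e8])
    true_h, true_A, true_beta = 1.4, 0.9, 0.4
    rate = ansatz(n, true_h, true_A, true_beta) + rng.normal(0.0, 0.01, n.size)

    params, _ = curve_fit(ansatz, n, rate, p0=[1.0, 1.0, 0.5], maxfev=10000)
    h_fit, A_fit, beta_fit = params
    print(f"extrapolated entropy rate h = {h_fit:.2f} bits/character (value used to simulate: {true_h})")
    ```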

  18. A method to estimate emission rates from industrial stacks based on neural networks.

    Science.gov (United States)

    Olcese, Luis E; Toselli, Beatriz M

    2004-11-01

    This paper presents a technique based on artificial neural networks (ANN) to estimate pollutant emission rates from industrial stacks, on the basis of pollutant concentrations measured on the ground. The ANN is trained on data generated by the ISCST3 model, which is widely accepted for evaluating the dispersion of primary pollutants as part of an environmental impact study. Simulations using theoretical values and comparisons with field data are performed, obtaining good results in both cases for predicting emission rates. The application of this technique would allow the local environmental authority to control emissions from industrial plants without the need to perform direct measurements inside the plant. Copyright © 2004 Elsevier Ltd.
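
    As a toy version of the approach described above, the sketch below trains a small neural network to recover a stack emission rate from ground-level concentrations. A simplified Gaussian plume expression with made-up dispersion coefficients stands in for the ISCST3 model as the training-data generator, and the receptor layout and network settings are illustrative assumptions only.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    def plume_ground_concentration(q, u, h, x):
        """Ground-level, centreline concentration at downwind distance x (toy plume formula)."""
        sig_y, sig_z = 0.08 * x**0.9, 0.06 * x**0.9   # invented dispersion curves
        return q / (np.pi * u * sig_y * sig_z) * np.exp(-h**2 / (2.0 * sig_z**2))

    rng = np.random.default_rng(4)
    receptors = np.array([200.0, 500.0, 1000.0, 2000.0])   # downwind receptor distances (m)

    def make_dataset(n):
        q = rng.uniform(1.0, 50.0, n)      # emission rate (g/s), the quantity to recover
        u = rng.uniform(1.0, 10.0, n)      # wind speed (m/s)
        h = rng.uniform(20.0, 80.0, n)     # effective stack height (m)
        conc = np.stack([plume_ground_concentration(q, u, h, x) for x in receptors], axis=1)
        conc *= rng.lognormal(0.0, 0.1, conc.shape)        # multiplicative measurement noise
        return np.column_stack([conc, u, h]), q

    X_train, q_train = make_dataset(5000)
    X_test, q_test = make_dataset(500)
    scaler = StandardScaler().fit(X_train)
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(scaler.transform(X_train), q_train)
    pred = model.predict(scaler.transform(X_test))
    print("mean relative error on held-out cases:", float(np.mean(np.abs(pred - q_test) / q_test)))
    ```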

  19. Comparison of estimated and background subsidence rates in Texas-Louisiana geopressured geothermal areas

    Energy Technology Data Exchange (ETDEWEB)

    Lee, L.M.; Clayton, M.; Everingham, J.; Harding, R.C.; Massa, A.

    1982-06-01

    A comparison of background and potential geopressured geothermal development-related subsidence rates is given. Estimated potential geopressured-related rates at six prospects are presented. The effect of subsidence on the Texas-Louisiana Gulf Coast is examined including the various associated ground movements and the possible effects of these ground movements on surficial processes. The relationships between ecosystems and subsidence, including the capability of geologic and biologic systems to adapt to subsidence, are analyzed. The actual potential for environmental impact caused by potential geopressured-related subsidence at each of four prospects is addressed. (MHR)

  20. Methods matter: considering locomotory mode and respirometry technique when estimating metabolic rates of fishes

    Science.gov (United States)

    Rummer, Jodie L.; Binning, Sandra A.; Roche, Dominique G.; Johansen, Jacob L.

    2016-01-01

    Respirometry is frequently used to estimate metabolic rates and examine organismal responses to environmental change. Although a range of methodologies exists, it remains unclear whether differences in chamber design and exercise (type and duration) produce comparable results within individuals and whether the most appropriate method differs across taxa. We used a repeated-measures design to compare estimates of maximal and standard metabolic rates (MMR and SMR) in four coral reef fish species using the following three methods: (i) prolonged swimming in a traditional swimming respirometer; (ii) short-duration exhaustive chase with air exposure followed by resting respirometry; and (iii) short-duration exhaustive swimming in a circular chamber. We chose species that are steady/prolonged swimmers, using either a body–caudal fin or a median–paired fin swimming mode during routine swimming. Individual MMR estimates differed significantly depending on the method used. Swimming respirometry consistently provided the best (i.e. highest) estimate of MMR in all four species irrespective of swimming mode. Both short-duration protocols (exhaustive chase and swimming in a circular chamber) produced similar MMR estimates, which were up to 38% lower than those obtained during prolonged swimming. Furthermore, underestimates were not consistent across swimming modes or species, indicating that a general correction factor cannot be used. However, SMR estimates (upon recovery from both of the exhausting swimming methods) were consistent across both short-duration methods. Given the increasing use of metabolic data to assess organismal responses to environmental stressors, we recommend carefully considering respirometry protocols before experimentation. Specifically, results should not readily be compared across methods; discrepancies could result in misinterpretation of MMR and aerobic scope. PMID:27382471

  1. Glomerular Filtration Rate Estimation by Serum Creatinine or Serum Cystatin C in Preterm (<31 Weeks) Neonates.

    Science.gov (United States)

    Garg, Parvesh; Hidalgo, Guillermo

    2017-06-15

    Glomerular filtration rate (GFR) was estimated by serum creatinine (Schwartz's equation) and serum cystatin C (Filler's equation) in preterm neonates (24-31 weeks of gestation) in a prospective cohort study. Serum creatinine and cystatin C was obtained at birth and then every two weeks during the first month. We found a poor fit between two methods, and a steadier GFR assessment by cystatin C.
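
    For reference, commonly cited forms of the two estimating equations named in this record are sketched below. The constants shown (k = 0.33 for the original Schwartz equation in low-birth-weight infants, and the Filler & Lepage cystatin C equation) are the values usually quoted in the literature, so they may differ from the exact versions used in the study, the example numbers are hypothetical, and none of this is clinical guidance.

    ```python
    import math

    def gfr_schwartz(height_cm: float, serum_creatinine_mg_dl: float, k: float = 0.33) -> float:
        """Schwartz-type estimate, eGFR (mL/min/1.73 m^2) = k * height / SCr.

        k = 0.33 is the value usually quoted for low-birth-weight/preterm infants with the
        original Schwartz equation; other populations use other k values.
        """
        return k * height_cm / serum_creatinine_mg_dl

    def gfr_filler(cystatin_c_mg_l: float) -> float:
        """Filler & Lepage cystatin C estimate: log10(eGFR) = 1.962 + 1.123 * log10(1 / CysC)."""
        return 10 ** (1.962 + 1.123 * math.log10(1.0 / cystatin_c_mg_l))

    # Hypothetical preterm neonate: height 40 cm, SCr 0.9 mg/dL, cystatin C 1.6 mg/L.
    print(f"Schwartz: {gfr_schwartz(40.0, 0.9):.1f}  Filler: {gfr_filler(1.6):.1f} mL/min/1.73 m^2")
    ```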

  2. Experimental estimation of mutation rates in a wheat population with a gene genealogy approach.

    Science.gov (United States)

    Raquin, Anne-Laure; Depaulis, Frantz; Lambert, Amaury; Galic, Nathalie; Brabant, Philippe; Goldringer, Isabelle

    2008-08-01

    Microsatellite markers are extensively used to evaluate genetic diversity in natural or experimental evolving populations. Their high degree of polymorphism reflects their high mutation rates. Estimates of the mutation rates are therefore necessary when characterizing diversity in populations. As a complement to the classical experimental designs, we propose to use experimental populations, where the initial state is entirely known and some intermediate states have been thoroughly surveyed, thus providing a short timescale estimation together with a large number of cumulated meioses. In this article, we derived four original gene genealogy-based methods to assess mutation rates with limited bias due to relevant model assumptions incorporating the initial state, the number of new alleles, and the genetic effective population size. We studied the evolution of genetic diversity at 21 microsatellite markers, after 15 generations in an experimental wheat population. Compared to the parents, 23 new alleles were found in generation 15 at 9 of the 21 loci studied. We provide evidence that they arose by mutation. Corresponding estimates of the mutation rates ranged from 0 to 4.97 × 10^-3 per generation (i.e., year). Sequences of several alleles revealed that length polymorphism was only due to variation in the core of the microsatellite. Among different microsatellite characteristics, both the motif repeat number and an independent estimation of the Nei diversity were correlated with the novel diversity. Despite a reduced genetic effective size, global diversity at microsatellite markers increased in this population, suggesting that microsatellite diversity should be used with caution as an indicator in biodiversity conservation issues.

  3. Estimating relative physical workload using heart rate monitoring: a validation by whole-body indirect calorimetry.

    Science.gov (United States)

    Garet, Martin; Boudet, Gil; Montaurier, Christophe; Vermorel, Michel; Coudert, Jean; Chamoux, Alain

    2005-05-01

    Measuring physical workload in occupational medicine is fundamental for risk prevention. An indirect measurement of total and relative energy expenditure (EE) from heart rate (HR) is widely used but has never been validated. The aim of this study was to validate this HR-estimated energy expenditure (HREEE) method against whole-body indirect calorimetry. Twenty-four-hour HR and EE values were recorded continuously in calorimetric chambers for 52 adult males and females (19-65 years). An 8-h working period was retained, comprising several exercise sessions on a cycloergometer at intensities up to 65% of the peak rate of oxygen uptake. HREEE was calculated with reference to cardiac reserve. A corrected HREEE (CHREEE) was also calculated with a modification to the lowest value of cardiac reserve. Both values were further compared to established methods: the flex-HR method, and the use of a 3rd order polynomial relationship to estimate total and relative EE. No significant difference was found in total EE whether measured in a calorimetric chamber or estimated from CHREEE for the working period. A perfect linear and identity relationship was found between CHREEE and energy reserve values for intensities ranging from 15% to 65%. Relative physical workload can be accurately assessed from HR recordings when expressed as CHREEE between 15% and 65%, and EE can be accurately estimated using the CHREEE method.
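
    The HREEE method above expresses workload relative to cardiac reserve. A minimal sketch of that core quantity, the Karvonen-style percentage of heart rate reserve, follows; the paper's corrected variant (CHREEE), which modifies the lowest value of the cardiac reserve, is not reproduced here.

    ```python
    def percent_cardiac_reserve(hr: float, hr_rest: float, hr_max: float) -> float:
        """Heart rate expressed as a percentage of the cardiac (heart rate) reserve."""
        return 100.0 * (hr - hr_rest) / (hr_max - hr_rest)

    # Example: resting HR 60 bpm, maximal HR 180 bpm, working HR 120 bpm -> 50% of cardiac reserve.
    print(percent_cardiac_reserve(120.0, 60.0, 180.0))
    ```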

  4. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

    Directory of Open Access Journals (Sweden)

    Eleanor S Devenish Nelson

    Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
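
    As an illustration of the general resample-and-project idea behind confidence intervals on population growth, the sketch below draws vital rates from simple sampling distributions, builds a two-stage projection matrix, and takes percentiles of its dominant eigenvalue. The matrix structure, point estimates, and sample sizes are invented for the example, and the sampling distributions are cruder than the likelihood-based uncertainty used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    surv_est = np.array([0.55, 0.70])   # juvenile and adult survival point estimates (invented)
    surv_n = np.array([60, 80])         # animals monitored behind each estimate
    fec_est, fec_n = 3.0, 40            # mean female offspring per adult female, litters observed

    def sample_lambda():
        # Resample each vital rate from a sampling distribution consistent with its sample size.
        s = rng.binomial(surv_n, surv_est) / surv_n
        f = rng.poisson(fec_est * fec_n) / fec_n
        matrix = np.array([[0.0,  f],      # juveniles do not breed; adults produce f female young
                           [s[0], s[1]]])  # juvenile survival to adulthood, adult survival
        return np.max(np.abs(np.linalg.eigvals(matrix)))

    lams = np.array([sample_lambda() for _ in range(5000)])
    lo, hi = np.percentile(lams, [2.5, 97.5])
    print(f"lambda = {lams.mean():.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
    ```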

  5. Motion correction for improved estimation of heart rate using a visual spectrum camera

    Science.gov (United States)

    Tarbox, Elizabeth A.; Rios, Christian; Kaur, Balvinder; Meyer, Shaun; Hirt, Lauren; Tran, Vy; Scott, Kaitlyn; Ikonomidou, Vasiliki

    2017-05-01

    Heart rate measurement using a visual spectrum recording of the face has drawn interest over the last few years as a technology with various health and security applications. In our previous work, we have shown that it is possible to estimate the heart beat timing accurately enough to perform heart rate variability analysis for contactless stress detection. However, a major confounding factor in this approach is the presence of movement, which can interfere with the measurements. To mitigate the effects of movement, in this work we propose the use of face detection and tracking based on the Karhunen-Loève algorithm to counteract measurement errors introduced by normal subject motion, as expected during a common seated conversation setting. We analyze the requirements on image acquisition for the algorithm to work, its performance under different ranges of motion and changes of distance to the camera, and the effect on the acquired signal of illumination changes due to different positioning with respect to light sources. Our results suggest that the effect of face tracking on visual-spectrum-based cardiac signal estimation depends on the amplitude of the motion. While for larger-scale conversation-induced motion it can significantly improve estimation accuracy, for smaller-scale movements, such as those caused by breathing or talking without major movement, errors in facial tracking may interfere with signal estimation. Overall, employing facial tracking is a crucial step in adapting this technology to real-life situations with satisfactory results.

  6. Uncertainties in Instantaneous Rainfall Rate Estimates: Satellite vs. Ground-Based Observations

    Science.gov (United States)

    Amitai, E.; Huffman, G. J.; Goodrich, D. C.

    2012-12-01

    High-resolution precipitation intensities are significant in many fields. For example, hydrological applications such as flood forecasting, runoff accommodation, erosion prediction, and urban hydrological studies depend on an accurate representation of the rainfall that does not infiltrate the soil, which is controlled by the rain intensities. Changes in the rain rate pdf over long periods are important for climate studies. Are our estimates accurate enough to detect such changes? While most evaluation studies focus on the accuracy of rainfall accumulation estimates, evaluation of instantaneous rainfall intensity estimates is relatively rare. Can a spaceborne radar help in assessing ground-based radar estimates of precipitation intensities, or is it the other way around? In this presentation we will provide some insight on the relative accuracy of instantaneous precipitation intensity fields from satellite and ground-based observations. We will examine satellite products such as those from the TRMM Precipitation Radar and those from several passive microwave imagers and sounders by comparing them with advanced high-resolution ground-based products taken at overpass time (snapshot comparisons). The ground-based instantaneous rain rate fields are based on in situ measurements (i.e., the USDA/ARS Walnut Gulch dense rain gauge network), remote sensing observations (i.e., the NOAA/NSSL NMQ/Q2 radar-only national mosaic), and multi-sensor products (i.e., high-resolution gauge-adjusted radar national mosaics, which we have developed by applying a gauge correction to the Q2 products).

  7. Pollen pool heterogeneity in jack pine (Pinus banksiana Lamb.): a problem for estimating outcrossing rates?

    Science.gov (United States)

    Fu, Y B; Knowles, P; Perry, D J

    1992-02-01

    Pollen pool heterogeneity, which violates an assumption of the mixed-mating model, is a major potential problem in measuring plant mating systems. In this study, isozyme markers were used to examine pollen pool heterogeneity in two natural populations of jack pine, Pinus banksiana Lamb., in northwestern Ontario, Canada. Population multilocus estimates of outcrossing rate ranged from 0.83 to 0.95 and differed significantly between populations. Single-tree multilocus outcrossing rates were found to be homogeneous among trees in both populations. Computer simulation studies indicated that a consanguineous pollen pool (pollen gametes related to the mother tree) was capable of biasing population outcrossing estimates downward. Random pollen pool heterogeneity (uncorrelated with maternal genotypes) did not appear to affect population outcrossing estimates in the simulations. Heterogeneity G-tests and Spearman rank tests showed that pollen pool heterogeneity existed in the two natural populations examined; however, it did not have a major effect on population outcrossing estimates, since the consanguineous pollen pool detected was probably a relatively minor component of the outcross pollen pool in both populations. In addition, heterogeneity G-tests were found to be not sensitive in detecting pollen pool heterogeneity caused by consanguineous pollen pool.

  8. Science in the Making: Right Hand, Left Hand. III: Estimating historical rates of left-handedness.

    Science.gov (United States)

    McManus, I C; Moore, James; Freegard, Matthew; Rawles, Richard

    2010-01-01

    The BBC television programme Right Hand, Left Hand, broadcast in August 1953, used a postal questionnaire to ask viewers about their handedness. Respondents were born between 1864 and 1948, and in principle therefore the study provides information on rates of left-handedness in those born in the nineteenth century, a group for which few data are otherwise available. A total of 6,549 responses were received, with an overall rate of left-handedness of 15.2%, which is substantially above that expected for a cohort born in the nineteenth and early twentieth centuries. Left-handers are likely to respond preferentially to surveys about handedness, and the extent of over-response can be estimated in modern control data obtained from a handedness website, from the 1953 BBC data, and from Crichton-Browne's 1907 survey, in which there was also a response bias. Response bias appears to have been growing, being relatively greater in the most modern studies. In the 1953 data there is also evidence that left-handers were more common among later rather than early responders, suggesting that left-handers may have been specifically recruited into the study, perhaps by other left-handers who had responded earlier. In the present study the estimated rate of bias was used to correct the nineteenth-century BBC data, which was then combined with other available data as a mixture of two constrained Weibull functions, to obtain an overall estimate of handedness rates in the nineteenth century. The best estimates are that left-handedness was at its nadir of about 3% for those born between about 1880 and 1900. Extrapolating backwards, the rate of left-handedness in the eighteenth century was probably about 10%, with the decline beginning in about 1780, and reaching around 7% in about 1830, although inevitably there are many uncertainties in those estimates. What does seem indisputable is that rates of left-handedness fell during most of the nineteenth century, only subsequently to rise in

  9. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  10. Estimating inbreeding rates in natural populations: Addressing the problem of incomplete pedigrees

    Science.gov (United States)

    Miller, Mark P.; Haig, Susan M.; Ballou, Jonathan D.; Steel, E. Ashley

    2017-01-01

    Understanding and estimating inbreeding is essential for managing threatened and endangered wildlife populations. However, determination of inbreeding rates in natural populations is confounded by incomplete parentage information. We present an approach for quantifying inbreeding rates for populations with incomplete parentage information. The approach exploits knowledge of pedigree configurations that lead to inbreeding coefficients of F = 0.25 and F = 0.125, allowing for quantification of Pr(I|k): the probability of observing pedigree I given the fraction of known parents (k). We developed analytical expressions under simplifying assumptions that define properties and behavior of inbreeding rate estimators for varying values of k. We demonstrated that inbreeding is overestimated if Pr(I|k) is not taken into consideration and that bias is primarily influenced by k. By contrast, our new estimator, incorporating Pr(I|k), is unbiased over a wide range of values of k that may be observed in empirical studies. Stochastic computer simulations that allowed complex inter- and intragenerational inbreeding produced similar results. We illustrate the effects that accounting for Pr(I|k) can have in empirical data by revisiting published analyses of Arabian oryx (Oryx leucoryx) and Red deer (Cervus elaphus). Our results demonstrate that incomplete pedigrees are not barriers for quantifying inbreeding in wild populations. Application of our approach will permit a better understanding of the role that inbreeding plays in the dynamics of populations of threatened and endangered species and may help refine our understanding of inbreeding avoidance mechanisms in the wild.

  11. Glomerular filtration rate estimated from the uptake phase of 99mTc-DTPA renography in chronic renal failure

    DEFF Research Database (Denmark)

    Petersen, L J; Petersen, J R; Talleruphuus, U

    1999-01-01

    The purpose of the study was to compare the estimation of glomerular filtration rate (GFR) from 99mTc-DTPA renography with that estimated from the renal clearance of 51Cr-EDTA, creatinine and urea.

  12. Path-integral virial estimator for reaction-rate calculation based on the quantum instanton approximation.

    Science.gov (United States)

    Yang, Sandy; Yamamoto, Takeshi; Miller, William H

    2006-02-28

    The quantum instanton approximation is a type of quantum transition-state theory that calculates the chemical reaction rate using the reactive flux correlation function and its low-order derivatives at time zero. Here we present several path-integral estimators for the latter quantities, which characterize the initial decay profile of the flux correlation function. As with the internal energy or heat-capacity calculation, different estimators yield different variances (and therefore different convergence properties) in a Monte Carlo calculation. Here we obtain a virial (-type) estimator by using a coordinate scaling procedure rather than integration by parts, which allows more computational benefits. We also consider two different methods for treating the flux operator, i.e., local-path and global-path approaches, in which the latter achieves a smaller variance at the cost of using second-order potential derivatives. Numerical tests are performed for a one-dimensional Eckart barrier and a model proton transfer reaction in a polar solvent, which illustrates the reduced variance of the virial estimator over the corresponding thermodynamic estimator.

  13. Combined methodology for estimating dose rates and health effects from exposure to radioactive pollutants

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Leggett, R.W.; Yalcintas, M.G.

    1980-12-01

    The work described in the report is basically a synthesis of two previously existing computer codes: INREM II, developed at the Oak Ridge National Laboratory (ORNL); and CAIRD, developed by the Environmental Protection Agency (EPA). The INREM II code uses contemporary dosimetric methods to estimate doses to specified reference organs due to inhalation or ingestion of a radionuclide. The CAIRD code employs actuarial life tables to account for competing risks in estimating numbers of health effects resulting from exposure of a cohort to some incremental risk. The combined computer code, referred to as RADRISK, estimates numbers of health effects in a hypothetical cohort of 100,000 persons due to continuous lifetime inhalation or ingestion of a radionuclide. Also briefly discussed in this report is a method of estimating numbers of health effects in a hypothetical cohort due to continuous lifetime exposure to external radiation. This method employs the CAIRD methodology together with dose conversion factors generated by the computer code DOSFACTER, developed at ORNL; these dose conversion factors are used to estimate dose rates to persons due to radionuclides in the air or on the ground surface. The combination of the life-table and dosimetric methodologies supports the development of guidelines for the release of radioactive pollutants to the atmosphere, as required by the Clean Air Act Amendments of 1977.

  14. Estimated glomerular filtration rate in a sample of university students. Argentina, 2014-2015. Preliminary results

    Directory of Open Access Journals (Sweden)

    Cecilia Brissón

    2017-04-01

    Introduction: The high global prevalence of Chronic Kidney Disease (CKD) and its implications for public health are well recognized. There are few studies on the values of glomerular filtration rate (GFR) and its staging in young people. GFR was estimated by creatinine clearance (CrCl) and by formulas, in total and by GFR category G and sex, in a sample of Argentine students. Agreement between estimation methods was also evaluated. Methods: Descriptive study of 75 students during the period May 2014-September 2015. GFR was estimated by Cockcroft-Gault (CG), MDRD-4 and CrCl using creatinine not traceable to IDMS, and by MDRD-4 IDMS and CKD-EPI with traceable creatinine. Results: Mean values and frequencies differed depending on the estimation method. Moderate to good agreement was found between CKD-EPI and CG and between CrCl and CG. Conclusions: Although these findings are preliminary and should be confirmed in a larger sample, they help to describe higher GFR stages in a young population and to visualize differences between the estimators used.

  15. Estimates of biogenic methane production rates in deep marine sediments at Hydrate Ridge, Cascadia margin.

    Science.gov (United States)

    Colwell, F S; Boyd, S; Delwiche, M E; Reed, D W; Phelps, T J; Newby, D T

    2008-06-01

    Methane hydrate found in marine sediments is thought to contain gigaton quantities of methane and is considered an important potential fuel source and climate-forcing agent. Much of the methane in hydrates is biogenic, so models that predict the presence and distribution of hydrates require accurate rates of in situ methanogenesis. We estimated the in situ methanogenesis rates in Hydrate Ridge (HR) sediments by coupling experimentally derived minimal rates of methanogenesis to methanogen biomass determinations for discrete locations in the sediment column. When starved in a biomass recycle reactor, Methanoculleus submarinus produced ca. 0.017 fmol methane/cell/day. Quantitative PCR (QPCR) directed at the methyl coenzyme M reductase subunit A gene (mcrA) indicated that 75% of the HR sediments analyzed contained methane produced/g sediment/day for the samples with fewer methanogens than the QPCR method could detect. The actual rates could vary depending on the real number of methanogens and various seafloor parameters that influence microbial activity. However, our calculated rate is lower than rates previously reported for such sediments and close to the rate derived using geochemical modeling of the sediments. These data will help to improve models that predict microbial gas generation in marine sediments and determine the potential influence of this source of methane on the global carbon cycle.

  16. Ra isotopes in trees: Their application to the estimation of heartwood growth rates and tree ages

    Science.gov (United States)

    Hancock, Gary J.; Murray, Andrew S.; Brunskill, Gregg J.; Argent, Robert M.

    2006-12-01

    The difficulty in estimating growth rates and ages of tropical and warm-temperate tree species is well known. However, this information has many important environmental applications, including the proper management of native forests and calculating uptake and release of atmospheric carbon. We report the activities of Ra isotopes in the heartwood, sapwood and leaves of six tree species, and use the radial distribution of the 228Ra/226Ra activity ratio in the stem of the tree to estimate the rate of accretion of heartwood. A model is presented in which dissolved Ra in groundwater is taken up by tree roots, translocated to sapwood in a chemically mobile (ion-exchangeable) form, and rendered immobile as it is transferred to heartwood. Uptake of 232Th and 230Th (the parents of 228Ra and 226Ra) is negligible. The rate of heartwood accretion is determined from the radioactive decay of 228Ra (half-life 5.8 years) relative to long-lived 226Ra (half-life 1600 years), and is relevant to growth periods of up to 50 years. By extrapolating the heartwood accretion rate to the entire tree ring record the method also appears to provide realistic estimates of tree age. Eight trees were studied (three of known age, 72, 66 and 35 years), including three Australian hardwood eucalypt species, two mangrove species, and a softwood pine (P. radiata). The method indicates that the rate of growth ring formation is species and climate dependent, varying from 0.7 rings yr-1 for a river red gum (E. camaldulensis) to around 3 rings yr-1 for a tropical mangrove (X. mekongensis).
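
    The dating step described above reduces to the radioactive decay of 228Ra relative to the effectively stable 226Ra. The sketch below, with hypothetical activity ratios, shows how the age of a heartwood layer and an accretion rate could be backed out of the ratio profile; it illustrates only the decay arithmetic, not the authors' full uptake and translocation model.

    ```python
    import math

    T_HALF_RA228 = 5.8                        # years, 228Ra half-life
    LAMBDA_RA228 = math.log(2) / T_HALF_RA228

    def years_since_immobilisation(ratio_now, ratio_at_boundary):
        """Time since a heartwood layer locked in its Ra, from the decline of the
        228Ra/226Ra activity ratio. 226Ra decay (half-life 1600 y) is neglected."""
        return math.log(ratio_at_boundary / ratio_now) / LAMBDA_RA228

    # Hypothetical measurements: ratio at the heartwood-sapwood boundary (taken as
    # the "initial" value) and at a point 60 mm further in toward the pith.
    r_boundary, r_inner = 0.80, 0.25
    dt = years_since_immobilisation(r_inner, r_boundary)
    print(f"layer age ~{dt:.1f} y, heartwood accretion ~{60.0 / dt:.1f} mm/yr")
    ```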

  17. Automating and estimating glomerular filtration rate for dosing medications and staging chronic kidney disease.

    Science.gov (United States)

    Trinkley, Katy E; Nikels, S Michelle; Page, Robert L; Joy, Melanie S

    2014-01-01

    The purpose of this paper is to serve as a review for primary care providers on the bedside methods for estimating glomerular filtration rate (GFR) for dosing and chronic kidney disease (CKD) staging and to discuss how automated health information technologies (HIT) can enhance clinical documentation of staging and reduce medication errors in patients with CKD. A nonsystematic search of PubMed (through March 2013) was conducted to determine the optimal approach to estimate GFR for dosing and CKD staging and to identify examples of how automated HITs can improve health outcomes in patients with CKD. Papers known to the authors were included, as were scientific statements. Articles were chosen based on the judgment of the authors. Drug-dosing decisions should be based on the method used in the published studies and package labeling that have been determined to be safe, which is most often the Cockcroft-Gault formula unadjusted for body weight. Although Modification of Diet in Renal Disease is more commonly used in practice for staging, the CKD-Epidemiology Collaboration (CKD-EPI) equation is the most accurate formula for estimating the CKD staging, especially at higher GFR values. Automated HITs offer a solution to the complexity of determining which equation to use for a given clinical scenario. HITs can educate providers on which formula to use and how to apply the formula in a given clinical situation, ultimately improving appropriate medication and medical management in CKD patients. Appropriate estimation of GFR is key to optimal health outcomes. HITs assist clinicians in both choosing the most appropriate GFR estimation formula and in applying the results of the GFR estimation in practice. Key limitations of the recommendations in this paper are the available evidence. Further studies are needed to better understand the best method for estimating GFR.
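
    Because the recommendations hinge on which published equation is applied, a short sketch of two of the equations discussed may be useful: the weight-based Cockcroft-Gault formula and the 2009 CKD-EPI creatinine equation. The coefficients are the commonly published ones and the example patient values are purely illustrative; this is not a substitute for the package-labeling guidance described above.

    ```python
    def cockcroft_gault(scr_mg_dl, age, weight_kg, female):
        """Creatinine clearance (mL/min) from actual body weight, with the
        0.85 factor for women; commonly used for drug dosing."""
        crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
        return crcl * 0.85 if female else crcl

    def ckd_epi_2009(scr_mg_dl, age, female, black=False):
        """CKD-EPI 2009 creatinine equation, eGFR in mL/min/1.73 m^2."""
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        ratio = scr_mg_dl / kappa
        egfr = 141 * min(ratio, 1) ** alpha * max(ratio, 1) ** -1.209 * 0.993 ** age
        if female:
            egfr *= 1.018
        if black:
            egfr *= 1.159
        return egfr

    # Illustrative patient: the two estimates can differ enough to change staging.
    print(cockcroft_gault(1.1, 70, 85, female=False))   # ~75 mL/min
    print(ckd_epi_2009(1.1, 70, female=False))          # ~68 mL/min/1.73 m^2
    ```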

  18. Estimating the Attack Rate of Pregnancy-Associated Listeriosis during a Large Outbreak

    Directory of Open Access Journals (Sweden)

    Maho Imanishi

    2015-01-01

    Background. In 2011, a multistate outbreak of listeriosis linked to contaminated cantaloupes raised concerns that many pregnant women might have been exposed to Listeria monocytogenes. Listeriosis during pregnancy can cause fetal death, premature delivery, and neonatal sepsis and meningitis. Little information is available to guide healthcare providers who care for asymptomatic pregnant women with suspected L. monocytogenes exposure. Methods. We tracked pregnancy-associated listeriosis cases using reportable diseases surveillance and enhanced surveillance for fetal death using vital records and inpatient fetal deaths data in Colorado. We surveyed 1,060 pregnant women about symptoms and exposures. We developed three methods to estimate how many pregnant women in Colorado ate the implicated cantaloupes, and we calculated attack rates. Results. One laboratory-confirmed case of listeriosis was associated with pregnancy. The fetal death rate did not increase significantly compared to preoutbreak periods. Approximately 6,500–12,000 pregnant women in Colorado might have eaten the contaminated cantaloupes, an attack rate of ~1 per 10,000 exposed pregnant women. Conclusions. Despite many exposures, the risk of pregnancy-associated listeriosis was low. Our methods for estimating attack rates may help during future outbreaks and product recalls. Our findings offer relevant considerations for management of asymptomatic pregnant women with possible L. monocytogenes exposure.
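
    The attack-rate arithmetic reported above is simple enough to show directly; the short sketch below just divides the single confirmed case by the exposure bounds quoted in the abstract.

    ```python
    cases = 1                          # laboratory-confirmed pregnancy-associated cases
    exposure_bounds = (6_500, 12_000)  # estimated exposed pregnant women in Colorado

    for exposed in exposure_bounds:
        rate = cases / exposed
        print(f"{exposed:>6} exposed -> attack rate {rate:.1e} "
              f"(~{rate * 10_000:.1f} per 10,000)")
    ```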

  19. A novel technique for fetal heart rate estimation from Doppler ultrasound signal

    Directory of Open Access Journals (Sweden)

    Jezewski Janusz

    2011-10-01

    Background: The currently used fetal monitoring instrumentation that is based on the Doppler ultrasound technique provides the fetal heart rate (FHR) signal with limited accuracy. It is particularly noticeable as a significant decrease of a clinically important feature - the variability of the FHR signal. The aim of our work was to develop a novel efficient technique for processing of the ultrasound signal, which could estimate the cardiac cycle duration with accuracy comparable to direct electrocardiography. Methods: We have proposed a new technique which provides the true beat-to-beat values of the FHR signal through multiple measurement of a given cardiac cycle in the ultrasound signal. The method consists of three steps: the dynamic adjustment of the autocorrelation window, the adaptive autocorrelation peak detection and the determination of beat-to-beat intervals. The estimated fetal heart rate values and calculated indices describing the variability of FHR were compared to the reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results: The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables the calculation of reliable parameters describing the variability of FHR. Relating these results to the other method for FHR estimation, we showed that in our approach a much lower number of measured cardiac cycles was rejected as being invalid. Conclusions: The proposed method for fetal heart rate determination on a beat-to-beat basis offers a high accuracy of the heart interval measurement, enabling reliable quantitative assessment of the FHR variability, at the same time reducing the number of invalid cardiac cycle measurements.
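
    A heavily simplified stand-in for the autocorrelation step is sketched below: it estimates the beat period of one window of a Doppler envelope from the autocorrelation peak within a physiologic lag range and converts it to beats per minute. The dynamic window adjustment and adaptive peak detection of the actual method are not reproduced, and the synthetic test signal is purely illustrative.

    ```python
    import numpy as np

    def estimate_fhr_bpm(envelope, fs, min_bpm=90, max_bpm=240):
        """Rough heart-rate estimate for one window of a Doppler envelope via
        autocorrelation peak picking within a plausible fetal heart-period range."""
        x = envelope - np.mean(envelope)
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
        lag_min = int(fs * 60 / max_bpm)                    # shortest allowed period
        lag_max = int(fs * 60 / min_bpm)                    # longest allowed period
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        return 60.0 * fs / lag

    # Synthetic check: a 2.2 Hz (132 bpm) periodic envelope sampled at 200 Hz.
    fs = 200.0
    t = np.arange(0, 4, 1 / fs)
    envelope = np.abs(np.sin(np.pi * 2.2 * t)) + 0.05 * np.random.randn(t.size)
    print(estimate_fhr_bpm(envelope, fs))                   # close to 132 bpm
    ```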

  20. MDRD or CKD-EPI for glomerular filtration rate estimation in living kidney donors.

    Science.gov (United States)

    Burballa, Carla; Crespo, Marta; Redondo-Pachón, Dolores; Pérez-Sáez, María José; Mir, Marisa; Arias-Cabrales, Carlos; Francés, Albert; Fumadó, Lluis; Cecchini, Lluis; Pascual, Julio

    2017-04-12

    The evaluation of the measured Glomerular Filtration Rate (mGFR) or estimated Glomerular Filtration Rate (eGFR) is key in the proper assessment of the renal function of potential kidney donors. We aim to study the correlation between glomerular filtration rate estimation equations and the measured methods for determining renal function. We analysed the relationship between baseline GFR values measured by Tc-(99)m-DTPA (diethylene-triamine-pentaacetate) and those estimated by the four-variable Modification of Diet in Renal Disease (MDRD4) and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations in a series of living donors at our institution. We included 64 donors (70.6% females; mean age 48.3±11 years). Baseline creatinine was 0.8±0.1 mg/dl and it was 1.1±0.2 mg/dl one year after donation. Both equations underestimated GFR relative to Tc-(99)m-DTPA (MDRD4: -9.4±25 ml/min; CKD-EPI: -4.4±21 ml/min). The correlation between the estimation equations and the measured method was superior for CKD-EPI (r=0.41) one year after donation. The mean eGFR reduction one year after donation was 28.2±16.7 ml/min (MDRD4) and 27.3±14.4 ml/min (CKD-EPI). In our experience, CKD-EPI is the equation that better correlates with mGFR-Tc-(99)m-DTPA when assessing renal function for donor screening purposes. Copyright © 2017 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U. All rights reserved.

  1. Estimated glomerular filtration rate in patients with type 2 diabetes mellitus

    Directory of Open Access Journals (Sweden)

    Paula Caitano Fontela

    2014-12-01

    Objective: to estimate the glomerular filtration rate using the Cockcroft-Gault (CG), Modification of Diet in Renal Disease (MDRD) and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations, and serum creatinine, in the screening of reduced renal function in patients with type 2 diabetes (T2DM) enrolled in the Family Health Strategy (ESF), a Brazilian federal health-care program. Methods: a cross-sectional descriptive and analytical study was conducted. The protocol consisted of sociodemographics, physical examination and biochemical tests. Renal function was analyzed through serum creatinine and the glomerular filtration rate (GFR) estimated according to the CG, MDRD and CKD-EPI equations, available on the websites of the Brazilian Nephrology Society (SBN) and the National Kidney Foundation (NKF). Results: 146 patients aged 60.9±8.9 years were evaluated; 64.4% were women. The prevalence of serum creatinine >1.2 mg/dL was 18.5%, and GFR <60 mL/min/1.73 m2 totaled 25.3, 36.3 and 34.2% when evaluated by the CG, MDRD and CKD-EPI equations, respectively. Diabetic patients with reduced renal function were older, had a long-term T2DM diagnosis, higher systolic blood pressure and higher levels of fasting glucose, compared to diabetics with normal renal function. Creatinine showed a strong negative correlation with the glomerular filtration rate estimated using the CG, MDRD and CKD-EPI equations (-0.64, -0.87 and -0.89, respectively). Conclusion: the prevalence of individuals with reduced renal function based on serum creatinine was lower, reinforcing the need to follow the recommendations of the SBN and the National Kidney Disease Education Program (NKDEP) in estimating the value of the glomerular filtration rate as a complement to the results of serum creatinine to better assess the renal function of patients.

  2. Quality control of slope-intercept measurements of glomerular filtration rate using single-sample estimates.

    Science.gov (United States)

    Fleming, John S; Persaud, Linda; Zivanovic, Maureen A

    2005-08-01

    Measurement of glomerular filtration rate (GFR) using the slope-intercept technique determines the plasma clearance curve by fitting a straight line to the logarithm of sample count rate. When two samples are used there is no check on the validity of curve fitting. GFR may also be estimated from single-sample concentrations. This study describes a method of quality control for the two-sample technique using the agreement between the one-sample and two-sample estimates. GFR measurements using Tc-DTPA were performed on 225 adults and 100 children using two samples taken between 2 h and 4 h post-injection. The two-sample values obtained using the British Nuclear Medicine Guidelines slope-intercept technique were compared to one-sample estimates obtained using a new general equation. Equations describing the variation of GFR error with GFR value were defined. These were used to determine action levels giving the limits of expected agreement between slope-intercept and single-sample values. The use of these action levels for quality control was demonstrated in a further 120 GFR measurements. The variation of single-sample error estimate with GFR depended both on the time of sample and body surface area. For specific sample groups, the error variation with GFR could be approximated using a truncated quadratic equation. Four studies were identified as failing quality control in the dataset used to define the error equations. Two studies failed in the test dataset. One-sample equations give reliable estimates of GFR, which may be used for quality control of slope-intercept GFR assessment.
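
    A minimal sketch of the two-sample slope-intercept calculation follows: a mono-exponential is fitted to the logarithm of the sample concentrations, a one-pool clearance is formed from the injected dose, and a Brochner-Mortensen-type early-exchange correction plus surface-area normalisation is applied. The correction coefficients are the commonly quoted adult values and the sample data are hypothetical; local guideline values should be checked before any clinical use.

    ```python
    import numpy as np

    def slope_intercept_gfr(t_min, conc, injected_dose, bsa_m2):
        """Two-sample slope-intercept GFR: fit ln(C) = ln(C0) - k*t, form the
        one-pool clearance D*k/C0, correct it, and normalise to 1.73 m^2.
        Dose and concentrations must be in consistent (e.g. counts-based) units."""
        slope, intercept = np.polyfit(np.asarray(t_min, float),
                                      np.log(np.asarray(conc, float)), 1)
        k, c0 = -slope, np.exp(intercept)
        cl = injected_dose * k / c0                   # one-pool clearance, mL/min
        cl_bm = 0.990778 * cl - 0.001218 * cl ** 2    # adult correction (check locally)
        return cl_bm * 1.73 / bsa_m2

    # Hypothetical samples at 2 h and 4 h post-injection, counts-based units.
    print(slope_intercept_gfr([120, 240], [1000.0, 500.0], 3.5e7, bsa_m2=1.9))
    # ~80 mL/min/1.73 m^2 for these numbers
    ```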

  3. In-situ estimation of MOCVD growth rate via a modified Kalman filter

    Energy Technology Data Exchange (ETDEWEB)

    Woo, W.W.; Svoronos, S.A. [Univ. of Florida, Gainesville, FL (United States). Dept. of Chemical Engineering; Sankur, H.O.; Bajaj, J. [Rockwell International Corp., Thousand Oaks, CA (United States); Irvine, S.J.C. [North East Wales Inst. Plas Coch, Wrexham (United Kingdom)

    1996-05-01

    In-situ laser reflectance monitoring of metal-organic chemical vapor deposition (MOCVD) is an effective way to monitor growth rate and epitaxial layer thickness of a variety of III-V and II-VI semiconductors. Materials with low optical extinction coefficients, such as ZnTe/GaAs and AlAs/GaAs for a 6328 Å HeNe laser, are ideal for such an application. An extended Kalman filter modified to include a variable forgetting factor was applied to the MOCVD systems. The filter was able to accurately estimate thickness and growth rate while filtering out process noise, and to cope with sudden changes in growth rate, reflectance drift, and bias. Due to the forgetting factor, the Kalman filter was successful even when based on very simple process models.
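
    The sketch below is a generic fading-memory Kalman filter for a two-state [thickness, growth rate] model, with a crude variable forgetting factor that inflates the covariance when the innovation is large so the filter can follow sudden rate changes. It only illustrates the idea; the reflectance measurement model, tuning and forgetting-factor logic of the filter described above are not reproduced.

    ```python
    import numpy as np

    def track_growth(thickness_meas, dt, r_meas=4.0, q=1e-4):
        """Fading-memory Kalman filter tracking [thickness, growth rate] from
        noisy thickness readings (e.g. derived from laser reflectance)."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x = np.array([thickness_meas[0], 0.0])          # state: thickness, rate
        P = np.diag([r_meas, 1.0])
        out = []
        for z in thickness_meas[1:]:
            x = F @ x                                   # predict
            P = F @ P @ F.T + Q
            innov = z - x[0]                            # measurement is thickness only
            s = P[0, 0] + r_meas
            lam = 1.0 if innov**2 / s < 4.0 else 0.7    # forget faster on surprises
            P = P / lam
            s = P[0, 0] + r_meas
            K = P[:, 0] / s                             # gain for H = [1, 0]
            x = x + K * innov                           # update
            P = P - np.outer(K, P[0, :])
            out.append(x.copy())
        return np.array(out)

    # Hypothetical run: growth at 2 nm/s that jumps to 3 nm/s halfway through.
    t = np.arange(200.0)
    true = np.where(t < 100, 2.0 * t, 200.0 + 3.0 * (t - 100))
    est = track_growth(true + 2.0 * np.random.randn(t.size), dt=1.0)
    print(est[-1])                                      # rate estimate should approach 3
    ```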

  4. Plutonium Discharge Rates and Spent Nuclear Fuel Inventory Estimates for Nuclear Reactors Worldwide

    Energy Technology Data Exchange (ETDEWEB)

    Brian K. Castle; Shauna A. Hoiland; Richard A. Rankin; James W. Sterbentz

    2012-09-01

    This report presents a preliminary survey and analysis of the five primary types of commercial nuclear power reactors currently in use around the world. Plutonium mass discharge rates from the reactors’ spent fuel at reload are estimated based on a simple methodology that is able to use limited reactor burnup and operational characteristics collected from a variety of public domain sources. Selected commercial reactor operating and nuclear core characteristics are also given for each reactor type. In addition to the worldwide commercial reactors survey, a materials test reactor survey was conducted to identify reactors of this type with a significant core power rating. Over 100 material or research reactors with a core power rating >1 MW fall into this category. Fuel characteristics and spent fuel inventories for these material test reactors are also provided herein.

  5. How to estimate heart rate from pulse rate reported by oscillometric method in atrial fibrillation: The value of pulse rate variation.

    Science.gov (United States)

    Shuai, Wei; Wang, Xi-Xing; Hong, Kui; Peng, Qiang; Li, Ju-Xiang; Li, Ping; Cheng, Xiao-Shu; Su, Hai

    2016-11-01

    To evaluate whether the mean pulse rate (PR) from three oscillometric blood pressure (BP) measurements provides an accurate estimation of the electrocardiographic ventricular rate (HR) in patients with permanent atrial fibrillation (AF). BP and PR were measured with an oscillometric BP device three times at one-minute intervals. Simultaneously, a one-minute electrocardiogram was recorded three times. The first PR and HR values were recorded as PR1 and HR1, and the averages of the three PR and HR values as mean PR (mPR) and mean HR (mHR). Meanwhile, the differences between the highest and lowest values among the three PR and HR measurements were calculated as ΔPR and ΔHR. Furthermore, the patients were stratified on ΔPR into the 0-15 and >15 subgroups. A moderate positive correlation existed between PR1 and HR1 and between mPR and mHR, and Bland-Altman plots also showed quite wide 95% limits of agreement between them. Meanwhile, ΔPR was significantly higher than ΔHR (12.1±8.6 vs 3.6±2.5 bpm). The oscillometric BP device could provide a clinically accepted estimation of the mean HR over 3 min in AF patients with ΔPR of 0-15 bpm and mean PR ≤100 bpm. Copyright © 2016. Published by Elsevier Ireland Ltd.
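
    The decision rule suggested by the abstract can be written down directly; the thresholds (ΔPR of 0-15 bpm and mean PR ≤ 100 bpm) are taken from the abstract and the example readings are hypothetical.

    ```python
    def hr_from_oscillometric_pr(pr_readings):
        """Apply the suggested rule to three oscillometric pulse-rate readings
        taken one minute apart in a patient with permanent AF: accept the mean
        PR as an HR estimate only if the spread and the mean are small enough."""
        if len(pr_readings) != 3:
            raise ValueError("expected three pulse-rate readings")
        m_pr = sum(pr_readings) / 3.0
        delta_pr = max(pr_readings) - min(pr_readings)
        acceptable = delta_pr <= 15 and m_pr <= 100
        return m_pr, delta_pr, acceptable

    print(hr_from_oscillometric_pr([78, 85, 92]))    # (85.0, 14, True)
    print(hr_from_oscillometric_pr([78, 85, 112]))   # large spread -> prefer an ECG
    ```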

  6. Genome-Wide Estimates of Transposable Element Insertion and Deletion Rates in Drosophila Melanogaster

    Science.gov (United States)

    Adrion, Jeffrey R.; Song, Michael J.; Schrider, Daniel R.; Hahn, Matthew W.

    2017-01-01

    Knowing the rate at which transposable elements (TEs) insert and delete is critical for understanding their role in genome evolution. We estimated spontaneous rates of insertion and deletion for all known, active TE superfamilies present in a set of Drosophila melanogaster mutation-accumulation (MA) lines using whole genome sequence data. Our results demonstrate that TE insertions far outpace TE deletions in D. melanogaster. We found a significant effect of background genotype on TE activity, with higher rates of insertions in one MA line. We also found significant rate heterogeneity between the chromosomes, with both insertion and deletion rates elevated on the X relative to the autosomes. Further, we identified significant associations between TE activity and chromatin state, and tested for associations between TE activity and other features of the local genomic environment such as TE content, exon content, GC content, and recombination rate. Our results provide the most detailed assessment of TE mobility in any organism to date, and provide a useful benchmark for both addressing theoretical predictions of TE dynamics and for exploring large-scale patterns of TE movement in D. melanogaster and other species. PMID:28338986

  7. Sexual reconviction rates in the United Kingdom and actuarial risk estimates.

    Science.gov (United States)

    Craig, Leam A; Browne, Kevin D; Stringer, Ian; Hogue, Todd E

    2008-01-01

    Assessing the risk of further offending behavior by adult sexual perpetrators of children is highly relevant and important to professionals involved in child protection. Recent progress in assessing risk in sexual offenders has established the validity of actuarial measures, although there continues to be some debate about the application of these instruments. This paper summarizes the debate between clinical and actuarial approaches and reviews the "base rate" for United Kingdom sexual offense reconviction. A review of the literature revealed 16 UK sexual reconviction studies, 8 using incarcerated samples (N=5,915) and 8 using non-incarcerated samples (N=1,274). UK estimates of sexual reconviction rates are compared with European and North American studies. The mean sexual reconviction rates for the incarcerated sample at 2 years (6.0%), 4 years (7.8%) and 6 years or more (19.5%) were higher than that of the comparative non-incarcerated sample at 2 years (5.7%), up to 4 years (5.9%), and 6 years or more (15.5%). The overall sexual reconviction rate for both samples combined was 5.8% at 2 years, and 17.5% at 6 years or more. The sexual reconviction rate for incarcerated sexual offenders is higher than that of non-incarcerated sexual offenders. The UK sexual reconviction rates were comparable with European and North American studies.

  8. Predictive performance of 12 equations for estimating glomerular filtration rate in severely obese patients

    Directory of Open Access Journals (Sweden)

    Ary Serpa Neto

    2011-09-01

    Objective: Considering that the Cockcroft-Gault formula and the Modification of Diet in Renal Disease equation are amply used in clinical practice to estimate the glomerular filtration rate, although they seem to have low accuracy in obese patients, the present study intends to evaluate the predictive performance of 12 equations used to estimate the glomerular filtration rate in obese patients. Methods: This is a cross-sectional retrospective study, conducted between 2007 and 2008 and carried out at a university, of 140 patients with severe obesity (mean body mass index 44 ± 4.4 kg/m2). The glomerular filtration rate was determined by means of 24-hour urine samples. Patients were classified into one or more of four subgroups: impaired glucose tolerance (n = 43), diabetic (n = 24), metabolic syndrome (n = 76), and/or hypertension (n = 66). We used bias, precision, and accuracy to assess the predictive performance of each equation in the entire group and in the subgroups. Results: The Cockcroft-Gault formula and the Modification of Diet in Renal Disease equation are not precise in severely obese patients (precision: 40.9 and 33.4, respectively). Sobh's equation showed no bias in the general group or in two subgroups. Salazar-Corcoran's and Sobh's equations showed no bias for the entire group (bias: -5.2, 95% confidence interval (CI) = -11.4, 1.0; and 6.2, 95% CI = -0.3, 12.7, respectively). All the other equations were imprecise for the entire group. Conclusion: Of the equations studied, those of Sobh and Salazar-Corcoran seem to be the best for estimating the glomerular filtration rate in the severely obese patients analyzed in our study.

  9. Large-Eddy Simulation of Oil Slicks from Deep Water Blowouts: Effects of Droplet Buoyancy and Langmuir Turbulence

    Science.gov (United States)

    Chamecki, M.; Yang, D.; Meneveau, C. V.

    2013-12-01

    Deep water blowouts generate plumes of oil droplets that rise through, and interact with, various layers of the ocean. When plumes reach the ocean mixed layer (OML), the interactions among the oil droplet plume, Ekman Spiral and Langmuir turbulence strongly affect the final rates of dilution and bio-degradation. The present study aims at developing a large-eddy simulation (LES) capability for the study of the physical distribution and dispersion of oil droplets under the action of physical oceanographic processes in the OML. In the current LES approach, the velocity and temperature fields are simulated using a hybrid pseudo-spectral and finite-difference scheme; the oil field is described by an Eulerian concentration field and it is simulated using a bounded finite-volume scheme. Fluid accelerations induced by buoyancy of the oil plume are included, and a number of subgrid-scale models for the flow solver are implemented and tested. The LES capability is then applied to the simulation of oil plume dispersion in the OML. Graphical visualization of the LES results shows surface oil slick distribution consistent with the satellite and aerial images of surface oil slicks reported in the literature. Different combinations of Langmuir turbulence and droplet size lead to different oil slick patterns at the surface and significantly impact oil concentration. Possible effects for bio-degradation are also discussed. Funding from the GoMRI RFP-II is gratefully acknowledged.

  10. Derivation of new equations to estimate glomerular filtration rate in pediatric oncology patients.

    Science.gov (United States)

    Millisor, Vanessa E; Roberts, Jessica K; Sun, Yilun; Tang, Li; Daryani, Vinay M; Gregornik, David; Cross, Shane J; Ward, Deborah; Pauley, Jennifer L; Molinelli, Alejandro; Brennan, Rachel C; Stewart, Clinton F

    2017-06-02

    Monitoring renal function is critical in treating pediatric patients, especially when dosing nephrotoxic agents. We evaluated the validity of the bedside Schwartz and Brandt equations in pediatric oncology patients and developed new equations for estimated glomerular filtration rate (eGFR) in these patients. A retrospective analysis was conducted comparing eGFR using the bedside Schwartz and Brandt equations to measured GFR (mGFR) from technetium-99m diethylenetriamine pentaacetic acid (99mTc-DTPA) between January 2007 and August 2013. An improved equation to estimate GFR was developed, simplified, and externally validated in a cohort of patients studied from September 2013 to June 2015. Carboplatin doses calculated from 99mTc-DTPA were compared with doses calculated by GFR-estimating equations. Overall, the bedside Schwartz and Brandt equations did not precisely or accurately predict mGFR. Using a data subset, we developed a five-covariate equation, which included height, serum creatinine, age, blood urea nitrogen (BUN), and gender, and a simplified two-covariate version, which contained height and serum creatinine. These equations were used to estimate GFR in 2036 studies, resulting in precise and accurate predictors of mGFR values. The equations were validated in an external cohort of 570 studies; both new equations were more accurate in calculating carboplatin doses than either the bedside Schwartz or Brandt equation. Two new equations were developed to estimate GFR in pediatric oncology patients, both of which did a better job of estimating mGFR than the published equations.
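
    For context, a sketch of two published formulas that frame this work is given below: the bedside Schwartz equation (one of the comparators) and the Calvert carboplatin dosing formula. Neither is one of the new equations derived in the paper; the patient values and the BSA-based de-normalisation step are illustrative assumptions.

    ```python
    def bedside_schwartz(height_cm, scr_mg_dl):
        """Bedside Schwartz eGFR in mL/min/1.73 m^2."""
        return 0.413 * height_cm / scr_mg_dl

    def calvert_dose(target_auc, gfr_ml_min):
        """Calvert formula: carboplatin dose (mg) = AUC x (GFR + 25), with GFR
        in absolute mL/min, so a BSA-normalised eGFR must be de-normalised first."""
        return target_auc * (gfr_ml_min + 25.0)

    egfr_norm = bedside_schwartz(height_cm=140, scr_mg_dl=0.6)   # ~96 mL/min/1.73 m^2
    bsa_m2 = 1.2                                                 # hypothetical child
    egfr_abs = egfr_norm * bsa_m2 / 1.73                         # ~67 mL/min
    print(calvert_dose(target_auc=6, gfr_ml_min=egfr_abs))       # ~550 mg
    ```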

  11. Estimation of sweat rates during cycling exercise by means of the closed chamber condenser technology.

    Science.gov (United States)

    Clarys, P; Clijsen, R; Barel, A O; Schouteden, R; van Olst, B; Aerenhouts, D

    2017-02-01

    Knowledge of local sweating patterns is of importance in occupational and exercise physiology settings. The recently developed closed chamber condenser technology (Biox Aquaflux®) allows the measurement of evaporative skin water loss with a greater measurement capacity (up to 1325 g/h/m2) compared to traditional evaporimeters. The aim of this study was to evaluate the applicability of the Biox Aquaflux® to estimate sweat production during exercise. Fourteen healthy subjects performed 20-min cycle ergometer trials at 55% heart rate (HR) reserve and at 75% HR reserve, respectively. Sweat production was estimated by measuring body weight before and after exercise, by calculating the amount of sweat collected in a patch, and by measuring the water flux (in g/h/m2) with the Biox Aquaflux® instrument. The Biox Aquaflux® instrument allowed the follow-up of sweat kinetics at both intensities. Correlations between the measurement methods were all significant for the 75% HR reserve trial (with r ranging from 0.68 to 0.76), whilst for the 55% HR reserve trial a significant relation was detected between the patch method and the Biox Aquaflux® only (with r ranging from 0.41 to 0.79). The Biox Aquaflux® instrument is a practical and direct method for the estimation of local sweat rates under field conditions. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
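
    The body-weight method mentioned above amounts to simple arithmetic; the sketch below converts a pre/post mass difference into g/h and g/h/m2 for comparison with a condenser-type reading. Respiratory water loss and urine are ignored, and the body surface area is an assumed value.

    ```python
    def whole_body_sweat_rate(mass_pre_kg, mass_post_kg, duration_min,
                              fluid_intake_kg=0.0, bsa_m2=1.9):
        """Gross sweat-rate estimate from the body-mass change during exercise,
        in g/h and in g/h per m^2 of body surface area."""
        loss_g = (mass_pre_kg - mass_post_kg + fluid_intake_kg) * 1000.0
        rate_g_per_h = loss_g * 60.0 / duration_min
        return rate_g_per_h, rate_g_per_h / bsa_m2

    # Hypothetical 20-min ergometer trial with a 0.25 kg mass loss.
    print(whole_body_sweat_rate(72.00, 71.75, duration_min=20))   # (750.0, ~395)
    ```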

  12. Estimation of Heartbeat Peak Locations and Heartbeat Rate from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2017-01-01

    Available systems for heartbeat signal estimation from facial video only provide an average Heartbeat Rate (HR) over a period of time. However, physicians require Heartbeat Peak Locations (HPL) to assess a patient's heart condition by detecting cardiac events and measuring different physiological parameters, including HR and its variability. This paper proposes a new method of HPL estimation from facial video using Empirical Mode Decomposition (EMD), which provides clearly visible heartbeat peaks in a decomposed signal. The method also provides the notion of both color- and motion-based HR estimation from facial videos, even when there are voluntary internal and external head motions in the videos. The employed signal processing technique has resulted in a system that could significantly advance, among others, health-monitoring technologies.

  13. Estimating oxygen consumption from heart rate using adaptive neuro-fuzzy inference system and analytical approaches.

    Science.gov (United States)

    Kolus, Ahmet; Dubé, Philippe-Antoine; Imbeau, Daniel; Labib, Richard; Dubeau, Denise

    2014-11-01

    In new approaches based on an adaptive neuro-fuzzy inference system (ANFIS) and an analytical method, heart rate (HR) measurements were used to estimate oxygen consumption (VO2). Thirty-five participants performed Meyer and Flenghi's step-test (eight of whom performed regeneration release work), during which heart rate and oxygen consumption were measured. Two individualized models and a General ANFIS model that does not require individual calibration were developed. Results indicated the superior precision achieved with individualized ANFIS modelling (RMSE = 1.0 and 2.8 ml/kg min in laboratory and field, respectively). The analytical model outperformed the traditional linear calibration and Flex-HR methods with field data. The General ANFIS model's estimates of VO2 were not significantly different from actual field VO2 measurements (RMSE = 3.5 ml/kg min). With its ease of use and low implementation cost, the General ANFIS model shows potential to replace any of the traditional individualized methods for VO2 estimation from HR data collected in the field. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
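
    For comparison with the ANFIS models, the "traditional" individual linear calibration mentioned above can be sketched as an ordinary least-squares fit of VO2 on HR from step-test data; the calibration points and field heart rates below are hypothetical.

    ```python
    import numpy as np

    def fit_hr_vo2_calibration(hr_cal, vo2_cal):
        """Individual linear calibration VO2 = a + b*HR, fitted by least squares
        to one participant's step-test data."""
        b, a = np.polyfit(hr_cal, vo2_cal, 1)
        return a, b

    hr_cal = np.array([85, 100, 115, 130, 145])          # bpm
    vo2_cal = np.array([10.0, 14.5, 19.0, 23.0, 27.5])   # ml/kg/min
    a, b = fit_hr_vo2_calibration(hr_cal, vo2_cal)

    field_hr = np.array([92, 118, 137])                  # heart rates logged in the field
    print(a + b * field_hr)                              # estimated field VO2
    ```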

  14. Automating and estimating glomerular filtration rate for dosing medications and staging chronic kidney disease

    Directory of Open Access Journals (Sweden)

    Trinkley KE

    2014-05-01

    Katy E Trinkley (1), S Michelle Nikels (2), Robert L Page II (1), Melanie S Joy (1); (1) Skaggs School of Pharmacy and Pharmaceutical Sciences, (2) School of Medicine, University of Colorado, Aurora, CO, USA. Objective: The purpose of this paper is to serve as a review for primary care providers on the bedside methods for estimating glomerular filtration rate (GFR) for dosing and chronic kidney disease (CKD) staging and to discuss how automated health information technologies (HIT) can enhance clinical documentation of staging and reduce medication errors in patients with CKD. Methods: A nonsystematic search of PubMed (through March 2013) was conducted to determine the optimal approach to estimate GFR for dosing and CKD staging and to identify examples of how automated HITs can improve health outcomes in patients with CKD. Papers known to the authors were included, as were scientific statements. Articles were chosen based on the judgment of the authors. Results: Drug-dosing decisions should be based on the method used in the published studies and package labeling that have been determined to be safe, which is most often the Cockcroft-Gault formula unadjusted for body weight. Although Modification of Diet in Renal Disease is more commonly used in practice for staging, the CKD-Epidemiology Collaboration (CKD-EPI) equation is the most accurate formula for estimating the CKD staging, especially at higher GFR values. Automated HITs offer a solution to the complexity of determining which equation to use for a given clinical scenario. HITs can educate providers on which formula to use and how to apply the formula in a given clinical situation, ultimately improving appropriate medication and medical management in CKD patients. Conclusion: Appropriate estimation of GFR is key to optimal health outcomes. HITs assist clinicians in both choosing the most appropriate GFR estimation formula and in applying the results of the GFR estimation in practice. Key limitations of the recommendations in this paper are the available evidence.

  15. [Consensus document: recommendations for the use of equations to estimate glomerular filtration rate in children].

    Science.gov (United States)

    Montañés Bermúdez, R; Gràcia Garcia, S; Fraga Rodríguez, G M; Escribano Subias, J; Diez de Los Ríos Carrasco, M J; Alonso Melgar, A; García Nieto, V

    2014-05-01

    The appearance of the K/DOQI guidelines in 2002 on the definition, evaluation and staging of chronic kidney disease (CKD) has led to a major change in how to assess renal function in adults and children. These guidelines, recently updated, recommended that the study of renal function be based not only on measuring the serum creatinine concentration, but that this must be accompanied by an estimation of the glomerular filtration rate (GFR) obtained by an equation. However, the implementation of this recommendation in clinical laboratory reports for the paediatric population has been negligible. Numerous studies have appeared in recent years on the importance of screening and monitoring of patients with CKD, on the emergence of new equations for estimating GFR, and on advances in clinical laboratories regarding the methods for measuring plasma creatinine and cystatin C; these developments prompted a collaboration between paediatric departments and clinical laboratories to establish recommendations based on the best scientific evidence on the use of equations to estimate GFR in this population. The purpose of this document is to provide recommendations on the evaluation of renal function and the use of equations to estimate GFR in children from birth to 18 years of age. The recipients of these recommendations are paediatricians, nephrologists, clinical biochemists, clinical analysts, and all health professionals involved in the study and evaluation of renal function in this group of patients. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.

  16. Distributed Space-Time Block Coded Transmission with Imperfect Channel Estimation: Achievable Rate and Power Allocation

    Directory of Open Access Journals (Sweden)

    Sonia Aïssa

    2008-05-01

    Full Text Available This paper investigates the effects of channel estimation error at the receiver on the achievable rate of distributed space-time block coded transmission. We consider that multiple transmitters cooperate to send the signal to the receiver and derive lower and upper bounds on the mutual information of distributed space-time block codes (D-STBCs when the channel gains and channel estimation error variances pertaining to different transmitter-receiver links are unequal. Then, assessing the gap between these two bounds, we provide a limiting value that upper bounds the latter at any input transmit powers, and also show that the gap is minimum if the receiver can estimate the channels of different transmitters with the same accuracy. We further investigate positioning the receiving node such that the mutual information bounds of D-STBCs and their robustness to the variations of the subchannel gains are maximum, as long as the summation of these gains is constant. Furthermore, we derive the optimum power transmission strategy to achieve the outage capacity lower bound of D-STBCs under arbitrary numbers of transmit and receive antennas, and provide closed-form expressions for this capacity metric. Numerical simulations are conducted to corroborate our analysis and quantify the effects of imperfect channel estimation.

  17. Technical note: Use of a simplified equation for estimating glomerular filtration rate in beef cattle.

    Science.gov (United States)

    Murayama, I; Miyano, A; Sasaki, Y; Hirata, T; Ichijo, T; Satoh, H; Sato, S; Furuhama, K

    2013-11-01

    This study was performed to clarify whether a formula (Holstein equation) based on a single blood sample and the isotonic, nonionic, iodine contrast medium iodixanol in Holstein dairy cows can apply to the estimation of glomerular filtration rate (GFR) for beef cattle. To verify the application of iodixanol in beef cattle, instead of the standard tracer inulin, both agents were coadministered as a bolus intravenous injection to identical animals at doses of 10 mg of I/kg of BW and 30 mg/kg. Blood was collected 30, 60, 90, and 120 min after the injection, and the GFR was determined by the conventional multisample strategies. The GFR values from iodixanol were well consistent with those from inulin, and no effects of BW, age, or parity on GFR estimates were noted. However, the GFR in cattle weighing less than 300 kg or of younger ages differed, reflecting dynamic changes in renal function at young adult ages. Using clinically healthy cattle and those with renal failure, the GFR values estimated from the Holstein equation were in good agreement with those by the multisample method using iodixanol (r=0.89, P=0.01). The results indicate that the simplified Holstein equation using iodixanol can be used for estimating the GFR of beef cattle in the same dose regimen as Holstein dairy cows, and provides a practical and ethical alternative.

  18. Applicability of a different estimation equation of glomerular filtration rate in Turkey.

    Science.gov (United States)

    Altiparmak, Mehmet Riza; Seyahi, Nurhan; Trabulus, Sinan; Yalin, Serkan Feyyaz; Bolayirli, Murat; Andican, Zeynep Gulnur; Suleymanlar, Gultekin; Serdengecti, Kamil

    2013-09-01

    We aimed to investigate the performance of various creatinine-based glomerular filtration rate estimation equations that are widely used in clinical practice in Turkey and to calculate a correction coefficient to obtain a better estimate using the isotope dilution mass spectrometry (IDMS)-traceable Modification of Diet in Renal Disease (MDRD) formula. This cross-sectional study included adult (>18 years) outpatients and inpatients with chronic kidney disease as well as healthy volunteers. Iohexol clearance was measured and the precision and bias of the various estimation equations were calculated. A correction coefficient for the IDMS-traceable MDRD was also calculated. A total of 229 (113 male/116 female; mean age 53.9 ± 14.4 years) subjects were examined. A median iohexol clearance of 39.21 mL/min/1.73 m(2) (range: 6.01-168.47 mL/min/1.73 m(2)) was found. Bias and random error for the IDMS-traceable MDRD equation were 11.33 ± 8.97 mL/min/1.73 m(2) and 14.21 mL/min/1.73 m(2), respectively. The MDRD formula seems to provide the best estimates. To obtain the best agreement with iohexol clearance, a correction factor of 0.804 must be introduced to the IDMS-traceable MDRD equation for our study population.
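
    A sketch of how the proposed correction would be applied is shown below; the 4-variable IDMS-traceable MDRD equation is the standard published form, the 0.804 factor is the study's cohort-specific value, and the example patient is hypothetical.

    ```python
    def mdrd_idms(scr_mg_dl, age, female, black=False):
        """4-variable IDMS-traceable MDRD equation, eGFR in mL/min/1.73 m^2."""
        egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
        if female:
            egfr *= 0.742
        if black:
            egfr *= 1.212
        return egfr

    # The study multiplies by 0.804 to better match iohexol clearance in its own
    # cohort; whether that factor transfers to other populations is not shown here.
    egfr = mdrd_idms(scr_mg_dl=1.4, age=54, female=True)
    print(egfr, 0.804 * egfr)                  # ~39 and ~32 mL/min/1.73 m^2
    ```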

  19. Methodology for estimating radiation dose rates to freshwater biota exposed to radionuclides in the environment

    Energy Technology Data Exchange (ETDEWEB)

    Blaylock, B.G.; Frank, M.L.; O`Neal, B.R.

    1993-08-01

    The purpose of this report is to present a methodology for evaluating the potential for aquatic biota to incur effects from exposure to chronic low-level radiation in the environment. Aquatic organisms inhabiting an environment contaminated with radioactivity receive external radiation from radionuclides in water, sediment, and from other biota such as vegetation. Aquatic organisms receive internal radiation from radionuclides ingested via food and water and, in some cases, from radionuclides absorbed through the skin and respiratory organs. Dose rate equations, which have been developed previously, are presented for estimating the radiation dose rate to representative aquatic organisms from alpha, beta, and gamma irradiation from external and internal sources. Tables containing parameter values for calculating radiation doses from selected alpha, beta, and gamma emitters are presented in the appendix to facilitate dose rate calculations. The risk of detrimental effects to aquatic biota from radiation exposure is evaluated by comparing the calculated radiation dose rate to biota to the U.S. Department of Energy's (DOE's) recommended dose rate limit of 0.4 mGy/h (1 rad/d). A dose rate no greater than 0.4 mGy/h to the most sensitive organisms should ensure the protection of populations of aquatic organisms. DOE's recommended dose rate is based on a number of published reviews on the effects of radiation on aquatic organisms that are summarized in the National Council on Radiation Protection and Measurements Report No. 109 (NCRP 1991). DOE recommends that if the results of radiological models or dosimetric measurements indicate that a radiation dose rate of 0.1 mGy/h will be exceeded, then a more detailed evaluation of the potential ecological consequences of radiation exposure to endemic populations should be conducted.
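
    The screening logic of the methodology (sum the dose-rate contributions for a representative organism and compare the total with the 0.1 and 0.4 mGy/h levels) can be sketched as follows; the individual dose-rate contributions in the example are hypothetical placeholders, not tabulated values from the report.

    ```python
    DOE_LIMIT_MGY_PER_H = 0.4       # recommended dose-rate limit for aquatic biota
    DETAILED_EVAL_TRIGGER = 0.1     # level above which a detailed evaluation is advised

    def screen_dose_rate(components_mgy_per_h):
        """Sum external (water, sediment, other biota) and internal dose-rate
        contributions and compare the total with the screening levels."""
        total = sum(components_mgy_per_h.values())
        if total > DOE_LIMIT_MGY_PER_H:
            verdict = "exceeds recommended dose-rate limit"
        elif total > DETAILED_EVAL_TRIGGER:
            verdict = "detailed ecological evaluation recommended"
        else:
            verdict = "below screening level"
        return total, verdict

    # Hypothetical dose-rate contributions for one organism (mGy/h).
    print(screen_dose_rate({"water": 0.02, "sediment": 0.06, "internal": 0.05}))
    ```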

  20. A method applicable to effective dose rate estimates for aircrew dosimetry

    CERN Document Server

    Ferrari, A; Rancati, T

    2001-01-01

    The inclusion of cosmic radiation as occupational exposure under ICRP Publication 60 and the European Union Council Directive 96/29/Euratom has highlighted the need to estimate the exposure of aircrew. According to a report of the Group of Experts established under the terms of Article 31 of the European Treaty, the individual estimates of dose for flights below 15 km may be done using an appropriate computer program. In order to calculate the radiation exposure at aircraft altitudes, calculations have been performed by means of the Monte Carlo transport code FLUKA. On the basis of the calculated results, a simple method is proposed for the individual evaluation of effective dose rate due to the galactic component of cosmic radiation as a function of latitude and altitude. (13 refs).

  1. Temperature-profile methods for estimating percolation rates in arid environments

    Science.gov (United States)

    Constantz, Jim; Tyler, Scott W.; Kwicklis, Edward

    2003-01-01

    Percolation rates are estimated using vertical temperature profiles from sequentially deeper vadose environments, progressing from sediments beneath stream channels, to expansive basin-fill materials, and finally to deep fractured bedrock underlying mountainous terrain. Beneath stream channels, vertical temperature profiles vary over time in response to downward heat transport, which is generally controlled by conductive heat transport during dry periods, or by advective transport during channel infiltration. During periods of stream-channel infiltration, two relatively simple approaches are possible: a heat-pulse technique, or a heat and liquid-water transport simulation code. Focused percolation rates beneath stream channels are examined for perennial, seasonal, and ephemeral channels in central New Mexico, with estimated percolation rates ranging from 100 to 2100 mm d−1. Deep within basin-fill and underlying mountainous terrain, vertical temperature gradients are dominated by the local geothermal gradient, which creates a profile with decreasing temperatures toward the surface. If simplifying assumptions are employed regarding stratigraphy and vapor fluxes, an analytical solution to the heat transport problem can be used to generate temperature profiles at specified percolation rates for comparison to the observed geothermal gradient. Comparisons to an observed temperature profile in the basin-fill sediments beneath Frenchman Flat, Nevada, yielded water fluxes near zero, with absolute values <10 mm yr−1. For the deep vadose environment beneath Yucca Mountain, Nevada, the complexities of stratigraphy and vapor movement are incorporated into a more elaborate heat and water transport model to compare simulated and observed temperature profiles for a pair of deep boreholes. Best matches resulted in a percolation rate near zero for one borehole and 11 mm yr−1 for the second borehole.
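
    For the basin-fill case, the comparison of observed and simulated profiles can be illustrated with the classic one-dimensional steady conduction-advection solution for a uniform layer (a Bredehoeft-Papadopulos-type profile). The thermal properties, boundary temperatures and depths below are placeholders, and the analyses cited above use more elaborate stratigraphy and vapor transport; this sketch only shows the matching idea.

    ```python
    import numpy as np

    RHO_C_WATER = 4.18e6     # volumetric heat capacity of water, J/(m^3 K)

    def temperature_profile(z, L, T_top, T_bottom, q_mm_per_yr, k_thermal=1.2):
        """Steady 1-D conduction-advection profile for a uniform layer of
        thickness L with fixed boundary temperatures and downward Darcy flux q;
        k_thermal (W/m/K) is a placeholder bulk thermal conductivity."""
        q = q_mm_per_yr / 1000.0 / (365.25 * 24 * 3600)      # mm/yr -> m/s
        beta = RHO_C_WATER * q * L / k_thermal               # Peclet number
        if abs(beta) < 1e-9:                                 # pure-conduction limit
            frac = z / L
        else:
            frac = (np.exp(beta * z / L) - 1.0) / (np.exp(beta) - 1.0)
        return T_top + (T_bottom - T_top) * frac

    def best_fit_flux(z, T_obs, L, T_top, T_bottom, candidates_mm_per_yr):
        """Pick the candidate percolation rate whose profile best matches T_obs."""
        misfit = [np.sqrt(np.mean((temperature_profile(z, L, T_top, T_bottom, q)
                                   - T_obs) ** 2)) for q in candidates_mm_per_yr]
        return candidates_mm_per_yr[int(np.argmin(misfit))]

    # Synthetic test: generate a profile for 11 mm/yr and recover it from a grid.
    z = np.linspace(0.0, 400.0, 21)                          # depth, m
    T_obs = temperature_profile(z, 400.0, 17.0, 27.0, 11.0)
    print(best_fit_flux(z, T_obs, 400.0, 17.0, 27.0, np.arange(0.0, 51.0, 1.0)))
    ```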

  2. The Greenville Fault: preliminary estimates of its long-term creep rate and seismic potential

    Science.gov (United States)

    Lienkaemper, James J.; Barry, Robert G.; Smith, Forrest E.; Mello, Joseph D.; McFarland, Forrest S.

    2013-01-01

    Once assumed locked, we show that the northern third of the Greenville fault (GF) creeps at 2 mm/yr, based on 47 yr of trilateration net data. This northern GF creep rate equals its 11-ka slip rate, suggesting a low strain accumulation rate. In 1980, the GF, easternmost strand of the San Andreas fault system east of San Francisco Bay, produced a Mw5.8 earthquake with a 6-km surface rupture and dextral slip growing to ≥2 cm on cracks over a few weeks. Trilateration shows a 10-cm post-1980 transient slip ending in 1984. Analysis of 2000-2012 crustal velocities on continuous global positioning system stations, allows creep rates of ~2 mm/yr on the northern GF, 0-1 mm/yr on the central GF, and ~0 mm/yr on its southern third. Modeled depth ranges of creep along the GF allow 5-25% aseismic release. Greater locking in the southern two thirds of the GF is consistent with paleoseismic evidence there for large late Holocene ruptures. Because the GF lacks large (>1 km) discontinuities likely to arrest higher (~1 m) slip ruptures, we expect full-length (54-km) ruptures to occur that include the northern creeping zone. We estimate sufficient strain accumulation on the entire GF to produce Mw6.9 earthquakes with a mean recurrence of ~575 yr. While the creeping 16-km northern part has the potential to produce a Mw6.2 event in 240 yr, it may rupture in both moderate (1980) and large events. These two-dimensional-model estimates of creep rate along the southern GF need verification with small aperture surveys.

  3. Estimating Inbreeding Rates in Natural Populations: Addressing the Problem of Incomplete Pedigrees.

    Science.gov (United States)

    Miller, Mark P; Haig, Susan M; Ballou, Jonathan D; Steel, E Ashley

    2017-07-01

    Understanding and estimating inbreeding is essential for managing threatened and endangered wildlife populations. However, determination of inbreeding rates in natural populations is confounded by incomplete parentage information. We present an approach for quantifying inbreeding rates for populations with incomplete parentage information. The approach exploits knowledge of pedigree configurations that lead to inbreeding coefficients of F = 0.25 and F = 0.125, allowing for quantification of Pr(I|k): the probability of observing pedigree I given the fraction of known parents (k). We developed analytical expressions under simplifying assumptions that define properties and behavior of inbreeding rate estimators for varying values of k. We demonstrated that inbreeding is overestimated if Pr(I|k) is not taken into consideration and that bias is primarily influenced by k. By contrast, our new estimator, incorporating Pr(I|k), is unbiased over a wide range of values of k that may be observed in empirical studies. Stochastic computer simulations that allowed complex inter- and intragenerational inbreeding produced similar results. We illustrate the effects that accounting for Pr(I|k) can have in empirical data by revisiting published analyses of Arabian oryx (Oryx leucoryx) and Red deer (Cervus elaphus). Our results demonstrate that incomplete pedigrees are not barriers for quantifying inbreeding in wild populations. Application of our approach will permit a better understanding of the role that inbreeding plays in the dynamics of populations of threatened and endangered species and may help refine our understanding of inbreeding avoidance mechanisms in the wild. Published by Oxford University Press on behalf of The American Genetic Association 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  4. High Frame Rate Vector Velocity Estimation using Plane Waves and Transverse Oscillation

    DEFF Research Database (Denmark)

    Jensen, Jonas; Stuart, Matthias Bo; Jensen, Jørgen Arendt

    2015-01-01

    This paper presents a method for estimating 2-D vector velocities using plane waves and transverse oscillation. The approach uses emission of a low number of steered plane waves, which results in a high frame rate and continuous acquisition of data for the whole image. A transverse oscillating field is obtained by filtering the beamformed RF images in the Fourier domain using a Gaussian filter centered at a desired oscillation frequency. Performance of the method is quantified through measurements with the experimental scanner SARUS and the BK 2L8 linear array transducer. Constant parabolic flow ...

  5. Methodology for Estimating Radiation Dose Rates to Freshwater Biota Exposed to Radionuclides in the Environment

    Energy Technology Data Exchange (ETDEWEB)

    Blaylock, B.G.

    1993-01-01

    The purpose of this report is to present a methodology for evaluating the potential for aquatic biota to incur effects from exposure to chronic low-level radiation in the environment. Aquatic organisms inhabiting an environment contaminated with radioactivity receive external radiation from radionuclides in water, sediment, and from other biota such as vegetation. Aquatic organisms receive internal radiation from radionuclides ingested via food and water and, in some cases, from radionuclides absorbed through the skin and respiratory organs. Dose rate equations, which have been developed previously, are presented for estimating the radiation dose rate to representative aquatic organisms from alpha, beta, and gamma irradiation from external and internal sources. Tables containing parameter values for calculating radiation doses from selected alpha, beta, and gamma emitters are presented in the appendix to facilitate dose rate calculations. The risk of detrimental effects to aquatic biota from radiation exposure is evaluated by comparing the calculated radiation dose rate to biota to the U.S. Department of Energy's (DOE's) recommended dose rate limit of 0.4 mGy h{sup -1} (1 rad d{sup -1}). A dose rate no greater than 0.4 mGy h{sup -1} to the most sensitive organisms should ensure the protection of populations of aquatic organisms. DOE's recommended dose rate is based on a number of published reviews on the effects of radiation on aquatic organisms that are summarized in the National Council on Radiation Protection and Measurements Report No. 109 (NCRP 1991). The literature identifies the developing eggs and young of some species of teleost fish as the most radiosensitive organisms. DOE recommends that if the results of radiological models or dosimetric measurements indicate that a radiation dose rate of 0.1 mGy h{sup -1} will be exceeded, then a more detailed evaluation of the potential ecological consequences of radiation exposure to endemic

  6. Race Adjustment for Estimating Glomerular Filtration Rate Is Not Always Necessary

    Directory of Open Access Journals (Sweden)

    Juliana A. Zanocco

    2012-12-01

    Full Text Available Background: Estimated glomerular filtration rate (eGFR) is very important in clinical practice, although it is not adequately tested in different populations. We aimed at establishing the best eGFR formulas for a Brazilian population with emphasis on the need for race correction. Methods: We evaluated 202 individuals with chronic kidney disease (CKD) and 42 without previously known renal lesions that were additionally screened by urinalysis. Serum creatinine and plasma clearance of iohexol were measured in all cases. GFR was estimated by the Mayo Clinic, abbreviated Modification of Diet in Renal Disease (MDRD) and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formulas, and creatinine clearance was estimated by the Cockcroft-Gault (CG) formula. Plasma clearance of iohexol was used as the gold standard for GFR determination and for the development of a Brazilian formula (BreGFR). Results: Measured and estimated GFR were compared in 244 individuals, 57% female, with a mean age of 41 years (range 18–82). Estimates of intraclass correlation coefficients among the plasma clearance of iohexol and the eGFR formulas were all significant. Conclusions: All cited eGFR formulas showed a good correlation with the plasma clearance of iohexol in the healthy and diseased conditions. The formulas that best detected reduced eGFR were the BreGFR, CKD-EPI, and CKD-EPI1 formulas. Notably, the race correction included in the MDRD and CKD-EPI formulas was not necessary for this population, as it did not contribute to more accurate results.
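
    As an illustration of how the race term enters such equations, the Python sketch below computes the CKD-EPI 2009 creatinine-based eGFR with and without the Black-race coefficient. It is a minimal reference implementation of the published equation, not code from this study, and the example inputs are hypothetical.

      def ckd_epi_2009(scr_mg_dl, age, female, black, use_race_term=True):
          """CKD-EPI 2009 creatinine equation, eGFR in mL/min/1.73 m^2."""
          kappa = 0.7 if female else 0.9
          alpha = -0.329 if female else -0.411
          egfr = (141.0
                  * min(scr_mg_dl / kappa, 1.0) ** alpha
                  * max(scr_mg_dl / kappa, 1.0) ** -1.209
                  * 0.993 ** age)
          if female:
              egfr *= 1.018
          if black and use_race_term:
              egfr *= 1.159      # the race coefficient questioned in the study above
          return egfr

      # Hypothetical patient: the race coefficient alone shifts eGFR by ~16%.
      print(round(ckd_epi_2009(1.1, 55, female=True, black=True, use_race_term=True), 1))
      print(round(ckd_epi_2009(1.1, 55, female=True, black=True, use_race_term=False), 1))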

  7. [Estimation of the glomerular filtration rate in 2014 by tests and equations: strengths and weaknesses].

    Science.gov (United States)

    Hougardy, J M; Delanaye, P; Le Moine, A; Nortier, J

    2014-09-01

    The accurate estimation of the glomerular filtration rate (GFR) is of interest for clinical practice, research and public health. The strong relationship between progressive loss of renal function and mortality underlines the need for early diagnosis and close follow-up of renal diseases. Creatinine is the most commonly used biomarker of GFR. Because of the non-renal determinants of creatinine, creatinine values must be integrated into equations that take into account its most important determinants (i.e., age and sex). The CKD-EPI 2009 equation is now recommended as the first-line equation to estimate GFR in the general population. In this indication, it should replace MDRD, which tends to overestimate the prevalence of stage 3 chronic kidney disease at GFR values around 60 ml/min. However, many questions remain about the accuracy of GFR equations in specific situations such as extremes of age or body weight. The identification of new biomarkers that are less affected by non-renal determinants is therefore important. Among these biomarkers, cystatin C gives a more accurate estimate of GFR when it is combined with creatinine (i.e., the CKD-EPI 2012 equation). However, the indications for using cystatin C instead of creatinine alone are still unclear, and its use remains limited in routine practice. In conclusion, no biomarker or equation gives an accurate estimation over the whole range of GFR and for all patient populations. The limits of prediction depend on the biomarker's properties and on the range of GFR concerned, but also on the measurement methods. Therefore, it is crucial to interpret the estimated GFR according to the strengths and weaknesses of the equation in use.

  8. The perception of visible speech: estimation of speech rate and detection of time reversals.

    Science.gov (United States)

    Viviani, Paolo; Figliozzi, Francesca; Lacquaniti, Francesco

    2011-11-01

    Four experiments investigated the perception of visible speech. Experiment 1 addressed the perception of speech rate. Observers were shown video-clips of the lower face of actors speaking at their spontaneous rate. Then, they were shown muted versions of the video-clips, which were either accelerated or decelerated. The task (scaling) was to compare visually the speech rate of the stimulus to the spontaneous rate of the actor being shown. Rate estimates were accurate when the video-clips were shown in the normal direction (forward mode). In contrast, speech rate was underestimated when the video-clips were shown in reverse (backward mode). Experiments 2-4 (2AFC) investigated how accurately one discriminates forward and backward speech movements. Unlike in Experiment 1, observers were never exposed to the sound track of the video-clips. Performance was well above chance when playback mode was crossed with rate modulation, and the number of repetitions of the stimuli allowed some amount of speechreading to take place in forward mode (Experiment 2). In Experiment 3, speechreading was made much more difficult by using a different and larger set of muted video-clips. Yet, accuracy decreased only slightly with respect to Experiment 2. Thus, kinematic rather than speechreading cues are most important for discriminating movement direction. Performance worsened, but remained above chance level when the same stimuli of Experiment 3 were rotated upside down (Experiment 4). We argue that the results are in keeping with the hypothesis that visual perception taps into implicit motor competence. Thus, lawful instances of biological movements (forward stimuli) are processed differently from backward stimuli representing movements that the observer cannot perform.

  9. The research on aging failure rate and optimization estimation of protective relay under haze conditions

    Science.gov (United States)

    Wang, Ying-kang; Zhou, Meng-ran; Yang, Jie; Zhou, Pei-qiang; Xie, Ying

    2017-01-01

    Under fog and haze conditions, the air contains large amounts of H2S, SO2, SO3 and other acidic compounds, so air conductivity rises sharply and relative humidity increases greatly. Power transmission lines and electrical equipment operating in such an environment for long periods show increased failure rates, reduced sensitivity of detection equipment, and degraded reliability of protection devices. The Weibull distribution is widely used to fit component failure distributions. This paper proposes a method for estimating the aging failure rate of protective relays based on the least squares method and an iterative procedure. Combined with statistics from a regional power grid, the failure rate function of the protective equipment is computed. By incorporating the operating characteristics of electrical equipment under haze conditions into the optimization, the resulting model better matches the aging failure characteristics of protection equipment under haze.
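
    As a sketch of the least-squares step described above (the iterative refinement and the haze-specific optimization are not reproduced here), the snippet below fits a two-parameter Weibull distribution to failure times by linearizing the CDF with median ranks and then evaluates the resulting aging failure rate (hazard) function; the failure times are made-up illustrative values.

      import numpy as np

      def fit_weibull_lsq(failure_times):
          """Least-squares fit of Weibull shape (beta) and scale (eta) using the
          linearized CDF: ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta)."""
          t = np.sort(np.asarray(failure_times, dtype=float))
          n = t.size
          # Bernard's median-rank approximation for the empirical CDF.
          F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
          x = np.log(t)
          y = np.log(-np.log(1.0 - F))
          beta, intercept = np.polyfit(x, y, 1)
          eta = np.exp(-intercept / beta)
          return beta, eta

      def weibull_hazard(t, beta, eta):
          """Aging failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
          return (beta / eta) * (t / eta) ** (beta - 1)

      years_to_failure = [6.1, 7.8, 9.4, 10.2, 11.9, 13.5, 15.0]   # illustrative only
      beta, eta = fit_weibull_lsq(years_to_failure)
      print(beta, eta, weibull_hazard(10.0, beta, eta))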

  10. Efficient noise-tolerant estimation of heart rate variability using single-channel photoplethysmography.

    Science.gov (United States)

    Firoozabadi, Reza; Helfenbein, Eric D; Babaeizadeh, Saeed

    2017-08-18

    The feasibility of using photoplethysmography (PPG) for estimating heart rate variability (HRV) has been the subject of many recent studies with contradicting results. Accurate measurement of cardiac cycles is more challenging in PPG than in ECG due to its inherent characteristics. We developed a PPG-only algorithm that computes a robust set of medians of the interbeat intervals between adjacent peaks, upslopes, and troughs. Abnormal intervals are detected and excluded by applying our criteria. We tested our algorithm on a large database from high-risk ICU patients containing arrhythmias and significant amounts of artifact. The average differences between the PPG-based and ECG-based parameters SDSD and RMSSD are small, and our performance testing shows that the pulse rate variability (PRV) parameters are comparable to the HRV parameters from simultaneous ECG recordings.
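
    For reference, the short sketch below computes the time-domain parameters mentioned above (SDSD and RMSSD) from a series of interbeat intervals, after discarding intervals that deviate strongly from the median; the median-based exclusion rule is a generic stand-in for the paper's criteria, and the interval values are hypothetical.

      import numpy as np

      def clean_intervals(ibi_ms, tol=0.3):
          """Drop interbeat intervals deviating more than tol (fraction) from the median."""
          ibi = np.asarray(ibi_ms, dtype=float)
          med = np.median(ibi)
          return ibi[np.abs(ibi - med) <= tol * med]

      def sdsd_rmssd(ibi_ms):
          """SDSD: std of successive differences; RMSSD: root mean square of them."""
          d = np.diff(np.asarray(ibi_ms, dtype=float))
          return np.std(d, ddof=1), np.sqrt(np.mean(d ** 2))

      ibi = [812, 798, 805, 1530, 790, 801, 820, 815]   # ms, hypothetical; 1530 is artifact
      sdsd, rmssd = sdsd_rmssd(clean_intervals(ibi))
      print(round(sdsd, 1), round(rmssd, 1))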

  11. Robust respiration rate estimation using adaptive Kalman filtering with textile ECG sensor and accelerometer.

    Science.gov (United States)

    Lepine, Nicholas N; Tajima, Takuro; Ogasawara, Takayuki; Kasahara, Ryoichi; Koizumi, Hiroshi

    2016-08-01

    An adaptive Kalman filter-based fusion algorithm capable of estimating respiration rate for unobtrusive respiratory monitoring is proposed. Using both signal characteristics and a priori information, the Kalman filter is adaptively optimized to improve accuracy. Furthermore, the system is able to combine the respiration-related signals extracted from a textile ECG sensor and an accelerometer to create a single robust measurement. We measured derived respiratory rates and, when compared to a reference, found root-mean-square errors of 2.11 breaths per minute (BrPM) while lying down, 2.30 BrPM while sitting, 5.97 BrPM while walking, and 5.98 BrPM while running. These results demonstrate that the proposed system is applicable to unobtrusive monitoring for various applications.
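
    A minimal, non-adaptive sketch of the underlying idea is shown below: a scalar Kalman filter tracks respiration rate and fuses two noisy measurements per step, one attributed to the ECG-derived signal and one to the accelerometer. The noise variances and measurement values are assumptions for illustration, not parameters from the paper.

      def kalman_fuse(rr_ecg, rr_acc, var_ecg=4.0, var_acc=9.0, q=0.25):
          """Scalar Kalman filter fusing two respiration-rate measurement streams
          (breaths per minute). Returns the filtered estimates."""
          x, p = rr_ecg[0], 10.0           # initial state and variance
          estimates = []
          for z_ecg, z_acc in zip(rr_ecg, rr_acc):
              p += q                       # predict: random-walk process model
              for z, r in ((z_ecg, var_ecg), (z_acc, var_acc)):
                  k = p / (p + r)          # Kalman gain for this measurement
                  x += k * (z - x)         # update state with the measurement
                  p *= (1.0 - k)
              estimates.append(x)
          return estimates

      # Hypothetical measurements (BrPM) from the two sensors.
      print(kalman_fuse([15.2, 15.8, 16.4, 16.0], [14.0, 17.1, 15.5, 16.8]))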

  12. Fault slip rate estimates for southwestern US from GPS data and non-block viscoelastic sheet

    Science.gov (United States)

    Chuang, R. Y.; Johnson, K. M.

    2012-12-01

    Fault slip rate estimates from geodetic data are becoming increasingly important for earthquake hazard studies. In order to estimate fault slip rates, GPS-constrained kinematic models such as elastic block models are widely used. However, kinematic block models are inherently non-unique and provide limited insight into the mechanics of deformation. Furthermore, assumed discrete tectonic blocks may not exist everywhere, as not every region of the western US displays mature, through-going geologic structures that naturally divide the crust into tectonic blocks. For example, the eastern California shear zone and regions of the Basin and Range Province are best described as broad zones of interacting, discontinuous fault strands. We are building towards mechanical models of present-day surface motions in which deformation is a response to plate boundary forces, gravitational loading, and rheological properties of the lithosphere. To model fault slip rates in the southwestern US, we populate an elastic-viscous thin sheet (plane stress) with thin viscous shear zones (faults) and impose far-field plate motions and gravitational loading to compute the long-term fault slip rates and crustal motions. Interseismic deformation due to locking of faults is modeled with backslip on dislocations in an elastic half-space or in an elastic plate over a viscoelastic half-space. The total present-day deformation field (long-term plus interseismic) is compared with the GPS-derived velocity field, and the model stress tensor is compared with the stress state inferred from stress inversions of focal mechanism data.
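
    For context, interseismic locking on a vertical strike-slip fault is often approximated by the classical elastic screw-dislocation (Savage-Burford) profile, v(x) = (s/π)·arctan(x/D), where s is the long-term slip rate and D the locking depth. The sketch below evaluates this standard two-dimensional profile with illustrative parameter values; it is not the authors' full viscoelastic sheet model.

      import numpy as np

      def interseismic_velocity(x_km, slip_rate_mm_yr, locking_depth_km):
          """Fault-parallel surface velocity (mm/yr) at distance x from a locked,
          vertical strike-slip fault in an elastic half-space (screw dislocation)."""
          return (slip_rate_mm_yr / np.pi) * np.arctan(np.asarray(x_km) / locking_depth_km)

      # Illustrative profile across a fault slipping 20 mm/yr, locked to 15 km depth.
      x = np.array([-100.0, -30.0, -10.0, 0.0, 10.0, 30.0, 100.0])
      print(interseismic_velocity(x, slip_rate_mm_yr=20.0, locking_depth_km=15.0))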

  13. Estimating marginal CO{sub 2} emissions rates for national electricity systems

    Energy Technology Data Exchange (ETDEWEB)

    Hawkes, A.D. [Grantham Institute for Climate Change, Imperial College London, Exhibition Rd., London SW7 2AZ (United Kingdom)

    2010-10-15

    The carbon dioxide (CO{sub 2}) emissions reduction afforded by a demand-side intervention in the electricity system is typically assessed by means of an assumed grid emissions rate, which measures the CO{sub 2} intensity of electricity not used as a result of the intervention. This emissions rate is called the 'marginal emissions factor' (MEF). Accurate estimation of MEFs is crucial for performance assessment because their application leads to decisions regarding the relative merits of CO{sub 2} reduction strategies. This article contributes to formulating the principles by which MEFs are estimated, highlighting the strengths and weaknesses in existing approaches, and presenting an alternative based on the observed behaviour of power stations. The case of Great Britain is considered, demonstrating an MEF of 0.69 kgCO{sub 2}/kW h for 2002-2009, with error bars at +/-10%. This value could reduce to 0.6 kgCO{sub 2}/kW h over the next decade under planned changes to the underlying generation mix, and could further reduce to approximately 0.51 kgCO{sub 2}/kW h before 2025 if all power stations commissioned pre-1970 are replaced by their modern counterparts. Given that these rates are higher than commonly applied system-average or assumed 'long term marginal' emissions rates, it is concluded that maintenance of an improved understanding of MEFs is valuable to better inform policy decisions. (author)
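
    One simple way to estimate an MEF from the observed behaviour of the generation fleet, sketched below under stated assumptions rather than as the article's exact procedure, is to regress changes in total system CO2 emissions between successive settlement periods on the corresponding changes in demand; the slope then approximates the marginal emissions factor. Variable names and data here are illustrative.

      import numpy as np

      def marginal_emissions_factor(co2_t, demand_mwh):
          """Slope of delta-CO2 (tonnes) vs delta-demand (MWh), i.e. tCO2/MWh,
          which is numerically equal to kgCO2/kWh."""
          d_co2 = np.diff(np.asarray(co2_t, dtype=float))
          d_dem = np.diff(np.asarray(demand_mwh, dtype=float))
          slope, _intercept = np.polyfit(d_dem, d_co2, 1)
          return slope

      # Hypothetical half-hourly system totals.
      co2 = [12000, 12600, 13900, 13100, 12400, 13500]     # tonnes CO2
      dem = [21000, 21900, 23800, 22700, 21600, 23200]     # MWh
      print(marginal_emissions_factor(co2, dem))           # approx. kgCO2 per kWh displaced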

  14. Factors influencing ascertainment bias of microsatellite allele sizes: impact on estimates of mutation rates.

    Science.gov (United States)

    Li, Biao; Kimmel, Marek

    2013-10-01

    Microsatellite loci play an important role as markers for identification, disease gene mapping, and evolutionary studies. Mutation rate, which is of fundamental importance, can be obtained from interspecies comparisons, which, however, are subject to ascertainment bias. This bias arises, for example, when a locus is selected on the basis of its large allele size in one species (cognate species 1), in which it is first discovered. This bias is reflected in average allele length in any noncognate species 2 being smaller than that in species 1. This phenomenon was observed in various pairs of species, including comparisons of allele sizes in human and chimpanzee. Various mechanisms were proposed to explain observed differences in mean allele lengths between two species. Here, we examine the framework of a single-step asymmetric and unrestricted stepwise mutation model with genetic drift. Analysis is based on coalescent theory. Analytical results are confirmed by simulations using the simuPOP software. The mechanism of ascertainment bias in this model is a tighter correlation of allele sizes within a cognate species 1 than of allele sizes in two different species 1 and 2. We present computations of the expected average allele size difference, given the mutation rate, population sizes of species 1 and 2, time of separation of species 1 and 2, and the age of the allele. We show that when the past demographic histories of the cognate and noncognate taxa are different, the rate and directionality of mutations affect the allele sizes in the two taxa differently from the simple effect of ascertainment bias. This effect may exaggerate or reverse the effect of difference in mutation rates. We reanalyze literature data, which indicate that despite the bias, the microsatellite mutation rate estimate in the ancestral population is consistently greater than that in either human or chimpanzee and the mutation rate estimate in human exceeds or equals that in chimpanzee with the rate

  15. Antiadhesive effect of mixed solution of sodium hyaluronate and sodium carboxymethylcellulose after blow-out fracture repair.

    Science.gov (United States)

    Lee, Jong Mi; Baek, Sehyun

    2012-11-01

    Treatment of blow-out fractures is aimed at the prevention of permanent diplopia and cosmetically unacceptable enophthalmos. Porous polyethylene sheets are one of the most common alloplastic implants for blow-out fracture repair. Because adhesion between the porous polyethylene and the orbital soft tissue can result in restrictions of ocular motility, prevention of postoperative adhesion is important in the reconstruction of blow-out fractures. The purpose of this study was to find out the effect of the mixed solution of sodium hyaluronate and sodium carboxymethylcellulose (HACMC) on postoperative adhesion in blow-out fracture repair in an animal model. Twenty-four New Zealand white rabbits were used. An 8-mm defect was made in the maxillary sinuses including the bone and mucosa. A 10-mm porous polyethylene sheet (Medpor; Porex Surgical Inc., Newnan, GA) was inserted into the defect. The rabbits were divided into a control group and a HACMC group. In the HACMC group, HACMC solution was instilled onto the surface of the implant and then the implant was inserted. The implants were harvested at 1, 2, 4, and 8 weeks after surgery (3 implants each period). Hematoxylin and eosin, Masson trichrome, and CD31 (platelet endothelial cell adhesion molecule-1) stains were performed for evaluation of inflammation, fibrosis, and vascularization. Inflammation appeared less severe in the HACMC group, but the difference between the 2 groups was not statistically significant. The degree of fibrosis was more severe in the control group, with significant differences between the 2 groups at 4 and 8 weeks after surgery (P = 0.046). The amount of vascularization was similar in both groups. The HACMC solution seemed to be effective for reducing postoperative adhesion in reconstruction of blow-out fractures in a rabbit model. Our results suggest that the application of HACMC solution could be an effective adjunct for the repair of trap-door fractures or revision

  16. Real-time data for estimating a forward-looking interest rate rule of the ECB.

    Science.gov (United States)

    Bletzinger, Tilman; Wieland, Volker

    2017-12-01

    The purpose of the data presented in this article is to use it in ex post estimations of interest rate decisions by the European Central Bank (ECB), as it is done by Bletzinger and Wieland (2017) [1]. The data is of quarterly frequency from 1999 Q1 until 2013 Q2 and consists of the ECB's policy rate, inflation rate, real output growth and potential output growth in the euro area. To account for forward-looking decision making in the interest rate rule, the data consists of expectations about future inflation and output dynamics. While potential output is constructed based on data from the European Commission's annual macro-economic database, inflation and real output growth are taken from two different sources both provided by the ECB: the Survey of Professional Forecasters and projections made by ECB staff. Careful attention was given to the publication date of the collected data to ensure a real-time dataset only consisting of information which was available to the decision makers at the time of the decision.
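
    For orientation, a forward-looking interest rate rule of the kind such data are used to estimate can be written, in one common smoothing form, as i_t = ρ·i_{t-1} + (1-ρ)·(r* + π_t^e + φ_π(π_t^e − π*) + φ_y·(g_t^e − g_t*)). The sketch below evaluates this generic specification with made-up coefficients and forecasts; it is not the rule estimated by Bletzinger and Wieland.

      def policy_rate(i_prev, exp_inflation, exp_growth, pot_growth,
                      rho=0.85, r_star=1.0, pi_star=1.9, phi_pi=1.5, phi_y=0.5):
          """Generic forward-looking, interest-rate-smoothing rule (percent per annum).
          All coefficient values are illustrative assumptions."""
          target = (r_star + exp_inflation
                    + phi_pi * (exp_inflation - pi_star)
                    + phi_y * (exp_growth - pot_growth))
          return rho * i_prev + (1.0 - rho) * target

      # Hypothetical quarter: previous rate 1.0%, expected inflation 1.6%,
      # expected real growth 1.2% against potential growth of 1.5%.
      print(round(policy_rate(1.0, 1.6, 1.2, 1.5), 2))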

  17. Reaction rate estimation of controlled-release antifouling paint binders: Rosin-based systems

    DEFF Research Database (Denmark)

    Meseguer Yebra, Diego; Kiil, Søren; Dam-Johansen, Kim

    2005-01-01

    ... accuracies. The latter is important because very low steady-state reaction rates (about 0.70 +/- 0.26 µg Zn^2+ cm^-2 day^-1 at 25 °C and pH 8.2) are measured. Steady-state reaction rates of the Cu^2+ and Mg^2+ derivatives are also determined and discussed. The experimental procedures developed are used ... to the hydroxide ion concentration, a, is 0.86 +/- 0.42. L-ZnR is the estimated solubility product of the ZnR resin, which has a value of 3.1 x 10^-12 (mol/l)^-3 (about 6 mg Zn^2+/l in equilibrium). The low value of the activation energy is believed to result from the complex reaction mechanisms hypothesised ... for Cu^2+ in the resin structure during paint dispersion and immersion results in a lower reaction rate compared to the pure ZnR. The Cu-carboxylate has a reaction rate of about 5.8 +/- 1.0 µg CuR cm^-2 day^-1 at 25 °C and pH 8.2. The presence of Mg and Na compounds (probably Mg- and Na- ...

  18. The Gulf of Mexico ecosystem, six years after the Macondo oil well blowout

    Science.gov (United States)

    Joye, Samantha B.; Bracco, Annalisa; Özgökmen, Tamay M.; Chanton, Jeffrey P.; Grosell, Martin; MacDonald, Ian R.; Cordes, Erik E.; Montoya, Joseph P.; Passow, Uta

    2016-07-01

    The Gulf of Mexico ecosystem is a hotspot for biological diversity and supports a number of industries, from tourism to fishery production to oil and gas exploration, that serve as the economic backbone of Gulf coast states. The Gulf is a natural hydrocarbon basin, rich with stores of oil and gas that lie in reservoirs deep beneath the seafloor. The natural seepage of hydrocarbons across the Gulf system is extensive and, thus, the system's biological components experience ephemeral, if not frequent, hydrocarbon exposure. In contrast to natural seepage, which is diffuse and variable over space and time, the 2010 Macondo oil well blowout represented an intense, focused hydrocarbon infusion to the Gulf's deepwaters. The Macondo blowout drove rapid shifts in microbial populations and activity, revealed unexpected phenomena, such as deepwater hydrocarbon plumes and marine "oil snow" sedimentation, and impacted the Gulf's pelagic and benthic ecosystems. Understanding the distribution and fate of Macondo oil was limited to some degree by an insufficient ability to predict the physical movement of water in the Gulf. In other words, the available physical oceanographic models lacked critical components. In the past six years, much has been learned about the physical oceanography of the Gulf, providing transformative knowledge that will improve the ability to predict the movement of water and the hydrocarbons it carries in future blowout scenarios. Similarly, much has been learned about the processing and fate of Macondo hydrocarbons. Here, we provide an overview of the distribution, fate and impacts of Macondo hydrocarbons and offer suggestions for future research to push the field of oil spill response research forward.

  19. Clinical usefulness of helical CT three-dimensional images in blow-out fractures

    Energy Technology Data Exchange (ETDEWEB)

    Takeno, Naokazu; Honda, Kouichi; Ohiwa, Akira [Keiseigeka Memorial Hospital, Sapporo, Hokkaido (Japan); Miyashita, Souji; Sugihara, Tsuneki; Funayama, Emi; Maeda, Kazuhiko

    1996-12-01

    Since the bone of the orbital floor is very thin, it was previously impossible to visualize bone deficits and bone fracture lines of the orbital floor accurately in three-dimensional (3D) CT images. As the analytical ability of CT and 3D image reconstruction have improved, however, CT imaging of thin bone has become possible. We used CT photography and 3D image reconstruction in 19 cases before surgery to correct blow-out fracture. We used the Toshiba Corporation helical CT system (X-Vigor, version 5.0A) and filmed an area 3-4 cm wide in the orbital region, mainly on the orbital floor. The X-Tension workstation (Toshiba) was used for 3D image reconstruction. All cases were filmed in 0.5-mm slices. CT scanning time was approximately 60 seconds. There were very few artifacts in the 3D images. Small bone deficits and minute bone fracture lines could be observed on the 3D images, and it was also possible to change the visual angle. The CT images were faithful, their reliability was good, and they were clinically useful. In 8 cases in which a silicon sheet was used for reconstruction of the orbital floor, 3D images were also used for postoperative evaluation. The silicon sheets used to reconstruct bone deficits could be selectively viewed on the image and their condition determined. 3D CT images are therefore considered useful in both pre- and postoperative evaluation of blow-out fractures. We have utilized these 3D CT images in the evaluation of zygomatic bone and other facial fractures in addition to blow-out fractures. (author)

  20. A Multiwavelength Approach to the Star Formation Rate Estimation in Galaxies at Intermediate Redshifts

    Science.gov (United States)

    Cardiel, N.; Elbaz, D.; Schiavon, R. P.; Willmer, C. N. A.; Koo, D. C.; Phillips, A. C.; Gallego, J.

    2003-02-01

    We use a sample of seven starburst galaxies at intermediate redshifts (z~0.4 and 0.8) with observations ranging from the observed ultraviolet to 1.4 GHz, to compare the star formation rate (SFR) estimators that are used in the different wavelength regimes. We find that extinction-corrected Hα underestimates the SFR, and the degree of this underestimation increases with the infrared luminosity of the galaxies. Galaxies with very different levels of dust extinction as measured with SFRIR/SFR(Hα, uncorrected for extinction) present a similar attenuation A[Hα], as if the Balmer lines probed a different region of the galaxy than the one responsible for the bulk of the IR luminosity for large SFRs. In addition, SFR estimates derived from [O II] λ3727 match very well those inferred from Hα after applying the metallicity correction derived from local galaxies. SFRs estimated from the UV luminosities show a dichotomic behavior, similar to that previously reported by other authors in galaxies at z ... Based in part on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. Based in part on observations with the Infrared Space Observatory (ISO), an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, Netherlands, and United Kingdom) with the participation of ISAS and NASA.
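
    For reference, the widely used Kennicutt (1998) calibrations relate SFR to luminosity in the different regimes compared here; the sketch below evaluates them for a hypothetical galaxy so the cross-wavelength comparison is concrete. The calibration constants are the standard published ones, not values derived in this paper, and no extinction or metallicity corrections are applied.

      def sfr_estimates(L_halpha_erg_s, L_ir_erg_s, L_uv_erg_s_hz):
          """Kennicutt (1998) star formation rate calibrations, in Msun/yr."""
          return {
              "Halpha": 7.9e-42 * L_halpha_erg_s,    # extinction-corrected H-alpha
              "IR": 4.5e-44 * L_ir_erg_s,            # 8-1000 micron luminosity
              "UV": 1.4e-28 * L_uv_erg_s_hz,         # 1500-2800 A continuum (per Hz)
          }

      # Hypothetical luminosities for an intermediate-redshift starburst.
      print(sfr_estimates(L_halpha_erg_s=5e41, L_ir_erg_s=3e44, L_uv_erg_s_hz=2e28))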

  1. Numerical Estimation of Spectral Properties of Laser Based on Rate Equations

    Directory of Open Access Journals (Sweden)

    Jan Litvik

    2016-01-01

    Full Text Available Laser spectral properties are essential to evaluate the performance of optical communication systems. In general, the power spectral density of the phase noise has a crucial impact on the spectral properties of the unmodulated laser signal. Here, white Gaussian noise and 1/f-noise are taken into consideration. By utilizing the time-dependent realizations of the instantaneous optical power and the phase simultaneously, it is possible to estimate the power spectral density, or alternatively the power spectrum, of an unmodulated laser signal shifted to the baseband and thus estimate the laser linewidth. In this work, we report on a theoretical approach to analyse the unmodulated real-valued high-frequency stationary random passband signal of a laser, followed by a numerical model of the distributed feedback laser used to emulate the time-dependent optical power and the instantaneous phase, two important time-domain laser attributes. The laser model is based on numerically solving the rate equations using the fourth-order Runge-Kutta method. This way, we show the direct estimation of the power spectral density and the laser linewidth when the time-dependent laser characteristics are known.
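
    To make the rate-equation approach concrete, the sketch below integrates a standard single-mode semiconductor laser rate-equation pair (carrier density N and photon density S) with a hand-written fourth-order Runge-Kutta step. The parameter values are generic textbook-style numbers, not those of the distributed feedback laser modelled in the paper, and the phase and noise terms are omitted.

      import numpy as np

      q = 1.602e-19   # elementary charge, C

      def derivs(y, I, V=1e-16, tau_n=2e-9, tau_p=2e-12,
                 g0=1e-12, N_tr=1e24, gamma=0.3, beta=1e-4):
          """Generic single-mode rate equations: returns [dN/dt, dS/dt]."""
          N, S = y
          G = g0 * (N - N_tr)                         # linear gain (1/s)
          dN = I / (q * V) - N / tau_n - G * S
          dS = gamma * G * S - S / tau_p + beta * gamma * N / tau_n
          return np.array([dN, dS])

      def rk4_step(y, dt, I):
          k1 = derivs(y, I)
          k2 = derivs(y + 0.5 * dt * k1, I)
          k3 = derivs(y + 0.5 * dt * k2, I)
          k4 = derivs(y + dt * k3, I)
          return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

      y = np.array([1e24, 1e18])          # initial carrier and photon densities (1/m^3)
      dt, I_bias = 1e-13, 30e-3           # 0.1 ps step, 30 mA bias current
      for _ in range(20000):              # 2 ns of simulated time
          y = rk4_step(y, dt, I_bias)
      print(y)                            # carrier and photon densities after 2 ns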

  2. A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets

    Directory of Open Access Journals (Sweden)

    Jie-Hao Chen

    2017-01-01

    Full Text Available In recent years, with the rapid development of the mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising; it is used to achieve accurate advertisement delivery for the best benefits in the three-sided game between media, advertisers, and audiences. Current research on CTR estimation mainly uses machine learning methods and models, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the advantage of simplicity of traditional Logistic Regression models. Based on a training dataset with the information of over 40 million mobile advertisements over a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and the Support Vector Regression (SVR) model by 5.80%.

  3. Estimation of Leak Flow Rate during Post-LOCA Using Cascaded Fuzzy Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Yeong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Na, Man Gyun [Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

    Important parameters of loss of coolant accidents (LOCAs), such as the break position, break size, and leak flow rate, provide operators with essential information for recovering the cooling capability of the nuclear reactor core, preventing the reactor core from melting down, and managing severe accidents effectively. The leak flow rate depends on the break size, the differential pressure (the difference between the internal and external reactor vessel pressures), the temperature, and other factors. The leak flow rate is strongly dependent on the break size and the differential pressure, but the break size is not measured and the integrity of pressure sensors is not assured in severe circumstances. In this paper, a cascaded fuzzy neural network (CFNN) model is proposed to estimate the leak flow rate out of the break, which has a direct impact on important event times (the time at which the core exit temperature exceeds 1200 °F, the core uncovery time, the reactor vessel failure time, etc.). The CFNN is a data-based model, so it requires data for its development and verification. Because few actual severe accident data exist, it is essential to obtain the required data using numerical simulations. In this study, a CFNN model was developed to predict the leak flow rate before proceeding to severe LOCAs. The simulations showed that the developed CFNN model accurately predicted the leak flow rate with an error of less than 0.5%. The CFNN model performs much better than an FNN model under the same conditions, such as the same fuzzy rules; in the comparison, the RMS errors of the CFNN model were approximately 82-97% smaller than those of the FNN model.

  4. Estimating the personal cure rate of cancer patients using population-based grouped cancer survival data.

    Science.gov (United States)

    Binbing Yu; Tiwari, Ram C; Feuer, Eric J

    2011-06-01

    Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.

  5. Automatic Reporting of Creatinine-Based Estimated Glomerular Filtration Rate in Children: Is this Feasible?

    Directory of Open Access Journals (Sweden)

    Andrew Lunn

    2016-07-01

    Full Text Available Creatinine, although widely used as a biomarker to measure renal function, has long been known as an insensitive marker of renal impairment. Patients with reduced renal function can have a creatinine level within the normal range, with a rapid rise when renal function is significantly reduced. As early as 1976, the correlation between height, the reciprocal of creatinine, and measured glomerular filtration rate (GFR) in children was described. It has been used to derive a simple formula for estimated glomerular filtration rate (eGFR) that could be used at the bedside as a more sensitive method of identifying children with renal impairment. Formulae based on this association, with modifications over time as creatinine assay methods have changed, are still widely used clinically at the bedside and in research studies to assess the degree of renal impairment in children. Adult practice has moved in many countries to computer-generated results that report eGFR alongside creatinine results using more complex, but potentially more accurate, estimates of GFR, which are independent of height. This permits early identification of patients with chronic kidney disease. This review assesses the feasibility of automated reporting of eGFR in children and the advantages and disadvantages of this approach.
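
    The height-based relationship referred to above underlies the Schwartz-type bedside formulas. As a sketch, the function below implements the updated ("bedside") Schwartz estimate, eGFR = 0.413 × height(cm) / SCr(mg/dL), which assumes an enzymatic, IDMS-traceable creatinine assay; the example patient is hypothetical.

      def bedside_schwartz_egfr(height_cm, scr_mg_dl):
          """Pediatric eGFR in mL/min/1.73 m^2 (updated 'bedside' Schwartz, 2009)."""
          return 0.413 * height_cm / scr_mg_dl

      # Hypothetical child: 110 cm tall, serum creatinine 0.9 mg/dL.
      print(round(bedside_schwartz_egfr(110, 0.9), 1))   # ~50 -> flags reduced renal function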

  6. Estimating Reaction Rate Coefficients Within a Travel-Time Modeling Framework

    Energy Technology Data Exchange (ETDEWEB)

    Gong, R [Georgia Institute of Technology; Lu, C [Georgia Institute of Technology; Luo, Jian [Georgia Institute of Technology; Wu, Wei-min [Stanford University; Cheng, H. [Stanford University; Criddle, Craig [Stanford University; Kitanidis, Peter K. [Stanford University; Gu, Baohua [ORNL; Watson, David B [ORNL; Jardine, Philip M [ORNL; Brooks, Scott C [ORNL

    2011-03-01

    A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics.
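
    As a simplified sketch of the idea (first-order kinetics, identical pulse injection of both tracers, well-mixed conditions), the reactive and conservative breakthrough curves are related by c_r(t) ≈ c_c(t)·exp(−k·t), so k can be recovered from the slope of ln(c_r/c_c) against arrival time. The code below does exactly that on made-up BTC data and is not the authors' general multi-reaction scheme.

      import numpy as np

      def first_order_rate_from_btcs(t, c_conservative, c_reactive):
          """Estimate k (1/day) from paired BTCs assuming c_r(t) = c_c(t)*exp(-k*t)."""
          t = np.asarray(t, dtype=float)
          ratio = np.asarray(c_reactive, dtype=float) / np.asarray(c_conservative, dtype=float)
          slope, _ = np.polyfit(t, np.log(ratio), 1)
          return -slope

      # Hypothetical BTCs sampled at the observation well (days, mg/L).
      t = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
      c_cons = np.array([0.10, 0.80, 1.00, 0.70, 0.30, 0.10])
      c_reac = c_cons * np.exp(-0.25 * t)                  # synthetic reactive tracer
      print(first_order_rate_from_btcs(t, c_cons, c_reac)) # recovers k = 0.25 per day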

  7. A kinetic model for estimating net photosynthetic rates of cos lettuce leaves under pulsed light.

    Science.gov (United States)

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2015-04-01

    Time-averaged net photosynthetic rate (Pn) under pulsed light (PL) is known to be affected by the PL frequency and duty ratio, even though the time-averaged photosynthetic photon flux density (PPFD) is unchanged. This phenomenon can be explained by considering that photosynthetic intermediates (PIs) are pooled during light periods and then consumed by partial photosynthetic reactions during dark periods. In this study, we developed a kinetic model to estimate Pn of cos lettuce (Lactuca sativa L. var. longifolia) leaves under PL based on the dynamics of the amount of pooled PIs. The model inputs are average PPFD, duty ratio, and frequency; the output is Pn. The rates of both PI accumulation and consumption at a given moment are assumed to be dependent on the amount of pooled PIs at that point. Required model parameters and three explanatory variables (average PPFD, frequency, and duty ratio) were determined for the simulation using Pn values under PL based on several combinations of the three variables. The model simulation for various PL levels with a wide range of time-averaged PPFDs, frequencies, and duty ratios further demonstrated that Pn under PL with high frequencies and duty ratios was comparable to, but did not exceed, Pn under continuous light, and also showed that Pn under PL decreased as either frequency or duty ratio was decreased.

  8. Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2014-10-01

    Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors have two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to “purify” multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high frequency noise signals.

  9. Estimating Finite Rate of Population Increase for Sharks Based on Vital Parameters

    Science.gov (United States)

    Liu, Kwang-Ming; Chin, Chien-Pang; Chen, Chun-Hui; Chang, Jui-Han

    2015-01-01

    The vital parameter data for 62 stocks, covering 38 species, collected from the literature, including parameters of age, growth, and reproduction, were log-transformed and analyzed using multivariate analyses. Three groups were identified and empirical equations were developed for each to describe the relationships between the predicted finite rates of population increase (λ') and the vital parameters, maximum age (Tmax), age at maturity (Tm), annual fecundity (f/Rc), size at birth (Lb), size at maturity (Lm), and asymptotic length (L∞). Group (1) included species with slow growth rates (0.034 yr-1 < k < 0.103 yr-1) and extended longevity (26 yr < Tmax < 81 yr), e.g., shortfin mako Isurus oxyrinchus, dusky shark Carcharhinus obscurus, etc.; Group (2) included species with fast growth rates (0.103 yr-1 < k < 0.358 yr-1) and short longevity (9 yr < Tmax < 26 yr), e.g., starspotted smoothhound Mustelus manazo, gray smoothhound M. californicus, etc.; Group (3) included late-maturing species (Lm/L∞ ≥ 0.75) with moderate longevity (Tmax < 29 yr), e.g., pelagic thresher Alopias pelagicus, sevengill shark Notorynchus cepedianus. The empirical equation for all data pooled was also developed. The λ' values estimated by these empirical equations showed good agreement with those calculated using conventional demographic analysis. The predictability was further validated by an independent data set of three species. The empirical equations developed in this study not only reduce the uncertainties in estimation but also account for the difference in life history among groups. This method therefore provides an efficient and effective approach to the implementation of precautionary shark management measures. PMID:26576058

  10. Large deviations estimates for the multiscale analysis of heart rate variability

    Science.gov (United States)

    Loiseau, Patrick; Médigue, Claire; Gonçalves, Paulo; Attia, Najmeddine; Seuret, Stéphane; Cottin, François; Chemla, Denis; Sorine, Michel; Barral, Julien

    2012-11-01

    In the realm of multiscale signal analysis, multifractal analysis provides a natural and rich framework to measure the roughness of a time series. As such, it has drawn special attention of both mathematicians and practitioners, and led them to characterize relevant physiological factors impacting the heart rate variability. Notwithstanding this considerable progress, multifractal analysis almost exclusively developed around the concept of Legendre singularity spectrum, for which efficient and elaborate estimators exist, but which are structurally blind to subtle features like non-concavity or, to a certain extent, non-scaling of the distributions. Large deviations theory allows bypassing these limitations, but it is only very recently that well-performing estimators were proposed to reliably compute the corresponding large deviations singularity spectrum. In this article, we illustrate the relevance of this approach, on both theoretical objects and on human heart rate signals from the Physionet public database. As conjectured, we verify that large deviations principles reveal significant information that otherwise remains hidden with classical approaches, and which can be reminiscent of some physiological characteristics. In particular we quantify the presence/absence of scale invariance of RR signals.

  11. Variability of glomerular filtration rate estimation equations in elderly Chinese patients with chronic kidney disease.

    Science.gov (United States)

    Liu, Xun; Cheng, Mu-hua; Shi, Cheng-gang; Wang, Cheng; Cheng, Cai-lian; Chen, Jin-xia; Tang, Hua; Chen, Zhu-jiang; Ye, Zeng-chun; Lou, Tan-qi

    2012-01-01

    Chronic kidney disease (CKD) is recognized worldwide as a public health problem, and its prevalence increases as the population ages. However, the applicability of formulas for estimating the glomerular filtration rate (GFR) based on serum creatinine (SC) levels in elderly Chinese patients with CKD is limited. Based on values obtained with the technetium-99m diethylenetriaminepentaacetic acid ((99m)Tc-DTPA) renal dynamic imaging method, 319 elderly Chinese patients with CKD were enrolled in this study. Serum creatinine was determined by the enzymatic method. The GFR was estimated using the Cockcroft-Gault (CG) equation, the Modification of Diet in Renal Disease (MDRD) equations, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, the Jelliffe-1973 equation, and the Hull equation. The median of difference ranged from -0.3 to 4.3 mL/min/1.73 m(2). The interquartile range (IQR) of differences ranged from 13.9 to 17.6 mL/min/1.73 m(2). Accuracy with a deviation less than 15% ranged from 27.6% to 32.9%. Accuracy with a deviation less than 30% ranged from 53.6% to 57.7%. Accuracy with a deviation less than 50% ranged from 74.9% to 81.5%. None of the equations had accuracy up to the 70% level with a deviation less than 30% from the standard glomerular filtration rate (sGFR). Bland-Altman analysis demonstrated that the mean difference ranged from -3.0 to 2.4 mL/min/1.73 m(2). However, the agreement limits of all the equations, except the CG equation, exceeded the prior acceptable tolerances defined as 60 mL/min/1.73 m(2). When the overall performance and accuracy were compared in different stages of CKD, GFR estimated using the CG equation showed promising results. Our study indicated that none of these equations were suitable for estimating GFR in the elderly Chinese population investigated. At present, based on overall performance, as well as performance in different CKD stages, the CG equation may be the most accurate for estimating GFR in elderly Chinese patients
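
    The performance metrics used above (median bias, interquartile range of the differences, and accuracy within 15%, 30%, and 50% of the reference GFR) are straightforward to compute. The sketch below shows one way to do so for paired reference and estimated GFR values, using hypothetical numbers rather than the study data.

      import numpy as np

      def gfr_agreement(sgfr, egfr):
          """Median bias, IQR of differences, and P15/P30/P50 accuracy (percent)."""
          sgfr, egfr = np.asarray(sgfr, float), np.asarray(egfr, float)
          diff = egfr - sgfr
          q75, q25 = np.percentile(diff, [75, 25])
          rel_dev = np.abs(diff) / sgfr
          acc = {f"P{p}": 100.0 * np.mean(rel_dev < p / 100.0) for p in (15, 30, 50)}
          return {"median_bias": np.median(diff), "IQR": q75 - q25, **acc}

      # Hypothetical reference (99mTc-DTPA) and estimated GFR values, mL/min/1.73 m^2.
      sgfr = [28, 41, 55, 63, 72, 88, 95, 110]
      egfr = [35, 39, 48, 70, 66, 92, 80, 118]
      print(gfr_agreement(sgfr, egfr))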

  12. Light traps fail to estimate reliable malaria mosquito biting rates on Bioko Island, Equatorial Guinea

    Directory of Open Access Journals (Sweden)

    Overgaard Hans J

    2012-02-01

    Full Text Available Abstract Background The human biting rate (HBR), an important parameter for assessing malaria transmission and evaluating vector control interventions, is commonly estimated by human landing collections (HLC). Although intense efforts have been made to find alternative non-exposure mosquito collection methods, HLC remains the standard for providing reliable and consistent HBRs. The aim of this study was to assess the relationship between human landing and light trap collections (LTC), in an attempt to estimate operationally feasible conversion factors between the two. The study was conducted as part of the operational research component of the Bioko Island Malaria Control Project (BIMCP), Equatorial Guinea. Methods Malaria mosquitoes were collected indoors and outdoors by HLCs and LTCs in three villages on Bioko Island, Equatorial Guinea, during five bimonthly collections in 2009. Indoor light traps were suspended adjacent to occupied long-lasting, insecticide-treated bed nets. Outdoor light traps were placed close to the outer wall under the roof of the collection house. Collected specimens were subjected to DNA extraction and diagnostic PCR to identify species within the Anopheles gambiae complex. Data were analysed by simple regression of log-transformed values and by Bayesian regression analysis. Results There was a poor correlation between the two collection methods. Results varied by location, venue, month and house, but also by the statistical method used. The more robust Bayesian analyses indicated non-linear relationships, with relative sampling efficiencies being density dependent for the indoor collections, implying that straightforward and simple conversion factors could not be calculated for any of the locations. Outdoor LTC:HLC relationships were weak, but could be estimated at 0.10 and 0.07 for each of two locations. Conclusions Light trap collections in combination with bed nets are not recommended as a reliable method to assess human

  13. A practical way to estimate retail tobacco sales violation rates more accurately.

    Science.gov (United States)

    Levinson, Arnold H; Patnaik, Jennifer L

    2013-11-01

    U.S. states annually estimate retailer propensity to sell adolescents cigarettes, which is a violation of law, by staging a single purchase attempt among a random sample of tobacco businesses. The accuracy of single-visit estimates is unknown. We examined this question using a novel test-retest protocol. Supervised minors attempted to purchase cigarettes at all retail tobacco businesses located in 3 Colorado counties. The attempts observed federal standards: minors were aged 15-16 years, were nonsmokers and free of visible tattoos and piercings, and were allowed to enter stores alone or in pairs to purchase a small item while asking for cigarettes and to show or not show genuine identification (ID, e.g., driver's license). Unlike federal standards, stores received a second purchase attempt within a few days unless minors were firmly told not to return. Separate violation rates were calculated for first visits, second visits, and either visit. Eleven minors attempted to purchase cigarettes 1,079 times from 671 retail businesses. One sixth of first visits (16.8%) resulted in a violation; the rate was similar for second visits (15.7%). Considering either visit, 25.3% of businesses failed the test. Factors predictive of violation were whether clerks asked for ID, whether the clerks closely examined IDs, and whether minors included snacks or soft drinks in cigarette purchase attempts. A test-retest protocol for estimating underage cigarette sales detected half again as many businesses in violation as the federally approved one-test protocol. Federal policy makers should consider using the test-retest protocol to increase accuracy and awareness of widespread adolescent access to cigarettes through retail businesses.

  14. Blowout jets and impulsive eruptive flares in a bald-patch topology

    Science.gov (United States)

    Chandra, R.; Mandrini, C. H.; Schmieder, B.; Joshi, B.; Cristiani, G. D.; Cremades, H.; Pariat, E.; Nuevo, F. A.; Srivastava, A. K.; Uddin, W.

    2017-02-01

    Context. A subclass of broad extreme ultraviolet (EUV) and X-ray jets, called blowout jets, has become a topic of research since they could be the link between standard collimated jets and coronal mass ejections (CMEs). Aims: Our aim is to understand the origin of a series of broad jets, some of which are accompanied by flares and associated with narrow and jet-like CMEs. Methods: We analyze observations of a series of recurrent broad jets observed in AR 10484 on 21-24 October 2003. In particular, one of them occurred simultaneously with an M2.4 flare on 23 October at 02:41 UT (SOLA2003-10-23). Both events were observed by the ARIES Hα Solar Tower-Telescope, TRACE, SOHO, and RHESSI instruments. The flare was very impulsive and followed by a narrow CME. A local force-free model of AR 10484 is the basis to compute its topology. We find bald patches (BPs) at the flare site. This BP topology is present for at least two days before the events. Large-scale field lines, associated with the BPs, represent open loops. This is confirmed by a global potential field source surface (PFSS) model. Following the brightest leading edge of the Hα and EUV jet emission, we can temporally associate these emissions with a narrow CME. Results: Considering their characteristics, the observed broad jets appear to be of the blowout class. As the most plausible scenario, we propose that magnetic reconnection could occur at the BP separatrices, forced by the destabilization of a continuously reformed flux rope underlying them. The reconnection process could bring the cool flux-rope material into the reconnected open field lines, driving the series of recurrent blowout jets and accompanying CMEs. Conclusions: Based on a model of the coronal field, we compute the AR 10484 topology at the location where flaring and blowout jets occurred from 21 to 24 October 2003. This topology can consistently explain the origin of these events. The movie associated with Fig. 1 is available at http://www.aanda.org

  15. Estimating the dust production rate of carbon stars in the Small Magellanic Cloud

    Science.gov (United States)

    Nanni, Ambra; Marigo, Paola; Girardi, Léo; Rubele, Stefano; Bressan, Alessandro; Groenewegen, Martin A. T.; Pastorelli, Giada; Aringer, Bernhard

    2018-02-01

    We employ newly computed grids of spectra reprocessed by dust for estimating the total dust production rate (DPR) of carbon stars in the Small Magellanic Cloud (SMC). For the first time, the grids of spectra are computed as a function of the main stellar parameters, i.e. mass-loss rate, luminosity, effective temperature, current stellar mass and element abundances at the photosphere, following a consistent, physically grounded scheme of dust growth coupled with a stationary wind outflow. The model accounts for the growth of various dust species formed in the circumstellar envelopes of carbon stars, such as carbon dust, silicon carbide and metallic iron. In particular, we employ some selected combinations of optical constants and grain sizes for carbon dust that have been shown to reproduce simultaneously the most relevant colour-colour diagrams in the SMC. By employing our grids of models, we fit the spectral energy distributions of ≈3100 carbon stars in the SMC, consistently deriving some important dust and stellar properties, i.e. luminosities, mass-loss rates, gas-to-dust ratios, expansion velocities and dust chemistry. We discuss these properties and compare some of them with observations in the Galaxy and the Large Magellanic Cloud. We compute the DPR of carbon stars in the SMC, finding that the estimates provided by our method can differ significantly, by a factor of ≈2-5, from the ones available in the literature. Our grids of models, including the spectra and other relevant dust and stellar quantities, are publicly available at http://starkey.astro.unipd.it/web/guest/dustymodels.

  16. Estimating Finite Rate of Population Increase for Sharks Based on Vital Parameters.

    Directory of Open Access Journals (Sweden)

    Kwang-Ming Liu

    Full Text Available The vital parameter data for 62 stocks, covering 38 species, collected from the literature, including parameters of age, growth, and reproduction, were log-transformed and analyzed using multivariate analyses. Three groups were identified and empirical equations were developed for each to describe the relationships between the predicted finite rates of population increase (λ') and the vital parameters, maximum age (Tmax), age at maturity (Tm), annual fecundity (f/Rc), size at birth (Lb), size at maturity (Lm), and asymptotic length (L∞). Group (1) included species with slow growth rates (0.034 yr-1 < k < 0.103 yr-1) and extended longevity (26 yr < Tmax < 81 yr), e.g., shortfin mako Isurus oxyrinchus, dusky shark Carcharhinus obscurus, etc.; Group (2) included species with fast growth rates (0.103 yr-1 < k < 0.358 yr-1) and short longevity (9 yr < Tmax < 26 yr), e.g., starspotted smoothhound Mustelus manazo, gray smoothhound M. californicus, etc.; Group (3) included late-maturing species (Lm/L∞ ≥ 0.75) with moderate longevity (Tmax < 29 yr), e.g., pelagic thresher Alopias pelagicus, sevengill shark Notorynchus cepedianus. The empirical equation for all data pooled was also developed. The λ' values estimated by these empirical equations showed good agreement with those calculated using conventional demographic analysis. The predictability was further validated by an independent data set of three species. The empirical equations developed in this study not only reduce the uncertainties in estimation but also account for the difference in life history among groups. This method therefore provides an efficient and effective approach to the implementation of precautionary shark management measures.
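
    The "conventional demographic analysis" mentioned above typically amounts to solving the Euler-Lotka equation, Σ e^(−r·x)·l_x·m_x = 1, for the intrinsic rate r, with λ = e^r. The sketch below solves it by bisection for a made-up shark life table; it is offered only as a generic illustration, not as the paper's empirical equations.

      import math

      def euler_lotka_lambda(ages, survivorship, fecundity, lo=-1.0, hi=1.0, tol=1e-8):
          """Solve sum(exp(-r*x) * l_x * m_x) = 1 for r by bisection; return lambda = exp(r)."""
          def f(r):
              return sum(math.exp(-r * x) * l * m
                         for x, l, m in zip(ages, survivorship, fecundity)) - 1.0
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if f(mid) > 0:        # f is decreasing in r, so move the lower bound up
                  lo = mid
              else:
                  hi = mid
          return math.exp(0.5 * (lo + hi))

      # Hypothetical life table: age (yr), survivorship to age (l_x), female pups per female (m_x).
      ages = [5, 6, 7, 8, 9, 10, 11, 12]
      l_x  = [0.35, 0.30, 0.26, 0.22, 0.19, 0.16, 0.14, 0.12]
      m_x  = [0.0, 1.0, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5]
      print(euler_lotka_lambda(ages, l_x, m_x))   # finite rate of population increase per year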

  17. Demographic aspects of Chrysomya megacephala (Diptera, Calliphoridae) adults maintained under experimental conditions: reproductive rate estimates

    Directory of Open Access Journals (Sweden)

    Marcelo Henrique de Carvalho

    2006-05-01

    Full Text Available The objective of this work was to evaluate some aspects of the population ecology of Chrysomya megacephala (Fabricius) (Diptera, Calliphoridae), a blowfly species of considerable medical and sanitary importance that was accidentally introduced into Brazil in the 1970s, by analyzing demographic aspects of adults kept under experimental conditions. Cages of C. megacephala adults were prepared with four different larval densities (100, 200, 400 and 800). For each cage, two tables were made: one with demographic parameters for the estimate of life expectancy at the initial age (e0), and another with the reproductive rate and average reproduction age estimates. Population parameters such as the intrinsic growth rate (r) and the finite growth rate (lambda) were calculated as well.

  18. Using Pulse Rate in Estimating Workload Evaluating a Load Mobilizing Activity

    Directory of Open Access Journals (Sweden)

    Juan Alberto Castillo

    2014-06-01

    Full Text Available Introduction: The pulse rate is a direct indicator of the state of the cardiovascular system, in addition to being an indirect indicator of the energy expended in performing a task. The pulse of a person is the number of pulses recorded in a peripheral artery per unit time; the pulse appears as a pressure wave moving along the blood vessels, which are flexible, "in large arterial branches, speed of 7-10 m/s, in the small arteries, 15 to 35 m/s". Materials and methods: The aim of this study was to assess heart rate, using the technique of recording the pulse rate, oxygen consumption and observation of work activity, in the estimation of the workload in a load handling task for three situations: lift/transfer/deposit. Before, during and after the task, the pulse rate was recorded for 24 young volunteers (10 women and 14 men) under laboratory conditions. We performed a gesture analysis of the work activity and of the lifting and handling strategies. Results: We observed an increase between the initial and final pulse rate in both groups and for the two tasks; a difference in the heart rate increase of 17.5 beats/min was also recorded for the loading task. 75% of the participants experienced an increase in pulse rate above 100 beats/min. For 25 kg, recorded values were greater than 114 beats/min, and for 17.5 kg, greater than 128 beats/min. Discussion: The pulse rate method is recommended for its simplicity of use by operational staff, supervisors, managers and industrial engineers not trained in physiology; the method can also be used by industrial hygienists.

  19. Population growth rates of reef sharks with and without fishing on the Great Barrier Reef: robust estimation with multiple models.

    Directory of Open Access Journals (Sweden)

    Mizue Hisano

    Full Text Available Overfishing of sharks is a global concern, with increasing numbers of species threatened by overfishing. For many sharks, both catch rates and underwater visual surveys have been criticized as indices of abundance. In this context, estimation of population trends using individual demographic rates provides an important alternative means of assessing population status. However, such estimates involve uncertainties that must be appropriately characterized to credibly and effectively inform conservation efforts and management. Incorporating uncertainties into population assessment is especially important when key demographic rates are obtained via indirect methods, as is often the case for mortality rates of marine organisms subject to fishing. Here, focusing on two reef shark species on the Great Barrier Reef, Australia, we estimated natural and total mortality rates using several indirect methods, and determined the population growth rates resulting from each. We used bootstrapping to quantify the uncertainty associated with each estimate, and to evaluate the extent of agreement between estimates. Multiple models produced highly concordant natural and total mortality rates, and associated population growth rates, once the uncertainties associated with the individual estimates were taken into account. Consensus estimates of natural and total population growth across multiple models support the hypothesis that these species are declining rapidly due to fishing, in contrast to conclusions previously drawn from catch rate trends. Moreover, quantitative projections of abundance differences on fished versus unfished reefs, based on the population growth rate estimates, are comparable to those found in previous studies using underwater visual surveys. These findings appear to justify management actions to substantially reduce the fishing mortality of reef sharks. They also highlight the potential utility of rigorously characterizing uncertainty, and
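
    The bootstrap step described above can be sketched as follows. The mortality estimates and the function converting mortality into a population growth rate are placeholders, not the study's indirect estimators; the point is only to show how resampling yields a confidence interval for the growth rate.

    ```python
    # Minimal bootstrap sketch for attaching uncertainty to a population growth
    # rate derived from a mortality estimate. All inputs are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)

    def growth_rate_from_mortality(M, fecundity=4.0, age_maturity=6.0):
        # placeholder demographic conversion: higher mortality -> lower lambda
        return np.exp(np.log(fecundity) / age_maturity - M)

    # hypothetical independent estimates of natural mortality (per year)
    M_estimates = np.array([0.10, 0.12, 0.09, 0.14, 0.11])

    boot = []
    for _ in range(5000):
        sample = rng.choice(M_estimates, size=M_estimates.size, replace=True)
        boot.append(growth_rate_from_mortality(sample.mean()))
    boot = np.array(boot)

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"lambda: median {np.median(boot):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
    ```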

  20. Fundamentals and the Equilibrium of Real Exchange Rate of an Emerging Economy: Estimating the Exchange Rate Misalignment in Malaysia

    National Research Council Canada - National Science Library

    Jauhari Dahalan; Mohammed Umar; Hussin Abdullah

    2016-01-01

      To evaluate the existence of possible over- and under-valuation of the exchange rate for Malaysia, the study examines the nature of misalignment in the equilibrium real exchange rate and its systemic...

  1. Estimation of longitudinal force, lateral vehicle speed and yaw rate for four-wheel independent driven electric vehicles

    Science.gov (United States)

    Chen, Te; Xu, Xing; Chen, Long; Jiang, Haobing; Cai, Yingfeng; Li, Yong

    2018-02-01

    Accurate estimation of longitudinal force, lateral vehicle speed and yaw rate is of great significance to torque allocation and stability control for four-wheel independent driven electric vehicles (4WID-EVs). A fusion method is proposed to estimate the longitudinal force, lateral vehicle speed and yaw rate for 4WID-EVs. The electric driving wheel model (EDWM) is introduced into the longitudinal force estimation: the longitudinal force observer (LFO) is first designed based on the adaptive high-order sliding mode observer (HSMO), and the convergence of the LFO is analyzed and proved. Based on the estimated longitudinal force, an estimation strategy is then presented in which the strong tracking filter (STF) is used to estimate lateral vehicle speed and yaw rate simultaneously. Finally, co-simulation via Carsim and Matlab/Simulink is carried out to demonstrate the effectiveness of the proposed method. The performance of the LFO in practice is verified by an experiment on a chassis dynamometer bench.

  2. Estimating bioerosion rate on fossil corals: a quantitative approach from Oligocene reefs (NW Italy)

    Science.gov (United States)

    Silvestri, Giulia

    2010-05-01

    Bioerosion of coral reefs, especially when related to the activity of macroborers, is considered to be one of the major processes influencing framework development in present-day reefs. Macroboring communities affecting both living and dead corals are widely distributed also in the fossil record and their role is supposed to be analogously important in determining flourishing vs demise of coral bioconstructions. Nevertheless, many aspects concerning environmental factors controlling the incidence of bioerosion, shifting in composition of macroboring communities and estimation of bioerosion rate in different contexts are still poorly documented and understood. This study presents an attempt to quantify bioerosion rate on reef limestones characteristic of some Oligocene outcrops of the Tertiary Piedmont Basin (NW Italy) and deposited under terrigenous sedimentation within prodelta and delta fan systems. Branching coral rubble-dominated facies have been recognized as prevailing in this context. Depositional patterns, textures, and the generally low incidence of taphonomic features, such as fragmentation and abrasion, suggest relatively quiet waters where coral remains were deposited almost in situ. Thus taphonomic signatures occurring on corals can be reliably used to reconstruct environmental parameters affecting these particular branching coral assemblages during their life and to compare them with those typical of classical clear-water reefs. Bioerosion is sparsely distributed within coral facies and consists of a limited suite of traces, mostly referred to clionid sponges and polychaete and sipunculid worms. The incidence of boring bivalves seems to be generally lower. Together with semi-quantitative analysis of bioerosion rate along vertical logs and horizontal levels, two quantitative methods have been assessed and compared. These consist in the elaboration of high resolution scanned thin sections through software for image analysis (Photoshop CS3) and point

  3. Estimating the basic reproduction rate of HFMD using the time series SIR model in Guangdong, China.

    Directory of Open Access Journals (Sweden)

    Zhicheng Du

    Full Text Available Hand, foot, and mouth disease (HFMD) has caused a substantial burden of disease in China, especially in Guangdong Province. Based on notifiable cases, we use the time series Susceptible-Infected-Recovered model to estimate the basic reproduction rate (R0) and the herd immunity threshold, in order to understand the transmission and persistence of HFMD more completely for efficient intervention in this province. The standardized difference between the reported and fitted time series of HFMD was 0.009 (<0.2). The median basic reproduction rates of total, enterovirus 71, and coxsackievirus 16 cases in Guangdong were 4.621 (IQR: 3.907-5.823), 3.023 (IQR: 2.289-4.292) and 7.767 (IQR: 6.903-10.353), respectively. The heatmap of R0 showed semiannual peaks of activity, including a major peak in spring and early summer (about the 12th week) followed by a smaller peak in autumn (about the 36th week). The county-level model showed that Longchuan (R0 = 33), Gaozhou (R0 = 24), Huazhou (R0 = 23) and Qingxin (R0 = 19) counties have higher basic reproduction rates than other counties in the province. The epidemic of HFMD in Guangdong Province is still grim, and strategies like the World Health Organization's expanded program on immunization need to be implemented. An elimination of HFMD in Guangdong might need a herd immunity threshold of 78%.
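
    For orientation, a back-of-the-envelope R0 estimate can be obtained from the early exponential growth of weekly case counts via the SIR relation R0 ≈ 1 + r/γ. This is a simplified alternative to the time series SIR fit used in the paper; the case counts and the assumed one-week infectious period below are illustrative.

    ```python
    # Rough R0 estimate from early exponential growth of weekly case counts,
    # using the SIR relation R0 ≈ 1 + r/gamma. Counts and the infectious period
    # are illustrative assumptions, not the HFMD surveillance data.
    import numpy as np

    weekly_cases = np.array([120, 180, 260, 390, 560, 830])  # hypothetical early counts
    weeks = np.arange(weekly_cases.size)

    # exponential growth rate r (per week) from a log-linear fit
    r = np.polyfit(weeks, np.log(weekly_cases), 1)[0]

    infectious_period_weeks = 1.0          # assumed ~7-day infectious period
    gamma = 1.0 / infectious_period_weeks  # recovery rate

    R0 = 1.0 + r / gamma
    print(f"growth rate r = {r:.2f}/week, R0 ≈ {R0:.2f}")
    ```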

  4. The estimation of absorbed dose rates for non-human biota: an extended intercomparison.

    Science.gov (United States)

    Vives i Batlle, J; Beaugelin-Seiller, K; Beresford, N A; Copplestone, D; Horyna, J; Hosseini, A; Johansen, M; Kamboj, S; Keum, D-K; Kurosawa, N; Newsome, L; Olyslaegers, G; Vandenhove, H; Ryufuku, S; Vives Lynch, S; Wood, M D; Yu, C

    2011-05-01

    An exercise to compare 10 approaches for the calculation of unweighted whole-body absorbed dose rates was conducted for 74 radionuclides and five of the ICRP's Reference Animals and Plants, or RAPs (duck, frog, flatfish egg, rat and elongated earthworm), selected for this exercise to cover a range of body sizes, dimensions and exposure scenarios. Results were analysed using a non-parametric method requiring no specific hypotheses about the statistical distribution of data. The obtained unweighted absorbed dose rates for internal exposure compare well between the different approaches, with 70% of the results falling within a range of variation of ±20%. The variation is greater for external exposure, although 90% of the estimates are within an order of magnitude of one another. There are some discernible patterns where specific models over- or under-predicted. These are explained based on the methodological differences including number of daughter products included in the calculation of dose rate for a parent nuclide; source-target geometry; databases for discrete energy and yield of radionuclides; rounding errors in integration algorithms; and intrinsic differences in calculation methods. For certain radionuclides, these factors combine to generate systematic variations between approaches. Overall, the technique chosen to interpret the data enabled methodological differences in dosimetry calculations to be quantified and compared, allowing the identification of common issues between different approaches and providing greater assurance on the fundamental dose conversion coefficient approaches used in available models for assessing radiological effects to biota.

  5. Forecasting Epidemics Through Nonparametric Estimation of Time-Dependent Transmission Rates Using the SEIR Model.

    Science.gov (United States)

    Smirnova, Alexandra; deCamp, Linda; Chowell, Gerardo

    2017-05-02

    Deterministic and stochastic methods relying on early case incidence data for forecasting epidemic outbreaks have received increasing attention during the last few years. In mathematical terms, epidemic forecasting is an ill-posed problem due to instability of parameter identification and limited available data. While previous studies have largely estimated the time-dependent transmission rate by assuming specific functional forms (e.g., exponential decay) that depend on a few parameters, here we introduce a novel approach for the reconstruction of nonparametric time-dependent transmission rates by projecting onto a finite subspace spanned by Legendre polynomials. This approach enables us to effectively forecast future incidence cases, which is a clear advantage over recovering the transmission rate only at finitely many grid points within the interval where the data are currently available. In our approach, we compare three regularization algorithms: variational (Tikhonov's) regularization, truncated singular value decomposition (TSVD), and modified TSVD in order to determine the stabilizing strategy that is most effective in terms of reliability of forecasting from limited data. We illustrate our methodology using simulated data as well as case incidence data for various epidemics including the 1918 influenza pandemic in San Francisco and the 2014-2015 Ebola epidemic in West Africa.
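
    The core parameterization idea can be sketched in a few lines: represent β(t) by a truncated Legendre series and recover its coefficients with Tikhonov-regularized least squares. Fitting to noisy direct samples of β(t), as below, stands in for the full SEIR inverse problem solved in the paper; the true β(t), the truncation level and the regularization parameter are assumptions.

    ```python
    # Sketch of the key step only: expand a time-dependent transmission rate
    # beta(t) in a truncated Legendre basis on [-1, 1] and recover the
    # coefficients by Tikhonov-regularized least squares.
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(0)

    t = np.linspace(-1, 1, 60)                        # rescaled time of the data window
    beta_true = 0.6 + 0.3 * np.exp(-2.0 * (t + 1))    # assumed declining transmission rate
    beta_obs = beta_true + 0.02 * rng.standard_normal(t.size)

    n_coeff = 5                                       # truncation level of the Legendre basis
    A = legendre.legvander(t, n_coeff - 1)            # design matrix of Legendre polynomials

    alpha = 1e-2                                      # Tikhonov regularization parameter
    coeff = np.linalg.solve(A.T @ A + alpha * np.eye(n_coeff), A.T @ beta_obs)

    beta_fit = legendre.legval(t, coeff)
    print("recovered coefficients:", np.round(coeff, 3))
    print("max abs error:", np.abs(beta_fit - beta_true).max())
    ```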

  6. Uncertainty estimation with bias-correction for flow series based on rating curve

    Science.gov (United States)

    Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta

    2014-03-01

    Streamflow discharge constitutes one of the fundamental data required to perform water balance studies and develop hydrological models. A rating curve, designed based on a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series with a reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available even though it has a significant impact on hydrological modelling. In this paper, we identify the discrepancy of the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction, which also takes the form of a power function like the traditional rating curve. In order to obtain the uncertainty estimation, we propose a further two-sided Box-Cox transformation to bring the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to the gauging stations in the Flinders and Gilbert rivers in north-west Queensland, Australia.
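
    A minimal version of the rating-curve step, assuming a known cease-to-flow stage h0 and working in log space (the λ = 0 special case of the Box-Cox transform), might look like the sketch below; the stage-discharge pairs are illustrative.

    ```python
    # Minimal rating-curve sketch: fit Q = a * (h - h0)^b by least squares in log
    # space, with h0 assumed known. Data are illustrative, not gauging-station records.
    import numpy as np

    h = np.array([0.45, 0.62, 0.80, 1.05, 1.30, 1.62])  # stage (m)
    Q = np.array([1.9, 4.1, 7.6, 14.2, 23.5, 39.0])     # measured discharge (m3/s)
    h0 = 0.20                                           # assumed cease-to-flow stage (m)

    b, log_a = np.polyfit(np.log(h - h0), np.log(Q), 1)
    a = np.exp(log_a)
    print(f"rating curve: Q = {a:.2f} * (h - {h0})^{b:.2f}")

    # residuals in log space; the paper applies a further two-sided Box-Cox
    # transform so that these residuals are closer to Gaussian before the
    # uncertainty ensemble is generated
    resid = np.log(Q) - (log_a + b * np.log(h - h0))
    print("log-residual std:", resid.std(ddof=1))
    ```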

  7. Estimating Perturbation and Meta-Stability in the Daily Attendance Rates of Six Small High Schools

    Science.gov (United States)

    Koopmans, Matthijs

    This paper discusses the daily attendance rates in six small high schools over a ten-year period and evaluates how stable those rates are. “Stability” is approached from two vantage points: pulse models are fitted to estimate the impact of sudden perturbations and their reverberation through the series, and Autoregressive Fractionally Integrated Moving Average (ARFIMA) techniques are used to detect dependencies over the long range of the series. The analyses are meant to (1) exemplify the utility of time series approaches in educational research, which lacks a time series tradition, (2) discuss some time series features that seem to be particular to daily attendance rate trajectories such as the distinct downward pull coming from extreme observations, and (3) present an analytical approach to handle the important yet distinct patterns of variability that can be found in these data. The analysis also illustrates why the assumption of stability that underlies the habitual reporting of weekly, monthly and yearly averages in the educational literature is questionable, as it reveals dynamical processes (perturbation, meta-stability) that remain hidden in such summaries.

  8. Achievable Rates of Secure Transmission in Gaussian MISO Channel with Imperfect Main Channel Estimation

    KAUST Repository

    Zhou, Xinyu

    2016-03-15

    A Gaussian multiple-input single-output (MISO) fading channel is considered. We assume that the transmitter, in addition to the statistics of all channel gains, is aware instantaneously of a noisy version of the channel to the legitimate receiver. On the other hand, the legitimate receiver is aware instantaneously of its channel to the transmitter, whereas the eavesdropper instantaneously knows all channel gains. We evaluate an achievable rate using a Gaussian input without indexing an auxiliary random variable. A sufficient condition for beamforming to be optimal is provided. When the number of transmit antennas is large, beamforming also turns out to be optimal. In this case, the maximum achievable rate can be expressed in a simple closed form and scales with the logarithm of the number of transmit antennas. Furthermore, in the case when a noisy estimate of the eavesdropper’s channel is also available at the transmitter, we introduce the SNR difference and the SNR ratio criteria and derive the related optimal transmission strategies and the corresponding achievable rates.

  9. Simulation for estimation of hydrogen sulfide scavenger injection dose rate for treatment of crude oil

    Directory of Open Access Journals (Sweden)

    T.M. Elshiekh

    2015-12-01

    Full Text Available The presence of hydrogen sulfide in the hydrocarbon fluids is a well known problem in many oil and gas fields. Hydrogen sulfide is an undesirable contaminant which presents many environmental and safety hazards. It is corrosive, malodorous, and toxic. Accordingly, a need has long been felt in the industry to develop a process which can successfully remove hydrogen sulfide from the hydrocarbons or at least reduce its level during the production, storage or processing to a level that satisfies safety and product specification requirements. The common method used to remove or reduce the concentration of hydrogen sulfide in the hydrocarbon production fluids is to inject the hydrogen sulfide scavenger into the hydrocarbon stream. One of the chemicals produced by the Egyptian Petroleum Research Institute (EPRI) is EPRI H2S scavenger. It is used in some of the Egyptian petroleum producing companies. The injection dose rate of H2S scavenger is usually determined by experimental lab tests and field trials. In this work, this injection dose rate is mathematically estimated by modeling and simulation of an oil producing field belonging to Petrobel Company in Egypt which uses EPRI H2S scavenger. Comparison between the calculated and practical values of injection dose rate emphasizes the real ability of the proposed equation.

  10. The estimation of absorbed dose rates for non-human biota : an extended inter-comparison.

    Energy Technology Data Exchange (ETDEWEB)

    Batlle, J. V. I.; Beaugelin-Seiller, K.; Beresford, N. A.; Copplestone, D.; Horyna, J.; Hosseini, A.; Johansen, M.; Kamboj, S.; Keum, D.-K.; Kurosawa, N.; Newsome, L.; Olyslaegers, G.; Vandenhove, H.; Ryufuku, S.; Lynch, S. V.; Wood, M. D.; Yu, C. (Environmental Science Division); (Westlakes Scientific Consulting Ltd.); (Inst. de Radioprotection et de Surete Nucleaire); (Centre for Ecology & Hydrology); (Norwegian Radiation Protection Authority); (State Office for Nuclear Safety); (Korea Atomic Energy Research Institute); (Visible Information Centre Inc.); (Belgian Nuclear Research Centre); (University of Liverpool)

    2011-05-01

    An exercise to compare 10 approaches for the calculation of unweighted whole-body absorbed dose rates was conducted for 74 radionuclides and five of the ICRP's Reference Animals and Plants, or RAPs (duck, frog, flatfish egg, rat and elongated earthworm), selected for this exercise to cover a range of body sizes, dimensions and exposure scenarios. Results were analysed using a non-parametric method requiring no specific hypotheses about the statistical distribution of data. The obtained unweighted absorbed dose rates for internal exposure compare well between the different approaches, with 70% of the results falling within a range of variation of ±20%. The variation is greater for external exposure, although 90% of the estimates are within an order of magnitude of one another. There are some discernible patterns where specific models over- or under-predicted. These are explained based on the methodological differences including number of daughter products included in the calculation of dose rate for a parent nuclide; source-target geometry; databases for discrete energy and yield of radionuclides; rounding errors in integration algorithms; and intrinsic differences in calculation methods. For certain radionuclides, these factors combine to generate systematic variations between approaches. Overall, the technique chosen to interpret the data enabled methodological differences in dosimetry calculations to be quantified and compared, allowing the identification of common issues between different approaches and providing greater assurance on the fundamental dose conversion coefficient approaches used in available models for assessing radiological effects to biota.

  11. Spatially Explicit Estimates of Suspended Sediment and Bedload Transport Rates for Western Oregon and Northwestern California

    Science.gov (United States)

    O'Connor, J. E.; Wise, D. R.; Mangano, J.; Jones, K.

    2015-12-01

    Empirical analyses of suspended sediment and bedload transport give estimates of sediment flux for western Oregon and northwestern California. The estimates of both bedload and suspended load are from regression models relating measured annual sediment yield to geologic, physiographic, and climatic properties of contributing basins. The best models include generalized geology and either slope or precipitation. The best-fit suspended-sediment model is based on basin geology, precipitation, and area of recent wildfire. It explains 65% of the variance for 68 suspended sediment measurement sites within the model area. Predicted suspended sediment yields range from no yield from the High Cascades geologic province to 200 tonnes/km2-yr in the northern Oregon Coast Range and 1000 tonnes/km2-yr in recently burned areas of the northern Klamath terrane. Bed-material yield is similarly estimated from a regression model based on 22 sites of measured bed-material transport, mostly from reservoir accumulation analyses but also from several bedload measurement programs. The resulting best-fit regression is based on basin slope and the presence/absence of the Klamath geologic terrane. For the Klamath terrane, bed-material yield is twice that of the other geologic provinces. This model explains more than 80% of the variance of the better-quality measurements. Predicted bed-material yields range up to 350 tonnes/km2-yr in steep areas of the Klamath terrane. Applying these regressions to small individual watersheds (mean size: 66 km2 for bed-material; 3 km2 for suspended sediment) and cumulating totals down the hydrologic network (but also decreasing the bed-material flux by experimentally determined attrition rates) gives spatially explicit estimates of both bed-material and suspended sediment flux. This enables assessment of several management issues, including the effects of dams on bedload transport, instream gravel mining, habitat formation processes, and water-quality. The

  12. Ceilometer-based Rainfall Rate estimates in the framework of VORTEX-SE campaign: A discussion

    Science.gov (United States)

    Barragan, Ruben; Rocadenbosch, Francesc; Waldinger, Joseph; Frasier, Stephen; Turner, Dave; Dawson, Daniel; Tanamachi, Robin

    2017-04-01

    During Spring 2016 the first season of the Verification of the Origins of Rotation in Tornadoes EXperiment-Southeast (VORTEX-SE) was conducted in the Huntsville, AL environs. Foci of VORTEX-SE include the characterization of the tornadic environments specific to the Southeast US as well as societal response to forecasts and warnings. Among several experiments, a research team from Purdue University and from the University of Massachusetts Amherst deployed a mobile S-band Frequency-Modulated Continuous-Wave (FMCW) radar and a co-located Vaisala CL31 ceilometer for a period of eight weeks near Belle Mina, AL. Portable disdrometers (DSDs) were also deployed in the same area by Purdue University, occasionally co-located with the radar and lidar. The NOAA National Severe Storms Laboratory also deployed the Collaborative Lower Atmosphere Mobile Profiling System (CLAMPS) consisting of a Doppler lidar, a microwave radiometer, and an infrared spectrometer. The purpose of these profiling instruments was to characterize the atmospheric boundary layer evolution over the course of the experiment. In this paper we focus on the lidar-based retrieval of rainfall rate (RR) and its limitations using observations from intensive observation periods during the experiment: 31 March and 29 April 2016. Departing from Lewandowski et al., 2009, the RR was estimated by the Vaisala CL31 ceilometer applying the slope method (Kunz and Leeuw, 1993) to invert the extinction caused by the rain. Extinction retrievals are fitted against RR estimates from the disdrometer in order to derive a correlation model that allows us to estimate the RR from the ceilometer in similar situations without a disdrometer permanently deployed. The problem of extinction retrieval is also studied from the perspective of Klett-Fernald-Sasano's (KFS) lidar inversion algorithm (Klett, 1981; 1985), which requires the assumption of an aerosol extinction-to-backscatter ratio (the so-called lidar ratio) and calibration in a

  13. Magnetopause reconnection rate estimates for Jupiter's magnetosphere based on interplanetary measurements at ~5AU

    Directory of Open Access Journals (Sweden)

    J. D. Nichols

    2006-03-01

    Full Text Available We make the first quantitative estimates of the magnetopause reconnection rate at Jupiter using extended in situ data sets, building on simple order of magnitude estimates made some thirty years ago by Brice and Ioannidis (1970) and Kennel and Coroniti (1975, 1977). The jovian low-latitude magnetopause (open flux production) reconnection voltage is estimated using the Jackman et al. (2004) algorithm, validated at Earth, previously applied to Saturn, and here adapted to Jupiter. The high-latitude (lobe) magnetopause reconnection voltage is similarly calculated using the related Gérard et al. (2005) algorithm, also previously used for Saturn. We employ data from the Ulysses spacecraft obtained during periods when it was located near 5 AU and within 5° of the ecliptic plane (January to June 1992, January to August 1998, and April to October 2004), along with data from the Cassini spacecraft obtained during the Jupiter flyby in 2000/2001. We include the effect of magnetospheric compression through dynamic pressure modulation, and also examine the effect of variations in the direction of Jupiter's magnetic axis throughout the jovian day and year. The intervals of data considered represent different phases in the solar cycle, such that we are also able to examine solar cycle dependency. The overall average low-latitude reconnection voltage is estimated to be ~230 kV, such that the average amount of open flux created over one solar rotation is ~500 GWb. We thus estimate the average time to replenish Jupiter's magnetotail, which contains ~300-500 GWb of open flux, to be ~15-25 days, corresponding to a tail length of ~3.8-6.5 AU. The average high-latitude reconnection voltage is estimated to be ~130 kV, associated with lobe "stirring". Within these averages, however, the estimated voltages undergo considerable variation. Generally, the low-latitude reconnection voltage exhibits a "background" of ~100 kV that is punctuated by one or two significant
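
    The tail-replenishment figure quoted above follows directly from dividing the open flux content by the average reconnection voltage (a voltage being a rate of flux transfer), as the short check below shows.

    ```python
    # Worked check: time to open ~300-500 GWb of flux at an average low-latitude
    # reconnection voltage of ~230 kV (1 V = 1 Wb/s).
    voltage = 230e3                # V, i.e. Wb/s
    for flux in (300e9, 500e9):    # Wb
        t_days = flux / voltage / 86400.0
        print(f"{flux / 1e9:.0f} GWb / 230 kV ≈ {t_days:.0f} days")
    ```

    This reproduces the ~15-25 day replenishment time quoted in the abstract.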

  14. NAIRU estimates in a transitional economy with an extremely high unemployment rate: The case of the Republic of Macedonia

    Directory of Open Access Journals (Sweden)

    Trpeski Predrag

    2015-01-01

    Full Text Available The main goal of the paper is to estimate the NAIRU for the Macedonian economy and to discuss the applicability of this indicator. The paper provides time-varying estimates for the period 1998-2012, which are obtained using the Ball and Mankiw (2002) approach, supplemented with the iterative procedure proposed by Ball (2009). The results reveal that the Macedonian NAIRU has a hump-shaped path. The estimation is based on both the LFS unemployment rate and the LFS unemployment rate corrected for employment in the grey economy. The dynamics of the estimated NAIRU stress the ability of the NAIRU to reflect the cyclical imbalances in a national economy.
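
    The Ball and Mankiw idea can be sketched as follows: from a Phillips curve Δπ_t = -a(u_t - u*_t) + noise, a time-varying NAIRU is read off as a smoothed version of u_t + Δπ_t/a. The slope a and the series below are illustrative assumptions, and the simple HP-filter smoothing stands in for the iterative procedure of Ball (2009).

    ```python
    # Sketch of a Ball-Mankiw-style NAIRU estimate with illustrative data.
    import numpy as np
    from statsmodels.tsa.filters.hp_filter import hpfilter

    rng = np.random.default_rng(1)
    T = 60                                        # quarters
    u = 30 + np.cumsum(rng.normal(0, 0.3, T))     # unemployment rate (%), illustrative
    dpi = rng.normal(0, 0.5, T)                   # change in inflation (p.p.), illustrative

    a = 0.5                                       # assumed Phillips-curve slope
    raw_nairu = u + dpi / a                       # noisy period-by-period NAIRU

    cycle, trend = hpfilter(raw_nairu, lamb=1600) # smooth to get the time-varying NAIRU
    print("estimated NAIRU, last 4 quarters:", np.round(trend[-4:], 2))
    ```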

  15. Variability of glomerular filtration rate estimation equations in elderly Chinese patients with chronic kidney disease

    Directory of Open Access Journals (Sweden)

    Liu X

    2012-10-01

    Full Text Available Xun Liu,1,2,* Mu-hua Cheng,3,* Cheng-gang Shi,1 Cheng Wang,1 Cai-lian Cheng,1 Jin-xia Chen,1 Hua Tang,1 Zhu-jiang Chen,1 Zeng-chun Ye,1 Tan-qi Lou1 1Division of Nephrology, Department of Internal Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; 2College of Biology Engineering, South China University of Technology, Guangzhou, China; 3Department of Nuclear Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; *These authors contributed equally to this paper. Background: Chronic kidney disease (CKD) is recognized worldwide as a public health problem, and its prevalence increases as the population ages. However, the applicability of formulas for estimating the glomerular filtration rate (GFR) based on serum creatinine (SC) levels in elderly Chinese patients with CKD is limited. Materials and methods: Based on values obtained with the technetium-99m diethylenetriaminepentaacetic acid (99mTc-DTPA) renal dynamic imaging method, 319 elderly Chinese patients with CKD were enrolled in this study. Serum creatinine was determined by the enzymatic method. The GFR was estimated using the Cockcroft-Gault (CG) equation, the Modification of Diet in Renal Disease (MDRD) equations, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, the Jelliffe-1973 equation, and the Hull equation. Results: The median of difference ranged from −0.3–4.3 mL/min/1.73 m2. The interquartile range (IQR) of differences ranged from 13.9–17.6 mL/min/1.73 m2. Accuracy with a deviation less than 15% ranged from 27.6%–32.9%. Accuracy with a deviation less than 30% ranged from 53.6%–57.7%. Accuracy with a deviation less than 50% ranged from 74.9%–81.5%. None of the equations had accuracy up to the 70% level with a deviation less than 30% from the standard glomerular filtration rate (sGFR). Bland–Altman analysis demonstrated that the mean difference ranged from −3.0–2.4 mL/min/1.73 m2. However, the
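
    For reference, two of the creatinine-based estimators compared in this study are written out below in their textbook forms (Cockcroft-Gault and the four-variable IDMS-traceable MDRD equation). These are generic implementations, not the exact calibrations used by the authors, and the example inputs are hypothetical.

    ```python
    # Textbook forms of two creatinine-based GFR estimators. Units: age in years,
    # weight in kg, serum creatinine in mg/dL; CG returns mL/min, MDRD returns
    # mL/min/1.73 m2.
    def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
        crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
        return crcl * 0.85 if female else crcl

    def mdrd_4v(age, scr_mg_dl, female, black=False):
        # 4-variable IDMS-traceable MDRD study equation
        gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
        if female:
            gfr *= 0.742
        if black:
            gfr *= 1.212
        return gfr

    # hypothetical elderly patient
    print(round(cockcroft_gault(age=75, weight_kg=60, scr_mg_dl=1.4, female=True), 1))
    print(round(mdrd_4v(age=75, scr_mg_dl=1.4, female=True), 1))
    ```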

  16. Diabatic heating rate estimates from European Centre for Medium-Range Weather Forecasts analyses

    Science.gov (United States)

    Christy, John R.

    1991-01-01

    Vertically integrated diabatic heating rate estimates (H) calculated from 32 months of European Center for Medium-Range Weather Forecasts daily analyses (May 1985-December 1987) are determined as residuals of the thermodynamic equation in pressure coordinates. Values for global, hemispheric, zonal, and grid point H are given as they vary over the time period examined. The distribution of H is compared with previous results and with outgoing longwave radiation (OLR) measurements. The most significant negative correlations between H and OLR occur for (1) tropical and Northern-Hemisphere mid-latitude oceanic areas and (2) zonal and hemispheric mean values for periods less than 90 days. Largest positive correlations are seen in periods greater than 90 days for the Northern Hemispheric mean and continental areas of North Africa, North America, northern Asia, and Antarctica. The physical basis for these relationships is discussed. An interyear comparison between 1986 and 1987 reveals the ENSO signal.

  17. In-Vessel Coil Material Failure Rate Estimates for ITER Design Use

    Energy Technology Data Exchange (ETDEWEB)

    L. C. Cadwallader

    2013-01-01

    The ITER international project design teams are working to produce an engineering design for construction of this large tokamak fusion experiment. One of the design issues is ensuring proper control of the fusion plasma. In-vessel magnet coils may be needed for plasma control, especially the control of edge localized modes (ELMs) and plasma vertical stabilization (VS). These coils will be lifetime components that reside inside the ITER vacuum vessel behind the blanket modules. As such, their reliability is an important design issue since access will be time consuming if any type of repair were necessary. The following chapters give the research results and estimates of failure rates for the coil conductor and jacket materials to be used for the in-vessel coils. Copper and CuCrZr conductors, and stainless steel and Inconel jackets are examined.

  18. Single event upset rate estimates for a 16-K CMOS SRAM

    Science.gov (United States)

    Browning, J. S.; Koga, R.; Kolasinski, W. A.

    1985-12-01

    A radiation-hardened 16-K CMOS SRAM has been developed for satellite and deep space applications. The RAM memory cell was modeled to predict the critical charge, necessary for single-particle upset, as a function of temperature, total dose, and hardening feedback resistance. Laboratory measurements of the single event cross section and effective funnel length were made using the Lawrence Berkeley Laboratory's 88-inch cyclotron to generate high energy krypton ions. The combination of modeled and measured parameters permitted estimation of the upset rate for the RAM cell, and the mean-time-to-failure for a 512-K word, 22-bit memory system employing error detection and correction circuits while functioning in the Adams' '90 percent worst case' cosmic ray environment. This paper is presented in the form of a tutorial review, summarizing the results of substantial research efforts within the single event community.

  19. Variability in Intrahousehold Transmission of Ebola Virus, and Estimation of the Household Secondary Attack Rate.

    Science.gov (United States)

    Glynn, Judith R; Bower, Hilary; Johnson, Sembia; Turay, Cecilia; Sesay, Daniel; Mansaray, Saidu H; Kamara, Osman; Kamara, Alie Joshua; Bangura, Mohammed S; Checchi, Francesco

    2018-01-04

    Transmission between family members accounts for most Ebola virus transmission, but little is known about determinants of intrahousehold spread. From detailed exposure histories, intrahousehold transmission chains were created for 94 households of Ebola survivors in Sierra Leone: 109 (co-)primary cases gave rise to 317 subsequent cases (0-100% of those exposed). Larger households were more likely to have subsequent cases, and the proportion of household members affected depended on individual and household-level factors. More transmissions occurred from older than from younger cases, and from those with more severe disease. The estimated household secondary attack rate was 18%. © The Author(s) 2017. Published by Oxford University Press for the Infectious Diseases Society of America.

  20. Real-time estimation and biofeedback of single-neuron firing rates using local field potentials.

    Science.gov (United States)

    Hall, Thomas M; Nazarpour, Kianoush; Jackson, Andrew

    2014-11-14

    The long-term stability and low-frequency composition of local field potentials (LFPs) offer important advantages for robust and efficient neuroprostheses. However, cortical LFPs recorded by multi-electrode arrays are often assumed to contain only redundant information arising from the activity of large neuronal populations. Here we show that multichannel LFPs in monkey motor cortex each contain a slightly different mixture of distinctive slow potentials that accompany neuronal firing. As a result, the firing rates of individual neurons can be estimated with surprising accuracy. We implemented this method in a real-time biofeedback brain-machine interface, and found that monkeys could learn to modulate the activity of arbitrary neurons using feedback derived solely from LFPs. These findings provide a principled method for monitoring individual neurons without long-term recording of action potentials.

  1. Estimating the rates of deaths by suicide among adults who attempt suicide in the United States.

    Science.gov (United States)

    Han, Beth; Kott, Phillip S; Hughes, Art; McKeon, Richard; Blanco, Carlos; Compton, Wilson M

    2016-06-01

    In 2012, over 1.3 million U.S. adults reported that they attempted suicide in the past year, and 39,426 adults died by suicide. This study estimated national suicide case fatality rates among adult suicide attempters (fatal and nonfatal cases) and examined how they varied by sociodemographic characteristics. We pooled data on deaths by suicide (n = 147,427, fatal cases in the U.S.) from the 2008-2011 U.S. mortality files and data on suicide attempters who survived (n = 2000 nonfatal cases) from the 2008-2012 National Surveys on Drug Use and Health. Descriptive analyses and multivariable logistic regression models were applied. Among adult suicide attempters in the U.S., the overall 12-month suicide case fatality rate was 3.2% (95% confidence interval (CI) = 2.9%-3.5%). It varied significantly by sociodemographic factors. For those aged 45 or older, the adjusted suicide case fatality rate was higher among men (7.6%) than among women (2.6%) (suicide case fatality rate ratio (SCFRR) = 3.0, 95% CI = 1.83-4.79), was higher among non-Hispanic whites (7.9%) than among non-white minorities (0.8-2.5%) (SCFRRs = 3.2-9.9), and was higher among those with less than high school education (16.0%) than among college graduates (1.8%) (SCFRR = 8.8, 95% CI = 3.83-20.16). Across male and female attempters, those aged 45 or older, non-Hispanic white, and with less than a high school education were at higher risk for death by suicide. Focusing on these demographic characteristics can help identify suicide attempters at higher risk for death by suicide, inform clinical assessments, and improve suicide prevention and intervention efforts by increasing high-risk suicide attempters' access to mental health treatment. Published by Elsevier Ltd.

  2. A Model for Estimation of Rain Rate on Tropical Land from TRMM Microwave Imager Radiometer Observations

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Kim, Kyu-Myong

    2004-01-01

    Over the tropical land regions observations of the 85 GHz brightness temperature (T(sub 85v)) made by the TRMM Microwave Imager (TMI) radiometer when analyzed with the help of rain rate (R(sub pR)) deduced from the TRMM Precipitation Radar (PR) indicate that there are two maxima in rain rate. One strong maximum occurs when T(sub 85v) has a value of about 220 K and the other weaker one when T(sub 85v) is much colder, approximately 150 K. Together with the help of earlier studies based on airborne Doppler Radar observations and radiative transfer theoretical simulations, we infer the maximum near 220 K is a result of relatively weak scattering due to supercooled raindrops and water-coated ice hydrometeors associated with a developing thunderstorm (Cb) that has a strong updraft. The other maximum is associated with strong scattering due to ice particles that are formed when the updraft collapses and the rain from the Cb is transitioning from convective type to stratiform type. Incorporating these ideas and with a view to improve the estimation of rain rate from the existing operational method applicable to the tropical land areas, we have developed a rain retrieval model. This model utilizes two parameters that have a horizontal scale of approx. 20 km, deduced from the TMI measurements at 19, 21 and 37 GHz (T(sub 19v), T(sub 21v), T(sub 37v)). The third parameter in the model, namely the horizontal gradient of brightness temperature within the 20 km scale, is deduced from TMI measurements at 85 GHz. Utilizing these parameters our retrieval model is formulated to yield instantaneous rain rate on a scale of 20 km and seasonal average on a mesoscale that agree well with that of the PR.

  3. Analysis of HIV early infant diagnosis data to estimate rates of perinatal HIV transmission in Zambia.

    Science.gov (United States)

    Torpey, Kwasi; Mandala, Justin; Kasonde, Prisca; Bryan-Mofya, Gail; Bweupe, Maximillian; Mukundu, Jonathan; Zimba, Chilunje; Mwale, Catherine; Lumano, Hilary; Welsh, Michael

    2012-01-01

    Mother-to-child transmission of HIV (MTCT) remains the most prevalent source of pediatric HIV infection. Most PMTCT (prevention of mother-to-child transmission of HIV) programs have concentrated monitoring and evaluation efforts on process rather than on outcome indicators. In this paper, we review service data from 28,320 children born to HIV-positive mothers to estimate MTCT rates. This study analyzed DNA PCR results and PMTCT data from perinatally exposed children zero to 12 months of age from five Zambian provinces between September 2007 and July 2010. The majority of children (58.6%) had a PCR test conducted between age six weeks and six months. Exclusive breastfeeding (56.8%) was the most frequent feeding method. An estimated 45.9% of mothers were below 30 years old and 93.3% had disclosed their HIV status. In terms of ARV regimen for PMTCT, 32.7% received AZT+single dose NVP (sdNVP), 30.9% received highly active antiretroviral treatment (HAART), 19.6% received sdNVP only and 12.9% received no ARVs. Transmission rates at six weeks when ARVs were received by both mother and baby, mother only, baby only, and none were 5.8%, 10.5%, 15.8% and 21.8% respectively. Transmission rates at six weeks where mother received HAART, AZT+sd NVP, sdNVP, and no intervention were 4.2%, 6.8%, 8.7% and 20.1% respectively. Based on adjusted analysis including ARV exposures and non ARV-related parameters, lower rates of positive PCR results were associated with 1) both mother and infant receiving prophylaxis, 2) children never breastfed and 3) mother being 30 years old or greater. Overall between September 2007 and July 2010, 12.2% of PCR results were HIV positive. Between September 2007 and January 2009, then between February 2009 and July 2010, proportions of positive PCR results were 15.1% and 11% respectively, a significant difference. The use of ARV drugs reduces vertical transmission of HIV in a program setting. Non-chemoprophylactic factors also play a significant role in

  4. Analysis of HIV early infant diagnosis data to estimate rates of perinatal HIV transmission in Zambia.

    Directory of Open Access Journals (Sweden)

    Kwasi Torpey

    Full Text Available Mother-to-child transmission of HIV (MTCT) remains the most prevalent source of pediatric HIV infection. Most PMTCT (prevention of mother-to-child transmission of HIV) programs have concentrated monitoring and evaluation efforts on process rather than on outcome indicators. In this paper, we review service data from 28,320 children born to HIV-positive mothers to estimate MTCT rates. This study analyzed DNA PCR results and PMTCT data from perinatally exposed children zero to 12 months of age from five Zambian provinces between September 2007 and July 2010. The majority of children (58.6%) had a PCR test conducted between age six weeks and six months. Exclusive breastfeeding (56.8%) was the most frequent feeding method. An estimated 45.9% of mothers were below 30 years old and 93.3% had disclosed their HIV status. In terms of ARV regimen for PMTCT, 32.7% received AZT+single dose NVP (sdNVP), 30.9% received highly active antiretroviral treatment (HAART), 19.6% received sdNVP only and 12.9% received no ARVs. Transmission rates at six weeks when ARVs were received by both mother and baby, mother only, baby only, and none were 5.8%, 10.5%, 15.8% and 21.8% respectively. Transmission rates at six weeks where the mother received HAART, AZT+sdNVP, sdNVP, and no intervention were 4.2%, 6.8%, 8.7% and 20.1% respectively. Based on adjusted analysis including ARV exposures and non ARV-related parameters, lower rates of positive PCR results were associated with 1) both mother and infant receiving prophylaxis, 2) children never breastfed and 3) mother being 30 years old or greater. Overall between September 2007 and July 2010, 12.2% of PCR results were HIV positive. Between September 2007 and January 2009, then between February 2009 and July 2010, proportions of positive PCR results were 15.1% and 11% respectively, a significant difference. The use of ARV drugs reduces vertical transmission of HIV in a program setting. Non-chemoprophylactic factors also play a significant

  5. Cancer incidence rates in Turkey in 2006: a detailed registry based estimation.

    Science.gov (United States)

    Eser, Sultan; Yakut, Cankut; Özdemir, Raziye; Karakilinç, Hülya; Özalan, Saniye; Marshall, Sarah F; Karaoğlanoğlu, Okan; Anbarcioğlu, Zehra; Üçüncü, Nurşen; Akin, Ümit; Özen, Emire; Özgül, Nejat; Anton-Culver, Hoda; Tuncer, Murat

    2010-01-01

    The purpose of this study is to provide a detailed report on cancer incidence in Turkey, a relatively large country with a population of 72 million. We present the estimates of the cancer burden in Turkey for 2006, calculated using data from the eight population based cancer registries which have been set up in selected provinces representative of sociodemographic patterns in their regions. We calculated age specific and age adjusted incidence rates (AAIR-world standard population) for each of the registries separately. We assigned a weighting coefficient for each registry proportional to the population size of the region which the registry represents. We pooled a total of 24,428 cancers (14,581 males, 9,847 females). AAIRs per 100 000 were: 210.1 in men and 129.4 in women for all cancer sites excluding non-melanoma skin cancer. The AAIR per 100 000 men was highest for lung cancer (60.3) followed by prostate (22.8), bladder (19.6), stomach (16.3) and colo-rectal (15.4) cancers. Among women the rate per 100 000 was highest for breast cancer (33.7) followed by colorectal (11.5), stomach (8.8), thyroid (8.8) and lung (7.7). The most striking findings about the cancer incidence in the provinces were the high incidence rates for stomach and esophageal cancers in Erzurum and high stomach cancer incidence rates in Trabzon for both sexes. We are thus able to present the most accurate and realistic estimations for cancer incidence in Turkey so far. Lung, prostate, bladder, stomach, colorectal, larynx cancers in men and breast, colorectal, stomach, thyroid, lung, corpus uteri cancers in women are the leading cancers respectively. This figure shows us that tobacco-related cancers (lung, bladder and larynx) predominate in men. Concurrently, we analyzed the data for each province separately, giving us the opportunity to present the differences in cancer patterns among provinces. The high incidences of stomach and esophageal cancers in East and high incidence of stomach cancer in

  6. Assessment of Estimation Methods For Stage-Discharge Rating Curve in Rippled Bed Rivers

    Directory of Open Access Journals (Sweden)

    P. Maleki

    2016-02-01

    in a flume located at the hydraulic laboratory of Shahrekord University, Iran. Bass (1993) [reported in Joep (1999)] determined an empirical relation between median grain size, D50, and equilibrium ripple length, l: l = 75.4(log D50) + 197 (Eq. 1), where l and D50 are both given in millimeters. Raudkivi (1997) [reported in Joep (1999)] proposed another empirical relation to estimate the ripple length, with D50 given in millimeters: l = 245(D50)^0.35 (Eq. 2). Flemming (1988) [reported in Joep (1999)] derived an empirical relation between mean ripple length and ripple height based on a large dataset: hm = 0.0677 l^0.8098 (Eq. 3), where hm is the mean ripple height (m) and l is the mean ripple length (m). Ikeda and Asaeda (1983) investigated the characteristics of flow over ripples. They found that there are separation areas and vortices in the lee of ripples and that maximum turbulent diffusion occurs in these areas. Materials and Methods: In this research, the effects of two different types of ripples on the hydraulic characteristics of flow were experimentally studied in a flume located at the hydraulic laboratory of Shahrekord University, Iran. The flume is 0.4 m in width and depth and 12 m in length. In total, 48 tests were conducted with slopes varying from 0.0005 to 0.003 and discharges of 10 to 40 L/s. Velocity and shear stress were measured using an Acoustic Doppler Velocimeter (ADV). Two different types of ripples (parallel and flake ripples) were used. The stage-discharge rating curve was then estimated in different ways, such as the Einstein-Barbarossa, Shen, and White et al. methods. Results and Discussion: Statistical methods were used to analyse the test results. The White et al. method gave larger values of α, RMSE, and average absolute error than the other methods. The Einstein method of fitting underestimated the discharge. Evaluation of stage-discharge rating curve methods based on the results obtained from this research showed that the Shen method had the highest accuracy for developing the
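
    A quick numerical check of the three empirical ripple relations quoted above (Eqs. 1-3) for an assumed sand of D50 = 0.4 mm is given below; the base-10 logarithm in Eq. 1 and the use of the Eq. 2 length as the mean length in Eq. 3 are assumptions.

    ```python
    # Evaluate the three empirical ripple relations for an assumed grain size.
    import math

    D50 = 0.4                               # assumed median grain size (mm)

    l_bass = 75.4 * math.log10(D50) + 197   # Eq. (1), Bass (1993); lengths in mm
    l_raudkivi = 245.0 * D50 ** 0.35        # Eq. (2), Raudkivi (1997); lengths in mm

    l_mean_m = l_raudkivi / 1000.0          # treat the Eq. (2) length as the mean length (m)
    h_mean_m = 0.0677 * l_mean_m ** 0.8098  # Eq. (3), Flemming (1988); lengths in m

    print(f"ripple length: {l_bass:.0f} mm (Eq. 1), {l_raudkivi:.0f} mm (Eq. 2)")
    print(f"mean ripple height for l = {l_mean_m:.3f} m: {h_mean_m * 1000:.1f} mm")
    ```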

  7. Development of a beta-trace protein based formula for estimation of glomerular filtration rate.

    Science.gov (United States)

    Benlamri, Amina; Nadarajah, Renisha; Yasin, Abeer; Lepage, Nathalie; Sharma, Ajay P; Filler, Guido

    2010-03-01

    Beta-trace protein (BTP) is a novel marker of glomerular filtration rate (GFR). To date, no pediatric formula for calculating GFR based on BTP has been developed. We measured GFR, serum creatinine and BTP in 387 children who underwent 474 (99m)Tc-diethylene triamine pentaacetic acid renal scans. A BTP-based formula for estimating GFR was derived using stepwise linear regression analysis. A separate control group of 116 measurements in 99 children was used to validate the novel formula. A formula was also developed for each gender. The novel formula is: [formula: see text]. The Spearman rank correlation coefficient between the BTP-derived GFR estimate and the measured GFR was 0.80 [95% confidence interval (CI) 0.76-0.83], which is substantially better than that derived with the Schwartz formula (r = 0.70, 95% CI 0.65-0.74). The Bland-Altman analysis revealed a mean bias of 1.21% [standard deviation (SD) 28%] in the formula development dataset, which was virtually identical to the 1.03% mean bias (29.5% SD) in the validation group and no different from the Schwartz formula bias. The percentage of values within 10% (33.0 vs. 28.3%) and 30% deviation (76.8 vs. 72.6%) were better for BTP-based formula than for the Schwartz formula. Separate formulas according to gender did not perform better than that for the pediatric population. This BTP-based formula was found to estimate GFR with reasonable precision and provided improved accuracy over the Schwartz GFR formula.

  8. Performance of formulae based estimates of glomerular filtration rate for carboplatin dosing in stage 1 seminoma.

    Science.gov (United States)

    Shepherd, Scott T C; Gillen, Gerry; Morrison, Paula; Forte, Carla; Macpherson, Iain R; White, Jeff D; Mark, Patrick B

    2014-03-01

    Single cycle carboplatin, dosed by glomerular filtration rate (GFR), is standard adjuvant therapy for stage 1 seminoma. Accurate measurement of GFR is essential for correct dosing. Isotopic methods remain the gold standard for the determination of GFR. Formulae to estimate GFR have improved the assessment of renal function in non-oncological settings. We assessed the utility of these formulae for carboplatin dosing. We studied consecutive subjects receiving adjuvant carboplatin for stage 1 seminoma at our institution between 2007 and 2012. Subjects underwent 51Cr-ethylene diamine tetra-acetic acid (EDTA) measurement of GFR with carboplatin dose calculated using the Calvert formula. Theoretical carboplatin doses were calculated from estimated GFR using Chronic Kidney Disease-Epidemiology (CKD-EPI), Modification of Diet in Renal Disease (MDRD) and Cockcroft-Gault (CG) formulae with additional correction for actual body surface area (BSA). Carboplatin doses calculated by formulae were compared with the dose calculated from isotopic GFR to determine which formula had the greatest accuracy. The CKD-EPI formula, corrected for actual BSA, performed best; 45.9% of patients received within 10% of the correct carboplatin dose. Patients predicted as underdosed (13.5%) by CKD-EPI were more likely to be obese (p=0.013); there were no predictors of the 40.5% receiving an excess dose. Our data support further evaluation of the CKD-EPI formula in this patient population but clinically significant variances in carboplatin dosing occur using non-isotopic methods of GFR estimation. Isotopic determination of GFR should remain the recommended standard for carboplatin dosing when accuracy is essential. Copyright © 2014 Elsevier Ltd. All rights reserved.
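
    The Calvert formula mentioned above has the simple closed form dose (mg) = target AUC × (GFR + 25); the sketch below applies it for a few GFR values. The target AUC of 7 commonly quoted for single-cycle adjuvant carboplatin in stage 1 seminoma is stated here as an assumption, not taken from the paper.

    ```python
    # Calvert formula: carboplatin dose (mg) = target AUC (mg/mL*min) x (GFR + 25 mL/min).
    def calvert_dose(gfr_ml_min, target_auc=7.0):
        return target_auc * (gfr_ml_min + 25.0)

    for gfr in (70.0, 90.0, 110.0):
        print(f"GFR {gfr:.0f} mL/min -> carboplatin {calvert_dose(gfr):.0f} mg")
    ```

    Because the dose scales linearly with GFR + 25, a 10-20% error in the GFR estimate translates directly into a clinically meaningful dosing error, which is the point the abstract makes about non-isotopic methods.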

  9. Improving precision of glomerular filtration rate estimating model by ensemble learning.

    Science.gov (United States)

    Liu, Xun; Li, Ningshan; Lv, Linsheng; Fu, Yongmei; Cheng, Cailian; Wang, Caixia; Ye, Yuqiu; Li, Shaomin; Lou, Tanqi

    2017-11-09

    Accurate assessment of kidney function is clinically important, but estimates of glomerular filtration rate (GFR) by regression are imprecise. We hypothesized that ensemble learning could improve precision. A total of 1419 participants were enrolled, with 1002 in the development dataset and 417 in the external validation dataset. GFR was independently estimated from age, sex and serum creatinine using an artificial neural network (ANN), support vector machine (SVM), regression, and ensemble learning. GFR was measured by 99mTc-DTPA renal dynamic imaging calibrated with dual plasma sample 99mTc-DTPA GFR. Mean measured GFRs were 70.0 ml/min/1.73 m2 in the developmental and 53.4 ml/min/1.73 m2 in the external validation cohorts. In the external validation cohort, precision was better in the ensemble model of the ANN, SVM and regression equation (IQR = 13.5 ml/min/1.73 m2) than in the new regression model (IQR = 14.0 ml/min/1.73 m2). Precision with ensemble learning was the best of the three models, but the models had similar bias and accuracy. The median difference ranged from 2.3 to 3.7 ml/min/1.73 m2, 30% accuracy ranged from 73.1 to 76.0%, and P was > 0.05 for all comparisons of the new regression equation and the other new models. An ensemble learning model including three variables, the average ANN, SVM, and regression equation values, was more precise than the new regression model. A more complex ensemble learning strategy may further improve GFR estimates.
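
    A minimal sketch of the averaging ensemble described above is given below: three regressors trained on (age, sex, creatinine) whose predictions are averaged. The synthetic data and model settings are assumptions that stand in for the measured 99mTc-DTPA GFR values and the authors' tuned models.

    ```python
    # Sketch of a simple averaging ensemble of a neural network, an SVM and a
    # linear regression for GFR estimation. Data are synthetic and illustrative.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(7)
    n = 500
    age = rng.uniform(20, 85, n)
    sex = rng.integers(0, 2, n)                       # 0 = female, 1 = male
    scr = rng.uniform(0.6, 4.0, n)                    # serum creatinine (mg/dL)
    gfr = 140 - 0.6 * age + 8 * sex - 25 * np.log(scr) + rng.normal(0, 8, n)

    X = np.column_stack([age, sex, scr])
    models = [
        LinearRegression(),
        make_pipeline(StandardScaler(), SVR(C=10.0)),
        make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
    ]
    preds = [m.fit(X, gfr).predict(X) for m in models]
    ensemble = np.mean(preds, axis=0)                 # simple averaging ensemble

    print("median abs error, ensemble:", round(np.median(np.abs(ensemble - gfr)), 2))
    ```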

  10. Estimation of methane emission rate changes using age-defined waste in a landfill site.

    Science.gov (United States)

    Ishii, Kazuei; Furuichi, Toru

    2013-09-01

    Long term methane emissions from landfill sites are often predicted by first-order decay (FOD) models, in which the default coefficients of the methane generation potential and the methane generation rate given by the Intergovernmental Panel on Climate Change (IPCC) are usually used. However, previous studies have demonstrated the large uncertainty in these coefficients because they are derived from a calibration procedure under ideal steady-state conditions, not actual landfill site conditions. In this study, the coefficients in the FOD model were estimated by a new approach to predict more precise long term methane generation by considering region-specific conditions. In the new approach, age-defined waste samples, which had been under the actual landfill site conditions, were collected in Hokkaido, Japan (in cold region), and the time series data on the age-defined waste sample's methane generation potential was used to estimate the coefficients in the FOD model. The degradation coefficients were 0.0501/y and 0.0621/y for paper and food waste, and the methane generation potentials were 214.4 mL/g-wet waste and 126.7 mL/g-wet waste for paper and food waste, respectively. These coefficients were compared with the default coefficients given by the IPCC. Although the degradation coefficient for food waste was smaller than the default value, the other coefficients were within the range of the default coefficients. With these new coefficients to calculate methane generation, the long term methane emissions from the landfill site was estimated at 1.35×10^4 m^3 CH4, which corresponds to approximately 2.53% of the total carbon dioxide emissions in the city (5.34×10^5 t-CO2/y). Copyright © 2013 Elsevier Ltd. All rights reserved.
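
    Using the coefficients estimated above for paper waste, the first-order decay calculation can be sketched as follows; the tonnage of waste and the single-batch simplification are assumptions.

    ```python
    # First-order decay (FOD) sketch with the paper-waste coefficients quoted above:
    # annual methane from a batch of waste landfilled at t = 0 is roughly
    # L0 * W * k * exp(-k * t).
    import math

    k = 0.0501           # decay rate estimated for paper waste (1/yr)
    L0 = 214.4           # methane generation potential for paper waste (mL CH4 / g wet waste)
    W_g = 1_000 * 1e6    # assumed batch: 1,000 tonnes of paper waste, in grams

    for t in (1, 5, 10, 20):
        gen_m3 = L0 * W_g * k * math.exp(-k * t) / 1e6   # mL -> m3
        print(f"year {t:2d}: {gen_m3:,.0f} m3 CH4 generated")
    ```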

  11. Using r-process enhanced galaxies to estimate the neutron star merger rate at high redshift

    Science.gov (United States)

    Roederer, Ian

    2018-01-01

    The rapid neutron-capture process, or r-process, is one of the fundamental ways that stars produce heavy elements. I describe a new approach that uses the existence of r-process enhanced galaxies, like the recently discovered ultra-faint dwarf galaxy Reticulum II, to derive a rate for neutron star mergers at high redshift. This method relies on three assertions. First, several lines of reasoning point to neutron star mergers as a rare yet prolific producer of r-process elements, and one merger event is capable of enriching most of the stars in a low-mass dwarf galaxy. Second, the Local Group is cosmologically representative of the halo mass function at the mass scales of low-luminosity dwarf galaxies, and the volume that their progenitors spanned at high redshifts can be estimated from simulations. Third, many of these dwarf galaxies are extremely old, and the metals found in their stars today date from the earliest times at high redshift. These galaxies occupy a quantifiable volume of the Universe, from which the frequency of r-process enhanced galaxies can be estimated. This frequency may be interpreted as a lower limit on the neutron star merger rate at a redshift (z ~ 5-10) that is much higher than is accessible to gravitational wave observatories. I will present a proof-of-concept demonstration using medium-resolution multi-object spectroscopy from the Michigan/Magellan Fiber System (M2FS) to recover the known r-process galaxy Reticulum II, and I will discuss future plans to apply this method to other Local Group dwarf galaxies.

  12. A single sample method for estimating glomerular filtration rate in cats.

    Science.gov (United States)

    Finch, N C; Heiene, R; Elliott, J; Syme, H M; Peters, A M

    2013-01-01

    Validated methods of estimating glomerular filtration rate (GFR) in cats requiring only a limited number of samples are desirable. To test a single sample method of determining GFR in cats. The validation population (group 1) consisted of 89 client-owned cats (73 nonazotemic and 16 azotemic). A separate population of 18 healthy nonazotemic cats (group 2) was used to test the methods. Glomerular filtration rate was determined in group 1 using corrected slope-intercept iohexol clearance. Single sample clearance was determined using the Jacobsson and modified Jacobsson methods and validated against slope-intercept clearance. Extracellular fluid volume (ECFV) was determined from slope-intercept clearance with correction for the 1-compartment assumption and by deriving a prediction formula for ECFV (ECFVpredicted) based on body weight. The optimal single sample method was tested in group 2. A blood sample at 180 minutes and ECFVpredicted were optimal for single sample clearance. Mean ± SD GFR in group 1 determined using the Jacobsson and modified Jacobsson formulae was 1.78 ± 0.70 and 1.65 ± 0.60 mL/min/kg, respectively. When tested in group 2, the Jacobsson method overestimated multisample clearance. The modified Jacobsson method (mean ± SD 2.22 ± 0.34 mL/min/kg) was in agreement with multisample clearance (mean ± SD 2.19 ± 0.34 mL/min/kg). The modified Jacobsson method provides accurate estimation of iohexol clearance in cats, from a single sample collected at 180 minutes postinjection and using a formula based on body weight to predict ECFV. Further validation of the formula in patients with very high or very low GFR is required. Copyright © 2013 by the American College of Veterinary Internal Medicine.

  13. Multisensor data fusion for enhanced respiratory rate estimation in thermal videos.

    Science.gov (United States)

    Pereira, Carina B; Xinchi Yu; Blazek, Vladimir; Venema, Boudewijn; Leonhardt, Steffen

    2016-08-01

    Scientific studies have demonstrated that an atypical respiratory rate (RR) is frequently one of the earliest and major indicators of physiological distress. However, it is also described in the literature as "the neglected vital parameter", mainly due to shortcomings of clinically available monitoring techniques, which require attachment of sensors to the patient's body. The current paper introduces a novel approach that uses multisensor data fusion for an enhanced RR estimation in thermal videos. It considers not only the temperature variation around the nostrils and mouth, but also the upward and downward movement of both shoulders. In order to analyze the performance of our approach, two experiments were carried out on five healthy candidates. While during phase A the subjects breathed normally, during phase B they simulated different breathing patterns. Thoracic effort was the gold standard elected to validate our algorithm. Our results show an excellent agreement between infrared thermography (IRT) and ground truth. While in phase A a mean correlation of 0.983 and a root-mean-square error of 0.240 bpm (breaths per minute) were obtained, in phase B they hovered around 0.995 and 0.890 bpm, respectively. In sum, IRT may be a promising clinical alternative to conventional sensors. Additionally, multisensor data fusion contributes to an enhancement of RR estimation and robustness.

  14. Time to change the glomerular filtration rate estimating formula in primary care?

    Science.gov (United States)

    Korhonen, Päivi E; Kautiainen, Hannu; Järvenpää, Salme; Kivelä, Sirkka-Liisa

    2012-06-01

    The most commonly used equation for estimated glomerular filtration rate (eGFR) is nowadays the four-variable Modification of Diet in Renal Disease (MDRD) equation. This formula was derived from patients with non-diabetic chronic kidney disease (CKD) with a mean GFR of 40 ml/min. We compared the MDRD study equation and the recently developed Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation by applying the two formulas in 1747 middle-aged cardiovascular risk persons in primary care. The prevalence of renal insufficiency, defined as eGFR below 60 ml/min/1.73 m2, was 3.6% (95% CI 2.8-4.6) according to the CKD-EPI formula and higher according to the MDRD formula. The subjects who were classified as having CKD according to the MDRD equation, but not according to the CKD-EPI formula, were mostly women (86%) and slightly younger than the subjects having CKD according to both formulas. The characteristics of the subjects commonly treated in primary care resemble more closely the population from which the CKD-EPI equation was derived than that of the MDRD study equation. Thus, we suppose that in general practice, the CKD-EPI equation is more suitable for estimating renal function than the MDRD equation. Copyright © 2012 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
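    For readers unfamiliar with the two formulas, the sketch below evaluates them with their widely published coefficients (the IDMS-traceable four-variable MDRD equation and the 2009 CKD-EPI creatinine equation); it is an illustration of the comparison, not the study's analysis code.

      # eGFR in mL/min/1.73 m2; serum creatinine (scr) in mg/dL, age in years.
      def egfr_mdrd4(scr, age, female, black=False):
          egfr = 175.0 * scr ** -1.154 * age ** -0.203
          if female:
              egfr *= 0.742
          if black:
              egfr *= 1.212
          return egfr

      def egfr_ckd_epi_2009(scr, age, female, black=False):
          kappa = 0.7 if female else 0.9
          alpha = -0.329 if female else -0.411
          egfr = (141.0 * min(scr / kappa, 1.0) ** alpha
                  * max(scr / kappa, 1.0) ** -1.209 * 0.993 ** age)
          if female:
              egfr *= 1.018
          if black:
              egfr *= 1.159
          return egfr

      # Example: a 60-year-old woman with creatinine 0.9 mg/dL sits near the
      # 60 mL/min/1.73 m2 threshold with MDRD but clearly above it with CKD-EPI.
      for f in (egfr_mdrd4, egfr_ckd_epi_2009):
          print(f.__name__, round(f(0.9, 60, female=True), 1))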

  15. Estimated glomerular filtration rate and albuminuria in Korean population evaluated for cardiovascular risk.

    Science.gov (United States)

    Lee, Kayoung; Kim, Jinseung

    2016-05-01

    This study's purpose was to examine established cardiovascular risk prediction model scores for their associations with albuminuria and estimated glomerular filtration rate (eGFR) in a Korean population. We calculated the 10-year atherosclerotic cardiovascular disease (ASCVD) risk score, the Korean coronary heart disease risk prediction score (KRS), and the Adult Treatment Panel (ATP) III risk score for 9733 South Koreans, aged 40-79 years, who were not diagnosed with stroke, angina pectoris, or myocardial ischemia, using data from the 2011-2013 Korea National Health and Nutrition Examination Survey. The associations between cardiovascular risk model scores and the urine albumin-to-creatinine ratio (UACR) and eGFR tended to be stronger for the ASCVD risk score than for the other risk scores. The area under the receiver operating characteristic curve for increased albuminuria (UACR ≥ 30 mg/g) and for decreased eGFR also tended to be greater for the ASCVD risk score, except for albuminuria in women. The ASCVD risk score had a stronger relationship with and better predicted albuminuria and eGFR than did the KRS and ATP III risk score.

  16. Simple estimate of entrainment rate of pollutants from a coastal discharge into the surf zone.

    Science.gov (United States)

    Wong, Simon H C; Monismith, Stephen G; Boehm, Alexandria B

    2013-10-15

    Microbial pollutants from coastal discharges can increase illness risks for swimmers and cause beach advisories. There is presently no predictive model for estimating the entrainment of pollution from coastal discharges into the surf zone. We present a novel, quantitative framework for estimating surf zone entrainment of pollution at a wave-dominant open beach. Using physical arguments, we identify a dimensionless parameter equal to the quotient of the surf zone width l_sz and the cross-flow length scale of the discharge l_a = M_j^(1/2)/U_sz, where M_j is the discharge's momentum flux and U_sz is a representative alongshore velocity in the surf zone. We conducted numerical modeling of a nonbuoyant discharge at an alongshore uniform beach with constant slope using a wave-resolving hydrodynamic model. Using results from 144 numerical experiments we develop an empirical relationship between the surf zone entrainment rate α and l_sz/l_a. The empirical relationship can reasonably explain seven measurements of surf zone entrainment at three diverse coastal discharges. This predictive relationship can be a useful tool in coastal water quality management and can be used to develop predictive beach water quality models.
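    A minimal sketch of the dimensionless parameter defined above; the empirical entrainment relationship fitted to the 144 numerical experiments is not reproduced, and the example values are placeholders.

      # l_a = sqrt(M_j) / U_sz, with M_j the (kinematic) momentum flux in m^4/s^2
      # and U_sz the alongshore surf zone velocity in m/s, so l_a is in metres.
      import math

      def cross_flow_length_scale(momentum_flux_m4s2, alongshore_speed_ms):
          return math.sqrt(momentum_flux_m4s2) / alongshore_speed_ms

      def dimensionless_parameter(surf_zone_width_m, momentum_flux_m4s2, alongshore_speed_ms):
          # Quotient l_sz / l_a that the entrainment rate is regressed against.
          return surf_zone_width_m / cross_flow_length_scale(momentum_flux_m4s2,
                                                             alongshore_speed_ms)

      print(dimensionless_parameter(100.0, 4.0, 0.2))   # example values only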

  17. Estimating methane production rates in bogs and landfills by deuterium enrichment of pore water

    Science.gov (United States)

    Siegel, D.I.; Chanton, J.P.; Glaser, P.H.; Chasar, L.S.; Rosenberry, D.O.

    2001-01-01

    Raised bogs and municipal waste landfills harbor large populations of methanogens within their domed deposits of anoxic organic matter. Although the methane emissions from these sites have been estimated by various methods, limited data exist on the activity of the methanogens at depth. We therefore analyzed the stable isotopic signature of the pore waters in two raised bogs from northern Minnesota to identify depth intervals in the peat profile where methanogenic metabolism occurs. Methanogenesis enriched the deuterium (2H) content of the deep peat pore waters by as much as +11‰ (Vienna Standard Mean Ocean Water), which compares to a much greater enrichment of +70‰ in leachate from New York City's Fresh Kills landfill. The bog pore waters were isotopically dated by tritium (3H) to be about 35 years old at 1.5 m depth, whereas the landfill leachate was estimated as ~17 years old from Darcy flow calculations. According to an isotopic mass balance, the observed deuterium enrichment indicates that about 1.2 g CH4 m-3 d-1 were produced within the deeper peat, compared to about 2.8 g CH4 m-3 d-1 in the landfill. The values for methane production in the bog peat are substantially higher than the flux rates measured at the surface of the bogs or at the landfill, indicating that deeper methane production may be much higher than was previously assumed.

  18. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    Science.gov (United States)

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
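    As a hedged illustration of the factored-versus-unfactored comparison, the sketch below fits both Poisson regression forms to simulated follow-up data with statsmodels; the simulated dose-response is arbitrary and not one of the study's 37,500 population model forms.

      # Fit the same Poisson rate model with the exposure entered as a linear
      # trend ("unfactored") and as indicator variables ("factored").
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 5000
      dose = rng.integers(0, 4, size=n)                 # ordered exposure: 0..3
      py = rng.uniform(1.0, 5.0, size=n)                # person-years at risk
      true_rate = 0.02 * np.array([1.0, 1.3, 2.5, 2.7])[dose]   # non-log-linear truth
      events = rng.poisson(true_rate * py)
      df = pd.DataFrame({"events": events, "dose": dose, "py": py})

      unfactored = smf.glm("events ~ dose", data=df, family=sm.families.Poisson(),
                           offset=np.log(df["py"])).fit()
      factored = smf.glm("events ~ C(dose)", data=df, family=sm.families.Poisson(),
                         offset=np.log(df["py"])).fit()
      print(unfactored.params)
      print(factored.params)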

  19. Using a Single Blood Sample and Inulin to Estimate Glomerular Filtration Rate in Rabbits

    Science.gov (United States)

    Michigoshi, Yuuki; Yamagishi, Norio; Satoh, Hiroshi; Kato, Masaki; Furuhama, Kazuhisa

    2011-01-01

    To establish a simple procedure for estimating the glomerular filtration rate (GFR) in conscious rabbits, we used the conventional multisample approach to develop a single-blood-sample method. A bolus injection of inulin was administered intravenously at a dose of 40 mg/kg to male New Zealand White rabbits, and blood was collected 30, 60, 90, and 120 min later. Serum inulin, urea nitrogen, and creatinine concentrations were determined. Using this multi-sample method, the reference GFR in clinically healthy rabbits was 4.01 ± 0.17 mL/min/kg (n = 17). In rabbits given an intravenous injection of the antitumor agent cisplatin, GFR fell before serum urea nitrogen and creatinine concentrations increased. Based on cumulative GFR data from healthy and nephropathy rabbits, the GFR obtained from the 3-sample method (30-, 60-, and 90-min samples) was closely correlated (r = 0.99) with that calculated from the estimated distribution volume and serum inulin concentration at 90 min after inulin injection in the single-blood-sample method. These results demonstrate that the single-blood-sample method supports sequential GFR measurements in rabbits and is a versatile procedure not only for research purposes but also in clinical settings. PMID:22330718

  20. Fluid Simulation of relativistic electron beam driven wakefield in the blowout regime

    Science.gov (United States)

    Bera, Ratan Kumar; Das, Amita; Sengupta, Sudip

    2017-10-01

    Two-dimensional fluid simulations are employed to study the wakefield driven by an electron beam in a plasma medium. The 1-D results are recovered when the transverse extent of the beam is chosen to be much longer than its longitudinal extent. Furthermore, it is shown that the blowout structure matches closely with the PIC observations before phase mixing. A close comparison of the fluid observations with the analytical modeling by Lu et al. and with PIC observations is provided. It is thus interesting to note that a simplified fluid simulation adequately represents the form of the wake potential obtained by sophisticated PIC studies. We also address issues related to particle acceleration in such a potential structure by studying the evolution of injected test particles. A maximum energy gain of 2.8 GeV by electrons from the back of a driver beam of energy 28.5 GeV in a 10 cm long plasma is shown to be achieved, in conformity with the experimental result of Ref. We observe that the maximum energy gain can be doubled to 5.7 GeV when the bunch of test particles is placed near the axial edge of the first blowout structure.

  1. On a Solar Blowout Jet: Driving Mechanism and the Formation of Cool and Hot Components

    Science.gov (United States)

    Shen, Yuandeng; Liu, Ying D.; Su, Jiangtao; Qu, Zhining; Tian, Zhanjun

    2017-12-01

    We present observations of a blowout jet that experienced two distinct ejection stages. The first stage started from the emergence of a small positive magnetic polarity, which was cancelled by the nearby negative magnetic field and caused the rising of a mini-filament and its confining loops. This further resulted in a small jet due to the magnetic reconnection between the rising confining loops and the overlying open field. The second ejection stage was mainly due to successive removal of the confining field by reconnection: the filament erupted, and the erupting cool filament material directly combined with the hot jet that originated from the reconnection region and therefore formed the cool and hot components of the blowout jet. During the two ejection stages, cool Hα jets are also observed cospatial with their coronal counterparts, but their appearance times are earlier by a few minutes than those of the hot coronal jets. The hot coronal jets are therefore possibly caused by the heating of the cool Hα jets or the rising of the reconnection height from the chromosphere to the corona. The scenario that magnetic reconnection occurred between the confining loops and the overlying open loops is supported by many observational facts, including the bright patches on both sides of the mini-filament, hot plasma blobs along the jet body, and periodic metric radio type III bursts at the very beginnings of the two stages. The evolution and characteristics of these features show the detailed nonlinear process in magnetic reconnection.

  2. Blowout Surge due to Interaction between a Solar Filament and Coronal Loops

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haidong; Jiang, Yunchun; Yang, Jiayan; Yang, Bo; Xu, Zhe; Bi, Yi; Hong, Junchao; Chen, Hechao [Yunnan Observatories, Chinese Academy of Sciences, 396 Yangfangwang, Guandu District, Kunming, 650216 (China); Qu, Zhining, E-mail: lhd@ynao.ac.cn [Department of Physics, School of Science, Sichuan University of Science and Engineering, Zigong 643000 (China)

    2017-06-20

    We present an observation of the interaction between a filament and the outer spine-like loops that produces a blowout surge within one footpoint of large-scale coronal loops on 2015 February 6. Based on the observations in AIA 304 and 94 Å, the activated filament is initially embedded below a dome of a fan-spine configuration. Due to its ascending motion, the erupting filament reconnects with the outer spine-like field. We note that the material in the filament blows out along the outer spine-like field to form the surge with a wider spire, and a two-ribbon flare appears at the site of the filament eruption. In this process, small bright blobs appear at the interaction region and stream up along the outer spine-like field and down along the eastern fan-like field. As a result, one leg of the filament becomes radial and the material in it erupts, while the other leg forms new closed loops. Our results confirm that the successive reconnection occurring between the erupting filament and the coronal loops may lead to a strong thermal/magnetic pressure imbalance, resulting in a blowout surge.

  3. Methodology for earthquake rupture rate estimates of fault networks: example for the western Corinth rift, Greece

    Science.gov (United States)

    Chartier, Thomas; Scotti, Oona; Lyon-Caen, Hélène; Boiselet, Aurélien

    2017-10-01

    Modeling the seismic potential of active faults is a fundamental step of probabilistic seismic hazard assessment (PSHA). An accurate estimation of the rate of earthquakes on the faults is necessary in order to obtain the probability of exceedance of a given ground motion. Most PSHA studies consider faults as independent structures and neglect the possibility of multiple faults or fault segments rupturing simultaneously (fault-to-fault, FtF, ruptures). The Uniform California Earthquake Rupture Forecast version 3 (UCERF-3) model takes this possibility into account by considering a system-level approach rather than an individual-fault-level approach, using geological, seismological and geodetic information to invert the earthquake rates. In many places of the world, seismological and geodetic information along fault networks is often not well constrained. There is therefore a need for a methodology relying on geological information alone to compute earthquake rates of the faults in the network. In the proposed methodology, a simple distance criterion is used to define FtF ruptures, and single faults or FtF ruptures are considered as an aleatory uncertainty, similarly to UCERF-3. Rates of earthquakes on faults are then computed following two constraints: the magnitude frequency distribution (MFD) of earthquakes in the fault system as a whole must follow an a priori chosen shape, and the rate of earthquakes on each fault is determined by the specific slip rate of each segment depending on the possible FtF ruptures. The modeled earthquake rates are then compared to the available independent data (geodetic, seismological and paleoseismological data) in order to weight the different hypotheses explored in a logic tree. The methodology is tested on the western Corinth rift (WCR), Greece, where recent advancements have been made in the understanding of the geological slip rates of the complex network of normal faults which are accommodating the ~15 mm yr-1 north-south extension.
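    A highly simplified sketch of the slip-rate/MFD constraint described above: candidate rupture rates are shaped like a Gutenberg-Richter distribution and scaled so that their summed moment rate matches the budget implied by the fault slip rates. The fault geometries, b-value and magnitudes are placeholders, and this is not the paper's full FtF methodology.

      import numpy as np

      MU = 3.0e10  # shear modulus, Pa

      def moment_rate_budget(fault_areas_m2, slip_rates_m_per_yr):
          """Total seismic moment rate (N*m/yr) available from the fault network."""
          return MU * np.sum(np.asarray(fault_areas_m2) * np.asarray(slip_rates_m_per_yr))

      def gr_rupture_rates(magnitudes, b_value, budget_nm_per_yr):
          """Annual rates for candidate ruptures, shaped like Gutenberg-Richter and
          scaled so that the summed moment rate equals the slip-rate budget."""
          mags = np.asarray(magnitudes)
          shape = 10.0 ** (-b_value * mags)        # relative G-R occurrence rates
          moments = 10.0 ** (1.5 * mags + 9.1)     # Mw -> seismic moment (N*m)
          scale = budget_nm_per_yr / np.sum(shape * moments)
          return scale * shape

      budget = moment_rate_budget([8e8, 5e8], [0.004, 0.003])   # placeholder faults
      print(gr_rupture_rates([6.0, 6.4, 6.8], b_value=1.0, budget_nm_per_yr=budget))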

  4. [Assessment of the new CKD-EPI equation to estimate the glomerular filtration rate].

    Science.gov (United States)

    Montañés Bermúdez, R; Bover Sanjuán, J; Oliver Samper, A; Ballarín Castán, J A; Gràcia García, S

    2010-01-01

    A recent report by the CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) group describes a new equation to estimate the glomerular filtration rate (GFR). This equation has been developed from a population of 8,254 subjects who had the GFR measured by iothalamate clearance (mean 68 mL/min/1.73 m2, SD 40 mL/min/1.73 m2). It includes serum creatinine, age, sex and race as variables, with different formulas according to race, sex and creatinine value. The CKD-EPI equation improved the accuracy and precision of the current first-choice MDRD-IDMS (Modification of Diet in Renal Disease-Isotopic Dilution Mass Spectrometry) formula, especially for GFR > 60 mL/min/1.73 m2, in a group of 3,896 subjects. The goal of our study was to compare the estimated GFR obtained with the new CKD-EPI equation and with MDRD-IDMS in a wide cohort of 14,427 patients (5,234 women and 9,193 men), and to analyze the impact of the new CKD-EPI formula on the staging of patients with CKD. Mean estimated GFR was 0.6 mL/min/1.73 m2 higher with CKD-EPI as compared to MDRD-IDMS for the whole group, 1.9 mL/min/1.73 m2 higher for women and 0.2 mL/min/1.73 m2 lower for men. The percentage of CKD staging concordance between equations varied from 79.4% for stage 3A to 98.6% for stage 5. For those patients younger than 70 years, 18.9% and 24% of MDRD-IDMS stages 3B and 3A were reclassified as CKD 3A and 2 by CKD-EPI, respectively; for the same stages in the group younger than 70 years, the percentage of reclassified patients increased up to 34.4% and 33.4%, respectively. The new CKD-EPI equation to estimate the GFR reclassifies an important number of patients to higher CKD stages (higher GFR), especially younger women, classified as CKD stage 3 by MDRD-IDMS.

  5. Estimation of geopotential from satellite-to-satellite range rate data: Numerical results

    Science.gov (United States)

    Thobe, Glenn E.; Bose, Sam C.

    1987-01-01

    A technique for high-resolution geopotential field estimation by recovering the harmonic coefficients from satellite-to-satellite range rate data is presented and tested against both a controlled analytical simulation of a one-day satellite mission (maximum degree and order 8) and then against a Cowell method simulation of a 32-day mission (maximum degree and order 180). Innovations include: (1) a new frequency-domain observation equation based on kinetic energy perturbations which avoids much of the complication of the usual Keplerian element perturbation approaches; (2) a new method for computing the normalized inclination functions which unlike previous methods is both efficient and numerically stable even for large harmonic degrees and orders; (3) the application of a mass storage FFT to the entire mission range rate history; (4) the exploitation of newly discovered symmetries in the block diagonal observation matrix which reduce each block to the product of (a) a real diagonal matrix factor, (b) a real trapezoidal factor with half the number of rows as before, and (c) a complex diagonal factor; (5) a block-by-block least-squares solution of the observation equation by means of a custom-designed Givens orthogonal rotation method which is both numerically stable and tailored to the trapezoidal matrix structure for fast execution.
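    The sketch below shows a generic Givens-rotation least-squares solve, the kind of numerically stable row-by-row orthogonal elimination mentioned in item (5); it does not reproduce the trapezoidal block structure or complex diagonal factors specific to the mission geometry.

      # Row-by-row Givens elimination of [A | b], followed by back substitution.
      import numpy as np

      def givens_lstsq(A, b):
          R = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
          m, n1 = R.shape
          n = n1 - 1
          for j in range(n):                      # eliminate below the diagonal
              for i in range(m - 1, j, -1):
                  if R[i, j] != 0.0:
                      r = np.hypot(R[j, j], R[i, j])
                      c, s = R[j, j] / r, R[i, j] / r
                      rows = R[[j, i], :]
                      R[j, :] = c * rows[0] + s * rows[1]
                      R[i, :] = -s * rows[0] + c * rows[1]
          x = np.zeros(n)                         # back substitution
          for k in range(n - 1, -1, -1):
              x[k] = (R[k, n] - R[k, k + 1:n] @ x[k + 1:]) / R[k, k]
          return x

      A = np.random.default_rng(1).normal(size=(6, 3))
      b = A @ np.array([1.0, -2.0, 0.5])
      print(givens_lstsq(A, b))                   # should recover [1, -2, 0.5]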

  6. Kerfdr: a semi-parametric kernel-based approach to local false discovery rate estimation

    Directory of Open Access Journals (Sweden)

    Robin Stephane

    2009-03-01

    Full Text Available Abstract Background The use of current high-throughput genetic, genomic and post-genomic data leads to the simultaneous evaluation of a large number of statistical hypotheses and, at the same time, to the multiple-testing problem. As an alternative to the too conservative Family-Wise Error Rate (FWER), the False Discovery Rate (FDR) has appeared over the last ten years as more appropriate to handle this problem. However, one drawback of the FDR is that it is tied to a given rejection region for the considered statistics, attributing the same value to those that are close to the boundary and those that are not. As a result, the local FDR has recently been proposed to quantify the specific probability for a given null hypothesis to be true. Results In this context we present a semi-parametric approach based on kernel estimators which is applied to different high-throughput biological data such as patterns in DNA sequences, gene expression and genome-wide association studies. Conclusion The proposed method has practical advantages over existing approaches: it considers complex heterogeneities in the alternative hypothesis, takes into account prior information (from an expert judgment or previous studies) by allowing a semi-supervised mode, and deals with truncated distributions such as those obtained in Monte-Carlo simulations. This method has been implemented and is available through the R package kerfdr via the CRAN or at http://stat.genopole.cnrs.fr/software/kerfdr.
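    A minimal sketch of the local FDR idea behind kerfdr, lfdr(z) = pi0*f0(z)/f(z), with a theoretical N(0,1) null and a kernel estimate of the marginal density; pi0 and the bandwidth are placeholders, and the actual package iterates a weighted, semi-supervised kernel estimate of the alternative.

      import numpy as np
      from scipy.stats import norm, gaussian_kde

      def local_fdr(z_scores, pi0=0.9):
          z = np.asarray(z_scores, dtype=float)
          f = gaussian_kde(z)(z)          # kernel estimate of the marginal density
          f0 = norm.pdf(z)                # theoretical null density
          return np.clip(pi0 * f0 / f, 0.0, 1.0)

      rng = np.random.default_rng(0)
      z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
      print(np.mean(local_fdr(z) < 0.2))  # fraction of tests called "interesting"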

  7. Aspartic acid racemization rate in narwhal (Monodon monoceros) eye lens nuclei estimated by counting of growth layers in tusks

    DEFF Research Database (Denmark)

    Garde, Eva; Heide-Jørgensen, Mads Peter; Ditlevsen, Susanne

    2012-01-01

    Ages of marine mammals have traditionally been estimated by counting dentinal growth layers in teeth. However, this method is difficult to use on narwhals (Monodon monoceros) because of their special tooth structures. Alternative methods are therefore needed. The aspartic acid racemization (AAR) technique has been used in age estimation studies of cetaceans, including narwhals. The purpose of this study was to estimate a species-specific racemization rate for narwhals by regressing aspartic acid D/L ratios in eye lens nuclei against growth layer groups in tusks (n=9). Two racemization rates were ... rate and (D/L)0 value be used in future AAR age estimation studies of narwhals, but also recommend the collection of tusks and eyes of narwhals for further improving the (D/L)0 and 2kAsp estimates obtained in this study ...
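    The age computation behind this kind of study usually rests on the standard racemization relation ln[(1+D/L)/(1-D/L)] = 2*kAsp*t + constant; the sketch below applies it with placeholder values, since the truncated record above does not reproduce the narwhal-specific estimates.

      import math

      def aar_age(dl_ratio, dl0, k_asp):
          """Age (years) from an eye-lens D/L ratio, given (D/L)0 at birth and
          the racemization rate kAsp (1/yr). Values below are placeholders."""
          f = lambda x: math.log((1.0 + x) / (1.0 - x))
          return (f(dl_ratio) - f(dl0)) / (2.0 * k_asp)

      print(aar_age(dl_ratio=0.05, dl0=0.02, k_asp=1.2e-3))   # illustrative numbers only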

  8. Effects of Various Blowout Panel Configurations on the Structural Response of LANL Building 16-340 to Internal Explosions

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jason P. [New Mexico Inst. of Mining and Technology, Socorro, NM (United States)

    2005-09-01

    The risk of accidental detonation is present whenever any type of high explosives processing activity is performed. These activities are typically carried out indoors to protect processing equipment from the weather and to hide possibly secret processes from view. Often, highly strengthened reinforced concrete buildings are employed to house these activities. These buildings may incorporate several design features, including the use of lightweight frangible blowout panels, to help mitigate blast effects. These panels are used to construct walls that are durable enough to withstand the weather, but are of minimal weight to provide overpressure relief by quickly moving outwards and creating a vent area during an accidental explosion. In this study the behavior of blowout panels under various blast loading conditions was examined. External loadings from explosions occurring in nearby rooms were of primary interest. Several reinforcement systems were designed to help blowout panels resist failure from external blast loads while still allowing them to function as vents when subjected to internal explosions. The reinforcements were studied using two analytical techniques, yield-line analysis and modal analysis, and the hydrocode AUTODYN. A blowout panel reinforcement design was created that could prevent panels from being blown inward by external explosions. This design was found to increase the internal loading of the building by 20%, as compared with nonreinforced panels. Nonreinforced panels were found to increase the structural loads by 80% when compared to an open wall at the panel location.

  9. KABAM Version 1.0 User's Guide and Technical Documentation - Appendix H - Methods for Estimating Metabolism Rate Constant

    Science.gov (United States)

    Appendix H of KABAM Version 1.0 documentation related to estimating the metabolism rate constant. KABAM is a simulation model used to predict pesticide concentrations in aquatic regions for use in exposure assessments.

  10. Detection of "punctuated equilibrium" by bayesian estimation of speciation and extinction rates, ancestral character states, and rates of anagenetic and cladogenetic evolution on a molecular phylogeny.

    Science.gov (United States)

    Bokma, Folmer

    2008-11-01

    Algorithms are presented to simultaneously estimate probabilities of speciation and extinction, rates of anagenetic and cladogenetic phenotypic evolution, as well as ancestral character states, from a complete ultrametric species-level phylogeny with dates assigned to all bifurcations and one or more phenotypes in three or more extant species, using Metropolis-Hastings Markov Chain Monte Carlo sampling. The algorithms also estimate missing phenotypes of extant species and numbers of speciation events that occurred on all branches of the phylogeny. The algorithms are discussed and their performance is evaluated using simulated data. That evaluation shows that precise estimation of rates of evolution of one or a few phenotypes requires large phylogenies. Estimation accuracy improves with the number of species on the phylogeny.
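    As a hedged illustration of the MCMC machinery referenced above, the sketch below runs a generic Metropolis-Hastings sampler for a single Poisson rate with an exponential prior; the actual algorithms jointly sample speciation and extinction probabilities, evolutionary rates, ancestral states and missing phenotypes.

      import numpy as np

      def mh_poisson_rate(counts, exposure, n_iter=20000, prior_mean=1.0, step=0.2, seed=0):
          rng = np.random.default_rng(seed)
          counts = np.asarray(counts)

          def log_post(lam):
              if lam <= 0:
                  return -np.inf
              # Poisson likelihood (up to a constant) plus exponential prior.
              return np.sum(counts * np.log(lam * exposure) - lam * exposure) - lam / prior_mean

          lam, chain = 1.0, []
          for _ in range(n_iter):
              prop = lam + rng.normal(0.0, step)          # symmetric random walk
              if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
                  lam = prop
              chain.append(lam)
          return np.array(chain[n_iter // 2:])            # discard burn-in

      chain = mh_poisson_rate(counts=[3, 1, 4, 2, 5], exposure=10.0)
      print(chain.mean(), np.quantile(chain, [0.025, 0.975]))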

  11. How Many Responses Do We Need? Using Generalizability Analysis to Estimate Minimum Necessary Response Rates for Online Student Evaluations.

    Science.gov (United States)

    Gerbase, Margaret W; Germond, Michèle; Cerutti, Bernard; Vu, Nu V; Baroffio, Anne

    2015-01-01

    CONSTRUCT: The study compares paper and online ratings of instructional units and analyzes, with a G-study using the symmetry principle, the response rates needed to ensure acceptable precision of the measure when compliance is low. Students' ratings of teaching contribute to the quality of medical training programs. To date, many schools have replaced pen-and-paper questionnaires with electronic forms, despite the lower response rates consistently reported with the latter. Few available studies have examined the effects of low response rates on the reliability and precision of the evaluation measure. Moreover, the minimum number of raters to target when response rates are low remains unclear. Descriptive data were derived from 799 students' paper and online ratings of 11 preclinical instructional units (PIUs). Reliability was assessed by Cronbach's alpha coefficients. The generalizability method applying the symmetry principle was used to analyze the precision of the measure with a reference standard error of the mean (SEM) set at 0.10; optimization models were built to estimate minimum response rates. Overall, response rates were 74% for paper and 30% for online evaluations. Higher SEM levels and significantly larger 95% confidence intervals of PIU rating scores were observed with online evaluations. To keep the SEM within preset limits of precision, a minimum response rate of 48% was estimated for online formats. The proposed generalizability analysis allowed estimation of the minimum response rate needed to maintain acceptable precision in online evaluations. The effects of response rates on accuracy are discussed.

  12. Approaches for the Direct estimation of rate of increase in population size (λ) using capture-recapture data

    Science.gov (United States)

    James D. Nichols; Scott T. Sillett; James E. Hines; Richard T. Holmes

    2005-01-01

    Recent developments in the modeling of capture-recapture data permit the direct estimation and modeling of population growth rate (Pradel 1996). Resulting estimates reflect changes in numbers of birds on study areas, and such changes result from movement as well as survival and reproductive recruitment. One measure of the "importance" of a...

  13. Blow-out limits of nonpremixed turbulent jet flames in a cross flow at atmospheric and sub-atmospheric pressures

    KAUST Repository

    Wang, Qiang

    2015-07-22

    The blow-out limits of nonpremixed turbulent jet flames in cross flows were studied, especially concerning the effect of ambient pressure, by conducting experiments at atmospheric and sub-atmospheric pressures. The combined effects of air flow and pressure were investigated by a series of experiments conducted in an especially built wind tunnel in Lhasa, a city on the Tibetan plateau where the altitude is 3650 m and the atmospheric pressure condition is naturally low (64 kPa). These results were compared with results obtained from a wind tunnel at standard atmospheric pressure (100 kPa) in Hefei city (altitude 50 m). The size of the fuel nozzles used in the experiments ranged from 3 to 8 mm in diameter and propane was used as the fuel. It was found that the blow-out limit of the air speed of the cross flow first increased (“cross flow dominant” regime) and then decreased (“fuel jet dominant” regime) as the fuel jet velocity increased in both pressures; however, the blow-out limit of the air speed of the cross flow was much lower at sub-atmospheric pressure than that at standard atmospheric pressure whereas the domain of the blow-out limit curve (in a plot of the air speed of the cross flow versus the fuel jet velocity) shrank as the pressure decreased. A theoretical model was developed to characterize the blow-out limit of nonpremixed jet flames in a cross flow based on a Damköhler number, defined as the ratio between the mixing time and the characteristic reaction time. A satisfactory correlation was obtained at relative strong cross flow conditions (“cross flow dominant” regime) that included the effects of the air speed of the cross flow, fuel jet velocity, nozzle diameter and pressure.

  14. Association of estimated glomerular filtration rate with muscle function in older persons who have fallen.

    Science.gov (United States)

    Tap, Lisanne; Boyé, Nicole D A; Hartholt, Klaas A; van der Cammen, Tischa J M; Mattace-Raso, Francesco U S

    2018-03-01

    Studies suggest that estimated glomerular filtration rate (eGFR) is less reliable in older persons and that a low serum creatinine might reflect reduced muscle mass rather than high kidney function. This study investigates the possible relationship between eGFR and multiple elements of physical performance in older fallers. Baseline data of the IMPROveFALL study were examined in participants ≥65 years. Serum-creatinine-based eGFR was classified as normal (≥90 ml/min), mildly reduced (60-89 ml/min) or moderately-severely reduced (<60 ml/min). The Timed-Up-and-Go test and the Five-Times-Sit-to-Stand test were used to assess mobility; calf circumference and handgrip strength were used to assess muscle status. ANCOVA models adjusted for age, sex, Charlson comorbidity index and body mass index were performed. A total of 578 participants were included. Participants with a normal eGFR had lower handgrip strength than those with a mildly reduced eGFR (-9.5%, P < 0.001) and those with a moderately-severely reduced eGFR (-6.3%, P = 0.033), with mean strengths of 23.4, 25.8 and 24.9 kg, respectively. Participants with a normal eGFR had a smaller calf circumference than those with a mildly reduced eGFR (35.5 versus 36.5 cm, P = 0.006). Mean time to complete the mobility tests did not differ. In this study we found that older fallers with an eGFR ≥ 90 ml/min had a smaller calf circumference and up to 10% lower handgrip strength than those with a reduced eGFR. This lower muscle mass is likely to lead to an overestimation of kidney function. This outcome therefore supports the search for biomarkers independent of muscle mass to estimate kidney function in older persons.

  15. A Comparative Study on Fetal Heart Rates Estimated from Fetal Phonography and Cardiotocography

    Directory of Open Access Journals (Sweden)

    Emad A. Ibrahim

    2017-10-01

    Full Text Available The aim of this study is to investigate whether fetal heart rates (fHR) extracted from fetal phonocardiography (fPCG) convey information similar to fHR from cardiotocography (CTG). Four-channel fPCG sensors made of low cost (<$1) ceramic piezo vibration sensors within 3D-printed casings were used to collect abdominal phonogram signals from 20 pregnant mothers (>34 weeks of gestation). A novel multi-lag covariance matrix-based eigenvalue decomposition technique was used to separate maternal breathing, fetal heart sounds (fHS) and maternal heart sounds (mHS) from the abdominal phonogram signals. Prior to the fHR estimation, the fPCG signals were denoised using a multi-resolution wavelet-based filter. The proposed source separation technique was first tested in separating sources from synthetically mixed signals and then on raw abdominal phonogram signals. fHR signals extracted from fPCG signals were validated using simultaneously recorded CTG-based fHR recordings. The experimental results have shown that the fHR derived from the acquired fPCG can be used to detect periods of acceleration and deceleration, which are critical indications of the fetus's well-being. Moreover, a comparative analysis demonstrated that fHRs from CTG and fPCG signals were in good agreement (Bland-Altman plot with mean = −0.21 BPM and ±2 SD = ±3), with statistical significance (p < 0.001) and a Spearman correlation coefficient of ρ = 0.95. The study findings show that fHR estimated from fPCG could be a reliable substitute for fHR from the CTG, opening up the possibility of a low cost monitoring tool for fetal well-being.
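    The agreement statistics quoted above (Bland-Altman bias, ±2 SD limits and Spearman correlation) can be computed as in the sketch below; the paired fHR series are simulated placeholders.

      import numpy as np
      from scipy.stats import spearmanr

      def bland_altman(fhr_fpcg, fhr_ctg):
          a, b = np.asarray(fhr_fpcg, float), np.asarray(fhr_ctg, float)
          diff = a - b
          bias = diff.mean()
          sd = diff.std(ddof=1)
          limits = (bias - 2 * sd, bias + 2 * sd)     # ±2 SD limits of agreement
          rho, _ = spearmanr(a, b)
          return bias, limits, rho

      rng = np.random.default_rng(0)
      ctg = rng.normal(140, 10, 200)                  # beats per minute
      fpcg = ctg + rng.normal(0, 1.5, 200)            # simulated fPCG estimates
      print(bland_altman(fpcg, ctg))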

  16. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
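    In the spirit of the approach described, the sketch below gives a singleton-excluding Watterson-type estimator: under the standard infinite-sites coalescent the expected number of segregating sites with derived-allele count i is θ/i, so removing singletons leaves an expectation of θ(a_n − 1), where a_n is the usual harmonic sum. The exact estimators of Achaz and of Knudsen and Miyamoto differ in details not reproduced here.

      def theta_without_singletons(n_sequences, n_segregating_no_singletons):
          # a_n = sum_{i=1}^{n-1} 1/i; dropping singletons changes the Watterson
          # denominator from a_n to (a_n - 1) under the derived-singleton model.
          a_n = sum(1.0 / i for i in range(1, n_sequences))
          return n_segregating_no_singletons / (a_n - 1.0)

      # Example: 20 sequences, 45 segregating sites left after removing singletons.
      print(theta_without_singletons(20, 45))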

  17. Estimating glomerular filtration rate in acute coronary syndromes: Different equations, different mortality risk prediction.

    Science.gov (United States)

    Almeida, Inês; Caetano, Francisca; Barra, Sérgio; Madeira, Marta; Mota, Paula; Leitão-Marques, António

    2016-06-01

    Renal dysfunction is a powerful predictor of adverse outcomes in patients hospitalized for acute coronary syndrome. Three new glomerular filtration rate (GFR) estimating equations recently emerged, based on serum creatinine (CKD-EPIcreat), serum cystatin C (CKD-EPIcyst) or a combination of both (CKD-EPIcreat/cyst), and they are currently recommended to confirm the presence of renal dysfunction. Our aim was to analyse the predictive value of these new estimated GFR (eGFR) equations regarding mid-term mortality in patients with acute coronary syndrome, and compare them with the traditional Modification of Diet in Renal Disease (MDRD-4) formula. A total of 801 patients admitted for acute coronary syndrome (age 67.3±13.3 years, 68.5% male) and followed for 23.6±9.8 months were included. For each equation, patient risk stratification was performed based on eGFR values. Compared with the MDRD-4 formula, the CKD-EPIcyst equation accurately reclassified a significant percentage of patients into more appropriate risk categories (net reclassification improvement index of 11.9%, p=0.003). The CKD-EPIcyst equation added prognostic power to the Global Registry of Acute Coronary Events (GRACE) score in the prediction of mid-term mortality. The CKD-EPIcyst equation provides a novel and improved method for assessing the mid-term mortality risk in patients admitted for acute coronary syndrome, outperforming the most widely used formula (MDRD-4), and improving the predictive value of the GRACE score. These results reinforce the added value of cystatin C as a risk marker in these patients. © The European Society of Cardiology 2015.

  18. Is 10-second electrocardiogram recording enough for accurately estimating heart rate in atrial fibrillation.

    Science.gov (United States)

    Shuai, Wei; Wang, Xi-Xing; Hong, Kui; Peng, Qiang; Li, Ju-Xiang; Li, Ping; Chen, Jing; Cheng, Xiao-Shu; Su, Hai

    2016-07-15

    At present, the estimation of rest heart rate (HR) in atrial fibrillation (AF) is obtained by apical auscultation for 1 min or on the surface electrocardiogram (ECG) by multiplying the number of RR intervals on a 10-second recording by six. But the adequacy of a 10-second ECG recording is controversial. ECG was continuously recorded at rest for 60 s to calculate the real rest HR (HR60s). Meanwhile, the first 10 s and 30 s of the ECG recordings were used for calculating HR10s (sixfold) and HR30s (twofold). The differences of HR10s or HR30s with HR60s were compared. The patients were divided into three subgroups according to their HR60s. No significant difference among the mean HR10s, HR30s and HR60s was found. A positive correlation existed between HR10s and HR60s and between HR30s and HR60s. Bland-Altman plots showed that the 95% reference limits were as high as -11.0 to 16.0 bpm for HR10s, but for HR30s these values were only -4.5 to 5.2 bpm. Among the three HR60s subgroups, the 95% reference limits with HR60s were -8.9 to 10.6, -10.5 to 14.0 and -11.3 to 21.7 bpm for HR10s, but these values were -3.9 to 4.3, -4.1 to 4.6 and -5.3 to 6.7 bpm for HR30s. As a 10-s ECG recording could not provide clinically acceptable HR estimation, the ECG should be recorded for at least 30 s in patients with AF. It is better to record the ECG for 60 s when the HR is rapid. Copyright © 2016. Published by Elsevier Ireland Ltd.
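    The arithmetic underlying the comparison is simply the scaling of a beat count to one minute; the sketch below illustrates why a 10-s strip is noisier than a 30-s or 60-s strip when RR intervals are irregular. The simulated RR intervals are placeholders.

      import numpy as np

      def hr_from_strip(rr_intervals_s, strip_length_s):
          # Count the beats that fall inside the strip and scale to one minute:
          # a 10-s strip multiplies by 6, a 30-s strip by 2, a 60-s strip by 1.
          t = np.cumsum(rr_intervals_s)
          beats_in_strip = np.sum(t <= strip_length_s)
          return beats_in_strip * (60.0 / strip_length_s)

      rng = np.random.default_rng(0)
      rr = rng.gamma(shape=4.0, scale=0.2, size=200)   # irregular RR intervals, ~0.8 s mean
      print(hr_from_strip(rr, 10), hr_from_strip(rr, 30), hr_from_strip(rr, 60))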

  19. Deep-sea benthic footprint of the deepwater horizon blowout.

    Directory of Open Access Journals (Sweden)

    Paul A Montagna

    Full Text Available The Deepwater Horizon (DWH) accident in the northern Gulf of Mexico occurred on April 20, 2010 at a water depth of 1525 meters, and a deep-sea plume was detected within one month. Oil contacted and persisted in parts of the bottom of the deep sea in the Gulf of Mexico. As part of the response to the accident, monitoring cruises were deployed in fall 2010 to measure potential impacts on the two main soft-bottom benthic invertebrate groups: macrofauna and meiofauna. Sediment was collected using a multicorer so that samples for chemical, physical and biological analyses could be taken simultaneously and analyzed using multivariate methods. The footprint of the oil spill was identified by creating a new variable with principal components analysis, where the first factor was indicative of the oil spill impacts, and this new variable was mapped in a geographic information system to identify the area of the oil spill footprint. The most severe relative reduction of faunal abundance and diversity extended to 3 km from the wellhead in all directions, covering an area of about 24 km2. Moderate impacts were observed up to 17 km towards the southwest and 8.5 km towards the northeast of the wellhead, covering an area of 148 km2. Benthic effects were correlated to total petroleum hydrocarbon, polycyclic aromatic hydrocarbon and barium concentrations, and distance to the wellhead, but not distance to hydrocarbon seeps. Thus, benthic effects are more likely due to the oil spill, and not natural hydrocarbon seepage. Recovery rates in the deep sea are likely to be slow, on the order of decades or longer.
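    A sketch of the multivariate step described above: standardize the station variables, take the first principal component as an impact index, and map it. The column names are hypothetical placeholders.

      import numpy as np
      import pandas as pd
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      def impact_index(df, columns):
          """First principal-component score per station, oriented so that it
          increases with hydrocarbon concentration (hypothetical 'tph' column)."""
          X = StandardScaler().fit_transform(df[columns])
          score = PCA(n_components=1).fit_transform(X).ravel()
          if np.corrcoef(score, df["tph"])[0, 1] < 0:
              score = -score
          return score

      # df = pd.read_csv("stations.csv")   # hypothetical station table
      # df["impact"] = impact_index(df, ["tph", "pah", "barium", "abundance", "diversity"])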

  20. Estimation of Oil Production Rates in Reservoirs Exposed to Focused Vibrational Energy

    KAUST Repository

    Jeong, Chanseok

    2014-01-01

    Elastic wave-based enhanced oil recovery (EOR) is being investigated as a possible EOR method, since strong wave motions within an oil reservoir - induced by earthquakes or artificially generated vibrations - have been reported to improve the production rate of remaining oil from existing oil fields. To date, there are few theoretical studies on estimating how much bypassed oil within an oil reservoir could be mobilized by such vibrational stimulation. To fill this gap, this paper presents a numerical method to estimate the extent to which the bypassed oil is mobilized from low to high permeability reservoir areas, within a heterogeneous reservoir, via wave-induced cross-flow oscillation at the interface between the two reservoir permeability areas. This work uses the finite element method to numerically obtain the pore fluid wave motion within a one-dimensional fluid-saturated porous permeable elastic solid medium embedded in a non-permeable elastic semi-infinite solid. To estimate the net volume of mobilized oil from the low to the high permeability area, a fluid flow hysteresis hypothesis is adopted to describe the behavior at the interface between the two areas. Accordingly, the fluid that is moving from the low to the high permeability areas is assumed to transport a larger volume of oil than the fluid moving in the opposite direction. The numerical experiments were conducted by using a prototype heterogeneous oil reservoir model, subjected to ground surface dynamic loading operating at low frequencies (1 to 50 Hz). The numerical results show that a sizeable amount of oil could be mobilized via the elastic wave stimulation. It is observed that certain wave frequencies are more effective than others in mobilizing the remaining oil. We remark that these amplification frequencies depend on the formation’s elastic properties. This numerical work shows that the wave-based mobilization of the bypassed oil in a heterogeneous oil reservoir is feasible, especially

  1. [Hereditary deafness in Kirov oblast: estimation of the incidence rate and DNA diagnosis in children].

    Science.gov (United States)

    Zinchenko, R A; Osetrova, A A; Sharonova, E I

    2012-04-01

    Genetic analysis of hereditary deafness (HD) has been performed in the city of Kirov and ten rural districts of Kirov oblast (administrative region). The analysis employed the methods used in audiology, medical genetic counseling, and DNA diagnosis. Deafness has been established to be hereditary in 143 children from 100 unrelated families. The incidence rates of isolated and syndromic HDs in the period studied (1995-2001) have been estimated at 1.25 and 0.36 per 1000 newborns, respectively, the total incidence rate of all HD forms being 1.61 per 1000 newborns (1 case per 621 newborns). DNA analysis for the detection of seven frequent mutations in the genes GJB2 (the 35delG, 167delT, 235delC, and M34T mutations), GJB6 (the del(GJB6-D13S1854) and del(GJB6-D13S1830) mutations), and TMC1 (the R34X mutation) has been performed in families with isolated neurosensory deafness. Molecular genetic analysis has detected mutations in 51 children (48.6%); in 54 children (51.4%), no mutations have been found. The following genotypes have been identified in children with HD: 35delG/35delG in 32 probands (30.5%), 35delG/+ in 16 probands (15.2%), 35delG/235delC in 1 proband (0.95%), M34T/+ in 1 proband (0.95%), and M34T/35delG in 1 proband (0.95%). The 167delT mutation has not been found. The frequency of the 35delG mutation in the GJB2 gene has been estimated to be 39.05%. In the group with a family history of HD, mutations have been found in 66.7% of patients; in the group without a family history of HD, in 37.5% of patients. No mutation has been found in the GJB6 or TMC1 gene. Molecular genetic analysis has been performed in a family with clinically diagnosed Treacher Collins-Franceschetti syndrome. Sequencing has been used to find the 748-69C>T polymorphism in intron 6 (in the homozygous state) and the 3635C>G mutation in exon 23 leading to the substitution of glycine for alanine at position 1176 of the amino acid sequence (Ala1176Gly, in the heterozygous state), which have not

  2. Effect of sociality and season on gray wolf (Canis lupus) foraging behavior: implications for estimating summer kill rate.

    Science.gov (United States)

    Metz, Matthew C; Vucetich, John A; Smith, Douglas W; Stahler, Daniel R; Peterson, Rolf O

    2011-03-01

    Understanding how kill rates vary among seasons is required to understand predation by vertebrate species living in temperate climates. Unfortunately, kill rates are only rarely estimated during summer. For several wolf packs in Yellowstone National Park, we used pairs of collared wolves living in the same pack and the double-count method to estimate the probability of attendance (PA) for an individual wolf at a carcass. PA quantifies an important aspect of social foraging behavior (i.e., the cohesiveness of foraging). We used PA to estimate summer kill rates for packs containing GPS-collared wolves between 2004 and 2009. Estimated rates of daily prey acquisition (edible biomass per wolf) decreased from 8.4±0.9 kg (mean ± SE) in May to 4.1±0.4 kg in July. Failure to account for PA would have resulted in underestimating kill rate by 32%. PA was 0.72±0.05 for large ungulate prey and 0.46±0.04 for small ungulate prey. To assess seasonal differences in social foraging behavior, we also evaluated PA during winter for VHF-collared wolves between 1997 and 2009. During winter, PA was 0.95±0.01. PA was not influenced by prey size but was influenced by wolf age and pack size. Our results demonstrate that seasonal patterns in the foraging behavior of social carnivores have important implications for understanding their social behavior and estimating kill rates. Synthesizing our findings with previous insights suggests that there is important seasonal variation in how and why social carnivores live in groups. Our findings are also important for applications of GPS collars to estimate kill rates. Specifically, because the factors affecting the PA of social carnivores likely differ between seasons, kill rates estimated through GPS collars should account for seasonal differences in social foraging behavior.

  3. Effect of sociality and season on gray wolf (Canis lupus) foraging behavior: implications for estimating summer kill rate.

    Directory of Open Access Journals (Sweden)

    Matthew C Metz

    Full Text Available BACKGROUND: Understanding how kill rates vary among seasons is required to understand predation by vertebrate species living in temperate climates. Unfortunately, kill rates are only rarely estimated during summer. METHODOLOGY/PRINCIPAL FINDINGS: For several wolf packs in Yellowstone National Park, we used pairs of collared wolves living in the same pack and the double-count method to estimate the probability of attendance (PA) for an individual wolf at a carcass. PA quantifies an important aspect of social foraging behavior (i.e., the cohesiveness of foraging). We used PA to estimate summer kill rates for packs containing GPS-collared wolves between 2004 and 2009. Estimated rates of daily prey acquisition (edible biomass per wolf) decreased from 8.4±0.9 kg (mean ± SE) in May to 4.1±0.4 kg in July. Failure to account for PA would have resulted in underestimating kill rate by 32%. PA was 0.72±0.05 for large ungulate prey and 0.46±0.04 for small ungulate prey. To assess seasonal differences in social foraging behavior, we also evaluated PA during winter for VHF-collared wolves between 1997 and 2009. During winter, PA was 0.95±0.01. PA was not influenced by prey size but was influenced by wolf age and pack size. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that seasonal patterns in the foraging behavior of social carnivores have important implications for understanding their social behavior and estimating kill rates. Synthesizing our findings with previous insights suggests that there is important seasonal variation in how and why social carnivores live in groups. Our findings are also important for applications of GPS collars to estimate kill rates. Specifically, because the factors affecting the PA of social carnivores likely differ between seasons, kill rates estimated through GPS collars should account for seasonal differences in social foraging behavior.

  4. Estimated glomerular filtration rate changes in patients with chronic myeloid leukemia treated with tyrosine kinase inhibitors.

    Science.gov (United States)

    Yilmaz, Musa; Lahoti, Amit; O'Brien, Susan; Nogueras-González, Graciela M; Burger, Jan; Ferrajoli, Alessandra; Borthakur, Gautam; Ravandi, Farhad; Pierce, Sherry; Jabbour, Elias; Kantarjian, Hagop; Cortes, Jorge E

    2015-11-01

    Chronic use of tyrosine kinase inhibitors (TKIs) may lead to previously unrecognized adverse events. This study evaluated the incidence of acute kidney injury (AKI) and chronic kidney disease (CKD) in chronic-phase (CP) chronic myeloid leukemia (CML) patients treated with imatinib, dasatinib, and nilotinib. Four hundred sixty-eight newly diagnosed CP CML patients treated with TKIs were analyzed. The molecular and cytogenetic response data, creatinine, and glomerular filtration rate (GFR) were followed from the start of therapy to the last follow-up (median, 52 months). GFR was estimated with the Modification of Diet in Renal Disease equation. Nineteen patients (4%) had TKI-associated AKI. Imatinib was associated with a higher incidence of AKI in comparison with dasatinib and nilotinib (P = .014). Fifty-eight patients (14%) developed CKD while they were receiving a TKI; 49 of these patients (84%) did so while they were being treated with imatinib (P < .001). Besides imatinib, age, a history of hypertension, and diabetes mellitus were also associated with the development of CKD. In patients with no CKD at the baseline, imatinib was shown to reduce GFR over time. Interestingly, imatinib did not cause a significant decline in the GFRs of patients with a history of CKD. Imatinib, dasatinib, and nilotinib increased the mean GFR after 3 months of treatment, with nilotinib producing the most significant increase (P < .001). AKI or CKD had no significant impact on overall cytogenetic and molecular response rates or survival. The administration of TKIs may be safe in the setting of CKD in CP CML patients, but close monitoring is still warranted. © 2015 American Cancer Society.

  5. Current use of equations for estimating glomerular filtration rate in Spanish laboratories.

    Science.gov (United States)

    Gràcia-Garcia, Sílvia; Montañés-Bermúdez, Rosario; Morales-García, Luis J; Díez-de Los Ríos, M José; Jiménez-García, Juan Á; Macías-Blanco, Carlos; Martínez-López, Rosalina; Ruiz-Altarejos, Joaquín; Ruiz-Martín, Guadalupe; Sanz-Hernández, Sonia; Ventura-Pedret, Salvador

    2012-07-17

    In 2006 the Spanish Society of Clinical Biochemistry and Molecular Pathology (SEQC) and the Spanish Society of Nephrology (S.E.N.) developed a consensus document in order to facilitate the diagnosis and monitoring of chronic kidney disease through the incorporation of equations for estimating glomerular filtration rate (eGFR) into laboratory reports. The current national prevalence of eGFR reporting and the degree of adherence to these recommendations among clinical laboratories are unknown. We administered a national survey in 2010-11 to Spanish clinical laboratories. The survey was conducted by e-mail or telephone among laboratories participating in the SEQC’s Programme for External Quality Assurance and laboratories included in the National Hospitals Catalogue 2010, covering both primary care and private laboratories. A total of 281 laboratories answered the survey. Of these, 88.2% reported eGFR, with 61.9% using the MDRD equation and 31.6% using the MDRD-IDMS equation. A total of 42.5% of laboratories always reported serum creatinine values and reported other variables only when specifically requested. Regarding the way results were presented, 46.2% of laboratories reported the exact numerical value only when the filtration rate was below 60 mL/min/1.73 m2, while 50.6% reported all values regardless. In 56.3% of the cases reporting eGFR, an interpretive comment was included. Although a high percentage of Spanish laboratories have added eGFR to their reports, its use is not yet universal. Moreover, some aspects, such as the equation used and the correct expression of eGFR results, should be improved.

  6. Discounting the distant future-Data on Australian discount rates estimated by a stochastic interest rate model.

    Science.gov (United States)

    Truong, Chi; Trück, Stefan

    2017-04-01

    Data on certainty equivalent discount factors and discount rates for stochastic interest rates in Australia are provided in this paper. The data has been used for the analysis of investments into climate adaptation projects in 'It's not now or never: Implications of investment timing and risk aversion on climate adaptation to extreme events' (Truong and Trück, 2016) [3] and can be used for other cost-benefit analysis studies in Australia. The data is of particular interest for the discounting of projects that create monetary costs and benefits in the distant future.
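
    The construction behind certainty-equivalent discounting can be sketched with a simple short-rate model: the discount factor is the expectation of the pathwise discount, and the implied rate declines with horizon. The Vasicek parameters below are illustrative and are not the calibrated Australian model used by the authors.

```python
import numpy as np

def certainty_equivalent_rates(r0=0.04, kappa=0.10, theta=0.04, sigma=0.01,
                               horizon_years=200, dt=0.25, n_paths=20000, seed=0):
    """Certainty-equivalent discount factors and rates under a Vasicek short rate,
        dr = kappa*(theta - r) dt + sigma dW.
    DF(t) = E[exp(-integral of r)];  R(t) = -ln(DF(t)) / t declines with horizon
    because discounting averages over rate paths rather than over rates."""
    rng = np.random.default_rng(seed)
    steps_per_year = int(round(1.0 / dt))
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)                 # accumulated integral of r dt, per path
    times, df = [], []
    for step in range(1, int(horizon_years / dt) + 1):
        r = r + kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        integral += r * dt
        if step % steps_per_year == 0:           # record once a year
            times.append(step * dt)
            df.append(np.exp(-integral).mean())
    times, df = np.array(times), np.array(df)
    return times, df, -np.log(df) / times

times, df, ce_rate = certainty_equivalent_rates()
for yr in (1, 50, 100, 200):
    i = yr - 1
    print(f"t = {yr:>3} yr   DF = {df[i]:.4f}   certainty-equivalent rate = {ce_rate[i]:.3%}")
```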

  7. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    Directory of Open Access Journals (Sweden)

    M. D. Peláez-Coca

    2013-01-01

    Full Text Available A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration.
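
    A simplified, stationary stand-in for the idea above: report a respiratory rate only where two respiration-related PPG-derived series are strongly coupled. Ordinary Welch coherence is used here in place of the paper's quadratic time-frequency coherence, and the feature series are synthetic.

```python
import numpy as np
from scipy.signal import coherence

def respiratory_rate_hz(feat_a, feat_b, fs, fmin=0.05, fmax=0.8, coh_threshold=0.7):
    """Estimate the respiratory frequency only where two PPG-derived feature
    series (resampled at fs Hz) are strongly coupled; returns None otherwise."""
    f, coh = coherence(feat_a, feat_b, fs=fs, nperseg=256)
    band = (f >= fmin) & (f <= fmax)
    f_band, coh_band = f[band], coh[band]
    i = int(np.argmax(coh_band))
    return float(f_band[i]) if coh_band[i] >= coh_threshold else None

# Synthetic check: two features modulated by the same 0.25 Hz respiration
fs = 4.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(1)
resp = np.sin(2 * np.pi * 0.25 * t)
feat_a = resp + 0.5 * rng.standard_normal(t.size)          # e.g. pulse-interval series
feat_b = 0.8 * resp + 0.5 * rng.standard_normal(t.size)    # e.g. pulse-amplitude series
print(respiratory_rate_hz(feat_a, feat_b, fs))             # ~0.25 Hz
```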

  8. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    Science.gov (United States)

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777

  9. Comparing VO2max determined by using the relation between heart rate and accelerometry with submaximal estimated VO2max.

    Science.gov (United States)

    Tönis, T M; Gorter, K; Vollenbroek-Hutten, M M R; Hermens, H

    2012-08-01

    An exploratory study to identify parameters that can be used for estimating a subject's cardio-respiratory physical fitness level, expressed as VO2max, from a combination of heart rate and 3D accelerometer data. Data were gathered from 41 healthy subjects (23 male, 18 female) aged between 20 and 29 years. The measurement protocol consisted of a sub-maximal single stage treadmill walking test for VO2max estimation followed by a walking test at two different speeds (4 and 5.5 km h-1) for parameter determination. The relation between measured heart rate and accelerometer output at different walking speeds was used to get an indication of exercise intensity and the corresponding heart rate at that intensity. Regression analysis was performed using general subject measures (age, gender, weight, length, BMI) and intercept and slope of the relation between heart rate and accelerometer output during walking as independent variables to estimate the VO2max. A linear regression model using a combination of the slope and intercept parameters, together with gender revealed the highest percentage of explained variance (R2 = 0.90) and had a standard error of the estimate (SEE) of 2.052 mL O2 kg-1 min-1 with VO2max. Results are comparable with current commonly used sub-maximal laboratory tests to estimate VO2max. The combination of heart rate and accelerometer data seems promising for ambulant estimation of VO2max.
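
    A sketch of the workflow implied by the abstract: per-subject slope and intercept of the heart-rate-versus-accelerometer-output relation, then a linear model with gender. The data and fitted coefficients below are synthetic; the study's regression coefficients are not reproduced.

```python
import numpy as np

def hr_accel_features(accel_counts, heart_rate):
    """Per-subject slope and intercept of heart rate vs. accelerometer output
    (e.g. pooled samples from the 4 and 5.5 km/h walking bouts)."""
    slope, intercept = np.polyfit(accel_counts, heart_rate, deg=1)
    return slope, intercept

def fit_vo2max_model(slopes, intercepts, is_male, vo2max):
    """Ordinary least squares for VO2max ~ slope + intercept + gender;
    returns the coefficient vector [b0, b_slope, b_intercept, b_male]."""
    X = np.column_stack([np.ones(len(slopes)), slopes, intercepts,
                         np.asarray(is_male, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, vo2max, rcond=None)
    return coef

# Synthetic illustration only -- not the study's data or coefficients
rng = np.random.default_rng(0)
n = 41
slopes = rng.normal(0.05, 0.01, n)       # bpm per accelerometer count
intercepts = rng.normal(70.0, 8.0, n)    # bpm at zero acceleration
is_male = rng.integers(0, 2, n)
vo2max = 60 - 200 * slopes - 0.15 * intercepts + 6 * is_male + rng.normal(0, 2, n)
print(fit_vo2max_model(slopes, intercepts, is_male, vo2max))
```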

  10. A comparison of estimated glomerular filtration rates using Cockcroft-Gault and the Chronic Kidney Disease Epidemiology Collaboration estimating equations in HIV infection

    DEFF Research Database (Denmark)

    Mocroft, A; Nielsen, Lene Ryom; Reiss, P

    2014-01-01

    The aim of this study was to determine whether the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI)- or Cockcroft-Gault (CG)-based estimated glomerular filtration rates (eGFRs) perform better in the cohort setting for predicting moderate/advanced chronic kidney disease (CKD) or end-stage renal disease (ESRD).
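
    The two equations being compared, in their commonly cited forms (Cockcroft-Gault and the 2009 CKD-EPI creatinine equation); the constants are as commonly published and should be checked against the primary references.

```python
def creatinine_clearance_cg(scr_mg_dl, age_years, weight_kg, female):
    """Cockcroft-Gault creatinine clearance (mL/min), commonly cited form."""
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def egfr_ckd_epi_2009(scr_mg_dl, age_years, female, black=False):
    """2009 CKD-EPI creatinine equation (mL/min/1.73 m^2), commonly cited form."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: 45-year-old woman, 65 kg, serum creatinine 0.9 mg/dL
print(round(creatinine_clearance_cg(0.9, 45, 65, female=True), 1))   # CG, mL/min
print(round(egfr_ckd_epi_2009(0.9, 45, female=True), 1))             # CKD-EPI
```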

  11. Burst Format Design for Optimum Joint Estimation of Doppler-Shift and Doppler-Rate in Packet Satellite Communications

    Directory of Open Access Journals (Sweden)

    Luca Giugno

    2007-05-01

    Full Text Available This paper considers the problem of optimizing the burst format of packet transmission to perform enhanced-accuracy estimation of Doppler-shift and Doppler-rate of the carrier of the received signal, due to relative motion between the transmitter and the receiver. Two novel burst formats that minimize the Doppler-shift and the Doppler-rate Cramér-Rao bounds (CRBs) for the joint estimation of carrier phase/Doppler-shift and of the Doppler-rate are derived, and a data-aided (DA) estimation algorithm suitable for each optimal burst format is presented. Performance of the newly derived estimators is evaluated by analysis and by simulation, showing that such algorithms attain their relevant CRBs with very low complexity, so that they can be directly embedded into new-generation digital modems for satellite communications at low SNR.

  12. A new approach to estimate the in situ fractional degradation rate of organic matter and nitrogen in wheat yeast concentrates

    NARCIS (Netherlands)

    De Jonge, L. H.; Van Laar, H.; Hendriks, W. H.; Dijkstra, J.

    2015-01-01

    In the classic in situ method, small particles are removed during rinsing and hence their fractional degradation rate cannot be determined. A new approach was developed to estimate the fractional degradation rate of nutrients in small particles. This approach was based on an alternative rinsing

  13. The relationship between nasalance scores and nasality ratings obtained with equal appearing interval and direct magnitude estimation scaling methods.

    Science.gov (United States)

    Brancamp, Tami U; Lewis, Kerry E; Watterson, Thomas

    2010-11-01

    To assess the nasalance/nasality relationship and Nasometer test sensitivity and specificity when nasality ratings are obtained with both equal appearing interval (EAI) and direct magnitude estimation (DME) scaling procedures. To test the linearity of the relationship between nasality ratings obtained from different perceptual scales. STIMULI: Audio recordings of the Turtle Passage. Participants' nasalance scores and audio recordings were obtained simultaneously. A single judge rated the samples for nasality using both EAI and DME scaling procedures. Thirty-nine participants 3 to 17 years of age. Across participants, resonance ranged from normal to severely hypernasal. Nasalance scores and two nasality ratings. The magnitude of the correlation between nasalance scores and EAI ratings of nasality (r  =  .63) and between nasalance and DME ratings of nasality (r  =  .59) was not significantly different. Nasometer test sensitivity and specificity for EAI-rated nasality were .71 and .73, respectively. For DME-rated nasality, sensitivity and specificity were .62 and .70, respectively. Regression of EAI nasality ratings on DME nasality ratings did not depart significantly from linearity. No difference was found in the relationship between nasalance and nasality when nasality was rated using EAI as opposed to DME procedures. Nasometer test sensitivity and specificity were similar for EAI- and DME-rated nasality. A linear model accounted for the greatest proportion of explained variance in EAI and DME ratings. Consequently, clinicians should be able to obtain valid and reliable estimates of nasality using EAI or DME.

  14. A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei

    2014-06-01

    Orbital errors, characterized typically as long-wavelength artifacts, commonly exist in interferometric synthetic aperture radar (InSAR) imagery as a result of inaccurate determination of the sensor state vector. Orbital errors degrade the precision of multitemporal InSAR products (i.e., ground deformation). Although research on orbital error reduction has been ongoing for nearly two decades and several algorithms for reducing the effect of the errors are already in existence, the errors cannot always be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatial-temporal characteristics of the two types of signals. The proposed model is able to isolate a long-wavelength ground motion signal from the orbital error even when the two types of signals exhibit similar spatial patterns. The proposed algorithm is efficient and requires no ground control points. In addition, the method is built upon wrapped phases of interferograms, eliminating the need of phase unwrapping. The performance of the proposed model is validated using both simulated and real data sets. The demo codes of the proposed model are also provided for reference. © 2013 IEEE.

  15. Prediction of hospital mortality by changes in the estimated glomerular filtration rate (eGFR).

    LENUS (Irish Health Repository)

    Berzan, E

    2015-03-01

    Deterioration of physiological or laboratory variables may provide important prognostic information. We have studied whether a change in estimated glomerular filtration rate (eGFR) value, calculated using the Modification of Diet in Renal Disease (MDRD) formula, over the hospital admission would have predictive value. An analysis was performed on all emergency medical hospital episodes (N = 61964) admitted between 1 January 2002 and 31 December 2011. A stepwise logistic regression model examined the relationship between mortality and change in renal function from admission to discharge. The fully adjusted Odds Ratios (OR) for 5 classes of GFR deterioration showed a stepwise increased risk of 30-day death with ORs of 1.42 (95% CI: 1.20, 1.68), 1.59 (1.27, 1.99), 2.71 (2.24, 3.27), 5.56 (4.54, 6.81) and 11.9 (9.0, 15.6) respectively. The change in eGFR during a clinical episode, following an emergency medical admission, powerfully predicts the outcome.

  16. A Comparative Study on Fetal Heart Rates Estimated from Fetal Phonography and Cardiotocography

    Science.gov (United States)

    Ibrahim, Emad A.; Al Awar, Shamsa; Balayah, Zuhur H.; Hadjileontiadis, Leontios J.; Khandoker, Ahsan H.

    2017-01-01

    The aim of this study is to investigate whether fetal heart rates (fHR) extracted from fetal phonocardiography (fPCG) convey similar information to fHR from cardiotocography (CTG). Four-channel low-cost fPCG sensors were used to acquire abdominal phonogram signals from pregnant women (>34 weeks of gestation). A novel multi-lag covariance matrix-based eigenvalue decomposition technique was used to separate maternal breathing, fetal heart sounds (fHS) and maternal heart sounds (mHS) from the abdominal phonogram signals. Prior to the fHR estimation, the fPCG signals were denoised using a multi-resolution wavelet-based filter. The proposed source separation technique was first tested on synthetically mixed signals and then on raw abdominal phonogram signals. fHR signals extracted from fPCG were validated against simultaneously recorded CTG-based fHR recordings. The experimental results show that the fHR derived from the acquired fPCG can be used to detect periods of acceleration and deceleration, which are critical indications of fetal well-being. Moreover, a comparative analysis demonstrated that fHRs from CTG and fPCG signals were in good agreement (Bland-Altman plot mean = −0.21 BPM and ±2 SD = ±3) with statistical significance, supporting fPCG as a potential monitoring tool for fetal well-being. PMID:29089896
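
    The agreement analysis mentioned above (Bland-Altman) reduces to the mean difference and limits of agreement between paired fHR series; the sketch below uses the conventional 1.96 SD limits (the abstract quotes ±2 SD) and synthetic data.

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two paired
    measurement series, e.g. fHR from fPCG vs. fHR from CTG."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic paired fHR samples (beats per minute), illustration only
rng = np.random.default_rng(3)
ctg = rng.normal(140.0, 8.0, 200)
fpcg = ctg + rng.normal(-0.2, 1.5, 200)
bias, loa = bland_altman(fpcg, ctg)
print(f"bias = {bias:.2f} BPM, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```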

  17. Maize dry matter production and macronutrient extraction model as a new approach for fertilizer rate estimation

    Directory of Open Access Journals (Sweden)

    KARLA V. MARTINS

    Full Text Available ABSTRACT Decision support for nutrient application remains an enigma if based on soil nutrient analysis. If the crop could be used as an auxiliary indicator, the plant nutrient status during different growth stages could complement the soil test, improving the fertilizer recommendation. Nutrient absorption and partitioning in the plant are here studied and described with mathematical models. The objective of this study considers the temporal variation of the nutrient uptake rate, which should define crop needs as compared to the critical content in soil solution. A uniform maize crop was grown to observe dry matter accumulation and nutrient content in the plant. The dry matter accumulation followed a sigmoidal model and the macronutrient content a power model. The maximum nutrient absorption occurred at the R4 growth stage, for which the sap concentration was successfully calculated. It is hoped that this new approach of evaluating nutrient sap concentration will help to develop more rational ways to estimate crop fertilizer needs. This new approach has great potential for on-the-go crop sensor-based nutrient application methods and its sensitivity to soil tillage and management systems need to be examined in following studies. If mathematical model reflects management impact adequately, resources for experiments can be saved.

  18. Estimating dose rates to organs as a function of age following internal exposure to radionuclides

    Energy Technology Data Exchange (ETDEWEB)

    Leggett, R.W.; Eckerman, K.F.; Dunning, D.E. Jr.; Cristy, M.; Crawford-Brown, D.J.; Williams, L.R.

    1984-03-01

    The AGEDOS methodology allows estimates of dose rates, as a function of age, to radiosensitive organs and tissues in the human body at arbitrary times during or after internal exposure to radioactive material. Presently there are few, if any, radionuclides for which sufficient metabolic information is available to allow full use of all features of the methodology. The intention has been to construct the methodology so that optimal information can be gained from a mixture of the limited amount of age-dependent, nuclide-specific data and the generally plentiful age-dependent physiological data now available. Moreover, an effort has been made to design the methodology so that constantly accumulating metabolic information can be incorporated with minimal alterations in the AGEDOS computer code. Some preliminary analyses performed by the authors, using the AGEDOS code in conjunction with age-dependent risk factors developed from the A-bomb survivor data and other studies, have indicated that the doses and subsequent risks of eventually experiencing radiogenic cancers may vary substantially with age for some exposure scenarios and may be relatively invariant with age for other scenarios. We believe that the AGEDOS methodology provides a convenient and efficient means for performing internal dosimetry.

  19. Improved ultrasound transducer positioning by fetal heart location estimation during Doppler based heart rate measurements.

    Science.gov (United States)

    Hamelmann, Paul; Vullings, Rik; Schmitt, Lars; Kolen, Alexander F; Mischi, Massimo; van Laar, Judith O E H; Bergmans, Jan W M

    2017-09-21

    Doppler ultrasound (US) is the most commonly applied method to measure the fetal heart rate (fHR). When the fetal heart is not properly located within the ultrasonic beam, fHR measurements often fail. As a consequence, clinical staff need to reposition the US transducer on the maternal abdomen, which can be a time consuming and tedious task. In this article, a method is presented to aid clinicians with the positioning of the US transducer to produce robust fHR measurements. A maximum likelihood estimation (MLE) algorithm is developed, which provides information on fetal heart location using the power of the Doppler signals received in the individual elements of a standard US transducer for fHR recordings. The performance of the algorithm is evaluated with simulations and in vitro experiments performed on a beating-heart setup. Both the experiments and the simulations show that the heart location can be accurately determined with an error of less than 7 mm within the measurement volume of the employed US transducer. The results show that the developed algorithm can be used to provide accurate feedback on fetal heart location for improved positioning of the US transducer, which may lead to improved measurements of the fHR.

  20. Comparison of estimated glomerular filtration rate values calculated using serum cystatin C and serum creatinine

    Directory of Open Access Journals (Sweden)

    Cevdet Türkyürek

    2015-06-01

    Full Text Available Objective: In this study, we aimed to compare estimated glomerular filtration rate (eGFR) formulas based on cystatin C and serum creatinine and investigate whether the formulas can detect the renal damage at microalbuminuric level in diabetic patients. Methods: Totally, 99 type 2 diabetic patients were included and divided into 3 groups according to 24 hour urine albumin levels as normoalbuminuric group (group 1), microalbuminuric group (group 2) and macroalbuminuric group (group 3). Creatinine clearance, Cockcroft-Gault (C-G), Modification of Diet in Renal Disease (MDRD), The Chronic Kidney Disease Epidemiology (CKD-EPI), eGFR1, eGFR2 and eGFR3 levels were calculated using formulas. Results: There were significant differences between group 1-3 and group 2-3, but there was no significant difference between group 1 and 2 in calculated GFR levels. Cystatin C-based formulas were found to have a better correlation with creatinine clearance. Conclusion: As a result, cystatin C-based formulas were found to predict creatinine clearance better than the other calculated GFR formulas in diabetic patients. However none of the formulas can discriminate the renal damage at microalbuminuric level. J Clin Exp Invest 2015; 6(2): 91-95.

  1. Bayesian Estimation of Panel Data Fractional Response Models with Endogeneity: An Application to Standardized Test Rates

    Science.gov (United States)

    Kessler, Lawrence M.

    2013-01-01

    In this paper I propose Bayesian estimation of a nonlinear panel data model with a fractional dependent variable (bounded between 0 and 1). Specifically, I estimate a panel data fractional probit model which takes into account the bounded nature of the fractional response variable. I outline estimation under the assumption of strict exogeneity as…

  2. Rapid estimation of glucosinolate thermal degradation rate constants in leaves of Chinese kale and broccoli (Brassica oleracea) in two seasons.

    Science.gov (United States)

    Hennig, Kristin; Verkerk, Ruud; Bonnema, Guusje; Dekker, Matthijs

    2012-08-15

    Kinetic modeling was used as a tool to quantitatively estimate glucosinolate thermal degradation rate constants. Literature shows that thermal degradation rates differ in different vegetables. Well-characterized plant material, leaves of broccoli and Chinese kale plants grown in two seasons, was used in the study. It was shown that a first-order reaction is appropriate to model glucosinolate degradation independent from the season. No difference in degradation rate constants of structurally identical glucosinolates was found between broccoli and Chinese kale leaves when grown in the same season. However, glucosinolate degradation rate constants were highly affected by the season (20-80% increase in spring compared to autumn). These results suggest that differences in glucosinolate degradation rate constants can be due to variation in environmental as well as genetic factors. Furthermore, a methodology to estimate rate constants rapidly is provided to enable the analysis of high sample numbers for future studies.
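
    A minimal sketch of estimating a first-order thermal degradation rate constant from heating-time data, as the abstract describes; the concentrations below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    """First-order thermal degradation: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

def estimate_rate_constant(t_min, conc):
    """Fit C0 and the degradation rate constant k (1/min) to heating-time data;
    returns the estimates and their standard errors."""
    popt, pcov = curve_fit(first_order, t_min, conc, p0=(conc[0], 0.01))
    return popt, np.sqrt(np.diag(pcov))

# Synthetic glucosinolate concentrations (micromol/g DM) over heating time (min)
t = np.array([0, 10, 20, 30, 45, 60, 90, 120], dtype=float)
rng = np.random.default_rng(7)
c = 12.0 * np.exp(-0.02 * t) * rng.normal(1.0, 0.03, t.size)
(c0_hat, k_hat), se = estimate_rate_constant(t, c)
print(round(c0_hat, 2), round(k_hat, 4), se)
```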

  3. Associations of estimated glomerular filtration rate and albuminuria with mortality and renal failure by sex: a meta-analysis

    Science.gov (United States)

    Nitsch, Dorothea; Grams, Morgan; Sang, Yingying; Black, Corri; Cirillo, Massimo; Djurdjev, Ognjenka; Iseki, Kunitoshi; Jassal, Simerjot K; Kimm, Heejin; Kronenberg, Florian; Øien, Cecilia M; Levin, Adeera; Woodward, Mark; Hemmelgarn, Brenda R

    2013-01-01

    Objective To assess for the presence of a sex interaction in the associations of estimated glomerular filtration rate and albuminuria with all-cause mortality, cardiovascular mortality, and end stage renal disease. Design Random effects meta-analysis using pooled individual participant data. Setting 46 cohorts from Europe, North and South America, Asia, and Australasia. Participants 2 051 158 participants (54% women) from general population cohorts (n=1 861 052), high risk cohorts (n=151 494), and chronic kidney disease cohorts (n=38 612). Eligible cohorts (except chronic kidney disease cohorts) had at least 1000 participants, outcomes of either mortality or end stage renal disease of ≥50 events, and baseline measurements of estimated glomerular filtration rate according to the Chronic Kidney Disease Epidemiology Collaboration equation (mL/min/1.73 m2) and urinary albumin-creatinine ratio (mg/g). Results Risks of all-cause mortality and cardiovascular mortality were higher in men at all levels of estimated glomerular filtration rate and albumin-creatinine ratio. While higher risk was associated with lower estimated glomerular filtration rate and higher albumin-creatinine ratio in both sexes, the slope of the risk relationship for all-cause mortality and for cardiovascular mortality were steeper in women than in men. Compared with an estimated glomerular filtration rate of 95, the adjusted hazard ratio for all-cause mortality at estimated glomerular filtration rate 45 was 1.32 (95% CI 1.08 to 1.61) in women and 1.22 (1.00 to 1.48) in men (Pinteractiondisease risk. Conclusions Both sexes face increased risk of all-cause mortality, cardiovascular mortality, and end stage renal disease with lower estimated glomerular filtration rates and higher albuminuria. These findings were robust across a large global consortium. PMID:23360717
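
    The pooling step in a study like this is a random-effects meta-analysis of cohort-level estimates; a minimal DerSimonian-Laird sketch on the log hazard-ratio scale is shown below with illustrative inputs (not the consortium's data or its exact two-stage model).

```python
import numpy as np

def dersimonian_laird(log_hr, se):
    """Random-effects pooling of study-level log hazard ratios using the
    DerSimonian-Laird moment estimator of the between-study variance tau^2."""
    log_hr, se = np.asarray(log_hr, float), np.asarray(se, float)
    w = 1.0 / se ** 2                                # fixed-effect weights
    fixed = np.sum(w * log_hr) / np.sum(w)
    q = np.sum(w * (log_hr - fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_hr) - 1)) / c)
    w_re = 1.0 / (se ** 2 + tau2)
    pooled = np.sum(w_re * log_hr) / np.sum(w_re)
    pooled_se = np.sqrt(1.0 / np.sum(w_re))
    ci = (np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se))
    return np.exp(pooled), ci, tau2

# Illustrative cohort-level hazard ratios and standard errors (not the consortium's data)
hr = np.array([1.35, 1.20, 1.45, 1.10, 1.30])
se = np.array([0.10, 0.08, 0.15, 0.12, 0.09])
print(dersimonian_laird(np.log(hr), se))
```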

  4. Measurement of Carbon Fixation Rates in Leaf Samples — Use of carbon-14 labeled sodium bicarbonate to estimate photosynthetic rates

    OpenAIRE

    sprotocols

    2014-01-01

    Author: David R. Caprette. Generation of a Light Curve: To address the hypothesis concerning photosynthetic efficiency it is necessary to expose sun and shade leaves to a range of light intensities long enough for them to fix significant amounts of carbon. It is necessary to expose identical surface areas under favorable conditions which are identical for all leaves except for light intensity (the experimental variable). A means of measuring the rate of carbon fixation is also neces...

  5. An assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue, Ireland

    Science.gov (United States)

    Harrington, Seán T.; Harrington, Joseph R.

    2013-03-01

    This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the South of Ireland, are underlain by sandstone, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by applying a power curve to normal data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between the years 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are estimated and compared to the turbidity based loads for each river. Loads are also estimated using stage and seasonally separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of - 76% to + 63% are found on the River Bandon with errors of - 65% to + 359% found on the River Owenabue. The average monthly error on the River Bandon is - 12% with an average error of + 87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on
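
    A sketch of the rating-curve approach assessed above: fit ln(C) = ln(a) + b·ln(Q) to the paired samples, optionally apply a smearing-type back-transformation correction, and integrate C·Q over the continuous flow record. All data below are synthetic.

```python
import numpy as np

def fit_rating_curve(q_paired, c_paired):
    """Fit ln(C) = ln(a) + b*ln(Q) by least squares; the exp(s^2/2) factor is a
    standard smearing-type correction for back-transforming from log space."""
    x, y = np.log(q_paired), np.log(c_paired)
    b, ln_a = np.polyfit(x, y, deg=1)
    s2 = (y - (ln_a + b * x)).var(ddof=2)
    return np.exp(ln_a), b, np.exp(s2 / 2.0)

def suspended_load_tonnes(q_m3s, a, b, dt_seconds, bias=1.0):
    """Load = sum of C(Q)*Q*dt over the continuous (e.g. 15 min) flow record.
    With C in mg/L = g/m^3, C*Q*dt is in grams; divide by 1e6 for tonnes."""
    c = bias * a * q_m3s ** b
    return np.sum(c * q_m3s * dt_seconds) / 1e6

# Synthetic paired samples, then a year of 15-minute flows (illustration only)
rng = np.random.default_rng(2)
q_paired = rng.lognormal(1.5, 0.6, 120)                          # m^3/s
c_paired = 5.0 * q_paired ** 1.3 * rng.lognormal(0.0, 0.3, 120)  # mg/L
a, b, bias = fit_rating_curve(q_paired, c_paired)
q_15min = rng.lognormal(1.5, 0.6, 4 * 24 * 365)
print(a, b, bias, round(suspended_load_tonnes(q_15min, a, b, 900.0, bias)))
```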

  6. Development of the town data base: Estimates of exposure rates and times of fallout arrival near the Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, C.B.; McArthur, R.D. [Univ. and Community College System of Nevada, Las Vegas, NV (United States); Hutchinson, S.W. [Mead Johnson Nutritional Group, Evansville, IN (United States)

    1994-09-01

    As part of the U.S. Department of Energy's Off-Site Radiation Exposure Review Project, the time of fallout arrival and the H+12 exposure rate were estimated for populated locations in Arizona, California, Nevada, and Utah that were affected by fallout from one or more nuclear tests at the Nevada Test Site. Estimates of exposure rate were derived from measured values recorded before and after each test by fallout monitors in the field. The estimate for a given location was obtained by retrieving from a data base all measurements made in the vicinity, decay-correcting them to H+12, and calculating an average. Estimates were also derived from maps produced after most events that show isopleths of exposure rate and time of fallout arrival. Both sets of isopleths on these maps were digitized, and kriging was used to interpolate values at the nodes of a 10-km grid covering the pattern. The values at any location within the grid were then estimated from the values at the surrounding grid nodes. Estimates of dispersion (standard deviation) were also calculated. The Town Data Base contains the estimates for all combinations of location and nuclear event for which the estimated mean H+12 exposure rate was greater than three times background. A listing of the data base is included as an appendix. The information was used by other project task groups to estimate the radiation dose that off-site populations and individuals may have received as a result of exposure to fallout from Nevada nuclear tests.
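
    A sketch of the decay-correct-and-average step described above. The abstract does not state the decay model, so the common t^-1.2 approximation for fission-product fallout is assumed here; the readings are illustrative.

```python
import numpy as np

def to_h_plus_12(exposure_rate_mr_h, hours_after_shot, exponent=1.2):
    """Decay-correct a field reading to H+12 hours assuming R(t) ~ t^-n for
    fission-product fallout (n = 1.2 is a common approximation; the project
    reports should be consulted for the model actually used)."""
    t = np.asarray(hours_after_shot, dtype=float)
    return np.asarray(exposure_rate_mr_h, dtype=float) * (t / 12.0) ** exponent

def h12_estimate(readings_mr_h, hours_after_shot):
    """Average (and spread) of nearby readings after correcting each to H+12."""
    corrected = to_h_plus_12(readings_mr_h, hours_after_shot)
    return corrected.mean(), corrected.std(ddof=1)

# Illustrative monitor readings (mR/h) taken 20-60 hours after a test
print(h12_estimate([3.2, 1.9, 1.1], [20.0, 36.0, 60.0]))
```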

  7. Role of Negative Orbit Vector in Orbital Blow-Out Fractures.

    Science.gov (United States)

    Choi, Soo Youn; Lee, Hwa; Baek, Sehyun

    2017-11-01

    A negative orbit vector is defined as the condition in which the most anterior portion of the globe protrudes past the malar eminence. The aim of the study was to evaluate the relationship between a negative orbit vector and blow-out fracture location by analyzing the distances between the anterior corneal surface and the orbital bone and facial soft tissue in medial wall and orbital floor blow-out fractures using orbital computed tomography (CT). Seventy-seven patients diagnosed with blow-out fractures involving the medial wall or orbital floor were included. Distances from the anterior cornea to lower lid fat, inferior orbital wall, inferior orbital rim, and anterior cheek mass were measured using orbital CT scans. The proportion of negative orbit vectors and the measured distances were compared between medial wall fractures and orbital floor fractures. Medical records including age, sex, concomitant ophthalmic diagnosis, and nature of injury were retrospectively reviewed. Forty-three eyes from 43 patients diagnosed with medial wall fracture and 34 eyes from 34 patients diagnosed with orbital floor fracture were included. There was no significant difference in the distance from the anterior cornea to lower lid fat (P = 0.574), inferior orbital wall (P = 0.494), or orbital rim (P = 0.685). The distance from the anterior cornea to the anterior cheek mass was significantly different in medial wall fracture (-0.19 ± 3.49 mm) compared with orbital floor fracture (-1.69 ± 3.70 mm), P = 0.05. A negative orbit vector was significantly more frequent in orbital floor fracture patients (24 of 34 patients, 70.6%) than in those with medial wall fractures (19 of 43 patients, 44.2%) (P = 0.04). Patients with a negative orbit vector, in whom the most anterior portion of the globe protruded past the anterior cheek mass and malar eminence, were more likely to develop an orbital floor fracture than a medial wall fracture.

  8. Estimating Population Turnover Rates by Relative Quantification Methods Reveals Microbial Dynamics in Marine Sediment.

    Science.gov (United States)

    Kevorkian, Richard; Bird, Jordan T; Shumaker, Alexander; Lloyd, Karen G

    2018-01-01

    The difficulty involved in quantifying biogeochemically significant microbes in marine sediments limits our ability to assess interspecific interactions, population turnover times, and niches of uncultured taxa. We incubated surface sediments from Cape Lookout Bight, North Carolina, USA, anoxically at 21°C for 122 days. Sulfate decreased until day 68, after which methane increased, with hydrogen concentrations consistent with the predicted values of an electron donor exerting thermodynamic control. We measured turnover times using two relative quantification methods, quantitative PCR (qPCR) and the product of 16S gene read abundance and total cell abundance (FRAxC, which stands for "fraction of read abundance times cells"), to estimate the population turnover rates of uncultured clades. Most 16S rRNA reads were from deeply branching uncultured groups, and ∼98% of 16S rRNA genes did not abruptly shift in relative abundance when sulfate reduction gave way to methanogenesis. Uncultured Methanomicrobiales and Methanosarcinales increased at the onset of methanogenesis with population turnover times estimated from qPCR at 9.7 ± 3.9 and 12.6 ± 4.1 days, respectively. These were consistent with FRAxC turnover times of 9.4 ± 5.8 and 9.2 ± 3.5 days, respectively. Uncultured Syntrophaceae , which are possibly fermentative syntrophs of methanogens, and uncultured Kazan-3A-21 archaea also increased at the onset of methanogenesis, with FRAxC turnover times of 14.7 ± 6.9 and 10.6 ± 3.6 days. Kazan-3A-21 may therefore either perform methanogenesis or form a fermentative syntrophy with methanogens. Three genera of sulfate-reducing bacteria, Desulfovibrio , Desulfobacter , and Desulfobacterium , increased in the first 19 days before declining rapidly during sulfate reduction. We conclude that population turnover times on the order of days can be measured robustly in organic-rich marine sediment, and the transition from sulfate-reducing to methanogenic conditions stimulates

  9. Greenhouse gas emission rate estimates from airborne remote sensing in the short-wave infrared

    Energy Technology Data Exchange (ETDEWEB)

    Krings, Thomas

    2013-01-30

    The quantification of emissions of the greenhouse gases carbon dioxide (CO{sub 2}) and methane (CH{sub 4}) is essential for attributing the roles of anthropogenic activity and natural phenomena in global climate change. The current measurement systems and networks, whilst having improved during the last decades, are deficient in many respects. For example, the emissions from localised and point sources such as fossil fuel exploration sites are not readily assessed. A tool developed to better understand point sources of CO{sub 2} and CH{sub 4} is the optical remote sensing instrument MAMAP, operated from aircraft. With a ground scene size of the order of 50m and a relative accuracy of the column-averaged dry air mole fractions of about 0.3% for XCO{sub 2} and less than 0.4% for XCH{sub 4}, MAMAP can make a significant contribution in this respect. Detailed sensitivity studies showed that the modified WFM-DOAS retrieval algorithm used for MAMAP has an approximate accuracy of about 0.24% for XCH{sub 4} and XCO{sub 2} in typical atmospheric conditions. At the example of CO{sub 2} plumes from two different power plants and CH{sub 4} plumes from coal mine ventilation shafts, two inversion approaches to obtain emission rates were developed and tested. One is based on an optimal estimation scheme to fit Gaussian plume models from multiple sources to the data and the other is based on a simple Gaussian integral method. Compared to CO{sub 2} emission estimates as reported by the power plants' operator within the framework of emission databases (24 and 13 MtCO{sub 2} yr{sup -1}), the results of the individual inversion techniques were within ±10% with uncertainties of ±20-30% mainly due to insufficient wind information and non-stationary atmospheric conditions. Measurements at the coal mine included on-site wind observations by an aircraft turbulence probe that could be utilised to calibrate the wind model. In this case, the inversion results have a bias of less than 1
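
    The "simple Gaussian integral method" mentioned above is essentially a cross-plume mass balance: integrate the column enhancement across the plume and multiply by wind speed. A sketch under that reading, with illustrative (non-MAMAP) numbers:

```python
import numpy as np

def emission_rate_kg_s(column_enhancement_kg_m2, across_plume_spacing_m, wind_speed_m_s):
    """Cross-plume mass-balance estimate of a point-source emission:
        Q = u * sum(delta_Omega_i * delta_y_i),
    with the column enhancement expressed as mass per unit area (kg/m^2)."""
    d_omega = np.asarray(column_enhancement_kg_m2, dtype=float)
    dy = np.asarray(across_plume_spacing_m, dtype=float)
    return wind_speed_m_s * np.sum(d_omega * dy)

# Illustrative values for one downwind transect (not MAMAP data):
# ~50 m ground scenes, CO2 column enhancements up to ~0.2 kg/m^2 near the source
dy = np.full(12, 50.0)                                        # m
d_omega = np.array([0.005, 0.02, 0.06, 0.13, 0.19, 0.21,
                    0.17, 0.11, 0.05, 0.02, 0.005, 0.0])      # kg CO2 / m^2
q = emission_rate_kg_s(d_omega, dy, wind_speed_m_s=5.0)
print(f"{q:.1f} kg/s  ~ {q * 3.156e7 / 1e9:.1f} MtCO2/yr")
```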

  10. Probabilistic exposure assessment model to estimate aseptic-UHT product failure rate.

    Science.gov (United States)

    Pujol, Laure; Albert, Isabelle; Magras, Catherine; Johnson, Nicholas Brian; Membré, Jeanne-Marie

    2015-01-02

    Aseptic-Ultra-High-Temperature (UHT) products are manufactured to be free of microorganisms capable of growing in the food at normal non-refrigerated conditions at which the food is likely to be held during manufacture, distribution and storage. Two important phases within the process are widely recognised as critical in controlling microbial contamination: the sterilisation steps and the following aseptic steps. Of the microbial hazards, the pathogen spore formers Clostridium botulinum and Bacillus cereus are deemed the most pertinent to be controlled. In addition, due to a relatively high thermal resistance, Geobacillus stearothermophilus spores are considered a concern for spoilage of low acid aseptic-UHT products. A probabilistic exposure assessment model has been developed in order to assess the aseptic-UHT product failure rate associated with these three bacteria. It was a Modular Process Risk Model, based on nine modules. They described: i) the microbial contamination introduced by the raw materials, either from the product (i.e. milk, cocoa and dextrose powders and water) or the packaging (i.e. bottle and sealing component), ii) the sterilisation processes, of either the product or the packaging material, iii) the possible recontamination during subsequent processing of both product and packaging. The Sterility Failure Rate (SFR) was defined as the sum of bottles contaminated for each batch, divided by the total number of bottles produced per process line run (10(6) batches simulated per process line). The SFR associated with the three bacteria was estimated at the last step of the process (i.e. after Module 9) but also after each module, allowing for the identification of modules, and responsible contamination pathways, with higher or lower intermediate SFR. The model contained 42 controlled settings associated with factory environment, process line or product formulation, and more than 55 probabilistic inputs corresponding to inputs with variability

  11. A Systematic Review and Meta-Analysis Estimating the Expected Dropout Rates in Randomized Controlled Trials on Yoga Interventions

    Directory of Open Access Journals (Sweden)

    Holger Cramer

    2016-01-01

    Full Text Available A reasonable estimation of expected dropout rates is vital for adequate sample size calculations in randomized controlled trials (RCTs). Underestimating expected dropout rates increases the risk of false negative results while overestimating rates results in overly large sample sizes, raising both ethical and economic issues. To estimate expected dropout rates in RCTs on yoga interventions, MEDLINE/PubMed, Scopus, IndMED, and the Cochrane Library were searched through February 2014; a total of 168 RCTs were meta-analyzed. Overall dropout rate was 11.42% (95% confidence interval [CI] = 10.11%, 12.73%) in the yoga groups; rates were comparable in usual care and psychological control groups and were slightly higher in exercise control groups (rate = 14.53%; 95% CI = 11.56%, 17.50%; odds ratio = 0.82; 95% CI = 0.68, 0.98; p=0.03). For RCTs with durations above 12 weeks, dropout rates in yoga groups increased to 15.23% (95% CI = 11.79%, 18.68%). The upper border of 95% CIs for dropout rates commonly was below 20% regardless of study origin, health condition, gender, age groups, and intervention characteristics; however, it exceeded 40% for studies on HIV patients or heterogeneous age groups. In conclusion, dropout rates can be expected to be less than 15 to 20% for most RCTs on yoga interventions. Yet dropout rates beyond 40% are possible depending on the participants’ sociodemographic and health condition.

  12. Clinical predictors of the estimated glomerular filtration rate 1 year after radical nephrectomy in Japanese patients

    Directory of Open Access Journals (Sweden)

    Shuichi Shimada

    2017-07-01

    Full Text Available Purpose: To evaluate renal function 1 year after radical nephrectomy (RN) for renal cell carcinoma, the preoperative predictors of postnephrectomy renal function were investigated by sex, and equations to predict the estimated glomerular filtration rate (eGFR) 1 year after RN were developed. Materials and Methods: A total of 525 patients who underwent RN between May 2007 and August 2011 at Tohoku University Hospital and its affiliated hospitals were prospectively evaluated. Overall, 422 patients were analyzed in this study. Results: Independent preoperative factors associated with postnephrectomy renal function were different in males and females. Preoperative eGFR, age, tumor size, and body mass index (BMI) were independent factors in males, while tumor size and BMI were not independent factors in females. The equations developed to predict eGFR 1 year after RN were: Predicted eGFR in males (mL/min/1.73 m2) = 27.99 - (0.196×age) + (0.497×eGFR) + (0.744×tumor size) - (0.339×BMI); and predicted eGFR in females = 44.57 - (0.275×age) + (0.298×eGFR). The equations were validated in the validation dataset (R2 = 0.63, p<0.0001 and R2 = 0.31, p<0.0001, respectively). Conclusions: The developed equations by sex enable better prediction of eGFR 1 year after RN. The equations will be useful for preoperative patient counseling and selection of the type of surgical procedure in elective partial or RN cases.
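
    The sex-specific prediction equations quoted in the abstract, implemented directly; tumor size is assumed to be in cm and BMI in kg/m2, since units are not stated in the abstract.

```python
def predicted_egfr_after_rn(sex, age, preop_egfr, tumor_size_cm=None, bmi=None):
    """Predicted eGFR (mL/min/1.73 m^2) one year after radical nephrectomy,
    using the sex-specific equations quoted in the abstract. Tumor size is
    assumed to be in cm and BMI in kg/m^2 (units are not stated in the abstract)."""
    if sex.lower() == "male":
        return (27.99 - 0.196 * age + 0.497 * preop_egfr
                + 0.744 * tumor_size_cm - 0.339 * bmi)
    return 44.57 - 0.275 * age + 0.298 * preop_egfr

# Examples: 65-year-old man (preop eGFR 75, 4 cm tumor, BMI 24); 60-year-old woman (preop eGFR 80)
print(round(predicted_egfr_after_rn("male", 65, 75, tumor_size_cm=4.0, bmi=24.0), 1))
print(round(predicted_egfr_after_rn("female", 60, 80), 1))
```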

  13. An epidemiology-based model to estimate the rate of inappropriateness of tumor marker requests.

    Science.gov (United States)

    Gion, Massimo; Franceschini, Roberta; Rosin, Claudia; Trevisiol, Chiara; Peloso, Lucia; Zappa, Marco; Fabricio, Aline S C

    2014-06-01

    Appropriateness of tumor markers (TMs) has been retrospectively studied in limited patients' series, matching the requests to clinical records. Methods to monitor appropriateness suitable for use on a large scale are required. This study aims to establish and validate an innovative model to estimate appropriateness based on the comparison between the number of TMs requested and the expected requests inferred from epidemiological data. The number of CA15.3, CA19.9 and CA125 requests theoretically expected according to the epidemiology of malignancies in a known geographic area (2 Italian regions) was compared with the number of TMs actually requested - the surveyed requests projected on a regional scale - during a given time span (1 year). The expected number of requests was calculated comparing TMs recommended by guidelines in different clinical scenarios with the prevalence or incidence figures of the examined diseases (carcinomas of breast, pancreas and biliary tract, ovary and endometrium). Suitability of the model was demonstrated with the analysis of 1,891,070 TM requests surveyed in 66 laboratories from Veneto and Tuscany regions. The percentage difference over the total of expected TMs (delta%) ranged from -6.9% for CA15.3 to +1022.6% for CA19.9 in Veneto and from +35.7% for CA15.3 to +1842.6% for CA19.9 in Tuscany. The presented model was effective in demonstrating higher than expected TM request rates, possibly associated with inappropriate use. Moreover, it can be applied on a large scale survey setting since it circumvents the unavailability of clinical information on test orders.

  14. Correlation of Glomerular Filtration Rate Between Renal Scan and Estimation Equation for Patients With Scleroderma.

    Science.gov (United States)

    Suebmee, Patcharawan; Foocharoen, Chingching; Mahakkanukrauh, Ajanee; Suwannaroj, Siraphop; Theerakulpisut, Daris; Nanagara, Ratanavadee

    2016-08-01

    Renal involvement in scleroderma is life-threatening. Early detection of a deterioration of the glomerular filtration rate (GFR) is needed to preserve kidney function. To (A) determine the correlation between (1) estimated GFR (eGFR) using 4 different formulae and (2) measured GFR (mGFR) using isotopic renal scan in Thai patients with scleroderma with normal serum creatinine and (B) to define the factors influencing eGFR. A cross-sectional study was performed in adult Thai patients with scleroderma at Srinagarind Hospital, Khon Kaen University, between December 2013 and April 2015. GFR was measured using the gold standard Tc-99m DTPA (Tc-99m diethylenetriaminepentaacetic acid) renal scan. We compared the latter with the eGFR, calculated using the Cockcroft-Gault formula, Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation and creatinine clearance equation. A total of 76 patients with scleroderma (50 women and 26 men) with median age 54.8 years (interquartile range: 47.4 to 58.9) were enrolled. Mean disease duration was 5.6 ± 4.5 years. Median value of mGFR was 100.1 ± 27.6 mL/minute/1.73m². There was a correlation between mGFR from the Tc-99m DTPA renal scan and the eGFR using the Cockcroft-Gault formula, MDRD and CKD-EPI equation (P = 0.01). The Cockcroft-Gault formula, MDRD study equation and CKD-EPI were useful formulae for assessing GFR in Thai patients with scleroderma. Higher systolic blood pressure (SBP) was associated with a lower GFR. Copyright © 2016 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.

  15. A new serum cystatin C formula for estimating glomerular filtration rate in newborns.

    Science.gov (United States)

    Treiber, Milena; Pečovnik Balon, Breda; Gorenjak, Maksimiljan

    2015-08-01

    The levels of serum cystatin C (CysC) and creatinine (Cr) were determined in small-for-gestational-age (SGA) babies and compared with those for normal term newborns appropriate for gestational age (AGA), at birth and 3 days later. We then compared a number of cysC-based, Cr-based and combined formulas for estimation of glomerular filtration rate (GFR) with the neonatal reference GFR. Fifty full-term SGA and 50 AGA newborns were enrolled in the study. Kidney volume measurements were performed by ultrasound for each newborn. At birth, the mean level of CysC in SGA babies was 1.48 ± 0.30 mg/l in cord blood and 1.38 ± 0.18 mg/l in day 3 blood samples, and the mean Cr level, determined simultaneously, was 67.08 ± 17.62 and 55.62 ± 14.91 μmol/l, respectively. These levels did not differ significantly from those determined in AGA babies. A 10 % reduction in kidney volume was associated with an increase in CysC value of 9.3 % in cord blood. The Cr-based and Schwartz-combined equations underestimated GFR relative to CysC-based and Zappitelli-based equations at birth and 3 days later. A newly constructed Cys-C based formula which includes kidney volume and body surface area in the calculations for GFR is a reliable marker of GFR compared with neonatal reference clearance values.

  16. CONSERVATION DEVELOPMENT OF TIMOR DEER (Cervus timorensis) AS COMMERCIAL PURPOSE (WITH OPTIMISTIC RATE ESTIMATION)

    Directory of Open Access Journals (Sweden)

    N. Hanani

    2012-09-01

    Full Text Available The aim of this research was to determine the profit obtained from breeding of Timor deer commercially. This research was done in East Java. Survey method was used to answer the objective. The study location were selected by purposive sampling. Usually deer was develop in conservation area, but because the area was decrease so the number of deer also decrease. Model of deer raising development should be improved not only for conservation but also for commercial purpose. The optimum deer raising were considered and monitored with a purpose to maximize commercial Timor deer by using Multiple Objective Goal Programming (MOGP) to find the Optimistic Rate Estimation. The result of this study showed to get the optimum benefit, it had to be applied together with conservation and commercial effort at the same time. Results of study showed that profit was taken from selling velvet was 164.46%. Profits taken from selling antler was 350.56%, from selling alive deer was 394.28%, from selling recreation tickets was 259.08%, from selling venison was 135.98%, and from selling deer leather was 141.24%. Operational cost spent were 168.46% for feeding cost, 213.23% for maintenance cost, and 232.04% for labors’ salaries. The amount of operational cost required in MOGP model, with lower expenses and commercial priority were 185.54% for feeding cost, 253.13% for maintenance cost, and 246.95% for paying labors’ salaries. The MOGP model result with commercial priority reached 335.21%, while in MOGP model with lower costs and commercial priority gave profit for breeders up to 381.26%.

  17. CONSERVATION DEVELOPMENT OF TIMOR DEER (Cervus timorensis) AS COMMERCIAL PURPOSE (WITH OPTIMISTIC RATE ESTIMATION)

    Directory of Open Access Journals (Sweden)

    S.I. Santoso

    2014-10-01

    Full Text Available The aim of this research was to determine the profit obtained from breeding of Timor deer commercially. This research was done in East Java. Survey method was used to answer the objective. The study location were selected by purposive sampling. Usually deer was develop in conservation area, but because the area was decrease so the number of deer also decrease. Model of deer raising development should be improved not only for conservation but also for commercial purpose. The optimum deer raising were considered and monitored with a purpose to maximize commercial Timor deer by using Multiple Objective Goal Programming (MOGP) to find the Optimistic Rate Estimation. The result of this study showed to get the optimum benefit, it had to be applied together with conservation and commercial effort at the same time. Results of study showed that profit was taken from selling velvet was 164.46%. Profits taken from selling antler was 350.56%, from selling alive deer was 394.28%, from selling recreation tickets was 259.08%, from selling venison was 135.98%, and from selling deer leather was 141.24%. Operational cost spent were 168.46% for feeding cost, 213.23% for maintenance cost, and 232.04% for labors’ salaries. The amount of operational cost required in MOGP model, with lower expenses and commercial priority were 185.54% for feeding cost, 253.13% for maintenance cost, and 246.95% for paying labors’ salaries. The MOGP model result with commercial priority reached 335.21%, while in MOGP model with lower costs and commercial priority gave profit for breeders up to 381.26%.

  18. [Carotid intima-media thickness and estimated glomerular filtration rate in hypertensive patients].

    Science.gov (United States)

    Yang, Pingting; Yuan, Hong; Weng, Chunyan; Wang, Yaqin; Cao, Xia; Chen, Zhiheng

    2014-05-01

    To determine the association between carotid atherosclerosis and renal function in hypertensive patients. A total of 2 809 hypertensive patients aged (56.59±10.79) years were enrolled. Carotid intima-media thickness (cIMT) was derived via B-mode ultrasonography and chronic kidney disease (CKD) was evaluated by the estimated glomerular filtration rate (eGFR) with Cockcroft-Gault method. The patients were divided into 3 groups: a normal group, a thick group, and a plaque group according to the results of carotid ultrasonography. The eGFR of the normal group was (111.09±25.61) mL/(min·1.73 m2), that of the thick group and the plaque group was (94.45±27.14) mL/(min·1.73 m2) and (85.98±26.92) mL/(min·1.73 m2). Binary logistic analysis showed that age (OR=3.590), smoking status (OR=1.543), systolic blood pressure (OR=1.018), diastolic blood pressure (OR=0.977), fasting plasma glucose (OR=1.132), triglyceride (OR=0.873) and eGFR (OR=0.986) were significantly correlated with cIMT. Subgroup analyses on different genders showed that eGFR was a significant independent risk factor in men (OR=0.991) but not in women. The thicker the cIMT, the lower the eGFR in hypertensive patients. With the development of cIMT, eGFR gradually decreases and contributes to the occurrence and development of early-stage atherosclerosis in hypertensive patients.

  19. A new approach to the "apparent survival" problem: estimating true survival rates from mark-recapture studies.

    Science.gov (United States)

    Gilroy, James J; Virzi, Thomas; Boulton, Rebecca L; Lockwood, Julie L

    2012-07-01

    Survival estimates generated from live capture-mark-recapture studies may be negatively biased due to the permanent emigration of marked individuals from the study area. In the absence of a robust analytical solution, researchers typically sidestep this problem by simply reporting estimates using the term "apparent survival." Here, we present a hierarchical Bayesian multistate model designed to estimate true survival by accounting for predicted rates of permanent emigration. Initially we use dispersal kernels to generate spatial projections of dispersal probability around each capture location. From these projections, we estimate emigration probability for each marked individual and use the resulting values to generate bias-adjusted survival estimates from individual capture histories. When tested using simulated data sets featuring variable detection probabilities, survival rates, and dispersal patterns, the model consistently eliminated negative biases shown by apparent survival estimates from standard models. When applied to a case study concerning juvenile survival in the endangered Cape Sable Seaside Sparrow (Ammodramus maritimus mirabilis), bias-adjusted survival estimates increased more than twofold above apparent survival estimates. Our approach is applicable to any capture-mark-recapture study design and should be particularly valuable for organisms with dispersive juvenile life stages.
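
    The core bias adjustment described above can be written as apparent survival = true survival × fidelity, where fidelity is the probability of not permanently emigrating; the sketch below estimates fidelity from an assumed exponential dispersal kernel over a simplified rectangular study area. It illustrates the adjustment only, not the authors' hierarchical Bayesian multistate model.

```python
import numpy as np

def fidelity_probability(x, y, x_max, y_max, mean_dispersal_m, n_sim=100000, seed=0):
    """Probability that an individual marked at (x, y) inside the rectangular
    study area [0, x_max] x [0, y_max] remains inside after dispersing, assuming
    an exponential dispersal-distance kernel and a uniform dispersal direction."""
    rng = np.random.default_rng(seed)
    d = rng.exponential(mean_dispersal_m, n_sim)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_sim)
    nx, ny = x + d * np.cos(theta), y + d * np.sin(theta)
    inside = (nx >= 0) & (nx <= x_max) & (ny >= 0) & (ny <= y_max)
    return inside.mean()

def true_survival(apparent_survival, fidelity):
    """Apparent survival = true survival * fidelity  =>  S = phi_apparent / F."""
    return apparent_survival / fidelity

# Illustrative numbers only (not the Cape Sable seaside sparrow analysis)
F = fidelity_probability(x=500.0, y=400.0, x_max=2000.0, y_max=1500.0,
                         mean_dispersal_m=600.0)
print(round(F, 2), round(true_survival(apparent_survival=0.25, fidelity=F), 2))
```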

  20. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.

  1. Aspartic acid racemization rate in narwhal (Monodon monoceros) eye lens nuclei estimated by counting of growth layers in tusks

    Directory of Open Access Journals (Sweden)

    Eva Garde

    2012-11-01

    Full Text Available Ages of marine mammals have traditionally been estimated by counting dentinal growth layers in teeth. However, this method is difficult to use on narwhals (Monodon monoceros) because of their special tooth structures. Alternative methods are therefore needed. The aspartic acid racemization (AAR) technique has been used in age estimation studies of cetaceans, including narwhals. The purpose of this study was to estimate a species-specific racemization rate for narwhals by regressing aspartic acid D/L ratios in eye lens nuclei against growth layer groups in tusks (n=9). Two racemization rates were estimated: one by linear regression (r²=0.98), based on the assumption that age was known without error, and one based on a bootstrap study taking into account the uncertainty in the age estimation (r² between 0.88 and 0.98). The two estimated 2kAsp values were identical to two significant figures. The 2kAsp value from the bootstrap study was 0.00229±0.000089 SE, which corresponds to a racemization rate of 0.00114 yr⁻¹±0.000044 SE. The intercept of 0.0580±0.00185 SE corresponds to twice the (D/L)0 value, which is then 0.0290±0.00093 SE. We propose that this species-specific racemization rate and (D/L)0 value be used in future AAR age estimation studies of narwhals, but also recommend the collection of tusks and eyes of narwhals to further improve the (D/L)0 and 2kAsp estimates obtained in this study.
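
    The regression step can be illustrated with synthetic numbers chosen to mimic the reported estimates. The ln[(1+D/L)/(1−D/L)] transform below is the standard AAR relation and is assumed here, since the record does not spell out the exact regression form; its slope estimates 2kAsp and, because ln[(1+x)/(1−x)] ≈ 2x for small x, its intercept approximates twice the (D/L)0 value.

```python
import numpy as np

# Hedged sketch with synthetic data (not the study's measurements): regress the
# usual AAR transform ln[(1+D/L)/(1-D/L)] on tusk growth-layer-group age. The
# slope estimates 2k_Asp and the intercept approximates 2*(D/L)0.
rng = np.random.default_rng(2)
glg_age = np.array([4.0, 8.0, 12.0, 18.0, 25.0, 32.0, 40.0, 48.0, 55.0])   # years
true_transform = 0.058 + 0.00229 * glg_age                # chosen to mimic reported values
dl_ratio = np.tanh(true_transform / 2.0) + rng.normal(0.0, 5e-4, glg_age.size)

y = np.log((1.0 + dl_ratio) / (1.0 - dl_ratio))
slope, intercept = np.polyfit(glg_age, y, 1)
print(f"2k_Asp ~ {slope:.5f} per yr, (D/L)0 ~ {intercept / 2.0:.4f}")

# Inverting the same relation gives an AAR age estimate for a new eye lens nucleus:
new_dl = 0.075
age_est = (np.log((1.0 + new_dl) / (1.0 - new_dl)) - intercept) / slope
print(f"estimated age ~ {age_est:.1f} yr")
```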

  2. The Use of Temperature Profiles Through Unsaturated Soils to Estimate Short-term Rates of Natural Groundwater Recharge

    Science.gov (United States)

    Dripps, W. R.; Anderson, M. P.; Hunt, R. J.

    2001-05-01

    It has long been recognized that infiltration influences the vertical subsurface temperature profile. Many researchers have used changes in the temperature profile to quantify the rate of groundwater flow in saturated systems, e.g. in wetlands and streambeds. Others have considered coupled heat and flow transport through the unsaturated zone, but we are aware of only two groups (Taniguchi and Sharma, 1993; Tabbagh et al., 1999) who have previously used temperature profiles through the unsaturated zone to estimate rates of areally extensive groundwater recharge. Both groups looked at seasonal changes in soil temperature to estimate annual recharge rates, but no attempt was made to analyze individual recharge events. We collected hourly soil temperature measurements through the unsaturated zone at depths of 0.05, 0.2, 0.5, 1, and 3 meters at a site in the Trout Lake basin of northern Wisconsin for the past two years. VS2DH (Healy and Ronan, 1996), a two-dimensional, numerical, coupled heat and water flow model for variably saturated media, was used to simulate the thermocouple data and estimate rates of recharge for individual recharge events. Field studies including Guelph permeameter tests, slug tests, grain size analyses, bulk density estimates, and lab measurements of soil moisture characteristic curves were used to estimate the necessary hydraulic and thermal parameters for the model. The field data were supplemented by values reported in the literature, and further refined during model calibration. Recharge rates computed using temperature data were compared with estimates based on water level fluctuations and estimates obtained using simple water balance modeling.
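
    As a point of comparison, the water-level-fluctuation estimate mentioned at the end of the abstract is simple enough to sketch. The specific yield and the hourly water-table depths below are illustrative assumptions; this is not the VS2DH heat- and flow-model calibration itself.

```python
import numpy as np

# Hedged sketch of the water-table fluctuation check (not the VS2DH simulation):
# for a single recharge event, recharge ~ specific yield * rise in the water table.
specific_yield = 0.22                                   # assumed value for sandy outwash
depth_to_water_m = np.array([4.310, 4.308, 4.295, 4.262, 4.245, 4.238])  # hourly readings

rise_m = depth_to_water_m[0] - depth_to_water_m.min()   # depth decreases as the table rises
recharge_mm = specific_yield * rise_m * 1000.0
print(f"event recharge ~ {recharge_mm:.1f} mm")
```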

  3. A practical approach to parameter estimation applied to model predicting heart rate regulation

    DEFF Research Database (Denmark)

    Olufsen, Mette; Ottesen, Johnny T.

    2013-01-01

    ...Knowledge of variation in parameters within and between groups of subjects has the potential to provide insight into biological function. Often it is not possible to estimate all parameters in a given model, in particular if the model is complex and the data are sparse. However, it may be possible to estimate...
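
    A generic, hedged illustration of the point about estimating only an identifiable subset of parameters: in the sketch below, the first-order response model, the fixed baseline rate, and the synthetic observations are assumptions and bear no relation to the authors' cardiovascular model; only two parameters (a gain and a time constant) are fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged, generic sketch (not the authors' model): with sparse heart-rate data it
# is often only feasible to estimate a small, identifiable parameter subset. Here
# a gain and a time constant of a first-order step response are fit by nonlinear
# least squares while the baseline rate is held fixed at a nominal value.
H0 = 70.0                                          # fixed baseline (beats/min), assumed known

def model(t, gain, tau):
    return H0 + gain * (1.0 - np.exp(-t / tau))    # response to a step stimulus at t = 0

t_data = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])        # sparse sampling times (s)
hr_data = np.array([70.2, 76.5, 81.0, 85.8, 88.9, 89.7])     # synthetic observations (beats/min)

(gain_hat, tau_hat), cov = curve_fit(model, t_data, hr_data, p0=[15.0, 10.0])
se = np.sqrt(np.diag(cov))
print(f"gain ~ {gain_hat:.1f} bpm (SE {se[0]:.2f}), tau ~ {tau_hat:.1f} s (SE {se[1]:.2f})")
```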

  4. Comparing Evapotranspiration Rates Estimated from Atmospheric Flux and TDR Soil Moisture Measurements

    DEFF Research Database (Denmark)

    Schelde, Kirsten; Ringgaard, Rasmus; Herbst, Mathias

    2011-01-01

    Measurements of water vapor fluxes using eddy covariance (EC) and measurements of root zone soil moisture depletion using time domain reflectometry (TDR) represent two independent approaches to estimating evapotranspiration. This study investigated the possibility of using TDR to provide a lower...... limit estimate (disregarding dew evaporation) of evapotranspiration on dry days. During a period of 7 wk, the two independent measuring techniques were applied in a barley (Hordeum vulgare L.) field, and six dry periods were identified. Measurements of daily root zone soil moisture depletion were...... compared with daily estimates of water vapor loss. During the first dry periods, agreement between the two approaches was good, with average daily deviation between estimates below 1.0 mm d−1. Toward the end of the measurement period, the estimates of the two techniques tended to deviate due to different...
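
    A minimal sketch of the two independent daily estimates being compared is given below, with synthetic numbers; the effective root-zone depth, the latent heat of vaporization, and the half-hourly flux series are assumptions.

```python
import numpy as np

# Hedged sketch of the comparison (synthetic numbers): the TDR-based lower-bound ET
# is the daily root-zone storage depletion, while the eddy-covariance ET converts
# the accumulated latent heat flux into an equivalent depth of water.
root_zone_depth_mm = 500.0                              # assumed effective root zone
theta = np.array([0.262, 0.254])                        # volumetric water content, day start and end
et_tdr = (theta[0] - theta[1]) * root_zone_depth_mm     # mm/day; ignores dew and drainage

latent_heat_j_per_kg = 2.45e6                           # value near 20 degC
le_flux_w_m2 = np.full(48, 115.0)                       # half-hourly EC latent heat flux
et_ec = le_flux_w_m2.sum() * 1800.0 / latent_heat_j_per_kg   # kg/m2 = mm of water per day
print(f"ET(TDR) ~ {et_tdr:.1f} mm/day vs ET(EC) ~ {et_ec:.1f} mm/day")
```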

  5. Kinetic Glomerular Filtration Rate Estimation Compared With Other Formulas for Evaluating Acute Kidney Injury Stage Early After Kidney Donation.

    Science.gov (United States)

    Hekmat, Reza; Eshraghi, Hamid; Esmailpour, Maryam; Hassankhani, Golnaz Ghayyem

    2017-02-01

    Kinetic glomerular filtration rate estimation may have more power and versatility than the Modification of Diet in Renal Disease or Cockcroft-Gault formula for evaluating kidney function when plasma creatinine fluctuates rapidly. After kidney donation, glomerular filtration rate rapidly fluctuates in otherwise healthy patients. We compared 3 formulas for estimating glomerular filtration rate: kinetic, Modification of Diet in Renal Disease, and Cockcroft-Gault, for determining stages of acute kidney injury early after kidney donation. In 42 living kidney donors, we measured serum creatinine, cystatin C, neutrophil gelatinase-associated lipocalin, and glomerular filtration rates before uninephrectomy and 3 days afterward. To estimate glomerular filtration rate, we used Cockcroft-Gault, Modification of Diet in Renal Disease, and kinetic equations. We sought the most accurate formula for staging acute kidney injury according to the Risk, Injury, Failure, Loss, and End-stage (RIFLE) criteria. The kinetic glomerular filtration rate model found more cases of stage 3 acute kidney injury than did the Modification
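
    A sketch of the kinetic estimate alongside Cockcroft-Gault follows. It assumes one widely cited kinetic-eGFR formulation and an assumed maximum daily creatinine rise, and it is an illustration rather than the study's exact computation.

```python
# Hedged sketch: a widely cited kinetic-eGFR formulation layered on Cockcroft-Gault.
# The maximum daily creatinine rise is an assumed constant, and the pre-donation
# creatinine is taken as the steady-state value.
def cockcroft_gault(age_yr, weight_kg, scr_mg_dl, female):
    crcl = (140.0 - age_yr) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def kinetic_egfr(scr_ss, scr_now, hours_between, age_yr, weight_kg, female,
                 max_rise_per_day=1.5):              # assumed max rise (mg/dL/day) at zero GFR
    mean_scr = (scr_ss + scr_now) / 2.0
    ss_crcl = cockcroft_gault(age_yr, weight_kg, scr_ss, female)
    correction = 1.0 - 24.0 * (scr_now - scr_ss) / (hours_between * max_rise_per_day)
    return ss_crcl * scr_ss / mean_scr * correction

# Donor example: creatinine rises from 0.9 to 1.4 mg/dL over the 72 h after uninephrectomy
print(round(kinetic_egfr(0.9, 1.4, 72.0, age_yr=45, weight_kg=70, female=False), 1))
```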