WorldWideScience

Sample records for estimating blowout rates

  1. Guidelines for estimating blowout rates and duration for environmental risk analysis purposes; Retningslinjer for beregning av utblaasningsrater og -varighet til bruk ved analyse av miljoerisiko

    Energy Technology Data Exchange (ETDEWEB)

    Nilsen, Thomas

    2007-01-15

Risk assessment in relation to possible blowouts of oil and condensate is the main topic of this analysis. The estimated risk is evaluated against criteria for acceptable risk and forms part of the basis for dimensioning oil spill preparedness. The report aims to contribute to the standardisation of the terminology, methodology and documentation for estimation of blowout rates, and thus to simplify communication of the analysis results and strengthen the decision makers' trust in these.

  2. Well blowout rates in California Oil and Gas District 4--Update and Trends

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Preston D.; Benson, Sally M.

    2009-10-01

    Well blowouts are one type of event in hydrocarbon exploration and production that generates health, safety, environmental and financial risk. Well blowouts are variously defined as 'uncontrolled flow of well fluids and/or formation fluids from the wellbore' or 'uncontrolled flow of reservoir fluids into the wellbore'. Theoretically this is irrespective of flux rate and so would include low fluxes, often termed 'leakage'. In practice, such low-flux events are not considered well blowouts. Rather, the term well blowout applies to higher fluxes that rise to attention more acutely, typically in the order of seconds to days after the event commences. It is not unusual for insurance claims for well blowouts to exceed US$10 million. This does not imply that all blowouts are this costly, as it is likely claims are filed only for the most catastrophic events. Still, insuring against the risk of loss of well control is the costliest in the industry. The risk of well blowouts was recently quantified from an assembled database of 102 events occurring in California Oil and Gas District 4 during the period 1991 to 2005, inclusive. This article reviews those findings, updates them to a certain extent and compares them with other well blowout risk study results. It also provides an improved perspective on some of the findings. In short, this update finds that blowout rates have remained constant from 2005 to 2008 within the limits of resolution and that the decline in blowout rates from 1991 to 2005 was likely due to improved industry practice.

  3. Risk assessment for SAGD well blowouts

    Energy Technology Data Exchange (ETDEWEB)

Worth, D.; Alhanati, F.; Lastiwka, M. [C-FER Technologies, Edmonton, AB (Canada)]; Crepin, S. [Petrocedeno, Caracas (Venezuela)]

    2008-10-15

    This paper discussed a steam assisted gravity drainage (SAGD) pilot project currently being conducted in Venezuela's Orinoco Belt. A risk assessment was conducted as part of the pilot program in order to evaluate the use of single barrier completions in conjunction with a blowout response plan. The study considered 3 options: (1) an isolated double barrier completion with a downhole safety valve (DHSV) in the production tubing string and a packer in the production casing annulus; (2) a partially isolated completion with no DHSV and a packer in the production casing annulus; and (3) an open single barrier completion with no additional downhole barriers. A reservoir model was used to assess the blowout flowing potential of SAGD well pairs. The probability of a blowout was estimated using fault tree analysis techniques. Risk was determined for various blowout scenarios, including blowouts during normal and workover operations, as well as blowouts through various flow paths. Total risk for each completion scenario was also determined at 3 different time periods within the production life of the well pair. The possible consequences of a blowout were assessed using quantitative consequence models. Results of the study showed that environmental and economic risks were much higher for the open completion technique. Higher risks were also associated with the earlier life of the completion strings. 20 refs., 3 tabs., 19 figs.
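
    The fault tree approach mentioned above combines basic-event probabilities through AND/OR gates into a top-event (blowout) probability. As a rough illustration of that arithmetic only, the sketch below evaluates a two-gate tree with entirely hypothetical failure probabilities; it is not drawn from the study's actual fault trees.

    ```python
    # Minimal fault-tree arithmetic sketch (hypothetical numbers, not from the study).
    def or_gate(*probs):
        """Probability that at least one independent basic event occurs."""
        q = 1.0
        for p in probs:
            q *= (1.0 - p)
        return 1.0 - q

    def and_gate(*probs):
        """Probability that all independent basic events occur."""
        q = 1.0
        for p in probs:
            q *= p
        return q

    # Hypothetical example: a blowout requires loss of the single downhole barrier
    # AND failure of the surface response, each of which can happen for several reasons.
    p_barrier_loss = or_gate(1e-3, 5e-4)    # e.g. packer leak OR tubing failure (assumed values)
    p_response_fail = or_gate(1e-2, 2e-3)   # e.g. detection miss OR shut-in failure (assumed values)
    p_blowout = and_gate(p_barrier_loss, p_response_fail)
    print(f"Illustrative top-event probability: {p_blowout:.2e} per operation")
    ```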

  4. Offshore Blowouts, Causes and Trends

    Energy Technology Data Exchange (ETDEWEB)

Holand, P.

    1996-02-01

The main objective of this doctoral thesis was to establish an improved design basis for offshore installations with respect to blowout risk analyses. The following sub-objectives were defined: (1) establish an offshore blowout database suitable for risk analyses; (2) compare the blowout risk related to loss of lives with the total offshore risk and with risk in other industries; (3) analyse blowouts with respect to parameters that are important for describing and quantifying the blowout risk experienced, in order to answer questions such as during which operations blowouts have occurred, their direct causes, and their frequency of occurrence; (4) analyse blowouts with respect to trends. The research strategy applied includes elements from both survey and case study strategies. The data are systematized in a new database developed from the MARINTEK database. Most blowouts in the analysed period occurred during drilling operations. Shallow gas blowouts were more frequent than deep blowouts, and workover blowouts occurred more often than deep development drilling blowouts. Relatively few blowouts occurred during completion, wireline and normal production activities. No significant trend in blowout occurrences as a function of time could be observed, except for completion blowouts, which showed a significantly decreasing trend. There were, however, trends in some parameters important for risk analyses, e.g. the ignition probability has decreased and diverter systems have improved. Only 3.5% of the fatalities occurred because of blowouts. 106 refs., 51 figs., 55 tabs.

  5. Well blowout rates and consequences in California Oil and Gas District 4 from 1991 to 2005: Implications for geological storage of carbon dioxide

    Energy Technology Data Exchange (ETDEWEB)

Jordan, Preston D.; Benson, Sally M.

    2008-05-15

    Well blowout rates in oil fields undergoing thermally enhanced recovery (via steam injection) in California Oil and Gas District 4 from 1991 to 2005 were on the order of 1 per 1,000 well construction operations, 1 per 10,000 active wells per year, and 1 per 100,000 shut-in/idle and plugged/abandoned wells per year. This allows some initial inferences about leakage of CO2 via wells, which is considered perhaps the greatest leakage risk for geological storage of CO2. During the study period, 9% of the oil produced in the United States was from District 4, and 59% of this production was via thermally enhanced recovery. There was only one possible blowout from an unknown or poorly located well, despite over a century of well drilling and production activities in the district. The blowout rate declined dramatically during the study period, most likely as a result of increasing experience, improved technology, and/or changes in safety culture. If so, this decline indicates the blowout rate in CO2-storage fields can be significantly minimized both initially and with increasing experience over time. Comparable studies should be conducted in other areas. These studies would be particularly valuable in regions with CO2-enhanced oil recovery (EOR) and natural gas storage.
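
    The rates quoted above are simple frequency estimates: counts of blowouts divided by the relevant exposure (well construction operations, active well-years, or shut-in/abandoned well-years). A minimal sketch of that calculation is given below with made-up counts and exposures rather than the District 4 data; the exact Poisson confidence interval is included only as an illustration.

    ```python
    # Sketch of frequency-based blowout rate estimation (illustrative numbers only).
    from scipy import stats

    def rate_with_ci(n_events, exposure, conf=0.95):
        """Point estimate and exact Poisson CI for an event rate (events per unit exposure)."""
        alpha = 1.0 - conf
        lower = stats.chi2.ppf(alpha / 2, 2 * n_events) / (2 * exposure) if n_events > 0 else 0.0
        upper = stats.chi2.ppf(1 - alpha / 2, 2 * (n_events + 1)) / (2 * exposure)
        return n_events / exposure, (lower, upper)

    # Hypothetical exposure data, not the actual District 4 database:
    rate, ci = rate_with_ci(n_events=12, exposure=10_000)  # e.g. 12 blowouts in 10,000 active well-years
    print(f"~{rate:.1e} blowouts per well-year (95% CI {ci[0]:.1e} to {ci[1]:.1e})")
    ```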

  6. Computed tomograms of blowout fracture

    International Nuclear Information System (INIS)

    Ito, Haruhide; Hayashi, Minoru; Shoin, Katsuo; Hwang, Wen-Zern; Yamamoto, Shinjiro; Yonemura, Taizo.

    1985-01-01

We studied 18 cases of orbital fractures, excluding optic canal fracture. There were 11 cases of pure blowout fracture and 3 of the impure type. The other 4 cases were orbital fractures without blowout fracture. The cardinal syndromes were diplopia, enophthalmos, and sensory disturbances of the trigeminal nerve in the pure type of blowout fracture. Many cases of the impure type of blowout fracture or of orbital fracture showed black eyes or a swelling of the eyelids which masked enophthalmos. Axial and coronal CT scans demonstrated: 1) the orbital fracture, 2) the degree of enophthalmos, 3) intraorbital soft tissue, such as incarcerated or prolapsed ocular muscles, 4) intraorbital hemorrhage, 5) the anatomical relation of the orbital fracture to the lacrimal canal, the trochlea, and the trigeminal nerve, and 6) the lesions of the paranasal sinus and the intracranial cavity. CT scans play an important role in determining what surgical procedures might best be employed. Pure blowout fractures were classified by CT scans into these four types: 1) incarcerating linear fracture, 2) trapdoor fracture, 3) punched-out fracture, and 4) broad fracture. Cases with severe head injury should be examined to see whether or not a blowout fracture is present. If patients are to return to society, a blowout fracture should be treated as soon as possible. (author)

  7. Computed tomograms of blowout fracture

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Haruhide; Hayashi, Minoru; Shoin, Katsuo; Hwang, Wen-Zern; Yamamoto, Shinjiro; Yonemura, Taizo

    1985-02-01

We studied 18 cases of orbital fractures, excluding optic canal fracture. There were 11 cases of pure blowout fracture and 3 of the impure type. The other 4 cases were orbital fractures without blowout fracture. The cardinal syndromes were diplopia, enophthalmos, and sensory disturbances of the trigeminal nerve in the pure type of blowout fracture. Many cases of the impure type of blowout fracture or of orbital fracture showed black eyes or a swelling of the eyelids which masked enophthalmos. Axial and coronal CT scans demonstrated: 1) the orbital fracture, 2) the degree of enophthalmos, 3) intraorbital soft tissue, such as incarcerated or prolapsed ocular muscles, 4) intraorbital hemorrhage, 5) the anatomical relation of the orbital fracture to the lacrimal canal, the trochlea, and the trigeminal nerve, and 6) the lesions of the paranasal sinus and the intracranial cavity. CT scans play an important role in determining what surgical procedures might best be employed. Pure blowout fractures were classified by CT scans into these four types: 1) incarcerating linear fracture, 2) trapdoor fracture, 3) punched-out fracture, and 4) broad fracture. Cases with severe head injury should be examined to see whether or not a blowout fracture is present. If patients are to return to society, a blowout fracture should be treated as soon as possible. (author).

  8. Review of flow rate estimates of the Deepwater Horizon oil spill

    OpenAIRE

    McNutt, Marcia K.; Camilli, Rich; Crone, Timothy J.; Guthrie, George D.; Hsieh, Paul A.; Ryerson, Thomas B.; Savas, Omer; Shaffer, Frank

    2011-01-01

    The unprecedented nature of the Deepwater Horizon oil spill required the application of research methods to estimate the rate at which oil was escaping from the well in the deep sea, its disposition after it entered the ocean, and total reservoir depletion. Here, we review what advances were made in scientific understanding of quantification of flow rates during deep sea oil well blowouts. We assess the degree to which a consensus was reached on the flow rate of the well by comparing in situ ...

  9. Availability analysis of subsea blowout preventer using Markov model considering demand rate

    Directory of Open Access Journals (Sweden)

    Sunghee Kim

    2014-12-01

The availability of subsea blowout preventers (BOPs) in the Gulf of Mexico Outer Continental Shelf (GoM OCS) is investigated using a Markov method. An updated β-factor model by SINTEF is used for common-cause failures in multiple redundant systems. Coefficient values of failure rates for the Markov model are derived using the β-factor model of the PDS method (reliability of computer-based safety systems, Norwegian acronym). The blind shear ram preventer system of the subsea BOP is modelled with a demand rate to better reflect reality. Markov models considering the demand rate for one or two components are introduced, and two data sets from the GoM OCS are compared. The results show that three or four pipe ram preventers give similar availabilities, but redundant blind shear ram preventers or annular preventers enhance the availability of the subsea BOP. Sensitivity analysis also shows that the control systems (PODs) and connectors are the components that contribute most to improving the availability of subsea BOPs.
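
    The Markov approach described in this abstract can be illustrated with a toy three-state model (available, failed-undetected, under repair) in which an undetected failure is revealed only when a demand or test occurs. The sketch below solves the steady state numerically, so the same pattern could be extended to a larger BOP state space; all rates are placeholder assumptions, not the SINTEF/PDS data used in the paper.

    ```python
    # Toy continuous-time Markov availability sketch (placeholder rates, not the paper's data).
    import numpy as np

    # States: 0 = available, 1 = failed (undetected until a demand/test), 2 = under repair.
    lam = 1.0e-5            # failure rate per hour (assumed)
    dem = 1.0 / (24 * 90)   # demand/test rate per hour (assumed ~quarterly)
    mu  = 1.0 / 48          # repair rate per hour (assumed 48 h mean repair time)

    Q = np.array([
        [-lam,   lam,  0.0],   # available -> failed (undetected)
        [ 0.0,  -dem,  dem],   # failure revealed only when a demand/test arrives
        [  mu,   0.0,  -mu],   # repair restores availability
    ])

    # Steady-state distribution pi solves pi @ Q = 0 with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(f"Steady-state availability (state 0): {pi[0]:.6f}")
    ```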

  10. Simulation of a SAGD well blowout using a reservoir-wellbore coupled simulator

    Energy Technology Data Exchange (ETDEWEB)

Walter, J.; Vanegas, P.; Cunha, L.B. [Alberta Univ., Edmonton, AB (Canada)]; Worth, D.J. [C-FER Technologies, Edmonton, AB (Canada)]; Crepin, S. [Petrocedeno, Caracas (Venezuela)]

    2008-10-15

    Single barrier completion systems are typically used in SAGD projects due to the lack of equipment suitable for high temperature SAGD downhole environments. This study used a wellbore and reservoir coupled thermal simulator tool to investigate the blowout behaviour of a steam assisted gravity drainage (SAGD) well pair when the safety barrier has failed. Fluid flow pressure drop through the wellbore and heat losses between the wellbore and the reservoir were modelled using a discretized wellbore option and a semi-analytical model. The fully coupled mechanistic model accounted for the simultaneous transient pressure and temperature variations along the wellbore and the reservoir. The simulations were used to predict flowing potential and fluid compositions of both wells in a SAGD well pair under various flowing conditions. Blowout scenarios were created for 3 different points in the well pair's life. Three flow paths during the blowout were evaluated for both the production and injection wells. Results of the study were used to conduct a comparative risk assessment between a double barrier and a single barrier completion. The modelling study confirmed that both the injection and production wells had the potential for blowouts lasting significant periods of time, with liquid rates over 50 times the normal production liquid rates. The model successfully predicted the blowout flow potential of the SAGD well pairs. 8 refs., 3 tabs., 18 figs.

  11. Organized investigation expedites insurance claims following a blowout

    International Nuclear Information System (INIS)

    Armstreet, R.

    1996-01-01

    Various types of insurance policies cover blowouts to different degrees, and a proper understanding of the incident and the coverage can expedite the adjustment process. Every well control incident, and the claim arising therefrom, has a unique set of circumstances which must be analyzed thoroughly. A blowout incident, no matter what size or how severe, can have an emotional impact on all who become involved. Bodily injuries or death of friends and coworkers can result in additional stress following a blowout. Thus, it is important that all parties involved remain mindful of sensitive matters when investigating a blowout. This paper reviews the definition of a blowout based on insurance procedures and claims. It reviews blowout expenses and contractor cost and accepted well control policies. Finally, it reviews the investigation procedures normally followed by an agent and the types of information requested from the operator

  12. Blowout brought under control in Gulf of Mexico

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This paper reports that Greenhill Petroleum Corp., Houston, killed a well blowout Oct. 9 and began cleaning up oil spilled into Timbalier Bay off La Fourche Parish, La. Development well No. 250 in Timbalier Bay field blew out Sept. 29 while Blake Drilling and Workover Co., Belle Chasse, La., was trying to recomplete it in a deeper zone. Fire broke out as Boots and Coots Inc., Houston, was positioning control equipment at the wellhead. State and federal oil spill response officials estimated the uncontrolled flow of well No. 250 at 1,400 b/d of oil. Coast Guard officials on Oct. 8 upgraded the blowout to a major spill, after deciding that at least 2,500 bbl of oil had gone into the water

  13. Numerical simulation of water and sand blowouts when penetrating through shallow water flow formations in deep water drilling

    Science.gov (United States)

    Ren, Shaoran; Liu, Yanmin; Gong, Zhiwu; Yuan, Yujie; Yu, Lu; Wang, Yanyong; Xu, Yan; Deng, Junyu

    2018-02-01

In this study, we applied a two-phase flow model to simulate water and sand blowout processes when penetrating shallow water flow (SWF) formations during deepwater drilling. We define 'sand' as a pseudo-component with high density and viscosity, which begins to flow with water when a critical pressure difference is attained. We calculated the water and sand blowout rates and analyzed the factors influencing them, including the overpressure of the SWF formation, its zone size, porosity and permeability, and the drilling speed (penetration rate). The obtained data can be used for quantitative assessment of the potential severity of SWF hazards. The results indicate that the overpressure of the SWF formation and its zone size have significant effects on SWF blowout: a 10% increase in SWF formation overpressure can result in a more than 90% increase in cumulative water blowout and a 150% increase in sand blowout when a typical SWF sediment is drilled. Along with the conventional methods of well flow and pressure control, chemical plugging, and the application of multi-layer casing, water and sand blowouts can be effectively reduced by increasing the penetration rate. As such, increasing the penetration rate can be a useful measure for controlling SWF hazards during deepwater drilling.

  14. The effects of the lodgepole sour gas well blowout on coniferous tree growth

    International Nuclear Information System (INIS)

    Baker, K.A.

    1991-01-01

A dendrochronological study was used to evaluate growth impacts on white spruce (Picea glauca (Moench)) resulting from the 1982 Lodgepole sour gas well blowout. Stem analysis was evaluated from four ecologically similar monitoring sites located along a 10 kilometre downwind gradient and compared to a control site. Incremental volume was calculated, standardized using running mean filters, and analyzed using one-way ANOVA. Pre- and post-blowout growth trends were analyzed between sites and were also evaluated over a height profile in order to assess growth impact variability within individual trees. Growth reductions at the two sites closest to the wellhead were statistically significant for five post-blowout years. Growth at these condensate-impacted sites was reduced to 9.8% and 38.1% in 1983. Differences in growth reductions reflect a gradient of effects and a dose-response relationship. Recovery of surviving trees has been rapid but is leveling off at approximately 80% of pre-blowout growth. Growth reductions were greater and recovery rates slower than those previously predicted by other authors. Statistically significant differences in height profile growth responses were limited to the upper portions of the trees. Growth rates over a tree height profile ranged from 10% less to 50% more than growth rates observed at 1.3 metres. Analytical methodologies detected and described growth differences over a height profile, but a larger sample size was desirable. As is always the case with catastrophic events, obtaining pre-event baseline data is difficult. The dendrochronological methods described in this paper offer techniques for determining pre-blowout growth and for monitoring impacts and recovery in forested areas.
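
    The analysis summarized above (standardizing increment series with running-mean filters and comparing sites with one-way ANOVA) can be sketched in a few lines. The series below are randomly generated stand-ins, not the Lodgepole data, and the 5-year window and site effects are arbitrary assumptions.

    ```python
    # Sketch: express post-blowout growth relative to pre-blowout growth per site,
    # smooth with a running-mean filter, and compare sites with one-way ANOVA.
    # (Synthetic data; window length and site effects are arbitrary assumptions.)
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def running_mean(x, window=5):
        return np.convolve(x, np.ones(window) / window, mode="valid")

    def relative_growth(pre, post, window=5):
        """Post-blowout increments as a fraction of mean pre-blowout growth, smoothed."""
        return running_mean(post / pre.mean(), window)

    # Synthetic volume-increment series (arbitrary units) for a control site and two impacted sites.
    pre = 100 + rng.normal(0, 5, 10)
    control = relative_growth(pre, 100 + rng.normal(0, 5, 15))
    site_a  = relative_growth(pre,  40 + rng.normal(0, 5, 15))  # nearest the wellhead
    site_b  = relative_growth(pre,  80 + rng.normal(0, 5, 15))

    f_stat, p_value = stats.f_oneway(control, site_a, site_b)
    print(f"One-way ANOVA across sites: F = {f_stat:.1f}, p = {p_value:.2g}")
    ```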

  15. Serum albumin is an important prognostic factor for carotid blowout syndrome

    International Nuclear Information System (INIS)

    Lu Hsuehju; Chen Kuowei; Chen Minghuang; Tzeng Chenghwai; Chang Peter Muhsin; Yang Muhhwa; Chu Penyuan; Tai Shyhkuan

    2013-01-01

    Carotid blowout syndrome is a severe complication of head and neck cancer. High mortality and major neurologic morbidity are associated with carotid blowout syndrome with massive bleeding. Prediction of outcomes for carotid blowout syndrome patients is important for clinicians, especially for patients with the risk of massive bleeding. Between 1 January 2001 and 31 December 2011, 103 patients with carotid blowout syndrome were enrolled in this study. The patients were divided into groups with and without massive bleeding. Prognostic factors were analysed with proportional hazard (Cox) regressions for carotid blowout syndrome-related prognoses. Survival analyses were based on the time from diagnosis of carotid blowout syndrome to massive bleeding and death. Patients with massive bleeding were more likely to have hypoalbuminemia (albumin 1000 cells/μl, P=0.041) and hypoalbuminemia (P=0.010) were important to prognosis. Concurrent chemoradiotherapy (P=0.007), elevated lactate dehydrogenase (>250 U/l; P=0.050), local recurrence (P=0.022) and hypoalbuminemia (P=0.038) were related to poor prognosis in carotid blowout syndrome-related death. In multivariate analysis, best supportive care and hypoalbuminemia were independent factors for both carotid blowout syndrome-related massive bleeding (P=0.000) and carotid blowout syndrome-related death (P=0.013), respectively. Best supportive care and serum albumin are important prognostic factors in carotid blowout syndrome. It helps clinicians to evaluate and provide better supportive care for these patients. (author)
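
    The proportional-hazards (Cox) analysis used in this study can be outlined with the lifelines package; the data frame below is a hypothetical stand-in with invented column names (time_to_event, event, albumin, best_supportive_care) and values, not the 103-patient cohort.

    ```python
    # Sketch of a Cox proportional-hazards fit with lifelines (hypothetical data and columns).
    import pandas as pd
    from lifelines import CoxPHFitter

    # Invented records: follow-up time (days), event indicator (1 = carotid blowout syndrome-related death),
    # serum albumin (g/dL), and whether the patient received best supportive care only.
    df = pd.DataFrame({
        "time_to_event":        [30, 120, 45, 200, 15, 90, 60, 300],
        "event":                [ 1,   0,  1,   0,  1,  1,  0,   0],
        "albumin":              [2.8, 4.1, 3.0, 4.3, 2.5, 3.2, 3.9, 4.0],
        "best_supportive_care": [ 1,   0,  1,   0,  1,  0,  0,   0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time_to_event", event_col="event")
    cph.print_summary()  # hazard ratios for albumin and best supportive care
    ```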

  16. Review of flow rate estimates of the Deepwater Horizon oil spill

    Science.gov (United States)

    McNutt, Marcia K.; Camilli, Rich; Crone, Timothy J.; Guthrie, George D.; Hsieh, Paul A.; Ryerson, Thomas B.; Savas, Omer; Shaffer, Frank

    2012-01-01

    The unprecedented nature of the Deepwater Horizon oil spill required the application of research methods to estimate the rate at which oil was escaping from the well in the deep sea, its disposition after it entered the ocean, and total reservoir depletion. Here, we review what advances were made in scientific understanding of quantification of flow rates during deep sea oil well blowouts. We assess the degree to which a consensus was reached on the flow rate of the well by comparing in situ observations of the leaking well with a time-dependent flow rate model derived from pressure readings taken after the Macondo well was shut in for the well integrity test. Model simulations also proved valuable for predicting the effect of partial deployment of the blowout preventer rams on flow rate. Taken together, the scientific analyses support flow rates in the range of ~50,000–70,000 barrels/d, perhaps modestly decreasing over the duration of the oil spill, for a total release of ~5.0 million barrels of oil, not accounting for BP's collection effort. By quantifying the amount of oil at different locations (wellhead, ocean surface, and atmosphere), we conclude that just over 2 million barrels of oil (after accounting for containment) and all of the released methane remained in the deep sea. By better understanding the fate of the hydrocarbons, the total discharge can be partitioned into separate components that pose threats to deep sea vs. coastal ecosystems, allowing responders in future events to scale their actions accordingly.
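
    The total-release figure quoted above is essentially the time integral of the estimated flow rate, less the volume captured at the wellhead. A back-of-the-envelope version of that bookkeeping is sketched below, using rounded numbers consistent with the abstract (an assumed linear decline over roughly 87 days and roughly 0.8 million barrels collected) rather than the underlying model output.

    ```python
    # Back-of-the-envelope discharge bookkeeping (rounded, assumed numbers, not model output).
    import numpy as np

    n_days = 87                                 # ~87 days between the blowout and the shut-in
    # Assume the flow declined roughly linearly from ~62,000 to ~53,000 bbl/d over the spill.
    daily_rate_bbl = np.linspace(62_000, 53_000, n_days)

    total_release = daily_rate_bbl.sum()        # ~5.0 million barrels
    collected = 0.8e6                           # ~0.8 million barrels captured at the wellhead (approximate)
    into_ocean = total_release - collected

    print(f"Total release ~{total_release / 1e6:.1f} Mbbl; ~{into_ocean / 1e6:.1f} Mbbl entered the ocean")
    ```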

  17. Review of flow rate estimates of the Deepwater Horizon oil spill.

    Science.gov (United States)

    McNutt, Marcia K; Camilli, Rich; Crone, Timothy J; Guthrie, George D; Hsieh, Paul A; Ryerson, Thomas B; Savas, Omer; Shaffer, Frank

    2012-12-11

    The unprecedented nature of the Deepwater Horizon oil spill required the application of research methods to estimate the rate at which oil was escaping from the well in the deep sea, its disposition after it entered the ocean, and total reservoir depletion. Here, we review what advances were made in scientific understanding of quantification of flow rates during deep sea oil well blowouts. We assess the degree to which a consensus was reached on the flow rate of the well by comparing in situ observations of the leaking well with a time-dependent flow rate model derived from pressure readings taken after the Macondo well was shut in for the well integrity test. Model simulations also proved valuable for predicting the effect of partial deployment of the blowout preventer rams on flow rate. Taken together, the scientific analyses support flow rates in the range of ∼50,000-70,000 barrels/d, perhaps modestly decreasing over the duration of the oil spill, for a total release of ∼5.0 million barrels of oil, not accounting for BP's collection effort. By quantifying the amount of oil at different locations (wellhead, ocean surface, and atmosphere), we conclude that just over 2 million barrels of oil (after accounting for containment) and all of the released methane remained in the deep sea. By better understanding the fate of the hydrocarbons, the total discharge can be partitioned into separate components that pose threats to deep sea vs. coastal ecosystems, allowing responders in future events to scale their actions accordingly.

  18. Quantitative Risk Assessment (QRA) for an Underground Blowout Scenario in the Gulf of Mexico (GoM) Well

    Science.gov (United States)

    Tyagi, M.; Zulqarnain, M.

    2017-12-01

Offshore oil and gas exploration and production operations involve the use of some of the most cutting-edge and challenging technologies of modern times. These technologically complex operations also involve the risk of major accidents, as demonstrated by disasters such as the explosion and fire on the UK production platform Piper Alpha, the loss of the Canadian semi-submersible drilling rig Ocean Ranger, and the explosion and capsizing of the Deepwater Horizon rig in the Gulf of Mexico. By conducting Quantitative Risk Assessment (QRA), the safety of various operations, as well as their associated risks and significance during the entire life phase of an offshore project, can be quantitatively estimated. In an underground blowout, uncontrolled formation fluids from a higher pressure formation may charge up shallower overlying low-pressure formations or may migrate to the sea floor. Consequences of such underground blowouts range from no visible damage at the surface to the complete loss of the well, loss of the drilling rig, seafloor subsidence, or hydrocarbons discharged to the environment. These blowouts might go unnoticed until the over-pressured sands resulting from charging by the higher pressure reservoir are encountered. Further, the engineering formulas used to estimate fault permeability and thickness are very simple in nature and may add to the uncertainty in the estimated parameters. In this study, the potential of a deepwater underground blowout is assessed during the drilling life phase of a well in the Popeye-Genesis field reservoir in the Gulf of Mexico to estimate the time taken to charge a shallower zone to its leak-off test (LOT) value. Parametric simulation results for the selected field case show that for a relatively high permeability (k = 40 mD) fault connecting a deep over-pressured zone to a shallower low-pressure zone of similar reservoir volume, the time to recharge the shallower zone up to its threshold LOT value is about 135 years. If the ratio of the
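
    The recharge-time estimate above ultimately rests on Darcy flow through the fault: volumetric rate Q = k A Δp / (μ L), with the charging time roughly the storable volume of the shallow zone divided by Q. The sketch below shows only that order-of-magnitude arithmetic; apart from the 40 mD permeability taken from the abstract, the geometry, fluid properties, and storable volume are invented, and this is not the parametric simulation used in the study.

    ```python
    # Order-of-magnitude Darcy recharge-time sketch (properties other than k are assumed).
    MD_TO_M2 = 9.869e-16            # 1 millidarcy in m^2

    k  = 40 * MD_TO_M2              # fault permeability, 40 mD (from the abstract)
    A  = 1.0e3                      # fault flow area, m^2 (assumed)
    L  = 1.5e3                      # flow-path length between zones, m (assumed)
    dp = 1.0e7                      # driving overpressure, Pa (assumed ~10 MPa)
    mu = 5.0e-4                     # fluid viscosity, Pa*s (assumed)

    q = k * A * dp / (mu * L)       # Darcy volumetric rate, m^3/s

    V_storable = 2.0e6              # volume the shallow zone can accept before reaching LOT, m^3 (assumed)
    t_seconds = V_storable / q
    print(f"Recharge time ~{t_seconds / 3.15e7:.0f} years")  # a ~100-year order of magnitude
    ```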

  19. Fluid Mechanics of Lean Blowout Precursors in Gas Turbine Combustors

    Directory of Open Access Journals (Sweden)

    T. M. Muruganandam

    2012-03-01

Understanding of the lean blowout (LBO) phenomenon, along with sensing and control strategies, could enable gas turbine combustor designers to design combustors with wider operability regimes. Sensing of precursor events (temporary extinction-reignition events) based on chemiluminescence emissions from the combustor, assessing the proximity to LBO, and using those data for control of LBO has already been achieved. This work describes the fluid-mechanic details of the precursor dynamics and the blowout process based on detailed analysis of near-blowout flame behavior, using simultaneous chemiluminescence and droplet scatter observations. The droplet scatter method represents the regions of cold reactants and thus helps track unburnt mixtures. During a precursor event, it was observed that the flow pattern changes significantly, with a large region of unburnt mixture in the combustor, which subsequently vanishes when a double/single helical vortex structure brings the hot products back to the inlet of the combustor. This helical pattern is shown to be characteristic of the next stable mode of flame in the longer combustor, stabilized by a double helical vortex breakdown (VBD) mode. It is proposed that random heat release fluctuations near blowout cause the VBD-based stabilization to shift VBD modes, causing the observed precursor dynamics in the combustor. A complete description of the evolution of the flame near the blowout limit is presented. The description is consistent with all the earlier observations by the authors regarding precursor and blowout events.

  20. Healthy addiction: blowouts inspire lifelong dedication to roles in dangerous dramas

    Energy Technology Data Exchange (ETDEWEB)

    Jaremko, G.

    2001-05-01

The development of Safety Boss and Key Safety Blowout Control Ltd., based in Calgary and Red Deer, Alberta, respectively, is chronicled, serving as background to a discussion of the growth of oilwell blowout and emergency response services in the Canadian oilpatch. The rise to prominence began in 1982 with the blowout of the Lodgepole runaway well near Drayton Valley, Alberta, where very high geological pressures drove 300 million cubic feet of natural gas containing 20 per cent lethal hydrogen sulphide into the air. The expertise was further honed in Kuwait, where a team of Canadian blowout fighters extinguished hundreds of wells set ablaze during the Gulf War in 1991. Today, this Canadian expertise is routinely exported to various foreign lands, including the United States, which until recently dominated the field. For example, Key Safety Blowout Control is a designated response organization for Alaska, rubbing shoulders with international giants in the Canadian Arctic as part of the Mackenzie Delta Integrated Oil Field Services Group. About 60 per cent of Key Safety's bread-and-butter work is generated by sour gas. The need for blowout services is expected to grow over the next decade as the hunt for gas reaches farther into deep geological formations and the Rocky Mountain Foothills region. The high proportion of lethal hydrogen sulphide in these deep formations and the Foothills region invites escalating criticism and protests from environmentalists, landowners and communities, which in turn create growing demand for blowout services and preventive gear. On the positive side, the growing danger of blowouts and noxious discharges into the air stimulates the development of new technology, such as the new generation of lower-explosive-limit monitors that identify the presence of any gases liable to blow up. There is also a growing market for 'downwind monitoring units' that detect parts-per-billion hydrogen sulphide concentrations. These and other

  1. Development of 3000 m Subsea Blowout Preventer Experimental Prototype

    Science.gov (United States)

    Cai, Baoping; Liu, Yonghong; Huang, Zhiqian; Ma, Yunpeng; Zhao, Yubin

    2017-12-01

A subsea blowout preventer experimental prototype is developed to meet the requirements of training operators. The prototype consists of a hydraulic control system, an electronic control system and a small-sized blowout preventer stack. Both the hydraulic control system and the electronic control system are dual-mode redundant systems: each works independently and is switchable in case of malfunction, which significantly improves the operational reliability of the equipment.

  2. Flux rope breaking and formation of a rotating blowout jet

    Science.gov (United States)

    Joshi, Navin Chandra; Nishizuka, Naoto; Filippov, Boris; Magara, Tetsuya; Tlatov, Andrey G.

    2018-05-01

We analysed a small flux rope eruption converted into a helical blowout jet in a fan-spine configuration using multiwavelength observations taken by the Solar Dynamics Observatory, which occurred near the limb on 2016 January 9. In our study, we first estimated the fan-spine magnetic configuration with a potential-field calculation and found a sinistral small filament inside it. The filament, along with the flux rope, erupted upwards and interacted with the surrounding fan-spine magnetic configuration, where the flux rope broke in its middle section. We observed compact brightening, flare ribbons, and post-flare loops underneath the erupting filament. The northern section of the flux rope reconnected with the surrounding positive polarity, while the southern section straightened. Next, we observed the untwisting motion of the southern leg, which was transformed into a rotating helical blowout jet. The sign of the helicity of the mini-filament matches that of the rotating jet. This is consistent with recent jet models presented by Adams et al. and Sterling et al. We focused on the fine thread structure of the rotating jet and traced three blobs with speeds of 60-120 km s-1, while the radial speed of the jet is ~400 km s-1. The untwisting motion of the jet accelerated plasma upwards along the collimated outer spine field lines, and it finally evolved into a narrow coronal mass ejection at a height of ~9 Rsun. On the basis of this detailed analysis, we discuss clear evidence for the scenario of the breaking of the flux rope and the formation of the helical blowout jet in the fan-spine magnetic configuration.

  3. Development and verification of deep-water blowout models

    International Nuclear Information System (INIS)

    Johansen, Oistein

    2003-01-01

    Modeling of deep-water releases of gas and oil involves conventional plume theory in combination with thermodynamics and mass transfer calculations. The discharges can be understood in terms of multiphase plumes, where gas bubbles and oil droplets may separate from the water phase of the plume and rise to the surface independently. The gas may dissolve in the ambient water and/or form gas hydrates--a solid state of water resembling ice. All these processes will tend to deprive the plume as such of buoyancy, and in stratified water the plume rise will soon terminate. Slick formation will be governed by the surfacing of individual oil droplets in a depth and time variable current. This situation differs from the conditions observed during oil-and-gas blowouts in shallow and moderate water depths. In such cases, the bubble plume has been observed to rise to the surface and form a strong radial flow that contributes to a rapid spreading of the surfacing oil. The theories and behaviors involved in deepwater blowout cases are reviewed and compared to those for the shallow water blowout cases

  4. Stability Control of Vehicle Emergency Braking with Tire Blowout

    OpenAIRE

    Chen, Qingzhang; Liu, Youhua; Li, Xuezhi

    2014-01-01

For stability control and for slowing the vehicle to a safe speed after tire failure, an emergency automatic braking system with independent intellectual property is developed. After the system receives a signal of tire blowout, the automatic braking mode of the vehicle is determined according to the position of the failed tire and the motion state of the vehicle, and a control strategy for resisting the additional yaw torque caused by the tire blowout and for deceleration is designed to slow down the vehicle to...

  5. Blow-out of nonpremixed turbulent jet flames at sub-atmospheric pressures

    KAUST Repository

    Wang, Qiang; Hu, Longhua; Chung, Suk-Ho

    2016-01-01

    Blow-out limits of nonpremixed turbulent jet flames in quiescent air at sub-atmospheric pressures (50–100 kPa) were studied experimentally using propane fuel with nozzle diameters ranging 0.8–4 mm. Results showed that the fuel jet velocity at blow-out limit increased with increasing ambient pressure and nozzle diameter. A Damköhler (Da) number based model was adopted, defined as the ratio of characteristic mixing time and characteristic reaction time, to include the effect of pressure considering the variations in laminar burning velocity and thermal diffusivity with pressure. The critical lift-off height at blow-out, representing a characteristic length scale for mixing, had a linear relationship with the theoretically predicted stoichiometric location along the jet axis, which had a weak dependence on ambient pressure. The characteristic mixing time (critical lift-off height divided by jet velocity) adjusted to the characteristic reaction time such that the critical Damköhler at blow-out conditions maintained a constant value when varying the ambient pressure.
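
    The Damköhler closure described above compares a mixing time (critical lift-off height divided by jet velocity) with a chemical time, often taken as thermal diffusivity over the square of the laminar burning velocity. A numerical sketch of that ratio follows; the property values are rough textbook numbers for propane/air at ambient conditions, and the lift-off height and jet velocity are assumed, not the fitted quantities of the paper.

    ```python
    # Damkohler-number sketch near blow-out (rough property values; not the paper's fit).
    def damkohler(lift_off_height_m, jet_velocity_m_s, burning_velocity_m_s, thermal_diffusivity_m2_s):
        """Da = characteristic mixing time / characteristic reaction time."""
        tau_mix = lift_off_height_m / jet_velocity_m_s
        tau_chem = thermal_diffusivity_m2_s / burning_velocity_m_s ** 2
        return tau_mix / tau_chem

    # Illustrative propane/air values near 100 kPa (assumed): S_L ~ 0.4 m/s, alpha ~ 2e-5 m^2/s.
    da = damkohler(lift_off_height_m=0.15,      # assumed critical lift-off height
                   jet_velocity_m_s=80.0,       # assumed jet exit velocity near blow-out
                   burning_velocity_m_s=0.4,
                   thermal_diffusivity_m2_s=2.0e-5)
    print(f"Illustrative Damkohler number at blow-out: {da:.1f}")
    ```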

  6. Blow-out of nonpremixed turbulent jet flames at sub-atmospheric pressures

    KAUST Repository

    Wang, Qiang

    2016-12-09

    Blow-out limits of nonpremixed turbulent jet flames in quiescent air at sub-atmospheric pressures (50–100 kPa) were studied experimentally using propane fuel with nozzle diameters ranging 0.8–4 mm. Results showed that the fuel jet velocity at blow-out limit increased with increasing ambient pressure and nozzle diameter. A Damköhler (Da) number based model was adopted, defined as the ratio of characteristic mixing time and characteristic reaction time, to include the effect of pressure considering the variations in laminar burning velocity and thermal diffusivity with pressure. The critical lift-off height at blow-out, representing a characteristic length scale for mixing, had a linear relationship with the theoretically predicted stoichiometric location along the jet axis, which had a weak dependence on ambient pressure. The characteristic mixing time (critical lift-off height divided by jet velocity) adjusted to the characteristic reaction time such that the critical Damköhler at blow-out conditions maintained a constant value when varying the ambient pressure.

  7. Diagnosis of magnetic resonance imaging (MRI) for blowout fracture. Three advantages of MRI

    International Nuclear Information System (INIS)

    Nishida, Yasuhiro; Aoki, Yoshiko; Hayashi, Osamu; Kimura, Makiko; Murata, Toyotaka; Ishida, Youichi; Iwami, Tatsuya; Kani, Kazutaka

    1999-01-01

Magnetic resonance imaging (MRI) gives a much more detailed picture of the soft tissue than computerized tomography (CT). In blowout fracture cases, it is very easy to observe the incarcerated orbital tissue. We performed MRI in 19 blowout fracture cases. After evaluating the images, we found three advantages of MRI. The first is that even small herniations of the orbital contents can easily be detected, because the orbital fatty tissue contrasts well with the surrounding tissues in MRI. The second is that the incarcerated tissues can be clearly differentiated, because a clear contrast between the orbital fatty tissue and the extraocular muscle can be seen in MRI. The third is that images along the course of the incarcerated muscle belly can be observed, because slices in any necessary direction can be taken in MRI. These advantages are very important in the diagnosis of blowout fractures. MRI should be employed in blowout fracture cases in addition to CT. (author)

  8. Modeling the key factors that could influence the diffusion of CO2 from a wellbore blowout in the Ordos Basin, China.

    Science.gov (United States)

    Li, Qi; Shi, Hui; Yang, Duoxing; Wei, Xiaochen

    2017-02-01

Carbon dioxide (CO2) blowout from a wellbore is regarded as a potential environmental risk of a CO2 capture and storage (CCS) project. In this paper, an assumed blowout of a wellbore was examined for China's Shenhua CCS demonstration project. The significant factors that influenced the diffusion of CO2 were identified by using a response surface method with the Box-Behnken experiment design. The numerical simulations showed that the mass emission rate of CO2 from the source and the ambient wind speed have significant influence on the area of interest (the area of high CO2 concentration above 30,000 ppm). There is a strong positive correlation between the mass emission rate and the area of interest, but there is a strong negative correlation between the ambient wind speed and the area of interest. Several other variables have very little influence on the area of interest, e.g., the temperature of CO2, ambient temperature, relative humidity, and stability class values. Due to the weather conditions at the Shenhua CCS demonstration site at the time of the modeled CO2 blowout, the largest diffusion distance of CO2 in the downwind direction did not exceed 200 m along the centerline. When the ambient wind speed is in the range of 0.1-2.0 m/s and the mass emission rate is in the range of 60-120 kg/s, the range of the diffusion of CO2 is at the most dangerous level (i.e., almost all Grade Four marks in the risk matrix). Therefore, if the injection of CO2 takes place in a region that has relatively low perennial wind speed, special attention should be paid to the formulation of pre-planned emergency measures in case there is a leakage accident. The proposed risk matrix that classifies and grades blowout risks can be used as a reference for the development of appropriate regulations. This work may offer some indicators in developing risk profiles and emergency responses for CO2 blowouts.
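
    The risk matrix mentioned above combines a likelihood-type grade with a consequence-type grade (here, the downwind extent of the high-concentration CO2 zone) into an overall grade. A generic sketch of such a lookup is shown below; the grade boundaries and the max-based combination rule are invented placeholders, not the matrix proposed by the authors.

    ```python
    # Generic risk-matrix lookup sketch (thresholds and combination rule are invented placeholders).
    def emission_grade(emission_rate_kg_s):
        """Map a CO2 mass emission rate to a 1-4 grade (assumed thresholds)."""
        return 1 + sum(emission_rate_kg_s > t for t in (30, 60, 120))

    def consequence_grade(downwind_distance_m):
        """Map the downwind extent of the >30,000 ppm zone to a 1-4 grade (assumed thresholds)."""
        return 1 + sum(downwind_distance_m > t for t in (50, 100, 200))

    def risk_grade(emission_rate_kg_s, downwind_distance_m):
        # Simple matrix: the overall grade is the worse of the two axes.
        return max(emission_grade(emission_rate_kg_s), consequence_grade(downwind_distance_m))

    print(risk_grade(emission_rate_kg_s=90, downwind_distance_m=180))  # -> 3 under these assumptions
    ```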

  9. Global properties of symmetric competition models with riddling and blowout phenomena

    Directory of Open Access Journals (Sweden)

Gian-Italo Bischi

    2000-01-01

In this paper the problem of chaos synchronization, and the related phenomena of riddling, blowout and on–off intermittency, are considered for discrete time competition models with identical competitors. The global properties which determine the different effects of riddling and blowout bifurcations are studied by the method of critical curves, a tool for the study of the global dynamical properties of two-dimensional noninvertible maps. These techniques are applied to the study of a dynamic market-share competition model.

  10. The effect of gas and oil well blowout emissions on livestock in Alberta

    International Nuclear Information System (INIS)

    Beck, B.E.

    1992-01-01

Poisoning caused by emissions from sour gas well or oil well blowouts is not acute because the gases are diluted by the atmosphere before they reach livestock. Exposure may last a month or more and may produce a syndrome indistinguishable from common disorders of flu, malaise, mood change and, in the case of animals, lack of or decreased production. Little information is available on the composition of releases from well blowouts, which may change due to concurrent reactions with oxygen and photodecomposition. Effects on livestock observed to result from sour gas plant emissions (mostly sulfur dioxide) include runny eyes in cattle, loss of production, diarrhea and abortion. Blowout emissions may contain oxidant gases as well as hydrogen sulfide. These products irritate mucous membranes and can lead to pink eye. Respiratory problems may include upper respiratory tract infections and may produce susceptibility to secondary pneumonia. Abortion, infertility and congenital effects are areas of concern. It is considered unlikely that hydrogen sulfide can cause such effects; however, carbon disulfide and carbonyl sulfide, both present in sour gas blowouts, are known to have effects on the fetus. Effects on production and performance are unknown, and it is postulated that amounts of sulfur deposition are insufficient to cause nutrient deficiencies. Psychological reactions are suggested to explain some of the adverse effects of exposure to sour gas. 1 ref.

  11. Development of an automatic subsea blowout preventer stack control system using PLC based SCADA.

    Science.gov (United States)

    Cai, Baoping; Liu, Yonghong; Liu, Zengkai; Wang, Fei; Tian, Xiaojie; Zhang, Yanzhen

    2012-01-01

    An extremely reliable remote control system for subsea blowout preventer stack is developed based on the off-the-shelf triple modular redundancy system. To meet a high reliability requirement, various redundancy techniques such as controller redundancy, bus redundancy and network redundancy are used to design the system hardware architecture. The control logic, human-machine interface graphical design and redundant databases are developed by using the off-the-shelf software. A series of experiments were performed in laboratory to test the subsea blowout preventer stack control system. The results showed that the tested subsea blowout preventer functions could be executed successfully. For the faults of programmable logic controllers, discrete input groups and analog input groups, the control system could give correct alarms in the human-machine interface. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
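
    For the triple modular redundancy architecture used in controllers like this one, a standard back-of-the-envelope check is the 2-out-of-3 reliability formula R = 3R_m^2 - 2R_m^3 for identical, independent modules, ignoring the voter and common-cause failures. The snippet below only evaluates that textbook expression with an assumed per-module reliability; it is not taken from the paper.

    ```python
    # 2-out-of-3 (TMR) reliability sketch for independent, identical modules (voter ignored).
    def tmr_reliability(r_module: float) -> float:
        """System works if at least 2 of 3 modules work: R = 3*R^2 - 2*R^3."""
        return 3 * r_module**2 - 2 * r_module**3

    r = 0.99  # assumed per-module reliability over the mission time
    print(f"Single module: {r:.4f}, TMR system: {tmr_reliability(r):.6f}")
    ```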

  12. Flow and sediment transport dynamics in a slot and cauldron blowout and over a foredune, Mason Bay, Stewart Island (Rakiura), NZ

    Science.gov (United States)

    Hesp, Patrick A.; Hilton, Michael; Konlecher, Teresa

    2017-10-01

    This study is the first to simultaneously compare flow and sediment transport through a blowout and over an adjacent foredune, and the first study of flow within a highly sinuous, slot and cauldron blowout. Flow across the foredune transect is similar to that observed in other studies and is primarily modulated by across-dune vegetation density differences. Flow within the blowout is highly complex and exhibits pronounced accelerations and jet flow. It is characterised by marked helicoidal coherent vortices in the mid-regions, and topographically vertically forced flow out of the cauldron portion of the blowout. Instantaneous sediment transport within the blowout is significant compared to transport onto and/or over the adjacent foredune stoss slope and ridge, with the blowout providing a conduit for suspended sediment to reach the downwind foredune upper stoss slope and crest. Medium term (4 months) aeolian sedimentation data indicates sand is accumulating in the blowout entrance while erosion is taking place throughout the majority of the slot, and deposition is occurring downwind of the cauldron on the foredune ridge. The adjacent lower stoss slope of the foredune is accreting while the upper stoss slope is slightly erosional. Longer term (16 months) pot trap data shows that the majority of foredune upper stoss slope and crest accretion occurs via suspended sediment delivery from the blowout, whereas the majority of the suspended sediment arriving to the well-vegetated foredune stoss slope is deposited on the mid-stoss slope. The results of this study indicate one mechanism of how marked alongshore foredune morphological variability evolves due to the role of blowouts in topographically accelerating flow, and delivering significant aeolian sediment downwind to relatively discrete sections of the foredune.

  13. Behavior and dynamics of bubble breakup in gas pipeline leaks and accidental subsea oil well blowouts.

    Science.gov (United States)

    Wang, Binbin; Socolofsky, Scott A; Lai, Chris C K; Adams, E Eric; Boufadel, Michel C

    2018-06-01

    Subsea oil well blowouts and pipeline leaks release oil and gas to the environment through vigorous jets. Predicting the breakup of the released fluids in oil droplets and gas bubbles is critical to predict the fate of petroleum compounds in the marine water column. To predict the gas bubble size in oil well blowouts and pipeline leaks, we observed and quantified the flow behavior and breakup process of gas for a wide range of orifice diameters and flow rates. Flow behavior at the orifice transitions from pulsing flow to continuous discharge as the jet crosses the sonic point. Breakup dynamics transition from laminar to turbulent at a critical value of the Weber number. Very strong pure gas jets and most gas/liquid co-flowing jets exhibit atomization breakup. Bubble sizes in the atomization regime scale with the jet-to-plume transition length scale and follow -3/5 power-law scaling for a mixture Weber number. Copyright © 2018 Elsevier Ltd. All rights reserved.
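
    The scaling result above says that bubble size, normalized by the jet-to-plume transition length scale, varies as the mixture Weber number to the -3/5 power. A minimal sketch of that relationship is given below; the proportionality constant and the example inputs are placeholders, since the fitted coefficient is not given in the abstract.

    ```python
    # Sketch of We^(-3/5) bubble-size scaling (coefficient and inputs are placeholders).
    def characteristic_bubble_size(we_mixture, transition_length_m, coefficient=1.0):
        """d ~ coefficient * L_transition * We^(-3/5); the coefficient is an assumed placeholder."""
        return coefficient * transition_length_m * we_mixture ** (-3.0 / 5.0)

    # Example: strong jets (large Weber numbers) with an assumed 0.5 m jet-to-plume transition length.
    for we in (1e3, 1e4, 1e5):
        d = characteristic_bubble_size(we, transition_length_m=0.5)
        print(f"We = {we:.0e}  ->  d ~ {d * 1000:.1f} mm")
    ```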

  14. A small-scale eruption leading to a blowout macrospicule jet in an on-disk coronal hole

    International Nuclear Information System (INIS)

    Adams, Mitzi; Sterling, Alphonse C.; Moore, Ronald L.; Gary, G. Allen

    2014-01-01

    We examine the three-dimensional magnetic structure and dynamics of a solar EUV-macrospicule jet that occurred on 2011 February 27 in an on-disk coronal hole. The observations are from the Solar Dynamics Observatory (SDO) Atmospheric Imaging Assembly (AIA) and the SDO Helioseismic and Magnetic Imager (HMI). The observations reveal that in this event, closed-field-carrying cool absorbing plasma, as in an erupting mini-filament, erupted and opened, forming a blowout jet. Contrary to some jet models, there was no substantial recently emerged, closed, bipolar-magnetic field in the base of the jet. Instead, over several hours, flux convergence and cancellation at the polarity inversion line inside an evolved arcade in the base apparently destabilized the entire arcade, including its cool-plasma-carrying core field, to undergo a blowout eruption in the manner of many standard-sized, arcade-blowout eruptions that produce a flare and coronal mass ejection. Internal reconnection made bright 'flare' loops over the polarity inversion line inside the blowing-out arcade field, and external reconnection of the blowing-out arcade field with an ambient open field made longer and dimmer EUV loops on the outside of the blowing-out arcade. That the loops made by the external reconnection were much larger than the loops made by the internal reconnection makes this event a new variety of blowout jet, a variety not recognized in previous observations and models of blowout jets.

  15. A small-scale eruption leading to a blowout macrospicule jet in an on-disk coronal hole

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mitzi; Sterling, Alphonse C.; Moore, Ronald L. [Space Science Office, ZP13, NASA Marshall Space Flight Center, Huntsville, AL 35812 (United States); Gary, G. Allen, E-mail: mitzi.adams@nasa.gov, E-mail: alphonse.sterling@nasa.gov, E-mail: ron.moore@nasa.gov, E-mail: gag0002@uah.edu [Center for Space Plasma and Aeronomic Research, The University of Alabama in Huntsville, Huntsville, AL 35805, USA. (United States)

    2014-03-01

    We examine the three-dimensional magnetic structure and dynamics of a solar EUV-macrospicule jet that occurred on 2011 February 27 in an on-disk coronal hole. The observations are from the Solar Dynamics Observatory (SDO) Atmospheric Imaging Assembly (AIA) and the SDO Helioseismic and Magnetic Imager (HMI). The observations reveal that in this event, closed-field-carrying cool absorbing plasma, as in an erupting mini-filament, erupted and opened, forming a blowout jet. Contrary to some jet models, there was no substantial recently emerged, closed, bipolar-magnetic field in the base of the jet. Instead, over several hours, flux convergence and cancellation at the polarity inversion line inside an evolved arcade in the base apparently destabilized the entire arcade, including its cool-plasma-carrying core field, to undergo a blowout eruption in the manner of many standard-sized, arcade-blowout eruptions that produce a flare and coronal mass ejection. Internal reconnection made bright 'flare' loops over the polarity inversion line inside the blowing-out arcade field, and external reconnection of the blowing-out arcade field with an ambient open field made longer and dimmer EUV loops on the outside of the blowing-out arcade. That the loops made by the external reconnection were much larger than the loops made by the internal reconnection makes this event a new variety of blowout jet, a variety not recognized in previous observations and models of blowout jets.

  16. The effects of changing wind regimes on the development of blowouts in the coastal dunes of The Netherlands

    NARCIS (Netherlands)

    Jungerius, P.D.; Witter, J.V.; van Boxel, J.H.

    1991-01-01

    Blowouts are the main features of aeolian activity in many dune areas. To assess the impact of future climatic change on the geomorphological processes prevailing in a dune landscape it is essential to understand blowout formation and identify the meteorological parameters which are important. The

  17. Application on forced traction test in surgeries for orbital blowout fracture

    Directory of Open Access Journals (Sweden)

    Bao-Hong Han

    2014-05-01

AIM: To discuss the application of the forced traction test in surgeries for orbital blowout fracture. METHODS: The clinical data of 28 patients who underwent reconstructive surgery for orbital fracture were retrospectively analyzed. All patients underwent the forced traction test before, during and after operation. Eyeball movement and diplopia were examined and recorded pre-operatively and at 3 and 6 months after operation. RESULTS: Diplopia was improved in all 28 cases with the forced traction test. There was a significant difference between pre-operative and post-operative diplopia at 3 and 6 months (P<0.05). CONCLUSION: The forced traction test not only has clinical significance in the diagnosis of orbital blowout fracture, it is also an effective method for improving diplopia before, during and after operation.

  18. Blowout jets and impulsive eruptive flares in a bald-patch topology

    Science.gov (United States)

    Chandra, R.; Mandrini, C. H.; Schmieder, B.; Joshi, B.; Cristiani, G. D.; Cremades, H.; Pariat, E.; Nuevo, F. A.; Srivastava, A. K.; Uddin, W.

    2017-02-01

Context. A subclass of broad extreme ultraviolet (EUV) and X-ray jets, called blowout jets, has become a topic of research since they could be the link between standard collimated jets and coronal mass ejections (CMEs). Aims: Our aim is to understand the origin of a series of broad jets, some of which are accompanied by flares and associated with narrow, jet-like CMEs. Methods: We analyze observations of a series of recurrent broad jets observed in AR 10484 on 21-24 October 2003. In particular, one of them occurred simultaneously with an M2.4 flare on 23 October at 02:41 UT (SOL2003-10-23). Both events were observed by the ARIES Hα Solar Tower-Telescope, TRACE, SOHO, and RHESSI instruments. The flare was very impulsive and followed by a narrow CME. A local force-free model of AR 10484 is the basis for computing its topology. We find bald patches (BPs) at the flare site. This BP topology is present for at least two days before the events. Large-scale field lines associated with the BPs represent open loops. This is confirmed by a global potential field source surface (PFSS) model. Following the brightest leading edge of the Hα and EUV jet emission, we can temporally associate these emissions with a narrow CME. Results: Considering their characteristics, the observed broad jets appear to be of the blowout class. As the most plausible scenario, we propose that magnetic reconnection could occur at the BP separatrices, forced by the destabilization of a continuously reformed flux rope underlying them. The reconnection process could bring the cool flux-rope material into the reconnected open field lines, driving the series of recurrent blowout jets and accompanying CMEs. Conclusions: Based on a model of the coronal field, we compute the AR 10484 topology at the location where the flaring and blowout jets occurred from 21 to 24 October 2003. This topology can consistently explain the origin of these events. The movie associated with Fig. 1 is available at http://www.aanda.org

  19. A Slow Streamer Blowout at the Sun and Ulysses

    Science.gov (United States)

    Seuss, S. T.; Bemporad, A.; Poletto, G.

    2004-01-01

    On 10 June 2000 a streamer on the southeast limb slowly disappeared from LASCO/C2 over approximately 10 hours. A small CME was reported in C2. A substantial interplanetary CME (ICME) was later detected at Ulysses, which was at quadrature with the Sun and SOHO at the time. This detection illustrates the properties of an ICME for a known solar source and demonstrates that the identification can be done even beyond 3 AU. Slow streamer blowouts such as this have long been known but are little studied. We report on the SOHO observation of a coronal mass ejection (CME) on the solar limb and the subsequent in situ detection at Ulysses, which was near quadrature at the time, above the location of the CME. SOHO-Ulysses quadrature was 13 June, when Ulysses was 3.36 AU from the Sun and 58.2 degrees south of the equator off the east limb. The slow streamer blowout was on 10 June, when the SOHO-Sun-Ulysses angle was 87 degrees.

  20. Flamingwagon: Athey Wagon braves heat to aid oilpatch blowout response

    Energy Technology Data Exchange (ETDEWEB)

    Leschart, M.

    2002-09-01

    Construction of an Athey wagon by Key Safety Ltd at the company's Red Deer, Alberta facilities is announced. At present, the wagon is awaiting only the installation of its fire-prevention system to be ready for action. The last such blowout response equipment was built in Alberta 40 years ago, and when Crestar Energy lost control of a horizontal sour well during recent drilling in the Little Bow area, a unit built in 1954 and housed in a museum at the Leduc No. 1 historic site in Devon, Alberta, had to be pressed into service to deal with the emergency. While Athey wagons are not always essential to the blow-out control process, the addition of this new piece of well control safety equipment is welcome news, especially in light of the new knowledge gained and the innovative processes and procedures developed by Canadian companies fighting oil field fires in Kuwait after the Gulf War.

  1. Estimating diversification rates for higher taxa: BAMM can give problematic estimates of rates and rate shifts.

    Science.gov (United States)

    Meyer, Andreas L S; Wiens, John J

    2018-01-01

    Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macro-evolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet, no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree, and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
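
    For readers who want to reproduce the comparison, the stem-group method-of-moments (MS) estimator referred to above has a simple closed form. The sketch below is a minimal Magallon-and-Sanderson-style implementation; the clade size, stem age, and relative extinction fraction in the example are illustrative values, not data from the study.

```python
import math

# Minimal sketch of a stem-age method-of-moments net diversification estimator
# (Magallon & Sanderson-style). The clade size, stem age and relative
# extinction fraction below are illustrative, not data from the study.

def ms_stem_rate(n_species, stem_age_myr, epsilon=0.0):
    """Net diversification rate r = ln(n*(1 - eps) + eps) / t_stem."""
    return math.log(n_species * (1.0 - epsilon) + epsilon) / stem_age_myr

print(ms_stem_rate(150, 30.0, epsilon=0.5))  # events per lineage per Myr
```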

  2. Analysis of blowout fractures using cine mode MRI

    International Nuclear Information System (INIS)

    Kawahara, Masaaki; Shiihara, Kumiko; Kimura, Hisashi; Fukai, Sakuko; Tabuchi, Akio; Kojo, Tuyoshi

    1995-01-01

    By observing conventional CT and MRI images, it is difficult to distinguish extension failure from adhesion, bone fracture or damage to the extraocular muscle, any one of which may be the direct cause of the eye movement disturbance accompanying blowout fracture. We therefore carried out dynamic analysis of eye movement disturbance using cine mode MRI. We put seven fixation points in the gantry of the MRI and filmed eye movement disturbances by the gradient echo method, using a surface coil and holding the gaze on each fixation point. We also video recorded the CRT monitor of the MRI to obtain dynamic MRI images. The subjects comprised 5 cases (7-23 years old). In 4 cases, we started orthoptic treatment, saccadic eye movement training, convergence training and fusional amplitude training after surgery, with only orthoptic treatment in the 5th case. In all cases, fusion area improvement was recognized during training. In 2 cases examined by cine mode MRI before and after surgery, we observed improved eye movement after training, the effectiveness of which was thereby proven. Also, using cine mode MRI we were able to determine the character of incarcerated tissue and the cause of eye movement disturbance. We conclude that in blowout fracture, cine mode MRI may be useful in selecting treatment and observing its effectiveness. (author)

  3. The use of time-series LIDAR to understand the role of foredune blowouts in coastal dune dynamics, Sefton, NW England.

    Science.gov (United States)

    O'Keeffe, Nicholas; Delgado-Fernandez, Irene; Aplin, Paul; Jackson, Derek; Marston, Christopher

    2017-04-01

    Coastal dunes are natural buffers against the threat of climate change-induced sea level rise. Their evolution is largely controlled by sediment exchanges between the geomorphic sub-units of the nearshore, beach, foredune and dune field. Coastlines characterised by multiple blowouts at the beach-dune interface may be more susceptible to coastline retreat through the enhanced landwards transport of beach and foredune sediment. This study, based in Sefton, north-west England, exploits an unprecedented temporal coverage of LIDAR surveys spanning 15 years (1999, 2008, 2010, 2013 and 2014). Established GIS techniques have been utilised to extract both the coastline (foredune toe) and the foredune crest from each LIDAR derived DTM (Digital Terrain Model). Migration of the foredune toe has been tracked over this period. Analysis of differentials between the height of the dune toe and dune crest has been used to locate the alongshore position of blowouts within the foredune. Dune sediment budgets have then been calculated for each DTM and analysis of the budgets conducted, with the coastline being compartmentalised alongshore based on the presence of blowouts within the foredune. Results indicate that sections of the coastline where blowouts are present within the foredune may be most vulnerable to coastline retreat. Temporal changes in the sediment budget within many of these sections also provide evidence that, if blowouts are present, coastline retreat continues to be a possibility even when the dune sediment budget remains positive.

  4. Blow-out limits of nonpremixed turbulent jet flames in a cross flow at atmospheric and sub-atmospheric pressures

    KAUST Repository

    Wang, Qiang

    2015-07-22

    The blow-out limits of nonpremixed turbulent jet flames in cross flows were studied, especially concerning the effect of ambient pressure, by conducting experiments at atmospheric and sub-atmospheric pressures. The combined effects of air flow and pressure were investigated by a series of experiments conducted in a specially built wind tunnel in Lhasa, a city on the Tibetan plateau where the altitude is 3650 m and the atmospheric pressure condition is naturally low (64 kPa). These results were compared with results obtained from a wind tunnel at standard atmospheric pressure (100 kPa) in Hefei city (altitude 50 m). The size of the fuel nozzles used in the experiments ranged from 3 to 8 mm in diameter and propane was used as the fuel. It was found that the blow-out limit of the air speed of the cross flow first increased (“cross flow dominant” regime) and then decreased (“fuel jet dominant” regime) as the fuel jet velocity increased at both pressures; however, the blow-out limit of the air speed of the cross flow was much lower at sub-atmospheric pressure than at standard atmospheric pressure, and the domain of the blow-out limit curve (in a plot of the air speed of the cross flow versus the fuel jet velocity) shrank as the pressure decreased. A theoretical model was developed to characterize the blow-out limit of nonpremixed jet flames in a cross flow based on a Damköhler number, defined as the ratio between the mixing time and the characteristic reaction time. A satisfactory correlation was obtained under relatively strong cross flow conditions (“cross flow dominant” regime) that included the effects of the air speed of the cross flow, fuel jet velocity, nozzle diameter and pressure.
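
    As a rough illustration of the Damköhler-number criterion described above, the sketch below compares a convective mixing time with a characteristic reaction time and flags blow-out when their ratio falls below a critical value. The mixing-time scaling, the critical Damköhler number, and the example numbers are assumptions for illustration, not the correlation fitted in the paper.

```python
# Rough sketch of a Damkohler-number blow-out criterion of the kind described
# above. The mixing-time scaling, critical value and example numbers are
# illustrative assumptions, not the correlation fitted in the paper.

def damkohler(nozzle_diameter_m, crossflow_speed_m_s, reaction_time_s):
    """Ratio of a convective mixing time to a characteristic reaction time."""
    mixing_time_s = nozzle_diameter_m / crossflow_speed_m_s  # assumed scaling
    return mixing_time_s / reaction_time_s

def is_blown_out(nozzle_diameter_m, crossflow_speed_m_s, reaction_time_s, da_crit=1.0):
    """Treat the flame as blown out when Da drops below a critical value."""
    return damkohler(nozzle_diameter_m, crossflow_speed_m_s, reaction_time_s) < da_crit

# 5 mm propane nozzle, 6 m/s cross flow, 1 ms characteristic reaction time
print(is_blown_out(0.005, 6.0, 1.0e-3))
```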

  5. Oil Well Blowout 3D computational modeling: review of methodology and environmental requirements

    Directory of Open Access Journals (Sweden)

    Pedro Mello Paiva

    2016-12-01

    This literature review aims to present the different methodologies used in the three-dimensional modeling of the dispersion of hydrocarbons originating from an oil well blowout. It presents the concepts of coastal environmental sensitivity and vulnerability, their importance for prioritizing the most vulnerable areas in case of contingency, and the relevant legislation. We also discuss some limitations of the methodology currently used in environmental studies of oil drift, which simplifies the spill as a surface release even in the well blowout scenario. Efforts to better understand the behavior of oil and gas in the water column, and to model the trajectory in three dimensions, gained strength after the Deepwater Horizon spill in 2010 in the Gulf of Mexico. The data collected and the observations made during the accident were widely used for adjustment of the models, incorporating various factors related to hydrodynamic forcing and the weathering processes to which the hydrocarbons are subjected during subsurface leaks. The difficulties are even greater in the case of blowouts in deep waters, where the uncertainties are still larger. The studies addressed different variables for adjusting oil and gas dispersion models along the upward trajectory. Factors that exert strong influences include: speed of the subsurface currents; gas separation from the main plume; hydrate formation; dissolution of oil and gas droplets; variations in droplet diameter; intrusion of the droplets at intermediate depths; biodegradation; and appropriate parametrization of the density, salinity and temperature profiles of the water column.

  6. Tracking the Hercules 265 marine gas well blowout in the Gulf of Mexico

    Science.gov (United States)

    Romero, Isabel C.; Özgökmen, Tamay; Snyder, Susan; Schwing, Patrick; O'Malley, Bryan J.; Beron-Vera, Francisco J.; Olascoaga, Maria J.; Zhu, Ping; Ryan, Edward; Chen, Shuyi S.; Wetzel, Dana L.; Hollander, David; Murawski, Steven A.

    2016-01-01

    On 23 July 2013, a marine gas rig (Hercules 265) ignited in the northern Gulf of Mexico. The rig burned out of control for 2 days before being extinguished. We conducted a rapid-response sampling campaign near Hercules 265 after the fire to ascertain if sediments and fishes were polluted above earlier baseline levels. A surface drifter study confirmed that surface ocean water flowed to the southeast of the Hercules site, while the atmospheric plume generated by the blowout was directed eastward. Sediment cores were collected to the SE of the rig at distances of ~0.2, 8, and 18 km using a multicorer, and demersal fishes were collected from ~0.2 to 8 km SE of the rig using a longline (508 hooks). Recently deposited sediments document that only high molecular weight (HMW) polycyclic aromatic hydrocarbon (PAH) concentrations decreased with increasing distance from the rig, suggesting higher pyrogenic inputs associated with the blowout. A similar trend was observed in the foraminifera Haynesina germanica, an indicator species of pollution. In red snapper bile, only HMW PAH metabolites increased in 2013, to nearly double those from 2012. Both surface sediment and fish bile analyses suggest that, in the aftermath of the blowout, increased concentrations of pyrogenically derived hydrocarbons were transported and deposited in the environment. This study further emphasizes the need for an ocean observing system and coordinated rapid-response efforts from an array of scientific disciplines to effectively assess environmental impacts resulting from accidental releases of oil contaminants.

  7. Self expandable polytetrafluoroethylene stent for carotid blowout syndrome.

    Science.gov (United States)

    Tatar, E C; Yildirim, U M; Dündar, Y; Ozdek, A; Işik, E; Korkmaz, H

    2012-01-01

    Carotid blowout syndrome (CBS) is an emergency complication in patients undergoing treatment for head and neck cancers. The classical management of CBS is ligation of the common carotid artery, because suturing is often not possible due to infection and necrosis of the field. In this case report, we present a patient with CBS in whom we applied a self-expandable polytetrafluoroethylene (PTFE) stent and observed no morbidity. Endovascular stenting is a life-saving technique with minimal morbidity that preserves blood flow to the brain. We believe that this method is preferable to ligation of the artery in CBS.

  8. Findings of a retrospective survey conducted after the Lodgepole sour gas well blowout to determine if the natural occurrence of bovine abortions and fetal anomalies increased

    International Nuclear Information System (INIS)

    Klavano, G.G.; Christian, R.G.

    1992-01-01

    A survey was conducted after the Lodgepole sour gas well blowout of October 1982 to determine if the incident changed the number and type of bovine abortions and abnormal bovine feti submitted to the diagnostic laboratory from the blowout area. The records of the total number of bovine feti submitted were compared between three areas to determine if there was a significant difference between the areas closer to the well site and the larger total area. No changes or trends could be ascribed to the well blowout. 2 refs., 5 tabs

  9. [Medpor plus titanium mesh implant in the repair of orbital blowout fractures].

    Science.gov (United States)

    Han, Xiao-hui; Zhang, Jia-yu; Cai, Jian-qiu; Shi, Ming-guang

    2011-05-10

    To study the efficacy of porous polyethylene (Medpor) plus titanium mesh sheets in the repair of orbital blowout fractures. A total of 20 patients underwent open surgical reduction with the combined use of Medpor and titanium mesh, and were followed up for an average period of 14.5 months (range: 9-18). There was no infection or extrusion of the Medpor or titanium mesh during the follow-up period, and no instance of decreased visual acuity after the operation. All cases of enophthalmos were corrected. The postoperative degree of protrusion was almost identical in the two eyes, differing by less than 2 mm. The movement of the eyeballs was satisfactory in all directions. Diplopia disappeared in 18 cases, a cure rate of 90%; 1 case improved and 1 case persisted. Medpor plus titanium mesh implantation is a safe and effective treatment for the repair of orbital blow-out fractures.

  10. Clinical features and MRI findings of blow-out fracture

    Energy Technology Data Exchange (ETDEWEB)

    Yamanouchi, Yasuo; Yasuda, Takasumi; Kawamoto, Keiji [Kansai Medical Univ., Moriguchi, Osaka (Japan); Inagaki, Takayuki; Someda, Kuniyuki

    1996-06-01

    Precise anatomical understanding of orbital blow-out fracture lesions is necessary for the treatment of patients. Retrospectively, MRI findings were compared with the clinical features of pure-type blow-out fractures, and the efficacy of MRI in influencing a decision for surgical intervention was evaluated. Eighteen pediatric cases (15 boys, 3 girls) were evaluated and compared with adult cases. The patients were classified into three categories (Fig.1) and two types (Fig.2) in accordance with the degree of protrusion of fat tissue. The degree of muscle protrusion was also divided into three categories (Fig. 3). Both muscle and fat tissue were protruding from the fracture site in 14 cases. Fat tissue protrusion alone was found in 3 cases. In contrast, no protrusion was seen in one case. The incarcerated type of fat prolapse was found in 40% of cases, while muscle tissue prolapse was found in 75% of patients. Marginal irregularity or swelling of muscle was observed in 11 patients. There was good correlation between ocular motor disturbance and MRI findings. Disturbance of eyeball movement was observed in all patients with either incarcerated fat tissue or marginal irregularity or swelling of muscle. In contrast, restriction of eyeball movement was rare in cases of no incarceration, even if the fracture was wide. Deformity or marginal irregularity of the ocular muscle demonstrated on MRI may suggest damage and adhesion to the muscle wall. When MRI reveals incarceration or severe prolapse of fat tissue, or deformity and marginal irregularity of the ocular muscle, surgical intervention should be considered. (author)

  11. Clinical features and MRI findings of blow-out fracture

    International Nuclear Information System (INIS)

    Yamanouchi, Yasuo; Yasuda, Takasumi; Kawamoto, Keiji; Inagaki, Takayuki; Someda, Kuniyuki.

    1996-01-01

    Precise anatomical understanding of orbital blow-out fracture lesions is necessary for the treatment of patients. Retrospectively, MRI findings were compared with the clinical features of pure-type blow-out fractures, and the efficacy of MRI in influencing a decision for surgical intervention was evaluated. Eighteen pediatric cases (15 boys, 3 girls) were evaluated and compared with adult cases. The patients were classified into three categories (Fig.1) and two types (Fig.2) in accordance with the degree of protrusion of fat tissue. The degree of muscle protrusion was also divided into three categories (Fig. 3). Both muscle and fat tissue were protruding from the fracture site in 14 cases. Fat tissue protrusion alone was found in 3 cases. In contrast, no protrusion was seen in one case. The incarcerated type of fat prolapse was found in 40% of cases, while muscle tissue prolapse was found in 75% of patients. Marginal irregularity or swelling of muscle was observed in 11 patients. There was good correlation between ocular motor disturbance and MRI findings. Disturbance of eyeball movement was observed in all patients with either incarcerated fat tissue or marginal irregularity or swelling of muscle. In contrast, restriction of eyeball movement was rare in cases of no incarceration, even if the fracture was wide. Deformity or marginal irregularity of the ocular muscle demonstrated on MRI may suggest damage and adhesion to the muscle wall. When MRI reveals incarceration or severe prolapse of fat tissue, or deformity and marginal irregularity of the ocular muscle, surgical intervention should be considered. (author)

  12. Blowout Surge due to Interaction between a Solar Filament and Coronal Loops

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haidong; Jiang, Yunchun; Yang, Jiayan; Yang, Bo; Xu, Zhe; Bi, Yi; Hong, Junchao; Chen, Hechao [Yunnan Observatories, Chinese Academy of Sciences, 396 Yangfangwang, Guandu District, Kunming, 650216 (China); Qu, Zhining, E-mail: lhd@ynao.ac.cn [Department of Physics, School of Science, Sichuan University of Science and Engineering, Zigong 643000 (China)

    2017-06-20

    We present an observation of the interaction between a filament and the outer spine-like loops that produces a blowout surge within one footpoint of large-scale coronal loops on 2015 February 6. Based on observations in AIA 304 and 94 Å, the activated filament is initially embedded below the dome of a fan-spine configuration. Due to its ascending motion, the erupting filament reconnects with the outer spine-like field. We note that the material in the filament blows out along the outer spine-like field to form the surge with a wider spire, and a two-ribbon flare appears at the site of the filament eruption. In this process, small bright blobs appear at the interaction region and stream up along the outer spine-like field and down along the eastern fan-like field. As a result, one leg of the filament becomes radial and the material in it erupts, while the other leg forms new closed loops. Our results confirm that the successive reconnection occurring between the erupting filament and the coronal loops may lead to a strong thermal/magnetic pressure imbalance, resulting in a blowout surge.

  13. Field investigation finding of long term effects in Alberta livestock exposed to acid forming emissions: Survey following the Lodgepole blowout

    International Nuclear Information System (INIS)

    Harris, B.

    1992-01-01

    The effects of the Amoco Dome Brazeau River (Lodgepole) sour gas well blowout of October 1982 on livestock are reviewed. For 26 days the well was not on fire, with condensate falling on the area immediately surrounding the well site, while during the 41 days the well was burning, sour gas and condensate underwent combustion in the intense fire. Many local residents noted problems with their farm animals during the blowout period. A study was undertaken to evaluate opinions and attitudes of livestock producers about the long-term effects of the Lodgepole blowout. Contact was made with producers in the area of concern, and a worksheet was developed to aid in the collection of data. Information gathered was based on producer records, auction sale tickets, sales receipts and memory. Twenty livestock producers were interviewed, representing 1,700 head of beef cows, 40 dairy cows and 21 sows. Concerns expressed by producers included birth weight, birth defects or stillbirth, scours and problem calves, poor growth, open or dry cows, abortions, late calves, poor growth in replacement heifers, abnormal hair, and abnormal breeding. 1 tab

  14. Fatal carotid blowout syndrome after BNCT for head and neck cancers

    International Nuclear Information System (INIS)

    Aihara, T.; Hiratsuka, J.; Ishikawa, H.; Kumada, H.; Ohnishi, K.; Kamitani, N.; Suzuki, M.; Sakurai, H.; Harada, T.

    2015-01-01

    Boron neutron capture therapy (BNCT) is a high linear energy transfer (LET), tumor-selective form of radiation that does not cause serious damage to the surrounding normal tissues. BNCT might be effective and safe in patients with inoperable, locally advanced head and neck cancers, even those that recur at previously irradiated sites. However, carotid blowout syndrome (CBS) is a lethal complication resulting from malignant invasion of the carotid artery (CA); thus, the risk of CBS should be carefully assessed in patients with risk factors for CBS after BNCT. Thirty-three patients in our institution who underwent BNCT were analyzed. Two patients developed CBS; both experienced widespread skin invasion and recurrence close to the carotid artery after irradiation. Careful attention should be paid to the occurrence of CBS if the tumor is located adjacent to the carotid artery. The presence of skin invasion from recurrent lesions after irradiation is an ominous sign of CBS onset and lethal consequences. - Highlights: • This study concerns fatal carotid blowout syndrome after BNCT for head and neck cancers. • Thirty-three patients in our institution who underwent BNCT were analyzed. • Two patients (2/33) developed CBS. • The presence of skin invasion from recurrent lesions after irradiation is an ominous sign of CBS. • We must be aware of these signs to perform BNCT safely.

  15. Was the extreme and wide-spread marine oil-snow sedimentation and flocculent accumulation (MOSSFA) event during the Deepwater Horizon blow-out unique?

    Science.gov (United States)

    Vonk, Sophie M; Hollander, David J; Murk, AlberTinka J

    2015-11-15

    During the Deepwater Horizon blowout, thick layers of oiled material were deposited on the deep seafloor. This large-scale benthic concentration of oil is suggested to have occurred via the process of Marine Oil Snow Sedimentation and Flocculent Accumulation (MOSSFA). This meta-analysis investigates whether MOSSFA occurred in other large oil spills and identifies the main drivers of oil sedimentation. MOSSFA was found to have occurred during the IXTOC I blowout and possibly during the Santa Barbara blowout. Unfortunately, benthic effects were not sufficiently studied for the 52 spills we reviewed. However, based on the current understanding of the drivers involved, we conclude that MOSSFA and related benthic contamination may be widespread. We suggest collecting and analyzing sediment cores at specific spill locations, as improved understanding of the MOSSFA process will allow better informed spill responses in the future, taking into account possible massive oil sedimentation and smothering of (deep) benthic ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. A Cross-Wavelet Transform Aided Rule Based Approach for Early Prediction of Lean Blow-out in Swirl-Stabilized Dump Combustor

    Directory of Open Access Journals (Sweden)

    Debangshu Dey

    2015-03-01

    Lean or ultralean combustion is one of the popular strategies to achieve very low emission levels. However, it is extremely susceptible to lean blow-out (LBO). The present work explores a cross-wavelet transform (XWT) aided rule-based scheme for early prediction of lean blowout. XWT can be considered an extension of wavelet analysis that gives the correlation between two waveforms in time-frequency space. In the present scheme, a swirl-stabilized dump combustor is used as a laboratory-scale model of a generic gas turbine combustor, with LPG as fuel. Time series data of the CH chemiluminescence signal are recorded for different flame conditions by varying the equivalence ratio, flow rate and level of air-fuel premixing. Features are extracted from the cross-wavelet spectrum of the recorded waveforms and a reference wave, and these features are observed to classify the flame condition into three major classes: near LBO, moderate and healthy. Moreover, a Rough Set based technique is applied to the extracted features to generate a rule base that can be fed to a real-time controller or expert system to take the necessary control action to prevent LBO. Results show that the proposed methodology performs with an acceptable degree of accuracy.

  17. DeepBlow - a Lagrangian plume model for deep water blowouts

    International Nuclear Information System (INIS)

    Johansen, Oeistein

    2000-01-01

    This paper presents a sub-sea blowout model designed with special emphasis on deep-water conditions. The model is an integral plume model based on a Lagrangian concept. This concept is applied to multiphase discharges in the form of water, oil and gas in a stratified water column with variable currents. The gas may be converted to hydrate in combination with seawater, dissolved into the plume water, or lost from the plume due to the slip between rising gas bubbles and the plume trajectory. Non-ideal behaviour of the gas is accounted for by introducing a pressure- and temperature-dependent compressibility (z) factor in the equation of state. A number of case studies are presented in the paper. One of the cases (a blowout from 100 m depth) is compared with observations from a field experiment conducted in Norwegian waters in June 1996. The model results are found to compare favourably with the field observations when dissolution of gas into seawater is accounted for in the model. For discharges at intermediate to shallow depths (100-250 m), the two major processes limiting plume rise will be: (a) dissolution of gas into ambient water, or (b) bubbles rising out of the inclined plume. These processes tend to be self-reinforcing, i.e., when gas is lost by either of these processes, plume rise tends to slow down and more time will be available for dissolution. For discharges in deep waters (700-1500 m depth), hydrate formation is found to be a dominating process in limiting plume rise. (Author)
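
    The non-ideal gas behaviour mentioned above enters through the real-gas equation of state. The short sketch below shows how a pressure- and temperature-dependent compressibility factor changes the gas density used in such a plume model; the z value and example conditions are illustrative assumptions, not DeepBlow's internal correlations.

```python
# Hedged sketch of real-gas density, rho = p / (z * R_specific * T), to show
# where a pressure- and temperature-dependent z-factor enters a plume model.
# The z value and example conditions are illustrative, not DeepBlow's own.

UNIVERSAL_R = 8.314  # J/(mol K)

def gas_density(pressure_pa, temperature_k, z_factor, molar_mass_kg_mol=0.016):
    """Density of a (mostly methane) gas, kg/m^3."""
    r_specific = UNIVERSAL_R / molar_mass_kg_mol
    return pressure_pa / (z_factor * r_specific * temperature_k)

# roughly 1000 m depth (~10.1 MPa absolute), 4 degC, assumed z of 0.85
print(gas_density(10.1e6, 277.15, 0.85))
```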

  18. MINIFILAMENT ERUPTION AS THE SOURCE OF A BLOWOUT JET, C-CLASS FLARE, AND TYPE-III RADIO BURST

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Junchao; Jiang, Yunchun; Yang, Jiayan; Li, Haidong; Xu, Zhe, E-mail: hjcsolar@ynao.ac.cn [Yunnan Observatories, Chinese Academy of Sciences, 396 Yangfangwang, Guandu District, Kunming, 650216 (China); Center for Astronomical Mega-Science, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing, 100012 (China)

    2017-01-20

    We report a strong minifilament eruption associated with a Geostationary Operational Environmental Satellite C1.6 flare and a WIND type-III radio burst. The minifilament, which lies at the periphery of active region 12259, is detected in Hα images from the New Vacuum Solar Telescope. The minifilament undergoes a partial and then a full eruption. Simultaneously, two co-spatial jets are successively observed in extreme ultraviolet images from the Solar Dynamics Observatory. The first jet exhibits a typical fan-spine geometry, suggesting that the co-spatial minifilament is possibly embedded in magnetic fields with a fan-spine structure. However, the second jet displays blowout morphology when the entire minifilament erupts upward, leaving behind a hard X-ray emission source at the base. Differential emission measure analyses show that the eruptive region is heated up to about 4 MK during the fan-spine jet, and up to about 7 MK during the blowout jet. In particular, the blowout jet is accompanied by an interplanetary type-III radio burst observed by WIND/WAVES in the frequency range from above 10 to 0.1 MHz. Hence, the minifilament eruption is correlated with an interplanetary type-III radio burst for the first time. These results not only suggest that coronal jets can result from magnetic reconnection initiated by erupting minifilaments with open fields, but also shed light on the potential influence of minifilament eruptions on interplanetary space.

  19. Comparison of Absorbable Mesh Plate versus Titanium-Dynamic Mesh Plate in Reconstruction of Blow-Out Fracture: An Analysis of Long-Term Outcomes

    Directory of Open Access Journals (Sweden)

    Woon Il Baek

    2014-07-01

    Background A blow-out fracture is one of the most common facial injuries in midface trauma. Orbital wall reconstruction is extremely important because the fracture can cause various functional and aesthetic sequelae. Although many materials are available, there are no uniformly accepted guidelines regarding material selection for orbital wall reconstruction. Methods From January 2007 to August 2012, a total of 78 patients with blow-out fractures were analyzed. Thirty-six patients received absorbable mesh plates, and 42 patients received titanium-dynamic mesh plates. Both groups were retrospectively evaluated for therapeutic efficacy and safety according to the incidence of three different complications: enophthalmos, extraocular movement impairment, and diplopia. Results For all groups (inferior wall fracture group, medial wall fracture group, and combined inferomedial wall fracture group), there were improvements in the incidence of each complication regardless of implant type. Moreover, a significant improvement of enophthalmos occurred for both types of implants in group 1 (inferior wall fracture group). However, we found no statistically significant differences in efficacy or complication rate between the two implant types in any group. Conclusions Both types of implants showed good results without significant differences in long-term follow-up, even though we expected a higher rate of recurrent enophthalmos in patients with absorbable plates. In conclusion, both types seem to be equally effective and safe for orbital wall reconstruction. In particular, both implant types significantly improve the incidence of enophthalmos in cases of inferior orbital wall fractures.

  20. Oil Well Blowout 3D computational modeling: review of methodology and environmental requirements

    OpenAIRE

    Pedro Mello Paiva; Alexandre Nunes Barreto; Jader Lugon Junior; Leticia Ferraço de Campos

    2016-01-01

    This literature review aims to present the different methodologies used in the three-dimensional modeling of the hydrocarbons dispersion originated from an oil well blowout. It presents the concepts of coastal environmental sensitivity and vulnerability, their importance for prioritizing the most vulnerable areas in case of contingency, and the relevant legislation. We also discuss some limitations about the methodology currently used in environmental studies of oil drift, which considers sim...

  1. Blow-out limits of nonpremixed turbulent jet flames in a cross flow at atmospheric and sub-atmospheric pressures

    KAUST Repository

    Wang, Qiang; Hu, Longhua; Yoon, Sung Hwan; Lu, Shouxiang; Delichatsios, Michael; Chung, Suk-Ho

    2015-01-01

    The blow-out limits of nonpremixed turbulent jet flames in cross flows were studied, especially concerning the effect of ambient pressure, by conducting experiments at atmospheric and sub-atmospheric pressures. The combined effects of air flow

  2. Kinetic magnetic resonance imaging of orbital blowout fracture with restricted ocular movement

    International Nuclear Information System (INIS)

    Totsuka, Nobuyoshi; Koide, Ryouhei; Inatomi, Makoto; Fukado, Yoshinao; Hisamatsu, Katsuji.

    1992-01-01

    We analyzed the mechanism of gaze limitation in blowout fracture in 19 patients by means of kinetic magnetic resonance imaging (MRI). We could identify herniation of fat tissue and rectus muscles with connective tissue septa in 11 eyes. Depressed rectus muscles were surrounded by fat tissue. In no instance was the rectus muscle actually incarcerated. Entrapped connective tissue septa seemed to prevent movement of the affected rectus muscle. We occasionally observed incarcerated connective tissue septa restricting motility of the optic nerve. (author)

  3. The Measurement of the Sensory Recovery Period in Zygoma and Blow-Out Fractures with Neurometer Current Perception Threshold.

    Science.gov (United States)

    Oh, Daemyung; Yun, Taebin; Kim, Junhyung; Choi, Jaehoon; Jeong, Woonhyeok; Chu, Hojun; Lee, Soyoung

    2016-09-01

    Facial hypoesthesia is one of the most troublesome complaints in the management of facial bone fractures. However, there is a lack of literature on facial sensory recovery after facial trauma. The purpose of this study was to evaluate the facial sensory recovery period for facial bone fractures using Neurometer. Sixty-three patients who underwent open reduction of zygomatic and blowout fractures between December 2013 and July 2015 were included in the study. The facial sensory status of the patients was repeatedly examined preoperatively and postoperatively by Neurometer current perception threshold (CPT) until the results were normalized. Among the 63 subjects, 30 patients had normal Neurometer results preoperatively and postoperatively. According to fracture types, 17 patients with blowout fracture had a median recovery period of 0.25 months. Twelve patients with zygomatic fracture had a median recovery period of 1.00 month. Four patients with both fracture types had a median recovery period of 0.625 months. The median recovery period of all 33 patients was 0.25 months. There was no statistically significant difference in the sensory recovery period between types and subgroups of zygomatic and blowout fractures. In addition, there was no statistically significant difference in the sensory recovery period according to Neurometer results and the patients' own subjective reports. Neurometer CPT is effective for evaluating and comparing preoperative and postoperative facial sensory status and evaluating the sensory recovery period in facial bone fracture patients.

  4. A novel integrated approach for path following and directional stability control of road vehicles after a tire blow-out

    Science.gov (United States)

    Wang, Fei; Chen, Hong; Guo, Konghui; Cao, Dongpu

    2017-09-01

    Path following and directional stability are two crucial problems when a road vehicle experiences a tire blow-out or sudden tire failure. Considering the requirement for rapid vehicle motion control during a tire blow-out, this article proposes a novel linearized decoupling control procedure with three design steps for a class of second-order multi-input-multi-output non-affine systems. Evaluation indicators for controller performance are presented, and a performance-related control-parameter distribution map is obtained using a stochastic algorithm, which enables non-blind parameter adjustment in engineering implementations. An analysis of the robustness of the proposed integrated controller is also performed. Simulation studies for a range of driving conditions are conducted to demonstrate the effectiveness of the proposed controller.

  5. Breaking primary seed dormancy in Gibbens' beardtongue (Penstemon gibbensii) and blowout penstemon (Penstemon haydenii)

    Science.gov (United States)

    Kassie L. Tilini; Susan E. Meyer; Phil S. Allen

    2016-01-01

    This study established that chilling removes primary seed dormancy in 2 rare penstemons of the western US, Gibbens’ beardtongue (Penstemon gibbensii Dorn [Scrophulariaceae]) and blowout penstemon (Penstemon haydenii S. Watson). Wild-harvested seeds were subjected either to moist chilling at 2 to 4 °C (36-39 °F) for 0, 4, 8, 12, and 16 wk or to approximately 2 y of dry...

  6. Longitudinal phase space characterization of the blow-out regime of rf photoinjector operation

    OpenAIRE

    J. T. Moody; P. Musumeci; M. S. Gutierrez; J. B. Rosenzweig; C. M. Scoby

    2009-01-01

    Using an experimental scheme based on a vertically deflecting rf deflector and a horizontally dispersing dipole, we characterize the longitudinal phase space of the beam in the blow-out regime at the UCLA Pegasus rf photoinjector. Because of the achievement of unprecedented resolution both in time (50 fs) and energy (1.0 keV), we are able to demonstrate some important properties of the beams created in this regime such as extremely low longitudinal emittance, large temporal energy chirp, and ...

  7. A blowout numerical model for the supernova remnant G352.7-0.1

    OpenAIRE

    Toledo Roy, J. C.; Velazquez, P. F.; Esquivel, A.; Giacani, Elsa Beatriz

    2017-01-01

    We present 3D hydrodynamical simulations of the Galactic supernova remnant G352.7−0.1. This remnant is peculiar for having a shell-like inner ring structure and an outer arc in radio observations. In our model, the supernova explosion producing the remnant occurs inside and near the border of a spherical cloud with a density higher than that of the surrounding interstellar medium. A blowout is produced when the remnant reaches the border of the cloud. We have then used the results of our hydr...

  8. Orbital Blowout Fracture with Complete Dislocation of the Globe into the Maxillary Sinus.

    Science.gov (United States)

    Wang, Joy Mh; Fries, Fabian N; Hendrix, Philipp; Brinker, Titus; Loukas, Marios; Tubbs, R Shane

    2017-09-29

    This rare case report describes the diagnosis and treatment of an isolated left-sided orbital floor fracture with complete dislocation of the globe into the maxillary sinus, and briefly discusses the indications for surgery and the recovery for orbital floor fractures in general. Complete herniation of the globe through an orbital blow-out fracture is uncommon. However, the current case illustrates that such an occurrence should be considered in the differential diagnosis, especially following high-speed or high-impact injuries involving a foreign object. In these rare cases, surgical intervention is required.

  9. Estimating Discount Rates

    Directory of Open Access Journals (Sweden)

    Laurence Booth

    2015-04-01

    Discount rates are essential to applied finance, especially in setting prices for regulated utilities and valuing the liabilities of insurance companies and defined benefit pension plans. This paper reviews the basic building blocks for estimating discount rates. It also examines market risk premiums, as well as what constitutes a benchmark fair or required rate of return, in the aftermath of the financial crisis and the U.S. Federal Reserve’s bond-buying program. Some of the results are disconcerting. In Canada, utilities and pension regulators responded to the crash in different ways. Utilities regulators haven't passed on the full impact of low interest rates, so consumers face higher prices than they should, whereas pension regulators have done the opposite and forced some contributors to pay more. In both cases this is opposite to the desired effect of monetary policy, which is to stimulate aggregate demand. A comprehensive survey of global finance professionals carried out last year provides some clues as to where adjustments are needed. In the U.S., the average required equity market return was estimated at 8.0 per cent; Canada’s is 7.40 per cent, due to the lower market risk premium and the lower risk-free rate. This paper adds a wealth of historical and survey data to conclude that the ideal base long-term interest rate used in risk premium models should be 4.0 per cent, producing an overall expected market return of 9-10.0 per cent. The same data indicate that allowed returns to utilities are currently too high, while the use of current bond yields in solvency valuations of pension plans and life insurers is unhelpful unless there is a realistic expectation that the plans will soon be terminated.
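
    The building blocks reviewed in the paper combine a base long-term interest rate with a market risk premium. A minimal sketch of that arithmetic is given below, using the article's 4.0 per cent base rate and a premium roughly consistent with its 9-10 per cent overall market return; the beta and premium values are assumptions for illustration.

```python
# Minimal sketch of the building blocks discussed above: a CAPM-style required
# return = base (risk-free) rate + beta * market risk premium. The beta and
# premium are illustrative; the 4% base rate echoes the article's conclusion.

def required_return(base_rate, beta, market_risk_premium):
    return base_rate + beta * market_risk_premium

print(required_return(0.04, 1.0, 0.055))  # about 9.5% at beta = 1
```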

  10. Transarterial endovascular treatment in the management of life-threatening carotid blowout syndrome in head and neck cancer patients: review of the literature.

    Science.gov (United States)

    Dequanter, D; Shahla, M; Paulus, P; Aubert, C; Lothaire, P

    2013-12-01

    Carotid blowout syndrome is a rare but devastating complication in patients with head and neck malignancy, and is associated with high morbidity and mortality. Bleeding from the carotid artery or its branches is a well-recognized complication following treatment or recurrence of head and neck cancer. It is an emergency situation, and the classical approach to save the patient's life is to ligate the carotid artery. However, surgical treatment is often technically difficult. Endovascular therapies were recently reported as good alternatives to surgical ligation. We retrospectively reviewed three cases of acute or threatened carotid hemorrhage managed by endovascular therapies. Two patients presented with acute carotid blowout, and one patient with a sentinel bleed. Two patients had previously been treated with surgery and chemoradiation; one patient was treated by chemoradiation alone. Two had developed pharyngocutaneous fistulas, and one had an open, necrosis-filled wound that surrounded the carotid artery. In two patients, stent placement resolved the acute hemorrhage. In one patient, superselective embolization was performed. Mean follow-up duration was 10.2 months. No patient had residual sequelae of stenting or embolization. Management of carotid blowout syndrome is critical and difficult, and a multidisciplinary approach is very important. Correct and suitable management can be life-saving. An endovascular technique is a good and effective alternative, with much lower morbidity rates than surgical repair or ligation. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  11. An Innovative Oil Pollution Containment Method for Ship Wrecks Proposed for Offshore Well Blow-outs

    OpenAIRE

    ANDRITSOS Fivos; COJINS Hans

    2011-01-01

    In the aftermath of the PRESTIGE disaster, an innovative system for the prompt intervention on oil pollution sources (primarily ship wrecks) at great depths was conceived at the Joint Research Center of the European Commission. This system, with some re-engineering, could also serve for collecting oil and gas leaking after an offshore well blow-out and could constitute a reference method for prompt intervention on deep water oil pollution sources like ship wrecks and blown-out offshore wells....

  12. 19 CFR 159.38 - Rates for estimated duties.

    Science.gov (United States)

    2010-04-01

    Title 19 (Customs Duties), Liquidation of Duties, Conversion of Foreign Currency, § 159.38 Rates for estimated duties: For purposes of calculating estimated duties, the port director shall use the rate or rates...

  13. Three-Axis Attitude Estimation Using Rate-Integrating Gyroscopes

    Science.gov (United States)

    Crassidis, John L.; Markley, F. Landis

    2016-01-01

    Traditionally, attitude estimation has been performed using a combination of external attitude sensors and internal three-axis gyroscopes. There are many studies of three-axis attitude estimation using gyros that read angular rates. Rate-integrating gyros measure integrated rates or angular displacements, but three-axis attitude estimation using these types of gyros has not been as fully investigated. This paper derives a Kalman filtering framework for attitude estimation using attitude sensors coupled with rate-integrating gyroscopes. In order to account for correlations introduced by using these gyros, the state vector must be augmented, compared with filters using traditional gyros that read angular rates. Two filters are derived in this paper. The first uses an augmented state-vector form that estimates attitude, gyro biases, and gyro angular displacements. The second ignores correlations, leading to a filter that estimates attitude and gyro biases only. Simulation comparisons are shown for both filters. The work presented in this paper focuses only on attitude estimation using rate-integrating gyros, but it can easily be extended to other applications such as inertial navigation, which estimates attitude and position.
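
    The filter derivation itself is beyond a short sketch, but the key difference from rate gyros is that a rate-integrating gyro reports angular-displacement increments rather than angular rates. The code below is a hedged illustration of propagating a quaternion attitude directly from such increments (it is not the paper's augmented-state Kalman filter); the increment values are arbitrary examples.

```python
import numpy as np

# Hedged illustration only: integrate a quaternion attitude from the
# angular-displacement increments a rate-integrating gyro reports each sample.
# This is not the paper's augmented-state Kalman filter.

def quat_from_delta_angle(dtheta):
    """Quaternion [x, y, z, w] for a small rotation vector dtheta (rad)."""
    angle = np.linalg.norm(dtheta)
    if angle < 1e-12:
        return np.array([0.0, 0.0, 0.0, 1.0])
    axis = dtheta / angle
    return np.concatenate([axis * np.sin(angle / 2.0), [np.cos(angle / 2.0)]])

def quat_multiply(q, r):
    """Hamilton product in [x, y, z, w] convention."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = r
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

q = np.array([0.0, 0.0, 0.0, 1.0])                    # identity attitude
for dtheta in [np.array([0.001, 0.0, 0.002])] * 100:  # example gyro increments
    q = quat_multiply(q, quat_from_delta_angle(dtheta))
    q /= np.linalg.norm(q)                            # keep unit norm
print(q)
```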

  14. Was the extreme and widespread marine oil-snow sedimentation and flocculent accumulation (MOSSFA) event during the Deepwater Horizon blow-out unique?

    NARCIS (Netherlands)

    Vonk, S.M.; Hollander, D.J.; Murk, A.J.

    2015-01-01

    During the Deepwater Horizon blowout, thick layers of oiled material were deposited on the deep seafloor. This large scale benthic concentration of oil is suggested to have occurred via the process of Marine Oil Snow Sedimentation and Flocculent Accumulation (MOSSFA). This meta-analysis investigates

  15. Distal Marginal Stenosis: A Contributing Factor in Delayed Carotid Occlusion of a Patient With Carotid Blowout Syndrome Treated With Stent Grafts

    Directory of Open Access Journals (Sweden)

    Feng-Chi Chang

    2010-05-01

    Distal marginal stenosis is rarely reported as a factor associated with poor long-term patency in head and neck cancer patients with carotid blowout syndrome treated with stent grafts. We report a case of laryngeal cancer with rupture of the right common carotid artery. A self-expandable stent graft was deployed, but bleeding recurred. Another stent graft was deployed for the pseudoaneurysm located distal to the first stent graft. Rebleeding occurred because of pseudoaneurysm formation from reconstituted branches of the right superior thyroid artery. We performed direct percutaneous puncture of the proximal superior thyroid artery for successful embolization. Distal marginal stenosis and asymptomatic thrombosis of the carotid artery were noted at the 3.5- and 5-month follow-ups, respectively. We suggest aggressive early follow-up and reintervention for distal marginal stenosis, by combined antibiotic therapy, angioplasty, and stenting, to improve the long-term patency of stent-graft deployment in the management of carotid blowout syndrome.

  16. Cladoceran birth and death rates estimates

    OpenAIRE

    Gabriel, Wilfried; Taylor, B. E.; Kirsch-Prokosch, Susanne

    1987-01-01

    1. Birth and death rates of natural cladoceran populations cannot be measured directly. Estimates of these population parameters must be calculated using methods that make assumptions about the form of population growth. These methods generally assume that the population has a stable age distribution. 2. To assess the effect of variable age distributions, we tested six egg ratio methods for estimating birth and death rates with data from thirty-seven laboratory populations of Daphnia puli...
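
    For context, egg ratio methods of the kind being compared are typically built around an instantaneous birth rate derived from the egg ratio and the egg development time, with the death rate obtained by subtracting the realized growth rate. The sketch below shows one common Paloheimo-style formulation; it is not necessarily one of the six specific methods tested, and the example numbers are illustrative.

```python
import math

# One widely used (Paloheimo-style) egg-ratio formulation, shown for context;
# it is not necessarily one of the six methods tested, and the numbers in the
# example are illustrative.

def birth_rate(egg_ratio, egg_development_days):
    """Instantaneous birth rate b = ln(1 + E) / D."""
    return math.log(1.0 + egg_ratio) / egg_development_days

def death_rate(egg_ratio, egg_development_days, n_start, n_end, interval_days):
    """Death rate d = b - r, with r the realized per-capita growth rate."""
    r = math.log(n_end / n_start) / interval_days
    return birth_rate(egg_ratio, egg_development_days) - r

# 0.4 eggs per female, 3-day egg development, population 100 -> 120 over 7 days
print(death_rate(0.4, 3.0, 100.0, 120.0, 7.0))
```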

  17. Short-range wakefields generated in the blowout regime of plasma-wakefield acceleration

    Science.gov (United States)

    Stupakov, G.

    2018-04-01

    In the past, calculation of wakefields generated by an electron bunch propagating in a plasma has been carried out in linear approximation, where the plasma perturbation can be assumed small and plasma equations of motion linearized. This approximation breaks down in the blowout regime where a high-density electron driver expels plasma electrons from its path and creates a cavity void of electrons in its wake. In this paper, we develop a technique that allows us to calculate short-range longitudinal and transverse wakes generated by a witness bunch being accelerated inside the cavity. Our results can be used for studies of the beam loading and the hosing instability of the witness bunch in plasma-wakefield and laser-wakefield acceleration.

  18. Use of the theory of recognition of patterns in developing methane metering equipment for blow-out-dangerous mines. [Instrument recognizes rate of change of methane concentration and, if dangerous, shuts off electrical equipment

    Energy Technology Data Exchange (ETDEWEB)

    Medvedev, V N

    1978-01-01

    In the most general form, existing methane-metering equipment which issues command signals when the maximum permissible methane concentration has been reached can be viewed as a recognition system. An algorithm operating on the principle of evaluating the degree of blow-out danger of the mine atmosphere requires the recognition of two situations: 1) 'not dangerous' (methane concentration below the maximum permissible value); 2) 'dangerous' (disorders in the technological process; methane concentration above the maximum permissible value). This approach to constructing gas protection equipment is optimal only for mines working beds that are not prone to sudden blow-outs. However, if the apparatus is 'trained' to recognize the reason for an increase in methane concentration, the problem of creating effective methane-metering equipment for mines with sudden blow-outs can be solved. Gas-dynamic processes associated with sudden blow-outs can be distinguished from standard technological processes, in particular by the rate of increase in methane concentration. On this basis, a functional plan is proposed for constructing automatic gas protection for explosion-dangerous mines, which includes primary measurement of methane concentration, a concentration control block, a process recognition block, a command signal block, an information delay block, a block for measuring the rate of change of methane concentration, and a threshold device for the rate of increase in concentration.
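
    The two-situation recognition logic described above can be summarized in a few lines: exceeding the permissible concentration is always "dangerous", and a sufficiently fast rise in concentration is treated as a possible gas-dynamic (blow-out) event rather than a normal technological disturbance. The thresholds in the sketch below are illustrative assumptions, not values from the source.

```python
# Sketch of the two-situation recognition logic described above. Thresholds
# are illustrative assumptions, not values from the source.

def command_signal(concentration_pct, rise_rate_pct_per_min,
                   max_permissible_pct=1.0, max_rise_rate_pct_per_min=0.2):
    """Return True if the electrical equipment should be shut off."""
    if concentration_pct > max_permissible_pct:
        return True  # situation 2: concentration above the permissible value
    # otherwise, a fast rise suggests a gas-dynamic (blow-out) event
    return rise_rate_pct_per_min > max_rise_rate_pct_per_min

print(command_signal(0.6, 0.5))  # below the limit but rising fast -> shut off
```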

  19. Estimating NHL Scoring Rates

    OpenAIRE

    Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research

    2011-01-01

    The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season was downloaded and processed into a form suitable for the analysis. The model...
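
    Although the abstract is truncated, the described model is a Poisson goal-count model whose rate depends on the two teams, home ice, and the manpower situation. The sketch below writes down a simplified log-likelihood of that kind (without the manpower terms); the team names, parameter values, and multiplicative form of the home-ice term are assumptions for illustration.

```python
import math

# Sketch of a Poisson goal-count log-likelihood in the spirit of the abstract
# (team attack/defence factors and a home-ice multiplier, manpower terms
# omitted). Team names and parameter values are illustrative assumptions.

def log_likelihood(games, attack, defence, home_adv):
    """games: iterable of (home, away, home_goals, away_goals)."""
    ll = 0.0
    for home, away, hg, ag in games:
        mu_home = attack[home] * defence[away] * home_adv
        mu_away = attack[away] * defence[home]
        ll += hg * math.log(mu_home) - mu_home - math.lgamma(hg + 1)
        ll += ag * math.log(mu_away) - mu_away - math.lgamma(ag + 1)
    return ll

games = [("BOS", "MTL", 4, 2), ("MTL", "BOS", 3, 3)]
attack = {"BOS": 3.1, "MTL": 2.8}     # expected goals scored per game
defence = {"BOS": 0.95, "MTL": 1.05}  # multiplier on the opponent's attack
print(log_likelihood(games, attack, defence, home_adv=1.05))
```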

  20. Phylogenetic estimates of diversification rate are affected by molecular rate variation.

    Science.gov (United States)

    Duchêne, D A; Hua, X; Bromham, L

    2017-10-01

    Molecular phylogenies are increasingly being used to investigate the patterns and mechanisms of macroevolution. In particular, node heights in a phylogeny can be used to detect changes in rates of diversification over time. Such analyses rest on the assumption that node heights in a phylogeny represent the timing of diversification events, which in turn rests on the assumption that evolutionary time can be accurately predicted from DNA sequence divergence. But there are many influences on the rate of molecular evolution, which might also influence node heights in molecular phylogenies, and thus affect estimates of diversification rate. In particular, a growing number of studies have revealed an association between the net diversification rate estimated from phylogenies and the rate of molecular evolution. Such an association might, by influencing the relative position of node heights, systematically bias estimates of diversification time. We simulated the evolution of DNA sequences under several scenarios where rates of diversification and molecular evolution vary through time, including models where diversification and molecular evolutionary rates are linked. We show that commonly used methods, including metric-based, likelihood and Bayesian approaches, can have a low power to identify changes in diversification rate when molecular substitution rates vary. Furthermore, the association between the rates of speciation and molecular evolution rate can cause the signature of a slowdown or speedup in speciation rates to be lost or misidentified. These results suggest that the multiple sources of variation in molecular evolutionary rates need to be considered when inferring macroevolutionary processes from phylogenies. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  1. Longitudinal phase space characterization of the blow-out regime of rf photoinjector operation

    Directory of Open Access Journals (Sweden)

    J. T. Moody

    2009-07-01

    Full Text Available Using an experimental scheme based on a vertically deflecting rf deflector and a horizontally dispersing dipole, we characterize the longitudinal phase space of the beam in the blow-out regime at the UCLA Pegasus rf photoinjector. Because of the achievement of unprecedented resolution both in time (50 fs and energy (1.0 keV, we are able to demonstrate some important properties of the beams created in this regime such as extremely low longitudinal emittance, large temporal energy chirp, and the degrading effects of the cathode image charge in the longitudinal phase space which eventually leads to poorer beam quality. All of these results have been found in good agreement with simulations.

  2. Combustor exhaust-emissions and blowout-limits with diesel number 2 and Jet A fuels utilizing air-atomizing and pressure-atomizing nozzles

    Science.gov (United States)

    Ingebo, R. D.; Norgren, C. T.

    1975-01-01

    The effect of fuel properties on exhaust emissions and blowout limits of a high-pressure combustor segment is evaluated using a splash-groove air-atomizing fuel injector and a pressure-atomizing simplex fuel nozzle to burn both diesel number 2 and Jet A fuels. Exhaust emissions and blowout data are obtained and compared on the basis of the aromatic content and volatility of the two fuels. Exhaust smoke number and emission indices for oxides of nitrogen, carbon monoxide, and unburned hydrocarbons are determined for comparison. As compared to the pressure-atomizing nozzle, the air-atomizing nozzle is found to reduce nitrogen oxides by 20%, smoke number by 30%, carbon monoxide by 70%, and unburned hydrocarbons by 50% when used with diesel number 2 fuel. The higher concentration of aromatics and lower volatility of diesel number 2 fuel as compared to Jet A fuel appear to have the most detrimental effect on exhaust emissions. Smoke number and unburned hydrocarbons are twice as high with diesel number 2 as with Jet A fuel.

  3. 30 CFR 250.213 - What general information must accompany the EP?

    Science.gov (United States)

    2010-07-01

    ... description and discussion of any new or unusual technology (see definition under § 250.200) you will use to... hydrocarbons. Include the estimated flow rate, total volume, and maximum duration of the potential blowout...
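
    The worst-case discharge figures the regulation asks for are related by simple arithmetic: the total volume is the estimated flow rate multiplied by the maximum duration of the potential blowout. A trivial sketch, with illustrative numbers only:

```python
# Worst-case discharge arithmetic: total volume = flow rate x duration.
# The numbers are illustrative only.

def blowout_total_volume_bbl(flow_rate_bbl_per_day, duration_days):
    return flow_rate_bbl_per_day * duration_days

print(blowout_total_volume_bbl(10_000, 30))  # 300,000 bbl over a 30-day blowout
```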

  4. Estimation of the rate of volcanism on Venus from reaction rate measurements

    Science.gov (United States)

    Fegley, Bruce, Jr.; Prinn, Ronald G.

    1989-01-01

    Laboratory rate data for the reaction between SO2 and calcite to form anhydrite are presented. If this reaction rate represents the SO2 reaction rate on Venus, then all SO2 in the Venusian atmosphere will disappear in 1.9 Myr unless volcanism replenishes the lost SO2. The required volcanism rate, which depends on the sulfur content of the erupted material, is in the range 0.4-11 cu km of magma erupted per year. The Venus surface composition at the Venera 13, 14, and Vega 2 landing sites implies a volcanism rate of about 1 cu km/yr. This geochemically estimated rate can be used to determine if either (or neither) of two discordant geophysically estimated rates is correct. It also suggests that Venus may be less volcanically active than the earth.
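
    The logic of the estimate is a mass balance: the SO2 lost to reaction with surface minerals over its ~1.9 Myr lifetime must be resupplied by the sulfur carried in erupted magma. The back-of-envelope sketch below mirrors that reasoning; the SO2 inventory, sulfur content, and magma density in the example are placeholders for illustration, not the paper's inputs.

```python
# Back-of-envelope mass balance mirroring the abstract's reasoning: volcanism
# must resupply the SO2 lost to reaction with surface calcite. All numbers
# below are placeholders for illustration, not the paper's inputs.

def required_eruption_rate_km3_per_yr(so2_inventory_kg, so2_lifetime_yr,
                                      sulfur_mass_fraction,
                                      magma_density_kg_m3=2800.0):
    so2_loss_rate = so2_inventory_kg / so2_lifetime_yr         # kg SO2 / yr
    sulfur_loss_rate = so2_loss_rate * (32.06 / 64.06)         # kg S / yr
    magma_mass_rate = sulfur_loss_rate / sulfur_mass_fraction  # kg magma / yr
    return magma_mass_rate / magma_density_kg_m3 / 1.0e9       # km^3 / yr

# hypothetical SO2 inventory, the 1.9 Myr lifetime, 0.1 wt% sulfur in the magma
print(required_eruption_rate_km3_per_yr(1.0e17, 1.9e6, 0.001))
```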

  5. Seed bank dynamics of blowout penstemon in relation to local patterns of sand movement on the Ferris Dunes, south-central Wyoming

    Science.gov (United States)

    Kassie L. Tilini; Susan E. Meyer; Phil S. Allen

    2017-01-01

    Plants restricted to active sand dunes possess traits that enable both survival in a harsh environment and local migration in response to a shifting habitat mosaic. We examined seed bank dynamics of Penstemon haydenii S. Watson (blowout penstemon) in relation to local sand movement. We measured within-year sand movement along a 400 m transect and examined plant density...

  6. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    Science.gov (United States)

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
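
    The equal-probability discrete gamma approximation discussed above is straightforward to reproduce. The sketch below (Python with SciPy; the shape parameter and number of categories are arbitrary illustrative values, not taken from the article) computes the category rates as the conditional means of a mean-one gamma distribution within each equal-probability bin, and shows that the largest category rate grows as the number of categories increases, consistent with the underestimation of the largest rates reported for small numbers of categories.

```python
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, k):
    """Equal-probability discrete gamma rate categories (mean rate = 1).

    alpha : gamma shape parameter (rate parameter also alpha, so the mean is 1)
    k     : number of rate categories
    """
    # Quantile boundaries splitting the gamma distribution into k equal-probability bins
    cuts = gamma.ppf(np.linspace(0.0, 1.0, k + 1), a=alpha, scale=1.0 / alpha)
    # Conditional mean within each bin, via E[X; X < x] = F(x; alpha + 1, rate alpha)
    upper = gamma.cdf(cuts[1:], a=alpha + 1, scale=1.0 / alpha)
    lower = gamma.cdf(cuts[:-1], a=alpha + 1, scale=1.0 / alpha)
    return k * (upper - lower)

print(discrete_gamma_rates(alpha=0.5, k=4))   # coarse discretization
print(discrete_gamma_rates(alpha=0.5, k=32))  # finer: the largest category rate increases
```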

  7. Introduction to State Estimation of High-Rate System Dynamics.

    Science.gov (United States)

    Hong, Jonathan; Laflamme, Simon; Dodson, Jacob; Joyce, Bryan

    2018-01-13

    Engineering systems experiencing high-rate dynamic events, including airbags, debris detection, and active blast protection systems, could benefit from real-time observability for enhanced performance. However, the task of high-rate state estimation is challenging, in particular for real-time applications where the rate of the observer's convergence needs to be in the microsecond range. This paper identifies the challenges of state estimation of high-rate systems and discusses the fundamental characteristics of high-rate systems. A survey of applications and methods for estimators that have the potential to produce accurate estimations for a complex system experiencing highly dynamic events is presented. It is argued that adaptive observers are important to this research. In particular, adaptive data-driven observers are advantageous due to their adaptability and lack of dependence on the system model.

  8. Estimating market probabilities of future interest rate changes

    OpenAIRE

    Hlušek, Martin

    2002-01-01

    The goal of this paper is to estimate the market consensus forecast of future monetary policy development and to quantify the priced-in probability of interest rate changes for different future time horizons. The proposed model uses the current spot money market yield curve and available money market derivative instruments (forward rate agreements, FRAs) and estimates the market probability of interest rate changes up to a 12-month horizon.

  9. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, the lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on data from type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes estimators with regression estimators of (a,b). The criterion for comparison is the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as a special case by letting c = 2.
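
    As a concrete illustration of the model, the hazard h(t) = a + bt^(c-1) has cumulative hazard H(t) = at + (b/c)t^c, so lifetimes can be simulated by inverting H(t) = E with E ~ Exp(1). The sketch below (a hypothetical illustration, not the article's Bayes or regression estimators) does this for the linearly increasing special case c = 2 and keeps only the first r order statistics, mimicking type II censoring.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_linear_hazard(a, b, size):
    """Draw lifetimes with hazard h(t) = a + b*t (the c = 2 special case),
    by inverting the cumulative hazard H(t) = a*t + b*t**2/2 = E with E ~ Exp(1)."""
    e = rng.exponential(size=size)
    return (-a + np.sqrt(a * a + 2.0 * b * e)) / b

# Hypothetical parameters; type II censoring keeps only the first r failures of n items
a_true, b_true, n, r = 0.5, 0.2, 30, 20
lifetimes = np.sort(sample_linear_hazard(a_true, b_true, n))
observed = lifetimes[:r]   # the data available to the estimators of (a, b)
print(observed.round(3))
```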

  10. Prolonged decay of molecular rate estimates for metazoan mitochondrial DNA

    Directory of Open Access Journals (Sweden)

    Martyna Molak

    2015-03-01

    Full Text Available Evolutionary timescales can be estimated from genetic data using the molecular clock, often calibrated by fossil or geological evidence. However, estimates of molecular rates in mitochondrial DNA appear to scale negatively with the age of the clock calibration. Although such a pattern has been observed in a limited range of data sets, it has not been studied on a large scale in metazoans. In addition, there is uncertainty over the temporal extent of the time-dependent pattern in rate estimates. Here we present a meta-analysis of 239 rate estimates from metazoans, representing a range of timescales and taxonomic groups. We found evidence of time-dependent rates in both coding and non-coding mitochondrial markers, in every group of animals that we studied. The negative relationship between the estimated rate and time persisted across a much wider range of calibration times than previously suggested. This indicates that, over long time frames, purifying selection gives way to mutational saturation as the main driver of time-dependent biases in rate estimates. The results of our study stress the importance of accounting for time-dependent biases in estimating mitochondrial rates regardless of the timescale over which they are inferred.

  11. Diagnosis of blow-out fracture of the orbital floor using a computed tomography

    International Nuclear Information System (INIS)

    Ishikawa, Yoshimi; Aoki, Shinjiroh; Ono, Shigeru; Fujita, Kiyohide

    1986-01-01

    Diagnosis of a blow-out fracture of the orbital floor is relatively easy from clinical and roentgen findings, but the information from these findings is not adequate for deciding on surgical indication. Until recently, computed tomography of the orbital region was generally performed only in the axial plane. But the axial plane is parallel to the orbital structures, and it is therefore difficult to visualize the orbital floor and inferior rectus muscle. The coronal plane, on the other hand, is a cross-section of the orbital structures, and all the orbital soft tissues, including the extraocular muscles, can be visualized. We examined 3 cases diagnosed as blow-out fractures of the orbital floor from conventional roentgen films and clinical findings. After axial CT was performed, coronal CT was scanned at 60 to 70 degrees from the OM line, with the patient in the hanging head position. Case 1 used coronal reformation of the axial CT data; cases 2 and 3 used direct coronal scanning. With bi-plane CT we could diagnose whether the inferior rectus muscle was entrapped or not, and confirm the surgical indication. Using a cadaver, we studied the mechanism of limitation of eye movement. These studies revealed that the inferior rectus muscle is rarely entrapped directly by fractured segments at the anterior orbital floor, because a large amount of orbital fat-pad lies between the inferior rectus muscle and the orbital floor and may play an important role. The limited mobility of the inferior rectus muscle may instead result from increased tension of the fibrous bands from the prolapsed fat-pad that attach to the muscle sheath, and from contracture caused by secondary scar formation. The location of a fracture in the orbital floor and the cause of the limitation of eye movement must be diagnosed as early as possible, and from this information the surgical indication can be confirmed. (J.P.N.)

  12. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
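
    The two catch rate estimators compared above are simple to state: the ratio-of-means estimator divides total catch by total effort, while the mean-of-ratios estimator averages the individual catch-per-hour values, optionally excluding short trips. A minimal sketch with made-up interview data, not the study's simulation framework:

```python
import numpy as np

# Hypothetical completed-trip interview data: catch (fish) and effort (hours) per angler
catch = np.array([0, 1, 2, 0, 3, 1, 0, 2])
hours = np.array([0.4, 1.5, 3.0, 0.5, 4.0, 2.0, 1.0, 2.5])

rom = catch.sum() / hours.sum()              # ratio-of-means estimator
mor = np.mean(catch / hours)                 # mean-of-ratios estimator
keep = hours > 0.5                           # MOR excluding short (<= 0.5 h) trips
mor_trimmed = np.mean(catch[keep] / hours[keep])

effort_estimate = 120.0                      # angler-hours, assumed from a separate count
print(rom, mor, mor_trimmed, rom * effort_estimate)   # total catch = catch rate x effort
```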

  13. Use of Hedonic Prices to Estimate Capitalization Rate

    OpenAIRE

    Gaetano Lisi

    2015-01-01

    In this paper, a model of income capitalization is developed where hedonic prices play a key role in estimating the going-in capitalization rate. Precisely, the hedonic functions for rental and selling prices are introduced into a basic model of income capitalization. From the modified model, it is possible to derive a direct relationship between hedonic prices and capitalization rate. An advantage of the proposed approach is that estimation of the capitalization rate can be made without cons...

  14. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    Science.gov (United States)

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza-associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and its use increased over the surveillance period. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children and were higher still for adults >65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
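
    The sensitivity adjustment itself is a simple division of the observed rate by the test-mix-weighted sensitivity of the diagnostic tests in use. A minimal sketch with illustrative numbers (not FluSurv-NET values):

```python
# Correct an observed laboratory-confirmed rate for imperfect test sensitivity.
# All numbers are illustrative, not FluSurv-NET values.
observed_rate = 30.0                                  # hospitalizations per 100,000, lab-confirmed
test_mix = {"rt_pcr": 0.70, "rapid_antigen": 0.30}    # share of patients tested by each assay
sensitivity = {"rt_pcr": 0.95, "rapid_antigen": 0.60}

# Effective sensitivity of the surveillance system: test-mix-weighted average
effective_sens = sum(test_mix[t] * sensitivity[t] for t in test_mix)
adjusted_rate = observed_rate / effective_sens
print(round(effective_sens, 3), round(adjusted_rate, 1))
```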

  15. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.

  16. Ion response to relativistic electron bunches in the blowout regime of laser-plasma accelerators.

    Science.gov (United States)

    Popov, K I; Rozmus, W; Bychenkov, V Yu; Naseri, N; Capjack, C E; Brantov, A V

    2010-11-05

    The ion response to relativistic electron bunches in the so called bubble or blowout regime of a laser-plasma accelerator is discussed. In response to the strong fields of the accelerated electrons the ions form a central filament along the laser axis that can be compressed to densities 2 orders of magnitude higher than the initial particle density. A theory of the filament formation and a model of ion self-compression are proposed. It is also shown that in the case of a sharp rear plasma-vacuum interface the ions can be accelerated by a combination of three basic mechanisms. The long time ion evolution that results from the strong electrostatic fields of an electron bunch provides a unique diagnostic of laser-plasma accelerators.

  17. FIBRILLAR CHROMOSPHERIC SPICULE-LIKE COUNTERPARTS TO AN EXTREME-ULTRAVIOLET AND SOFT X-RAY BLOWOUT CORONAL JET

    International Nuclear Information System (INIS)

    Sterling, Alphonse C.; Moore, Ronald L.; Harra, Louise K.

    2010-01-01

    We observe an erupting jet feature in a solar polar coronal hole, using data from Hinode/Solar Optical Telescope (SOT), Extreme Ultraviolet Imaging Spectrometer (EIS), and X-Ray Telescope (XRT), with supplemental data from STEREO/EUVI. From extreme-ultraviolet (EUV) and soft X-ray (SXR) images we identify the erupting feature as a blowout coronal jet: in SXRs it is a jet with a bright base, and in EUV it appears as an eruption of relatively cool (∼50,000 K) material of horizontal size scale ∼30'' originating from the base of the SXR jet. In SOT Ca II H images, the most pronounced analog is a pair of thin (∼1'') ejections at the locations of either of the two legs of the erupting EUV jet. These Ca II features eventually rise beyond 45'', leaving the SOT field of view, and have an appearance similar to standard spicules except that they are much taller. They have velocities similar to that of 'type II' spicules, ∼100 km s⁻¹, and they appear to have spicule-like substructures splitting off from them with horizontal velocity ∼50 km s⁻¹, similar to the velocities of splitting spicules measured by Sterling et al. Motions of splitting features and of other substructures suggest that the macroscopic EUV jet is spinning or unwinding as it is ejected. This and earlier work suggest that a subpopulation of Ca II type II spicules are the Ca II manifestation of portions of larger-scale erupting magnetic jets. A different subpopulation of type II spicules could be blowout jets occurring on a much smaller horizontal size scale than the event we observe here.

  18. Control of an innovative super-capacitor-powered shape-memory-alloy actuated accumulator for blowout preventer

    Science.gov (United States)

    Chen, Jian; Li, Peng; Song, Gangbing; Ren, Zhang

    2017-01-01

    The design of a super-capacitor-powered shape-memory-alloy (SMA) actuated accumulator for blowout preventers (BOPs) presented in this paper features several advantages over conventional hydraulic accumulators, including instant large-current drive, quick system response, and elimination of the need for pressure conduits. However, the mechanical design introduces two challenges for control system design: the nonlinear nature of SMA actuators and the varying voltage provided by a super capacitor. A cerebellar model articulation controller (CMAC) feedforward plus PID controller was developed with the aim of compensating for these adverse effects. Experiments were conducted on a scaled-down model, and the experimental results show that precision control can be achieved with the proposed configurations and algorithms.
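
    To make the control structure concrete, the sketch below shows a bare discrete PID update with a placeholder standing in for the trained CMAC feedforward term; the gains, time step and signal values are illustrative assumptions, not those of the paper.

```python
def cmac_feedforward(setpoint):
    # Stand-in: the trained CMAC would map the setpoint to a feedforward drive signal
    return 0.0

def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.001):
    """One discrete PID update; gains and time step are illustrative only."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

state = (0.0, 0.0)
setpoint, measured = 10.0, 8.7                 # illustrative piston-position values (mm)
u_fb, state = pid_step(setpoint - measured, state)
u = cmac_feedforward(setpoint) + u_fb          # feedforward + feedback drive command
print(u)
```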

  19. Estimation of dose from chromosome aberration rate

    International Nuclear Information System (INIS)

    Li Deping

    1990-01-01

    The methods and skills of evaluating dose from correctly scored chromosome aberration rates are presented and supplemented with corresponding BASIC computer code. The possibility of missing aberrations in some of the current routine scoring methods, and preventive measures against this, are discussed. The use of the dose-effect relationship with an exposure time correction factor G in evaluating doses and their confidence intervals, dose estimation in mixed n-γ exposure, and the identification of highly nonuniform acute exposure to low LET radiation and its dose estimation are discussed in more detail. The difference in estimated dose depending on whether the interaction between sublesions produced by n and γ has been taken into account is examined. In fitting the standard dose-aberration rate curve, proper weighting of experimental points and comparison with commonly accepted values are emphasised, and the coefficient of variation of the aberration rate y as a function of dose and exposure time is given. In appendices I and II, the dose-aberration rate formula is derived from dual action theory and the time variation of sublesions is illustrated, and in appendix III, the estimation of dose from scores of two different types of aberrations (or other related scores) is illustrated. Two computer codes are given in appendix IV: one is a simple code, the other a complete code including the fitting of the standard curve, the skills of using compressed data storage, and the production of simulated 'data' for testing the curve-fitting procedure.

  20. Epistemic uncertainties when estimating component failure rate

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Mavko, B.; Kljenak, I.

    2000-01-01

    A method for specific estimation of a component failure rate, based on specific quantitative and qualitative data other than component failures, was developed and is described in this paper. The basis of the method is the Bayesian updating procedure. A prior distribution is selected from a generic database, whereas the likelihood is built using fuzzy logic theory. With the proposed method, the component failure rate estimation is based on a much larger quantity of information than with the presently used classical methods. Consequently, epistemic uncertainties, which are caused by lack of knowledge about a component or phenomenon, are reduced. (author)
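
    The Bayesian updating backbone of such methods can be illustrated with the standard gamma-Poisson conjugate update; the paper's fuzzy-logic construction of the likelihood from qualitative data is not reproduced here, and the prior and observation values below are illustrative assumptions.

```python
# Gamma prior on the failure rate, Poisson evidence: k failures in cumulative time T.
# The paper's fuzzy-logic likelihood for qualitative, non-failure data is not reproduced.
alpha0, beta0 = 0.5, 1.0e5      # generic prior shape and exposure-time parameter, illustrative
k, T = 2, 3.0e5                 # assumed specific evidence: failures and operating hours

alpha_post, beta_post = alpha0 + k, beta0 + T
print("prior mean rate    :", alpha0 / beta0, "per hour")
print("posterior mean rate:", alpha_post / beta_post, "per hour")
```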

  1. Linking irreplaceable landforms in a self-organizing landscape to sensitivity of population vital rates for an ecological specialist.

    Science.gov (United States)

    Ryberg, Wade A; Hill, Michael T; Painter, Charles W; Fitzgerald, Lee A

    2015-06-01

    Irreplaceable, self-organizing landforms and the endemic and ecologically specialized biodiversity they support are threatened globally by anthropogenic disturbances. Although the outcome of disrupting landforms is somewhat understood, little information exists that documents population consequences of landform disturbance on endemic biodiversity. Conservation strategies for species dependent upon landforms have been difficult to devise because they require understanding complex feedbacks that create and maintain landforms and the consequences of landform configuration on demography of species. We characterized and quantified links between landform configuration and demography of an ecological specialist, the dunes sagebrush lizard (Sceloporus arenicolus), which occurs only in blowouts (i.e., wind-blown sandy depressions) of Shinnery oak (Quercus havardii) sand-dune landforms. We used matrix models to estimate vital rates from a multisite mark-recapture study of 6 populations occupying landforms with different spatial configurations. Sensitivity and elasticity analyses demonstrated demographic rates among populations varied in sensitivity to different landform configurations. Specifically, significant relationships between blowout shape complexity and vital rate elasticities suggested direct links between S. arenicolus demography and amount of edge in Shinnery oak sand-dune landforms. These landforms are irreplaceable, based on permanent transition of disturbed areas to alternative grassland ecosystem states. Additionally, complex feedbacks between wind, sand, and Shinnery oak maintain this landform, indicating restoration through land management practices is unlikely. Our findings that S. arenicolus population dynamics depended on landform configuration suggest that failure to consider processes of landform organization and their effects on species' population dynamics may lead to incorrect inferences about threats to endemic species and ineffective habitat

  2. Bayesian estimation of dose rate effectiveness

    International Nuclear Information System (INIS)

    Arnish, J.J.; Groer, P.G.

    2000-01-01

    A Bayesian statistical method was used to quantify the effectiveness of high dose rate ¹³⁷Cs gamma radiation at inducing fatal mammary tumours and increasing the overall mortality rate in BALB/c female mice. The Bayesian approach considers both the temporal and dose dependence of radiation carcinogenesis and total mortality. This paper provides the first direct estimation of dose rate effectiveness using Bayesian statistics. This statistical approach provides a quantitative description of the uncertainty of the factor characterising the dose rate in terms of a probability density function. The results show that a fixed dose from ¹³⁷Cs gamma radiation delivered at a high dose rate is more effective at inducing fatal mammary tumours and increasing the overall mortality rate in BALB/c female mice than the same dose delivered at a low dose rate. (author)

  3. Flame blowout and pollutant emissions in vitiated combustion of conventional and bio-derived fuels

    Science.gov (United States)

    Singh, Bhupinder

    The widening gap between the demand and supply of fossil fuels has catalyzed the exploration of alternative sources of energy. Interest in the power, water extraction and refrigeration (PoWER) cycle, proposed by the University of Florida, as well as the desirability of using biofuels in distributed generation systems, has motivated the exploration of biofuel vitiated combustion. The PoWER cycle is a novel engine cycle concept that utilizes vitiation of the air stream with externally-cooled recirculated exhaust gases at an intermediate pressure in a semi-closed cycle (SCC) loop, lowering the overall temperature of combustion. It has several advantages including fuel flexibility, reduced air flow, lower flame temperature, compactness, high efficiency at full and part load, and low emissions. Since the core engine air stream is vitiated with the externally cooled exhaust gas recirculation (EGR) stream, there is an inherent reduction in the combustion stability for a PoWER engine. The effect of EGR flow and temperature on combustion blowout stability and emissions during vitiated biofuel combustion has been characterized. The vitiated combustion performance of biofuels methyl butanoate, dimethyl ether, and ethanol have been compared with n-heptane, and varying compositions of syngas with methane fuel. In addition, at high levels of EGR a sharp reduction in the flame luminosity has been observed in our experimental tests, indicating the onset of flameless combustion. This drop in luminosity may be a result of inhibition of processes leading to the formation of radiative soot particles. One of the objectives of this study is finding the effect of EGR on soot formation, with the ultimate objective of being able to predict the boundaries of flameless combustion. Detailed chemical kinetic simulations were performed using a constant-pressure continuously stirred tank reactor (CSTR) network model developed using the Cantera combustion code, implemented in C++. Results have

  4. The Neutral Interest Rate: Estimates for Chile

    OpenAIRE

    Rodrigo Fuentes S; Fabián Gredig U.

    2008-01-01

    To estimate the neutral real interest rate for Chile, we use a variety of methods that can be classified into three categories: those derived from economic theory, the neutral rate implicit in financial assets, and statistical procedures using macroeconomic data. We conclude that the neutral rate is not constant over time, but it is closely related with—though not equivalent to—the potential GDP growth rate. The application of the different methods yields fairly similar results. The neutral r...

  5. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
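
    The co-prime condition can be checked with a toy calculation: with P pulses and an inter-pulse interval of N periods of the desired sampling rate, the slow ADC lands on sample phases (i·N) mod P, which cover all P phases exactly when gcd(N, P) = 1. A small sketch (not the paper's estimator; P and N are arbitrary):

```python
from math import gcd

def phases(P, N):
    """Sample phases (at the desired rate) hit by a P-times-slower ADC over P pulses,
    when the inter-pulse interval equals N desired-rate sampling periods."""
    return sorted({(i * N) % P for i in range(P)})

print(phases(P=5, N=7), gcd(7, 5))   # co-prime: all 5 phases covered
print(phases(P=6, N=9), gcd(9, 6))   # not co-prime: only 2 phases covered
```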

  6. Estimating monotonic rates from biological data using local linear regression.

    Science.gov (United States)

    Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R

    2017-03-01

    Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
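
    A generic local linear slope estimate of the kind underlying such tools can be written in a few lines; the sketch below uses tricube kernel weights around a chosen point and synthetic data, and is not the LoLinR window-selection algorithm itself.

```python
import numpy as np

def local_linear_slope(x, y, x0, bandwidth):
    """Weighted least-squares slope of y on x around x0 (tricube kernel weights)."""
    u = np.abs(x - x0) / bandwidth
    w = np.where(u < 1, (1 - u**3) ** 3, 0.0)          # tricube weights
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # [local intercept, local slope]
    return beta[1]

# Illustrative noisy oxygen trace: nonlinear at the start, roughly linear later on
t = np.linspace(0, 30, 121)
o2 = 8.0 - 0.05 * t + 0.4 * np.exp(-t / 3) + np.random.default_rng(0).normal(0, 0.02, t.size)
print(local_linear_slope(t, o2, x0=20.0, bandwidth=8.0))   # close to the true rate of -0.05
```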

  7. Estimating stutter rates for Y-STR alleles

    DEFF Research Database (Denmark)

    Andersen, Mikkel Meyer; Olofsson, Jill Katharina; Mogensen, Helle Smidt

    2011-01-01

    Stutter peaks are artefacts that arise during PCR amplification of short tandem repeats. Stutter peaks are especially important in forensic case work with DNA mixtures. The aim of the study was primarily to estimate the stutter rates of the AmpFlSTR Yfiler kit. We found that the stutter rates...

  8. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, it has a fatal defect that it can't fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
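
    The comparison can be reproduced in outline by fitting a single Beta distribution and a Gaussian kernel density estimate to the same bimodal sample. The data below are synthetic stand-ins for recovery rates, not the Moody's data used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic bimodal "recovery rates" on [0, 1]: many low and many high recoveries
recoveries = np.concatenate([rng.beta(2, 8, 400), rng.beta(8, 2, 600)])

# Single Beta fit (support fixed to [0, 1]) versus a Gaussian kernel density estimate
a, b, loc, scale = stats.beta.fit(recoveries, floc=0, fscale=1)
kde = stats.gaussian_kde(recoveries)

grid = np.linspace(0.01, 0.99, 5)
print("beta pdf:", np.round(stats.beta.pdf(grid, a, b), 2))
print("kde  pdf:", np.round(kde(grid), 2))   # the KDE follows both modes; one Beta cannot
```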

  9. Estimating forest conversion rates with annual forest inventory data

    Science.gov (United States)

    Paul C. Van Deusen; Francis A. Roesch

    2009-01-01

    The rate of land-use conversion from forest to nonforest or natural forest to forest plantation is of interest for forest certification purposes and also as part of the process of assessing forest sustainability. Conversion rates can be estimated from remeasured inventory plots in general, but the emphasis here is on annual inventory data. A new estimator is proposed...

  10. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Full Text Available Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, it has a fatal defect that it can't fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.

  11. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody’s. However, it has a fatal defect that it can’t fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody’s new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

  12. Understanding estimated worker absenteeism rates during an influenza pandemic.

    Science.gov (United States)

    Thanner, Meridith H; Links, Jonathan M; Meltzer, Martin I; Scheulen, James J; Kelen, Gabor D

    2011-01-01

    Published employee absenteeism estimates during an influenza pandemic range from 10 to 40 percent. The purpose of this study was to estimate daily employee absenteeism through the duration of an influenza pandemic and to determine the relative impact of key variables used to derive the estimates. Using the Centers for Disease Control and Prevention's FluWorkLoss program, the authors estimated the number of absent employees on any given day over the course of a simulated 8-week pandemic wave by using varying attack rates. Employee data from a university with a large academic health system were used. Sensitivity of the program outputs to variation in predictor (inputs) values was assessed. Finally, the authors examined and documented the algorithmic sequence of the program. Using a 35 percent attack rate, a total of 47,270 workdays (or 3.4 percent of all available workdays) would be lost over the course of an 8-week pandemic among a population of 35,026 employees. The highest (peak) daily absenteeism estimate was 5.8 percent (minimum 4.8 percent; maximum 7.4 percent). Sensitivity analysis revealed that varying days missed for nonhospitalized illness had the greatest potential effect on peak absence rate (3.1 to 17.2 percent). Peak absence with 15 and 25 percent attack rates were 2.5 percent and 4.2 percent, respectively. The impact of an influenza pandemic on employee availability may be less than originally thought, even with a high attack rate. These data are generalizable and are not specific to institutions of higher education or medical centers. Thus, these findings provide realistic and useful estimates for influenza pandemic planning for most organizations.
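
    The headline figure can be checked with simple arithmetic, assuming a 5-day work week over the 8-week wave:

```python
employees = 35_026
workdays_available = employees * 8 * 5      # 8 weeks x 5 workdays = 1,401,040 person-days
workdays_lost = 47_270
print(f"{workdays_lost / workdays_available:.1%}")   # ~3.4%, matching the reported value
```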

  13. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

    Directory of Open Access Journals (Sweden)

    Hezerul Abdul Karim

    2004-09-01

    Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castango, 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.

  14. A new approach for estimation of component failure rate

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Kljenak, I.

    1999-01-01

    In the paper, a formal method for component failure rate estimation is described, which is proposed to be used for components, for which no specific numerical data necessary for probabilistic estimation exist. The framework of the method is the Bayesian updating procedure. A prior distribution is selected from a generic database, whereas the likelihood distribution is assessed from specific data on component state using principles of fuzzy logic theory. With the proposed method the component failure rate estimation is based on a much larger quantity of information compared to presently used classical methods.(author)

  15. Accuracy Rates of Ancestry Estimation by Forensic Anthropologists Using Identified Forensic Cases.

    Science.gov (United States)

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2017-07-01

    A common task in forensic anthropology involves the estimation of the ancestry of a decedent by comparing their skeletal morphology and measurements to skeletons of individuals from known geographic groups. However, the accuracy rates of ancestry estimation methods in actual forensic casework have rarely been studied. This article uses 99 forensic cases with identified skeletal remains to develop accuracy rates for ancestry estimations conducted by forensic anthropologists. The overall rate of correct ancestry estimation from these cases is 90.9%, which is comparable to most research-derived rates and those reported by individual practitioners. Statistical tests showed no significant difference in accuracy rates depending on examiner education level or on the estimated or identified ancestry. More recent cases showed a significantly higher accuracy rate. The incorporation of metric analyses into the ancestry estimate in these cases led to a higher accuracy rate. © 2017 American Academy of Forensic Sciences.

  16. Estimating contaminant discharge rates from stabilized uranium tailings embankments

    International Nuclear Information System (INIS)

    Weber, M.F.

    1986-01-01

    Estimates of contaminant discharge rates from stabilized uranium tailings embankments are essential in evaluating long-term impacts of tailings disposal on groundwater resources. Contaminant discharge rates are a function of water flux through tailings covers, the mass and distribution of tailings, and the concentrations of contaminants in percolating pore fluids. Simple calculations, laboratory and field testing, and analytical and numerical modeling may be used to estimate water flux through variably-saturated tailings under steady-state conditions, which develop after consolidation and dewatering have essentially ceased. Contaminant concentrations in water discharging from the tailings depend on tailings composition, leachability and solubility of contaminants, geochemical conditions within the embankment, tailings-water interactions, and flux of water through the embankment. These concentrations may be estimated based on maximum reported concentrations, pore water concentrations, extrapolations of column leaching data, or geochemical equilibria and reaction pathway modeling. Attempts to estimate contaminant discharge rates should begin with simple, conservative calculations and progress to more-complicated approaches, as necessary

  17. Simplifying cardiovascular risk estimation using resting heart rate.

    LENUS (Irish Health Repository)

    Cooney, Marie Therese

    2010-09-01

    Elevated resting heart rate (RHR) is a known, independent cardiovascular (CV) risk factor, but is not included in risk estimation systems, including Systematic COronary Risk Evaluation (SCORE). We aimed to derive risk estimation systems including RHR as an extra variable and assess the value of this addition.

  18. Estimating the exceedance probability of rain rate by logistic regression

    Science.gov (United States)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
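
    A minimal logistic regression of exceedance indicators on an area-averaged covariate illustrates the approach; the data below are synthetic, and the model ignores the partial-likelihood treatment of dependence described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Synthetic pixels: covariate = area-averaged rain rate of the surrounding box,
# response = whether the pixel's own rain rate exceeds a fixed threshold.
area_avg = rng.gamma(shape=2.0, scale=2.0, size=2000)
true_logit = -3.0 + 1.2 * area_avg
exceeds = rng.random(2000) < 1.0 / (1.0 + np.exp(-true_logit))

model = LogisticRegression().fit(area_avg.reshape(-1, 1), exceeds)
# Estimated conditional exceedance probabilities for a few area-averaged rain rates
probs = model.predict_proba(np.array([[1.0], [3.0], [6.0]]))[:, 1]
print(model.intercept_, model.coef_, probs.round(3))
```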

  19. Pooling overdispersed binomial data to estimate event rate.

    Science.gov (United States)

    Young-Xu, Yinong; Chan, K Arnold

    2008-08-19

    The beta-binomial model is one of the methods that can be used to validly combine event rates from overdispersed binomial data. Our objective is to provide a full description of this method and to update and broaden its applications in clinical and public health research. We describe the statistical theories behind the beta-binomial model and the associated estimation methods. We supply information about statistical software that can provide beta-binomial estimations. Using a published example, we illustrate the application of the beta-binomial model when pooling overdispersed binomial data. In an example regarding the safety of oral antifungal treatments, we had 41 treatment arms with event rates varying from 0% to 13.89%. Using the beta-binomial model, we obtained a summary event rate of 3.44% with a standard error of 0.59%. The parameters of the beta-binomial model took the values of 1.24 for alpha and 34.73 for beta. The beta-binomial model can provide a robust estimate for the summary event rate by pooling overdispersed binomial data from different studies. The explanation of the method and the demonstration of its applications should help researchers incorporate the beta-binomial method as they aggregate probabilities of events from heterogeneous studies.
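
    The reported parameters imply the summary rate directly, since the mean of a Beta(alpha, beta) distribution is alpha/(alpha + beta). A quick check (the second quantity is the between-arm spread implied by the Beta, which is distinct from the 0.59% standard error of the pooled rate):

```python
alpha, beta = 1.24, 34.73
mean_rate = alpha / (alpha + beta)                                   # ~0.0345
spread = (alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))) ** 0.5
print(f"implied summary event rate ~ {mean_rate:.2%}; between-arm SD ~ {spread:.2%}")
```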

  20. Pooling overdispersed binomial data to estimate event rate

    Directory of Open Access Journals (Sweden)

    Chan K Arnold

    2008-08-01

    Full Text Available Abstract Background The beta-binomial model is one of the methods that can be used to validly combine event rates from overdispersed binomial data. Our objective is to provide a full description of this method and to update and broaden its applications in clinical and public health research. Methods We describe the statistical theories behind the beta-binomial model and the associated estimation methods. We supply information about statistical software that can provide beta-binomial estimations. Using a published example, we illustrate the application of the beta-binomial model when pooling overdispersed binomial data. Results In an example regarding the safety of oral antifungal treatments, we had 41 treatment arms with event rates varying from 0% to 13.89%. Using the beta-binomial model, we obtained a summary event rate of 3.44% with a standard error of 0.59%. The parameters of the beta-binomial model took the values of 1.24 for alpha and 34.73 for beta. Conclusion The beta-binomial model can provide a robust estimate for the summary event rate by pooling overdispersed binomial data from different studies. The explanation of the method and the demonstration of its applications should help researchers incorporate the beta-binomial method as they aggregate probabilities of events from heterogeneous studies.

  1. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.

  2. Using field feedback to estimate failure rates of safety-related systems

    International Nuclear Information System (INIS)

    Brissaud, Florent

    2017-01-01

    The IEC 61508 and IEC 61511 functional safety standards encourage the use of field feedback to estimate the failure rates of safety-related systems, which is preferred over generic data. In some cases (if "Route 2_H" is adopted for the "hardware safety integrity constraints"), this is even a requirement. This paper presents how to estimate failure rates from field feedback with confidence intervals, depending on whether the failures are detected on-line (called "detected failures", e.g. by automatic diagnostic tests) or only revealed by proof tests (called "undetected failures"). Examples show that, for the same observed duration and number of failures, the estimated failure rates are basically higher for "undetected failures" because, in this case, the observed duration includes intervals of time during which it is unknown that the elements have failed. This points out the need for a proper approach to failure rate estimation, especially for failures that are not detected on-line. The paper then proposes an approach for using the estimated failure rates, with their uncertainties, for PFDavg and PFH assessment with upper confidence bounds, in accordance with IEC 61508 and IEC 61511 requirements. Examples finally show that the highest SIL that can be claimed for a safety function can be limited by the 90% upper confidence bound of PFDavg or PFH. The requirements of IEC 61508 and IEC 61511 relating to data collection and analysis should therefore be properly considered in the study of all safety-related systems. - Highlights: • This paper deals with the requirements of IEC 61508 and IEC 61511 for using field feedback to estimate failure rates of safety-related systems. • This paper presents how to estimate failure rates from field feedback with confidence intervals for failures that are detected on-line. • This paper presents how to estimate failure rates from field feedback with confidence intervals for failures that are only revealed by
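
    For the on-line detected case, a common way to attach an upper confidence bound to a failure rate estimated from k failures in a cumulative exposure time T is the chi-square bound; a minimal sketch with illustrative numbers is shown below. How exposure time is counted for failures only revealed by proof tests, which is the paper's central point, is not reproduced here.

```python
from scipy.stats import chi2

def rate_with_90pct_upper_bound(k, t_hours):
    """Point estimate and 90% upper confidence bound for a constant failure rate,
    given k failures over a cumulative exposure of t_hours (time-censored data)."""
    point = k / t_hours
    upper = chi2.ppf(0.90, 2 * k + 2) / (2.0 * t_hours)
    return point, upper

print(rate_with_90pct_upper_bound(k=3, t_hours=2.0e6))   # illustrative field data
```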

  3. Automating proliferation rate estimation from Ki-67 histology images

    Science.gov (United States)

    Al-Lahham, Heba Z.; Alomari, Raja S.; Hiary, Hazem; Chaudhary, Vipin

    2012-03-01

    Breast cancer is the second leading cause of death among women and the most commonly diagnosed cancer in women in the US. Proliferation rate estimation (PRE) is one of the prognostic indicators that guide treatment protocols, and it is clinically performed from Ki-67 histopathology images. Automating PRE substantially increases the efficiency of pathologists. Moreover, producing a deterministic and reproducible proliferation rate value is crucial to reduce inter-observer variability. To that end, we propose a fully automated CAD system for PRE from Ki-67 histopathology images. This CAD system is based on a three-step model: image pre-processing, image clustering, and nuclei segmentation and counting, finally followed by PRE. The first step is based on customized color modification and color-space transformation. Then, image pixels are clustered by K-Means using features extracted from the images produced in the first step. Finally, nuclei are segmented and counted using global thresholding, mathematical morphology and connected component analysis. Our experimental results on fifty Ki-67-stained histopathology images show a significant agreement between our CAD's automated PRE and the gold standard, where the latter is an average of two observers' estimates. The paired t-test for the automated and manual estimates shows ρ = 0.86, 0.45, 0.8 for the brown nuclei count, blue nuclei count, and proliferation rate, respectively. Thus, our proposed CAD system is as reliable as the pathologist estimating the proliferation rate, and its estimate is reproducible.
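
    A heavily simplified stand-in for such a pipeline, using scikit-image, is sketched below; the colour rule, thresholds and minimum object size are arbitrary assumptions, and the K-Means clustering step is replaced by simple thresholding.

```python
import numpy as np
from skimage import measure, morphology

def proliferation_rate(rgb):
    """Count Ki-67-positive (brown) and negative (blue) nuclei and return their ratio.
    Thresholds, the colour rule and the minimum object size are arbitrary assumptions."""
    rgb = rgb.astype(float)
    nuclei = rgb.sum(axis=2) < 450                  # dark pixels treated as stained nuclei
    brown = nuclei & (rgb[..., 0] > rgb[..., 2])    # red dominates blue -> DAB (brown)
    blue = nuclei & ~brown
    brown = morphology.remove_small_objects(brown, min_size=30)
    blue = morphology.remove_small_objects(blue, min_size=30)
    n_brown = measure.label(brown).max()
    n_blue = measure.label(blue).max()
    return n_brown / max(n_brown + n_blue, 1)

# usage: rate = proliferation_rate(skimage.io.imread("ki67_field.png"))
```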

  4. BAYESIAN ESTIMATION OF THERMONUCLEAR REACTION RATES

    Energy Technology Data Exchange (ETDEWEB)

    Iliadis, C.; Anderson, K. S. [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3255 (United States); Coc, A. [Centre de Sciences Nucléaires et de Sciences de la Matière (CSNSM), CNRS/IN2P3, Univ. Paris-Sud, Université Paris–Saclay, Bâtiment 104, F-91405 Orsay Campus (France); Timmes, F. X.; Starrfield, S., E-mail: iliadis@unc.edu [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1504 (United States)

    2016-11-01

    The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)³He, ³He(³He,2p)⁴He, and ³He(α,γ)⁷Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.

  5. On Estimating Marginal Tax Rates for U.S. States

    OpenAIRE

    Reed, W. Robert; Rogers, Cynthia L; Skidmore, Mark

    2011-01-01

    This paper presents a procedure for generating state-specific time-varying estimates of marginal tax rates (MTRs). Most estimates of MTRs follow a procedure developed by Koester and Kormendi (1989) (K&K). Unfortunately, the time-invariant nature of the K&K estimates precludes their use as explanatory variables in panel data studies with fixed effects. Furthermore, the associated MTR estimates are not explicitly linked to statutory tax parameters. Our approach addresses both shortcomings. Usin...

  6. Estimating evolutionary rates using time-structured data: a general comparison of phylogenetic methods.

    Science.gov (United States)

    Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W

    2016-11-15

    In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. Contact: sduchene@unimelb.edu.au or garzonsebastian@hotmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
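
    Root-to-tip regression, the simplest of the three approaches, amounts to regressing root-to-tip genetic distance against sampling date; the slope estimates the substitution rate and the x-intercept the age of the root. A toy sketch with made-up values:

```python
import numpy as np

# Made-up sampling dates and root-to-tip distances (substitutions/site) read off a tree
dates = np.array([2000.1, 2003.5, 2006.2, 2009.8, 2012.4, 2015.9])
root_to_tip = np.array([0.012, 0.018, 0.021, 0.027, 0.031, 0.037])

rate, intercept = np.polyfit(dates, root_to_tip, 1)     # slope = substitution rate
print(f"rate ~ {rate:.2e} subs/site/year; root age (x-intercept) ~ {-intercept / rate:.0f}")
```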

  7. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys based on a rotating panel design have been carried out over time in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study is to estimate quarterly unemployment rates at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focuses on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate from a rotating panel survey. The goodness of fit of both models was almost identical. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this capability was reduced over time.

  8. Distributions of component failure rates, estimated from LER data

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1985-01-01

    Past analyses of Licensee Event Report (LER) data have noted that component failure rates vary from plant to plant, and have estimated the distributions by two-parameter γ distributions. In this study, a more complicated distributional form is considered, a mixture of γs. This could arise if the plants' failure rates cluster into distinct groups. The method was applied to selected published LER data for diesel generators, pumps, valves, and instrumentation and control assemblies. The improved fits from using a mixture rather than a single γ distribution were minimal, and not statistically significant. There seem to be two possibilities: either explanatory variables affect the failure rates only in a gradual way, not a qualitative way; or, for estimating individual component failure rates, the published LER data have been analyzed to the limit of resolution

  9. Estimation of uncertainty in tracer gas measurement of air change rates.

    Science.gov (United States)

    Iizuka, Atsushi; Okuizumi, Yumiko; Yanagisawa, Yukio

    2010-12-01

    Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact that many buildings are not a single fully mixed zone. This means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of [...] air change rate can be avoided. The proposed estimation method will be useful in practical ventilation measurements.
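
    As a rough companion to this record, the single fully mixed zone case reduces to a one-line mass balance. The sketch below assumes a constant-emission passive doser at steady state; the emission rate, room volume and measured concentration are invented values, not the paper's.

```python
# Steady-state mass balance for one well-mixed zone with a constant-emission
# tracer doser: q = N * V * C, so N = q / (V * C).  All inputs are invented.
def air_change_rate(emission_rate_mg_h: float, volume_m3: float,
                    mean_concentration_mg_m3: float) -> float:
    """Return air changes per hour for a single fully mixed zone."""
    return emission_rate_mg_h / (volume_m3 * mean_concentration_mg_m3)

# e.g. 50 mg/h dosed into a 40 m3 room with a measured mean of 2.5 mg/m3
print(air_change_rate(50.0, 40.0, 2.5))  # -> 0.5 air changes per hour
```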

  10. Updated Magmatic Flux Rate Estimates for the Hawaii Plume

    Science.gov (United States)

    Wessel, P.

    2013-12-01

    Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain, and there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.

  11. Distributions of component failure rates estimated from LER data

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1985-01-01

    Past analyses of Licensee Event Report (LER) data have noted that component failure rates vary from plant to plant, and have estimated the distributions by two-parameter gamma distributions. In this study, a more complicated distributional form is considered, a mixture of gammas. This could arise if the plants' failure rates cluster into distinct groups. The method was applied to selected published LER data for diesel generators, pumps, valves, and instrumentation and control assemblies. The improved fits from using a mixture rather than a single gamma distribution were minimal, and not statistically significant. There seem to be two possibilities: either explanatory variables affect the failure rates only in a gradual way, not a qualitative way; or, for estimating individual component failure rates, the published LER data have been analyzed to the limit of resolution. 9 refs

  12. Genetic analysis of rare disorders: Bayesian estimation of twin concordance rates

    NARCIS (Netherlands)

    van den Berg, Stéphanie Martine; Hjelmborg, J.

    2012-01-01

    Twin concordance rates provide insight into the possibility of a genetic background for a disease. These concordance rates are usually estimated within a frequentistic framework. Here we take a Bayesian approach. For rare diseases, estimation methods based on asymptotic theory cannot be applied due

  13. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements

  14. Estimation of Stormwater Interception Rate for various LID Facilities

    Science.gov (United States)

    Kim, S.; Lee, O.; Choi, J.

    2017-12-01

    In this study, the stormwater interception rate is proposed for use in the design of LID facilities. For this purpose, an EPA-SWMM model is built for parts of the Noksan National Industrial Complex where long-term stormwater data were monitored, and stormwater interception rates are estimated for various design capacities of various LID facilities. While the stormwater interception rate is not very sensitive to the design specifications of bio-retention and infiltration trench facilities, it is relatively sensitive to local rainfall characteristics. A comparison of the rainfall interception rate estimation method officially used in Korea with the one proposed in this study shows that the current method is highly likely to overestimate the performance of bio-retention and infiltration trench facilities. Finally, new stormwater interception rate formulas for bio-retention and infiltration trench LID facilities are proposed. Acknowledgement: This research was supported by a grant (2016000200002) from the Public Welfare Technology Development Program funded by the Ministry of Environment of the Korean government.

  15. Automatic estimation of pressure-dependent rate coefficients.

    Science.gov (United States)

    Allen, Joshua W; Goldsmith, C Franklin; Green, William H

    2012-01-21

    A general framework is presented for accurately and efficiently estimating the phenomenological pressure-dependent rate coefficients for reaction networks of arbitrary size and complexity using only high-pressure-limit information. Two aspects of this framework are discussed in detail. First, two methods of estimating the density of states of the species in the network are presented, including a new method based on characteristic functional group frequencies. Second, three methods of simplifying the full master equation model of the network to a single set of phenomenological rates are discussed, including a new method based on the reservoir state and pseudo-steady state approximations. Both sets of methods are evaluated in the context of the chemically-activated reaction of acetyl with oxygen. All three simplifications of the master equation are usually accurate, but each fails in certain situations, which are discussed. The new methods usually provide good accuracy at a computational cost appropriate for automated reaction mechanism generation.

  16. Automatic estimation of pressure-dependent rate coefficients

    KAUST Repository

    Allen, Joshua W.; Goldsmith, C. Franklin; Green, William H.

    2012-01-01

    A general framework is presented for accurately and efficiently estimating the phenomenological pressure-dependent rate coefficients for reaction networks of arbitrary size and complexity using only high-pressure-limit information. Two aspects of this framework are discussed in detail. First, two methods of estimating the density of states of the species in the network are presented, including a new method based on characteristic functional group frequencies. Second, three methods of simplifying the full master equation model of the network to a single set of phenomenological rates are discussed, including a new method based on the reservoir state and pseudo-steady state approximations. Both sets of methods are evaluated in the context of the chemically-activated reaction of acetyl with oxygen. All three simplifications of the master equation are usually accurate, but each fails in certain situations, which are discussed. The new methods usually provide good accuracy at a computational cost appropriate for automated reaction mechanism generation. This journal is © the Owner Societies.

  17. Estimated Interest Rate Rules: Do they Determine Determinacy Properties?

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2011-01-01

    I demonstrate that econometric estimations of nominal interest rate rules may tell little, if anything, about an economy's determinacy properties. In particular, correct inference about the interest-rate response to inflation provides no information about determinacy. Instead, it could reveal...

  18. Rating curve estimation of nutrient loads in Iowa rivers

    Science.gov (United States)

    Stenback, G.A.; Crumpton, W.G.; Schilling, K.E.; Helmers, M.J.

    2011-01-01

    Accurate estimation of nutrient loads in rivers and streams is critical for many applications including determination of sources of nutrient loads in watersheds, evaluating long-term trends in loads, and estimating loading to downstream waterbodies. Since in many cases nutrient concentrations are measured at a weekly or monthly frequency, there is a need to estimate concentration and loads during periods when no data are available. The objectives of this study were to: (i) document the performance of a multiple regression model to predict loads of nitrate and total phosphorus (TP) in Iowa rivers and streams; (ii) determine whether there is any systematic bias in the load prediction estimates for nitrate and TP; and (iii) evaluate streamflow and concentration factors that could affect the load prediction efficiency. A commonly cited rating curve regression is utilized to estimate riverine nitrate and TP loads for rivers in Iowa with watershed areas ranging from 17.4 to over 34,600 km². Forty-nine nitrate and 44 TP datasets each comprising 5-22 years of approximately weekly to monthly concentrations were examined. Three nitrate data sets had sample collection frequencies averaging about three samples per week. The accuracy and precision of annual and long term riverine load prediction was assessed by direct comparison of rating curve load predictions with observed daily loads. Significant positive bias of annual and long term nitrate loads was detected. Long term rating curve nitrate load predictions exceeded observed loads by 25% or more at 33% of the 49 measurement sites. No bias was found for TP load prediction although 15% of the 44 cases either underestimated or overestimated observed long-term loads by more than 25%. The rating curve was found to poorly characterize nitrate and phosphorus variation in some rivers.
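
    For readers unfamiliar with rating curves, the sketch below shows the usual log-log regression of concentration on discharge, with Duan's smearing factor as one common retransformation-bias correction (the bias issue is exactly what this record discusses). All numbers are invented, and the bias correction is a generic choice, not necessarily the one used in the study.

```python
# Log-log rating curve sketch: regress ln(concentration) on ln(discharge) for
# sampled days, apply Duan's smearing factor for retransformation bias, then
# predict concentration for unsampled days and sum daily loads.  Data invented.
import numpy as np

q_sampled = np.array([12.0, 30.0, 55.0, 80.0, 150.0])   # discharge, m3/s
c_sampled = np.array([2.1, 3.0, 3.8, 4.4, 5.9])         # nitrate, mg/L

b1, b0 = np.polyfit(np.log(q_sampled), np.log(c_sampled), 1)
resid = np.log(c_sampled) - (b0 + b1 * np.log(q_sampled))
smearing = np.exp(resid).mean()                          # Duan's estimator

q_daily = np.array([10.0, 25.0, 60.0, 140.0, 90.0])      # daily mean discharge
c_daily = smearing * np.exp(b0 + b1 * np.log(q_daily))   # predicted mg/L
load_t_per_day = c_daily * q_daily * 86400 * 1e-6        # (mg/L * m3/s) -> t/day

print(f"estimated load over these days: {load_t_per_day.sum():.1f} t")
```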

  19. Contemporary management of carotid blowout syndrome utilizing endovascular techniques.

    Science.gov (United States)

    Manzoor, Nauman F; Rezaee, Rod P; Ray, Abhishek; Wick, Cameron C; Blackham, Kristine; Stepnick, David; Lavertu, Pierre; Zender, Chad A

    2017-02-01

    To illustrate complex interdisciplinary decision making and the utility of modern endovascular techniques in the management of patients with carotid blowout syndrome (CBS). Retrospective chart review. Patients treated with endovascular strategies and/or surgical modalities were included. Control of hemorrhage, neurological, and survival outcomes were studied. Between 2004 and 2014, 33 patients had 38 hemorrhagic events related to head and neck cancer that were managed with endovascular means. Of these, 23 were localized to the external carotid artery (ECA) branches and five localized to the ECA main trunk; nine were related to the common carotid artery (CCA) or internal carotid artery (ICA), and one event was related to the innominate artery. Seven events related to the CCA/ICA or innominate artery were managed with endovascular sacrifice, whereas three cases were managed with a flow-preserving approach (covered stent). Only one patient developed permanent hemiparesis. In two of the three cases where the flow-preserving approach was used, the covered stent eventually became exposed via the overlying soft tissue defect, and definitive management using carotid revascularization or resection was employed to prevent further hemorrhage. In cases of soft tissue necrosis, vascularized tissues were used to cover the great vessels as applicable. The use of modern endovascular approaches for management of acute CBS yields optimal results and should be employed in a coordinated manner by the head and neck surgeon and the neurointerventionalist.

  20. Estimating Ads’ Click through Rate with Recurrent Neural Network

    Directory of Open Access Journals (Sweden)

    Chen Qiao-Hong

    2016-01-01

    Full Text Available With the development of the Internet, online advertising has spread across every corner of the world, and ads' click-through rate (CTR) estimation is an important way to improve online advertising revenue. Compared with the linear model, nonlinear models can capture much more complex relationships among a large number of nonlinear features, so as to improve the accuracy of the estimation of the ads' CTR. The recurrent neural network (RNN) based on Long-Short Term Memory (LSTM) is an improved model of the feedback neural network with ring structure. The model overcomes the gradient problem of the general RNN. Experiments show that the RNN based on LSTM outperforms the linear models, and it can effectively improve the estimation of the ads' click-through rate.

  1. Simple method for the estimation of glomerular filtration rate

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden); Tengstroem, B [District General Hospital, Skoevde (Sweden)

    1977-02-01

    A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (¹²⁵I)sodium iothalamate (10 μCi). The method does not require exact dosage, as the first sample, taken a few minutes (t=5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve fitting techniques, with regard to predictive force and clinical applicability.
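
    A minimal sketch of the normalized single-point idea described above: the late sample is divided by the early sample to cancel the injected dose, and GFR is predicted from the logarithm of that ratio. The regression coefficients a and b below are placeholders, not the published calibration.

```python
# Normalized single-point sketch: the late sample is divided by the 5-minute
# sample to cancel the injected dose, and GFR is predicted from the logarithm
# of that ratio.  The coefficients a and b are placeholders, NOT the published
# calibration; a real implementation would use the paper's regression formulas.
import math

def gfr_single_point(activity_5min: float, activity_late: float,
                     a: float, b: float) -> float:
    """Predict GFR (ml/min/1.73 m2) from two plasma samples."""
    normalized = activity_late / activity_5min
    return a + b * math.log(normalized)

# Hypothetical calibration for illustration only: a = 20.0, b = -60.0
print(gfr_single_point(activity_5min=1000.0, activity_late=250.0, a=20.0, b=-60.0))
```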

  2. Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach

    KAUST Repository

    Ballal, Tarig

    2014-01-01

    This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.

  3. Blowout heroes : while his crew mate risked fire and falling rock to save him, Curly Slater fought right to the last breath

    Energy Technology Data Exchange (ETDEWEB)

    Louie, J.

    2008-11-15

    This article discussed a blow-out that resulted in a fatality at a drilling rig near Helmut, British Columbia (BC). The blowout caused a column of fire, drilling mud, and rock to explode from the floor of the rig. The rig's derrickman was trapped when drilling mud caked onto his easy rider cable and froze the escape sliding mechanism; he fell and was left dangling by his harness. Crew members spotted him from a distance and returned to the site in order to unhook the easy rider cable from the manifold shack and move the steel line to a location that would allow him to descend more easily. The derrickman was unable to un-jam his fall arrestor. He freed himself using his own strength, seized a cable, but lost his grip while still 75 feet in the air and fell to his death. The tragedy was caused by the presence of a very large gas pocket encountered at an unusually shallow depth. Several of the crew members have been unable to continue working in the oil and gas industry, due to the psychological trauma of witnessing the man's death. The Canadian Association of Oilwell Drilling Contractors (CAODC) has issued awards of merit for the crewmen who attempted to save the derrickman, and the entire crew will receive medals of bravery from the Governor General of Canada. 2 figs.

  4. High mitochondrial mutation rates estimated from deep-rooting Costa Rican pedigrees

    Science.gov (United States)

    Madrigal, Lorena; Melendez-Obando, Mauricio; Villegas-Palma, Ramon; Barrantes, Ramiro; Raventos, Henrieta; Pereira, Reynaldo; Luiselli, Donata; Pettener, Davide; Barbujani, Guido

    2012-01-01

    Estimates of mutation rates for the noncoding hypervariable Region I (HVR-I) of mitochondrial DNA (mtDNA) vary widely, depending on whether they are inferred from phylogenies (assuming that molecular evolution is clock-like) or directly from pedigrees. All pedigree-based studies so far were conducted on populations of European origin. In this paper we analyzed 19 deep-rooting pedigrees in a population of mixed origin in Costa Rica. We calculated two estimates of the HVR-I mutation rate, one considering all apparent mutations, and one disregarding changes at sites known to be mutational hot spots and eliminating genealogy branches which might be suspected to include errors, or unrecognized adoptions along the female lines. At the end of this procedure, we still observed a mutation rate of 1.24 × 10⁻⁶ per site per year, i.e., at least threefold higher than estimates derived from phylogenies. Our results confirm that mutation rates observed in pedigrees are much higher than estimated assuming a neutral model of long-term HVR-I evolution. We argue that, until the cause of these discrepancies is fully understood, both the lower estimates (i.e., those derived from phylogenetic comparisons) and the higher, direct estimates such as those obtained in this study should be considered when modeling evolutionary and demographic processes. PMID:22460349

  5. On Improving Convergence Rates for Nonnegative Kernel Density Estimators

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1980-01-01

    To improve the rate of decrease of integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...
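
    For context, the estimator whose convergence rate is being improved upon is the ordinary nonnegative kernel density estimator. A compact NumPy version (Gaussian kernel, rule-of-thumb bandwidth) is sketched below purely as a reference point; it is not the modified estimator from the paper.

```python
# Ordinary nonnegative Gaussian kernel density estimator (integrates to one),
# with Silverman's rule-of-thumb bandwidth; this is the baseline estimator,
# not the relaxed-constraint estimator proposed in the paper.
import numpy as np

def gaussian_kde(data: np.ndarray, grid: np.ndarray) -> np.ndarray:
    n = data.size
    h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)          # rule-of-thumb bandwidth
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(size=200)
print(gaussian_kde(sample, np.linspace(-3, 3, 7)).round(3))
```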

  6. New methods for estimating follow-up rates in cohort studies

    Directory of Open Access Journals (Sweden)

    Xiaonan Xue

    2017-12-01

    Full Text Available Abstract Background The follow-up rate, a standard index of the completeness of follow-up, is important for assessing the validity of a cohort study. A common method for estimating the follow-up rate, the “Percentage Method”, defined as the fraction of all enrollees who developed the event of interest or had complete follow-up, can severely underestimate the degree of follow-up. Alternatively, the median follow-up time does not indicate the completeness of follow-up, and the reverse Kaplan-Meier based method and Clark’s Completeness Index (CCI also have limitations. Methods We propose a new definition for the follow-up rate, the Person-Time Follow-up Rate (PTFR, which is the observed person-time divided by total person-time assuming no dropouts. The PTFR cannot be calculated directly since the event times for dropouts are not observed. Therefore, two estimation methods are proposed: a formal person-time method (FPT in which the expected total follow-up time is calculated using the event rate estimated from the observed data, and a simplified person-time method (SPT that avoids estimation of the event rate by assigning full follow-up time to all events. Simulations were conducted to measure the accuracy of each method, and each method was applied to a prostate cancer recurrence study dataset. Results Simulation results showed that the FPT has the highest accuracy overall. In most situations, the computationally simpler SPT and CCI methods are only slightly biased. When applied to a retrospective cohort study of cancer recurrence, the FPT, CCI and SPT showed substantially greater 5-year follow-up than the Percentage Method (92%, 92% and 93% vs 68%. Conclusions The Person-time methods correct a systematic error in the standard Percentage Method for calculating follow-up rates. The easy to use SPT and CCI methods can be used in tandem to obtain an accurate and tight interval for PTFR. However, the FPT is recommended when event rates and
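
    The contrast between the Percentage Method and the simplified person-time rate (SPT) can be made concrete with a toy cohort. The sketch below follows one reading of the definitions in the abstract (events are credited with full planned follow-up, dropouts contribute their observed time); the cohort itself is invented.

```python
# Toy cohort contrasting the Percentage Method with the simplified person-time
# rate (SPT): events are credited with full planned follow-up, dropouts with
# their observed time.  The cohort and planned follow-up are invented.
from dataclasses import dataclass

@dataclass
class Subject:
    followup_years: float   # observed follow-up time
    event: bool             # developed the event of interest
    completed: bool         # reached the planned end of follow-up

planned_years = 5.0
cohort = [Subject(5.0, False, True), Subject(2.1, True, False),
          Subject(1.3, False, False), Subject(4.0, True, False),
          Subject(5.0, False, True), Subject(0.8, False, False)]

percentage = sum(s.event or s.completed for s in cohort) / len(cohort)

observed_pt = sum(planned_years if s.event else s.followup_years for s in cohort)
spt = observed_pt / (planned_years * len(cohort))

print(f"Percentage Method: {percentage:.0%}, SPT follow-up rate: {spt:.0%}")
```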

  7. Application of fine managed pressure drilling technique in complex wells with both blowout and lost circulation risks

    Directory of Open Access Journals (Sweden)

    Ling Yan

    2015-03-01

    Full Text Available Fractured carbonate reservoirs are susceptible to blowout and lost circulation during drilling, which not only restricts drilling speed but also poses a serious threat to well control. Moreover, there are few technical means available to reconstruct the pressure balance in the borehole. Accordingly, fine managed pressure drilling was used in the drilling of Well GS19 in the Qixia Formation, which has super-high pressure and a narrow density window, and proved successful: ① back pressure in the annular space is adjusted to maintain a slightly over-balanced bottom-hole hydraulic pressure, and the fluid level in the circulation tank is kept slightly dropping to ensure that natural gas in the formation does not invade the borehole in a massive volume; ② inlet drilling fluid density is controlled at around 2.35 g/cm3, annular back pressure is maintained at 2–5 MPa, and the bottom-hole equivalent circulating density is controlled at 2.46–2.52 g/cm3; ③ during managed pressure drilling operations, if the wellhead pressure exceeds or is expected to exceed 7 MPa, the semi-blind rams are closed, and fluids pass through the rig choke manifold to the dedicated pressure-control choke manifold before entering gas/liquid separators to discharge gas; ④ during tripping, back pressure is kept below 5 MPa, the volume of injected drilling fluid is higher than the theoretical volume when tripping out, and the volume of returned drilling fluid is higher than the theoretical volume when tripping in. This technique has been applied successfully in the drilling of the Qixia, Liangshan and Longmaxi Formations with a total footage of 216.60 m. As an early application in complicated wells with both blowout and lost circulation risks, it provides valuable experience and guidance for handling similar complexities in the future.

  8. Evolution of the Macondo well blowout: simulating the effects of the circulation and synthetic dispersants on the subsea oil transport.

    Science.gov (United States)

    Paris, Claire B; Hénaff, Matthieu Le; Aman, Zachary M; Subramaniam, Ajit; Helgers, Judith; Wang, Dong-Ping; Kourafalou, Vassiliki H; Srinivasan, Ashwanth

    2012-12-18

    During the Deepwater Horizon incident, crude oil flowed into the Gulf of Mexico from 1522 m underwater. In an effort to prevent the oil from rising to the surface, synthetic dispersants were applied at the wellhead. However, uncertainties in the formation of oil droplets and difficulties in measuring their size in the water column, complicated further assessment of the potential effect of the dispersant on the subsea-to-surface oil partition. We adapted a coupled hydrodynamic and stochastic buoyant particle-tracking model to the transport and fate of hydrocarbon fractions and simulated the far-field transport of the oil from the intrusion depth. The evaluated model represented a baseline for numerical experiments where we varied the distributions of particle sizes and thus oil mass. The experiments allowed to quantify the relative effects of chemical dispersion, vertical currents, and inertial buoyancy motion on oil rise velocities. We present a plausible model scenario, where some oil is trapped at depth through shear emulsification due to the particular conditions of the Macondo blowout. Assuming effective mixing of the synthetic dispersants at the wellhead, the model indicates that the submerged oil mass is shifted deeper, decreasing only marginally the amount of oil surfacing. In this scenario, the oil rises slowly to the surface or stays immersed. This suggests that other mechanisms may have contributed to the rapid surfacing of oil-gas mixture observed initially. The study also reveals local topographic and hydrodynamic processes that influence the oil transport in eddies and multiple layers. This numerical approach provides novel insights on oil transport mechanisms from deep blowouts and on gauging the subsea use of synthetic dispersant in mitigating coastal damage.

  9. Estimating Glomerular Filtration Rate in Older People

    Directory of Open Access Journals (Sweden)

    Sabrina Garasto

    2014-01-01

    Full Text Available We aimed at reviewing age-related changes in kidney structure and function, methods for estimating kidney function, and the impact of reduced kidney function on geriatric outcomes, as well as the reliability and applicability of equations for estimating glomerular filtration rate (eGFR) in older patients. CKD is associated with different comorbidities and adverse outcomes such as disability and premature death in older populations. Creatinine clearance and other methods for estimating kidney function are not easy to apply in older subjects. Thus, an accurate and reliable method for calculating eGFR would be highly desirable for early detection and management of CKD in this vulnerable population. Equations based on serum creatinine, age, race, and gender have been widely used. However, these equations have their own limitations, and no equation seems clearly better than the others in older people. New equations specifically developed for use in older populations, especially those based on serum cystatin C, hold promise. However, further studies are needed before they can be accepted as the reference method for estimating kidney function in older patients in the clinical setting.

  10. Using 210Pb measurements to estimate sedimentation rates on river floodplains

    International Nuclear Information System (INIS)

    Du, P.; Walling, D.E.

    2012-01-01

    Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10–10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by the ¹³⁷Cs and excess ²¹⁰Pb
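
    One of the simplest models referred to above (constant flux, constant sedimentation) treats excess ²¹⁰Pb as decaying exponentially with depth, so the sedimentation rate follows from the slope of ln(activity) versus depth. The sketch below uses invented core data and is only one of several possible ²¹⁰Pb dating models, not necessarily the one preferred in the study.

```python
# CF:CS-style dating: under constant flux and constant sedimentation, excess
# 210Pb falls off exponentially with depth, so the sedimentation rate is
# S = -lambda / slope of ln(excess 210Pb) vs depth.  Core data are invented.
import numpy as np

LAMBDA_PB210 = 0.03114                       # 1/yr, from a ~22.3 yr half-life

depth_cm = np.array([1.0, 3.0, 5.0, 7.0, 9.0, 11.0])
excess_pb210 = np.array([95.0, 70.0, 52.0, 38.0, 28.0, 21.0])   # Bq/kg

slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
sed_rate = -LAMBDA_PB210 / slope

print(f"sedimentation rate: {sed_rate:.2f} cm/yr")
```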

  11. Incorporating a Time Horizon in Rate-of-Return Estimations: Discounted Cash Flow Model in Electric Transmission Rate Cases

    International Nuclear Information System (INIS)

    Chatterjee, Bishu; Sharp, Peter A.

    2006-01-01

    Electric transmission and other rate cases use a form of the discounted cash flow model with a single long-term growth rate to estimate rates of return on equity. It cannot incorporate information about the appropriate time horizon for which analysts' estimates of earnings growth have predictive powers. Only a non-constant growth model can explicitly recognize the importance of the time horizon in an ROE calculation. (author)
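
    The point about the time horizon can be illustrated numerically: a constant-growth DCF gives r = D1/P0 + g directly, whereas a two-stage DCF applies the analyst growth rate only over a finite horizon and solves for r. The sketch below uses invented inputs and a generic two-stage formulation, not the authors' specific model.

```python
# Constant-growth DCF versus a two-stage (non-constant growth) DCF for the
# return on equity r.  Inputs are invented; the two-stage form is a generic
# textbook variant, not the authors' specific model.
from scipy.optimize import brentq

price, d1 = 40.0, 2.0            # current price and next-year dividend

def constant_growth_roe(g: float) -> float:
    return d1 / price + g        # r = D1/P0 + g

def two_stage_price(r: float, g_near: float, g_long: float, horizon: int) -> float:
    pv, d = 0.0, d1
    for t in range(1, horizon + 1):
        pv += d / (1 + r) ** t
        if t < horizon:
            d *= 1 + g_near
    terminal = d * (1 + g_long) / (r - g_long)       # Gordon value at the horizon
    return pv + terminal / (1 + r) ** horizon

# analyst growth of 8% applied forever vs. for only 5 years (4% thereafter)
roe_two_stage = brentq(lambda r: two_stage_price(r, 0.08, 0.04, 5) - price, 0.05, 0.5)
print(f"constant growth: {constant_growth_roe(0.08):.3f}, two-stage: {roe_two_stage:.3f}")
```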

  12. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analysis of medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present as often happens in practice, to understand covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of hazard rate. Estimation is carried out by an expectation-maximization algorithm on maximizing a penalized likelihood. For inferential purpose, we apply the Louis formula to obtain point-wise confidence intervals for noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.

  13. Estimating error rates for firearm evidence identifications in forensic science

    Science.gov (United States)

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680

  14. The reliability of grazing rate estimates from dilution experiments: Have we over-estimated rates of organic carbon consumption by microzooplankton?

    Directory of Open Access Journals (Sweden)

    J. R. Dolan,

    2005-01-01

    Full Text Available According to a recent global analysis, microzooplankton grazing is surprisingly invariant, ranging only between 59 and 74% of phytoplankton primary production across systems differing in seasonality, trophic status, latitude, or salinity. Thus an important biological process in the world ocean, the daily consumption of recently fixed carbon, appears nearly constant. We believe this conclusion is an artefact because dilution experiments are (1) prone to providing over-estimates of grazing rates and (2) unlikely to furnish evidence of low grazing rates. In our view the overall average rate of microzooplankton grazing probably does not exceed 50% of primary production and may be even lower in oligotrophic systems.
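
    For context, the dilution method this comment critiques estimates growth and grazing from a regression of apparent growth rate on the fraction of undiluted seawater. The sketch below uses invented bottle data; the grazing-to-growth ratio printed at the end is one common way to express grazing pressure.

```python
# Dilution-method regression: apparent phytoplankton growth rate versus the
# fraction of undiluted seawater; intercept ~ intrinsic growth mu, negative
# slope ~ microzooplankton grazing g.  Bottle data below are invented.
import numpy as np

fraction_whole = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
apparent_growth = np.array([0.55, 0.47, 0.38, 0.31, 0.22])   # 1/day

slope, intercept = np.polyfit(fraction_whole, apparent_growth, 1)
mu, g = intercept, -slope

print(f"mu = {mu:.2f}/d, g = {g:.2f}/d, grazing pressure g/mu = {g / mu:.2f}")
```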

  15. State estimation for networked control systems using fixed data rates

    Science.gov (United States)

    Liu, Qing-Quan; Jin, Fang

    2017-07-01

    This paper investigates state estimation for linear time-invariant systems where sensors and controllers are geographically separated and connected via a bandwidth-limited and errorless communication channel with the fixed data rate. All plant states are quantised, coded and converted together into a codeword in our quantisation and coding scheme. We present necessary and sufficient conditions on the fixed data rate for observability of such systems, and further develop the data-rate theorem. It is shown in our results that there exists a quantisation and coding scheme to ensure observability of the system if the fixed data rate is larger than the lower bound given, which is less conservative than the one in the literature. Furthermore, we also examine the role that the disturbances have on the state estimation problem in the case with data-rate limitations. Illustrative examples are given to demonstrate the effectiveness of the proposed method.
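
    The classical data-rate condition that this line of work refines is easy to state: the channel rate must exceed the sum of log2 of the magnitudes of the unstable plant eigenvalues. A quick check for an arbitrary example system is sketched below (the matrix is illustrative, not from the paper).

```python
# Classical data-rate condition: the channel rate (bits/sample) must exceed
# the sum of log2 of the magnitudes of the unstable plant eigenvalues.
# The system matrix is an arbitrary illustrative example.
import numpy as np

A = np.array([[1.2, 1.0],
              [0.0, 0.9]])
unstable = [abs(lam) for lam in np.linalg.eigvals(A) if abs(lam) > 1]
min_rate_bits = sum(np.log2(m) for m in unstable)

print(f"lower bound on the data rate: {min_rate_bits:.3f} bits per sample")
```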

  16. Child mortality estimation: consistency of under-five mortality rate estimates using full birth histories and summary birth histories.

    Directory of Open Access Journals (Sweden)

    Romesh Silva

    Full Text Available Given the lack of complete vital registration data in most developing countries, for many countries it is not possible to accurately estimate under-five mortality rates from vital registration systems. Heavy reliance is often placed on direct and indirect methods for analyzing data collected from birth histories to estimate under-five mortality rates. Yet few systematic comparisons of these methods have been undertaken. This paper investigates whether analysts should use both direct and indirect estimates from full birth histories, and under what circumstances indirect estimates derived from summary birth histories should be used. Using Demographic and Health Surveys data from West Africa, East Africa, Latin America, and South/Southeast Asia, I quantify the differences between direct and indirect estimates of under-five mortality rates, analyze data quality issues, note the relative effects of these issues, and test whether these issues explain the observed differences. I find that indirect estimates are generally consistent with direct estimates, after adjustment for fertility change and birth transference, but don't add substantial additional insight beyond direct estimates. However, choice of direct or indirect method was found to be important in terms of both the adjustment for data errors and the assumptions made about fertility. Although adjusted indirect estimates are generally consistent with adjusted direct estimates, some notable inconsistencies were observed for countries that had experienced either a political or economic crisis or stalled health transition in their recent past. This result suggests that when a population has experienced a smooth mortality decline or only short periods of excess mortality, both adjusted methods perform equally well. However, the observed inconsistencies identified suggest that the indirect method is particularly prone to bias resulting from violations of its strong assumptions about recent mortality

  17. Clinical use of estimated glomerular filtration rate for evaluation of kidney function

    DEFF Research Database (Denmark)

    Broberg, Bo; Lindhardt, Morten; Rossing, Peter

    2013-01-01

    Estimating glomerular filtration rate by the Modification of Diet in Renal Disease or Chronic Kidney Disease Epidemiology Collaboration formulas gives a reasonable estimate of kidney function for e.g. classification of chronic kidney disease. Additionally, the estimated glomerular filtration rate is a significant predictor for cardiovascular disease and may, along with classical cardiovascular risk factors, add useful information to risk estimation. Several cautions need to be taken into account, e.g. rapid changes in kidney function, dialysis, high age, obesity, underweight and diverging and unanticipated...

  18. Estimated erosion rate at the SRP burial ground

    International Nuclear Information System (INIS)

    Horton, J.H.; Wilhite, E.L.

    1978-04-01

    The rate of soil erosion at the Savannah River Plant (SRP) burial ground can be calculated by means of the universal soil loss equation. Erosion rates estimated by the equation are more suitable for long-term prediction than those which could be measured with a reasonable effort in field studies. The predicted erosion rate at the SRP burial ground ranges from 0.0007 cm/year under stable forest cover to 0.38 cm/year if farmed with cultivated crops. These values correspond to 170,000 and 320 years, respectively, to expose waste buried 4 ft deep
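
    The universal soil loss equation used for these estimates is simply a product of factors, A = R·K·LS·C·P. The sketch below shows the form of the calculation with placeholder factor values chosen only to contrast land covers; they are not the SRP site parameters.

```python
# Universal soil loss equation: A = R * K * LS * C * P.  Factor values are
# placeholders chosen only to contrast land covers; they are not SRP values.
def usle(R: float, K: float, LS: float, C: float, P: float) -> float:
    """Annual soil loss in the units implied by R*K (e.g. t/ha/yr)."""
    return R * K * LS * C * P

R, K, LS, P = 300.0, 0.3, 1.2, 1.0     # rainfall, erodibility, slope, practice
for label, C in [("stable forest cover", 0.001), ("cultivated row crops", 0.4)]:
    print(f"{label}: {usle(R, K, LS, C, P):.2f} t/ha/yr")
```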

  19. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.

  20. Estimating reaction rate constants: comparison between traditional curve fitting and curve resolution

    NARCIS (Netherlands)

    Bijlsma, S.; Boelens, H. F. M.; Hoefsloot, H. C. J.; Smilde, A. K.

    2000-01-01

    A traditional curve fitting (TCF) algorithm is compared with a classical curve resolution (CCR) approach for estimating reaction rate constants from spectral data obtained over the course of a chemical reaction. In the TCF algorithm, reaction rate constants are estimated from the absorbance versus time data

  1. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycles in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost when evaluating economic efficiency, any existing uncertainty needs to be removed where possible to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (High-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising out of a nuclear fuel cycle cost evaluation from the viewpoint of a cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost under a different discount rate model is reduced because the generation quantity in the denominator of the cost equation has also been discounted. Namely, if the discount rate is reduced in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that, as a whole, the cost under the same discount rate model is overestimated compared with the different discount rate model.

  2. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycles in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost when evaluating economic efficiency, any existing uncertainty needs to be removed where possible to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (High-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising out of a nuclear fuel cycle cost evaluation from the viewpoint of a cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost under a different discount rate model is reduced because the generation quantity in the denominator of the cost equation has also been discounted. Namely, if the discount rate is reduced in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that, as a whole, the cost under the same discount rate model is overestimated compared with the different discount rate model.

  3. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the famous Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples. The kernel method was used for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model. The Expectation Maximisation algorithm is used to estimate the parameters of this mixture. The optimal number of Gaussians is computed by using Mutual Information Theory. The analytical expression of the BER is therefore simply given by using the different estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, Kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or Kernel aided techniques. The results show that the GM method can drastically reduce the needed number of samples to estimate the BER in order to reduce the required simulation run-time, even at very low BER.
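
    The Gaussian-mixture idea can be sketched compactly: fit a mixture to the soft samples observed for one transmitted bit and evaluate the error probability analytically as a weighted sum of Gaussian tail areas, instead of counting rare Monte Carlo errors. The snippet below uses an invented signal model, and scikit-learn's mixture fit stands in for the paper's EM and mutual-information procedure.

```python
# Gaussian-mixture BER sketch: fit a mixture to soft samples for bit "+1" and
# evaluate P(sample < 0) analytically as a weighted sum of Gaussian tails,
# instead of counting rare Monte Carlo errors.  Signal model is invented and
# scikit-learn's EM fit stands in for the paper's EM/mutual-information step.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
soft_samples = 1.0 + 0.6 * rng.standard_normal(5000)     # soft outputs for "+1"

gm = GaussianMixture(n_components=2, random_state=0).fit(soft_samples.reshape(-1, 1))
w = gm.weights_
mu = gm.means_.ravel()
sd = np.sqrt(gm.covariances_.ravel())

ber_gm = float(np.sum(w * norm.cdf((0.0 - mu) / sd)))    # analytical estimate
ber_mc = float(np.mean(soft_samples < 0.0))              # Monte Carlo reference

print(f"GM-based BER: {ber_gm:.2e}, Monte Carlo BER: {ber_mc:.2e}")
```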

  4. Respiratory rate estimation from the built-in cameras of smartphones and tablets.

    Science.gov (United States)

    Nam, Yunyoung; Lee, Jinseok; Chon, Ki H

    2014-04-01

    This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half VGA pixel region was found to be the best choice only for iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide better or equally good signal quality than the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min but their accuracy degraded concomitantly with increased respiratory rates
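
    A deliberately simplified stand-in for the AR, VFCDM and CWT methods named above: treat respiration as amplitude modulation of the camera-derived pulse signal, take its Hilbert envelope, and pick the dominant frequency in the respiratory band. The synthetic signal below replaces real camera data, and the frame rate is an assumption.

```python
# Simplified amplitude-modulation approach (a stand-in for AR/VFCDM/CWT):
# take the Hilbert envelope of the pulse signal and pick the dominant
# frequency in the respiratory band.  The synthetic PPG replaces camera data.
import numpy as np
from scipy.signal import hilbert

fs = 30.0                                    # assumed camera frame rate, Hz
t = np.arange(0, 60, 1 / fs)
resp_hz, pulse_hz = 0.30, 1.2                # 18 breaths/min, 72 beats/min
ppg = (1 + 0.3 * np.sin(2 * np.pi * resp_hz * t)) * np.sin(2 * np.pi * pulse_hz * t)
ppg += 0.05 * np.random.default_rng(0).standard_normal(t.size)

envelope = np.abs(hilbert(ppg))
envelope -= envelope.mean()
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(envelope))
band = (freqs >= 0.1) & (freqs <= 0.7)       # ~6-42 breaths/min
est_hz = freqs[band][np.argmax(spectrum[band])]

print(f"estimated respiratory rate: {60 * est_hz:.1f} breaths/min")
```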

  5. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  6. Estimates of particle- and thorium-cycling rates in the northwest Atlantic Ocean

    International Nuclear Information System (INIS)

    Murnane, R.J.; Sarmiento, J.L.; Cochran, J.K.

    1994-01-01

    The authors provide least squares estimates of particle-cycling rate constants and their errors at 13 depths in the Northwest Atlantic Ocean using a compilation of published results and conservation equations for thorium and particle cycling. The predicted rates of particle aggregation and disaggregation vary through the water column. The means and standard deviations, based on lognormal probability distributions, for the lowest and highest rates of aggregation (β₂) and disaggregation (β₋₂) in the water column range from 8±27 y⁻¹ and 580±2000 y⁻¹ up to values of order 10³–10⁴ y⁻¹. Median values for these rates are 2.1 y⁻¹ (β₂) and 149 y⁻¹ (β₋₂). Predicted rate constants for thorium adsorption (k₁ = 5.0±1.0×10⁴ m³ kg⁻¹ y⁻¹) and desorption (k₋₁ = 3.1±1.5 y⁻¹) are consistent with previous estimates. Least squares estimates of the sum of the time dependence and transport terms from the particle and thorium conservation equations are on the same order as other terms in the conservation equations. Forcing this sum to equal zero would change the predicted rates. Better estimates of the time dependence of thorium activities and particle concentrations and of the concentration and flux of particulate organic matter would help to constrain estimates of β₂ and β₋₂. 46 refs., 8 figs., 5 tabs

  7. Simple estimate of fission rate during JCO criticality accident

    Energy Technology Data Exchange (ETDEWEB)

    Oyamatsu, Kazuhiro [Faculty of Studies on Contemporary Society, Aichi Shukutoku Univ., Nagakute, Aichi (Japan)

    2000-03-01

    The fission rate during the JCO criticality accident is estimated from fission-product (FP) radioactivities in a uranium solution sample taken from the preparation basin 20 days after the accident. The FP radioactivity data are taken from a report by JAERI released in the Accident Investigation Committee. The total fission number is found to be quite dependent on the FP radioactivities and is estimated to be about 4×10¹⁶ per liter, or 2×10¹⁸ per 16 kgU (assuming a uranium concentration of 278.9 g/liter). In contrast, the time dependence of the fission rate is rather insensitive to the FP radioactivities. Hence, it is difficult to determine the fission number in the initial burst from the radioactivity data. (author)

  8. Simple estimate of fission rate during JCO criticality accident

    International Nuclear Information System (INIS)

    Oyamatsu, Kazuhiro

    2000-01-01

    The fission rate during the JCO criticality accident is estimated from fission-product (FP) radioactivities in a uranium solution sample taken from the preparation basin 20 days after the accident. The FP radioactivity data are taken from a report by JAERI released in the Accident Investigation Committee. The total fission number is found to be quite dependent on the FP radioactivities and is estimated to be about 4×10¹⁶ per liter, or 2×10¹⁸ per 16 kgU (assuming a uranium concentration of 278.9 g/liter). In contrast, the time dependence of the fission rate is rather insensitive to the FP radioactivities. Hence, it is difficult to determine the fission number in the initial burst from the radioactivity data. (author)
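
    The back-of-the-envelope version of this kind of estimate is worth writing down: for a short burst of fissions, a measured fission-product activity gives N_fissions = A(t)·e^(λt)/(λ·Y). The nuclide data and measured activity in the sketch below are illustrative assumptions, not the values from the JAERI report.

```python
# Burst approximation: N_fissions = A(t) * exp(lambda*t) / (lambda * Y) for a
# fission product with decay constant lambda and cumulative yield Y, measured
# at activity A a time t after the excursion.  All numbers are illustrative.
import math

def fissions_from_fp_activity(activity_bq: float, half_life_d: float,
                              cumulative_yield: float, elapsed_d: float) -> float:
    lam = math.log(2) / (half_life_d * 86400.0)          # decay constant, 1/s
    return activity_bq * math.exp(lam * elapsed_d * 86400.0) / (lam * cumulative_yield)

# A Ba-140-like nuclide (t1/2 ~ 12.8 d, cumulative yield ~6%) measured 20 days
# after the excursion at an assumed 2e9 Bq per litre of solution:
print(f"{fissions_from_fp_activity(2e9, 12.8, 0.06, 20.0):.1e} fissions per litre")
```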

  9. Quadratic Frequency Modulation Signals Parameter Estimation Based on Two-Dimensional Product Modified Parameterized Chirp Rate-Quadratic Chirp Rate Distribution.

    Science.gov (United States)

    Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong

    2018-05-19

    In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are always modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor noise robustness. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves better noise robustness and better cross-term suppression for multi-QFM signals with reasonable computation cost.

  10. Estimation of Uncertainty in Tracer Gas Measurement of Air Change Rates

    Directory of Open Access Journals (Sweden)

    Atsushi Iizuka

    2010-12-01

    Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact that many buildings are not a single fully mixed zone, which means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of
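
    For a single well-mixed zone, the air change rate is commonly recovered from the exponential decay of the tracer concentration. The one-zone calculation below is only that textbook baseline, not the multi-zone analysis of the paper, and the concentrations and times are made-up values.

```python
import math

def air_change_rate(c_start, c_end, hours_elapsed):
    """One-zone tracer-decay estimate: C(t) = C0 * exp(-ACH * t),
    so ACH = ln(C0 / C(t)) / t, in air changes per hour."""
    return math.log(c_start / c_end) / hours_elapsed

# Illustrative tracer concentrations (ppm) measured 2 hours apart.
ach = air_change_rate(c_start=50.0, c_end=20.0, hours_elapsed=2.0)
print(f"estimated air change rate: {ach:.2f} 1/h")
```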

  11. Fill rate estimation in periodic review policies with lost sales using simple methods

    Energy Technology Data Exchange (ETDEWEB)

    Cardós, M.; Guijarro Tarradellas, E.; Babiloni Griñón, E.

    2016-07-01

    Purpose: The exact estimation of the fill rate in the lost sales case is complex and time consuming. However, simple and suitable methods are needed for its estimation so that inventory managers can use them. Design/methodology/approach: Instead of trying to compute the fill rate in one step, this paper focuses first on estimating the probabilities of different on-hand stock levels so that the fill rate can be computed afterwards. Findings: The proposed method outperforms the other methods and is relatively simple to compute. Originality/value: Existing methods for estimating stock levels are examined, new procedures are proposed and their performance is assessed.

  12. Convergence Rate Analysis of Distributed Gossip (Linear Parameter) Estimation: Fundamental Limits and Tradeoffs

    Science.gov (United States)

    Kar, Soummya; Moura, José M. F.

    2011-08-01

    The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a. large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time scale algorithms: one time scale is associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we determine the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator is also consistent.
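
    The mixed-time-scale structure described above, a consensus term that averages neighboring estimates plus an innovations term that injects new measurements, can be illustrated with a small simulation. The ring network, observation matrices and gain schedules below are arbitrary illustrative choices rather than the weight sequences analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.array([1.0, -2.0, 0.5])        # unknown static parameter vector
n_sensors, dim = 6, 3

# Each sensor observes only one (noisy) component of the field.
H = [np.eye(dim)[[i % dim]] for i in range(n_sensors)]

# Ring network: each sensor gossips with its two neighbors.
neighbors = {i: [(i - 1) % n_sensors, (i + 1) % n_sensors] for i in range(n_sensors)}

x = [np.zeros(dim) for _ in range(n_sensors)]  # local estimates

for k in range(1, 5001):
    beta = 0.5 / k**0.6          # consensus weight (decays slowly)
    alpha = 1.0 / k              # innovations weight (decays faster)
    z = [H[i] @ theta + 0.5 * rng.standard_normal(1) for i in range(n_sensors)]
    x_new = []
    for i in range(n_sensors):
        consensus = sum(x[i] - x[j] for j in neighbors[i])
        innovation = H[i].T @ (z[i] - H[i] @ x[i])
        x_new.append(x[i] - beta * consensus + alpha * innovation)
    x = x_new

# After many iterations the local estimates should be close to theta.
print("true parameter:", theta)
print("sensor 0 estimate:", np.round(x[0], 2))
```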

  13. The Estimation of the Equilibrium Real Exchange Rate for Romania

    OpenAIRE

    Bogdan Andrei Dumitrescu; Vasile Dedu

    2009-01-01

    This paper aims to estimate the equilibrium real exchange rate for Romania, that is, the real exchange rate consistent with macroeconomic balance, which is achieved when the economy is operating at full employment and low inflation (internal balance) and has a current account that is sustainable (external balance). The equilibrium real exchange rate is very important for an economy because deviations of the real exchange rate from its equilibrium value can affect the competitiveness ...

  14. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it retains good accuracy in point estimation. Published by Elsevier Ltd.
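
    For context, the conventional likelihood that MLE-BD is compared against can be computed with the Ma-Sandri-Sarkar recursion for the Luria-Delbrück mutant-count distribution, followed by a one-dimensional search over the expected number of mutations m. The sketch below is that baseline approach under its simplest assumptions (equal growth of mutants and non-mutants); the mutant counts are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, n_max):
    """Luria-Delbruck mutant-count probabilities p_0..p_n_max for expected
    number of mutations m, via the Ma-Sandri-Sarkar recursion."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        p[n] = (m / n) * sum(p[k] / (n - k + 1) for k in range(n))
    return p

def neg_log_likelihood(m, counts):
    p = ld_pmf(m, max(counts))
    return -np.sum(np.log(p[counts] + 1e-300))

# Invented mutant counts from parallel cultures (one count per culture).
counts = np.array([0, 1, 0, 3, 2, 0, 5, 1, 0, 12, 2, 1, 0, 4, 0])
res = minimize_scalar(neg_log_likelihood, bounds=(1e-3, 20.0),
                      args=(counts,), method="bounded")
print(f"MLE of expected mutations per culture m = {res.x:.2f}")
# The mutation rate per cell division would then be m divided by the final
# number of cells per culture, which is known from the experiment.
```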

  15. Estimation of flow rates through intergranular stress corrosion cracks

    International Nuclear Information System (INIS)

    Collier, R.P.; Norris, D.M.

    1984-01-01

    Experimental studies of critical two-phase water flow, through simulated and actual intergranular stress corrosion cracks, were performed to obtain data to evaluate a leak flow rate model and investigate acoustic transducer effectiveness in detecting and sizing leaks. The experimental program included a parametric study of the effects of crack geometry, fluid stagnation pressure and temperature, and crack surface roughness on leak flow rate. In addition, leak detection, location, and leak size estimation capabilities of several different acoustic transducers were evaluated as functions of leak rate and transducer position. This paper presents flow rate data for several different cracks and fluid conditions. It also presents the minimum flow rate detected with the acoustic sensors and a relationship between acoustic signal strength and leak flow rate

  16. Oxygen transfer rate estimation in oxidation ditches from clean water measurements.

    Science.gov (United States)

    Abusam, A; Keesman, K J; Meinema, K; Van Straten, G

    2001-06-01

    Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method to be used in the estimation of oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLaVA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
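
    A minimal version of clean-water KLa estimation for a single completely mixed tank fits the standard reaeration model dC/dt = KLa(Cs - C) to dissolved-oxygen data. The sketch below does this with nonlinear least squares on synthetic data; it is the underlying single-tank idea only, not the loop-of-CSTRs model proposed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def reaeration(t, kla, c_s, c0):
    """DO concentration during clean-water reaeration of one mixed tank:
    C(t) = Cs - (Cs - C0) * exp(-KLa * t)."""
    return c_s - (c_s - c0) * np.exp(-kla * t)

# Synthetic DO data (mg/L) sampled every 2 minutes, with measurement noise.
rng = np.random.default_rng(1)
t = np.arange(0, 60, 2.0)                         # minutes
true = reaeration(t, kla=0.08, c_s=9.1, c0=1.0)   # "true" curve
c_obs = true + 0.1 * rng.standard_normal(t.size)

popt, _ = curve_fit(reaeration, t, c_obs, p0=[0.05, 9.0, 1.0])
kla_per_min, c_s_fit, c0_fit = popt
print(f"KLa = {kla_per_min * 60:.2f} 1/h, Cs = {c_s_fit:.2f} mg/L")
```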

  17. Counting and confusion: Bayesian rate estimation with multiple populations

    Science.gov (United States)

    Farr, Will M.; Gair, Jonathan R.; Mandel, Ilya; Cutler, Curt

    2015-01-01

    We show how to obtain a Bayesian estimate of the rates or numbers of signal and background events from a set of events when the shapes of the signal and background distributions are known, can be estimated, or approximated; our method works well even if the foreground and background event distributions overlap significantly and the nature of any individual event cannot be determined with any certainty. We give examples of determining the rates of gravitational-wave events in the presence of background triggers from a template bank when noise parameters are known and/or can be fit from the trigger data. We also give an example of determining globular-cluster shape, location, and density from an observation of a stellar field that contains a nonuniform background density of stars superimposed on the cluster stars.

  18. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    Science.gov (United States)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecast, and fisheries management. A catch curve-based method for estimating time-based Z and its change trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not need the assumption of a constant Z over the entire period; instead, the Z values in n consecutive years are assumed constant, and then the Z values in different sets of n consecutive years are estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations of both Z and recruitment can affect the estimates of Z value and the trend of Z. The most appropriate value of n can be different given the effects of different factors. Therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as we demonstrated in this study. Further analyses suggested that selectivity and age estimation are also two factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z are still close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
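
    The classical catch-curve idea underlying this method is that, under constant Z and constant recruitment, ln(CPUE) declines linearly with age at slope -Z. The sketch below shows that baseline regression on invented age-based CPUE data; the paper's contribution is to estimate Z over windows of n consecutive years rather than assuming a single constant Z.

```python
import numpy as np

# Invented CPUE-at-age data for fully selected ages (age 3 and older).
ages = np.array([3, 4, 5, 6, 7, 8, 9])
cpue = np.array([120.0, 74.0, 46.0, 29.0, 17.0, 11.0, 6.5])

# ln(CPUE) ~ a - Z * age  =>  Z is the negative slope of the fitted line.
slope, intercept = np.polyfit(ages, np.log(cpue), 1)
z_estimate = -slope
print(f"estimated instantaneous total mortality Z = {z_estimate:.2f} per year")
```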

  19. Application of a conversion factor to estimate the adenoma detection rate from the polyp detection rate.

    LENUS (Irish Health Repository)

    Francis, Dawn L

    2011-03-01

    The adenoma detection rate (ADR) is a quality benchmark for colonoscopy. Many practices find it difficult to determine the ADR because it requires a combination of endoscopic and histologic findings. It may be possible to apply a conversion factor to estimate the ADR from the polyp detection rate (PDR).

  20. Precise estimates of mutation rate and spectrum in yeast

    Science.gov (United States)

    Zhu, Yuan O.; Siegal, Mark L.; Hall, David W.; Petrov, Dmitri A.

    2014-01-01

    Mutation is the ultimate source of genetic variation. The most direct and unbiased method of studying spontaneous mutations is via mutation accumulation (MA) lines. Until recently, MA experiments were limited by the cost of sequencing and thus provided us with small numbers of mutational events and therefore imprecise estimates of rates and patterns of mutation. We used whole-genome sequencing to identify nearly 1,000 spontaneous mutation events accumulated over ∼311,000 generations in 145 diploid MA lines of the budding yeast Saccharomyces cerevisiae. MA experiments are usually assumed to have negligible levels of selection, but even mild selection will remove strongly deleterious events. We take advantage of such patterns of selection and show that mutation classes such as indels and aneuploidies (especially monosomies) are proportionately much more likely to contribute mutations of large effect. We also provide conservative estimates of indel, aneuploidy, environment-dependent dominant lethal, and recessive lethal mutation rates. To our knowledge, for the first time in yeast MA data, we identified a sufficiently large number of single-nucleotide mutations to measure context-dependent mutation rates and were able to (i) confirm strong AT bias of mutation in yeast driven by high rate of mutations from C/G to T/A and (ii) detect a higher rate of mutation at C/G nucleotides in two specific contexts consistent with cytosine methylation in S. cerevisiae. PMID:24847077

  1. Estimating Infiltration Rates for a Loessal Silt Loam Using Soil Properties

    Science.gov (United States)

    M. Dean Knighton

    1978-01-01

    Soil properties were related to infiltration rates as measured by single-ring steady-head infiltrometers. The properties showing strong simple correlations were identified. Regression models were developed to estimate infiltration rate from several soil properties. The best model gave fair agreement with measured rates at another location.

  2. Estimation of glandular content rate and statistical analysis of the influence of age group and compressed breast thickness on the estimated value

    International Nuclear Information System (INIS)

    Ohmori, Naoki; Ashida, Kenji; Fujita, Osamu

    2003-01-01

    Because the glandular content rate is an important factor in evaluating breast cancer detection and average glandular dose, it is important in mammography research to estimate and analyze this rate. The purpose of this study was to obtain a formula for statistical estimation of the glandular content rate, to clarify statistically the influence of age group and compressed breast thickness (CBT) on estimating the glandular content rate, and to show statistically the general relation between glandular content rate and the factors of age and CBT. The subjects were 740 Japanese women aged 20-91 years (mean±SD: 48.3±12.8 years) who had undergone mammography. In our study, the glandular content rate was statistically estimated from age group, mAs-value, and CBT when subjects underwent mammography, from a phantom simulation, and from MR images of the breast. In addition, multivariate analysis was carried out to examine statistically the influence of age group and CBT on glandular content rate. The mean glandular content rate as estimated by age group was as follows: 35.6% for those in their 20s, 33.4% in the 30s, 27.5% in the 40s, 23.8% in the 50s, and 21.8% in those 60 and over. The rate for the subjects as a whole was 27.1%. This study indicated that overestimation occurred if the estimated value of the glandular content rate was not corrected in the 3D measurement by MRI. In addition, this study showed that the statistical influence on glandular content rate was significantly larger for CBT than for age. (author)

  3. Evidence-based estimate of appropriate radiotherapy utilization rate for prostate cancer

    International Nuclear Information System (INIS)

    Foroudi, Farshad; Tyldesley, Scott; Barbera, Lisa; Huang, Jenny; Mackillop, William J.

    2003-01-01

    Purpose: Current estimates of the proportion of cancer patients who will require radiotherapy (RT) are based almost entirely on expert opinion. The objective of this study was to use an evidence-based approach to estimate the proportion of incident cases of prostate cancer that should receive RT at any point in the evolution of the illness. Methods and Materials: A systematic review of the literature was undertaken to identify indications for RT for prostate cancer and to ascertain the level of evidence that supported each indication. An epidemiologic approach was then used to estimate the incidence of each indication for RT in a typical North American population of prostate cancer patients. The effect of sampling error on the estimated appropriate rate of RT was calculated mathematically, and the effect of systematic error using alternative sources of information was estimated by sensitivity analysis. Results: It was estimated that 61.2% ±5.6% of prostate cancer cases develop one or more indications for RT at some point in the course of the illness. The plausible range for this rate was 57.3%-69.8% on sensitivity analysis. Of all prostate cancer patients, 32.2%±3.8% should receive RT in their initial treatment and 29.0% ± 4.1% later for recurrence or progression. The proportion of cases that ever require RT is risk grouping dependent; 43.9%±2.2% in low-risk disease, 68.7%± .5% in intermediate-risk disease; and 79.0% ± 3.8% in high-risk locoregional disease. For metastatic disease, the predicted rate was 66.4%±0.3%. Conclusion: This method provides a rational starting point for the long-term planning of radiation services and for the audit of access to RT at the population level. By completing such evaluations in major cancer sites, it will be possible to estimate the appropriate RT rate for the cancer population as a whole

  4. Common cause failure rate estimates for diesel generators in nuclear power plants

    International Nuclear Information System (INIS)

    Steverson, J.A.; Atwood, C.L.

    1982-01-01

    Common cause fault rates for diesel generators in nuclear power plants are estimated, using Licensee Event Reports for the years 1976 through 1978. The binomial failure rate method, used for obtaining the estimates, is briefly explained. Issues discussed include correct classification of common cause events, grouping of the events into homogeneous data subsets, and dealing with plant-to-plant variation

  5. A Pulse Rate Estimation Algorithm Using PPG and Smartphone Camera.

    Science.gov (United States)

    Siddiqui, Sarah Ali; Zhang, Yuan; Feng, Zhiquan; Kos, Anton

    2016-05-01

    The ubiquitous use of and advancement in built-in smartphone sensors and the development of big data processing have been beneficial in several fields, including healthcare. Among basic vitals monitoring, pulse rate monitoring is the most important healthcare necessity. A multimedia video stream acquired by a built-in smartphone camera can be used to estimate it. In this paper, an algorithm that uses only the smartphone camera as a sensor to estimate pulse rate using PhotoPlethysmograph (PPG) signals is proposed. The results obtained by the proposed algorithm are compared with the actual pulse rate, and the maximum error found is 3 beats per minute. The standard deviation of the percentage error and percentage accuracy is 0.68%, whereas the average percentage error and percentage accuracy are 1.98% and 98.02%, respectively.
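
    The usual core of a camera-based PPG estimator is to average a color channel over each frame, remove the slow trend, and take the dominant spectral peak within a physiological band as the pulse rate. The sketch below shows that step on a synthetic signal; it is not the authors' algorithm, and the frame rate and band limits are illustrative assumptions.

```python
import numpy as np

def pulse_rate_bpm(signal, fps, band=(0.7, 3.0)):
    """Estimate pulse rate from a frame-averaged PPG signal via the FFT peak
    inside a plausible heart-rate band (0.7-3.0 Hz, about 42-180 bpm)."""
    x = signal - np.mean(signal)                  # remove DC component
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(x))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_freq

# Synthetic 30 fps "green channel mean" with a 1.2 Hz (72 bpm) pulse plus noise.
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(2)
ppg = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * rng.standard_normal(t.size) + 0.8

print(f"estimated pulse rate: {pulse_rate_bpm(ppg, fps):.0f} bpm")
```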

  6. Estimation of Leak Rate Through Cracks in Bimaterial Pipes in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Jai Hak Park

    2016-10-01

    The accurate estimation of the leak rate through cracks is crucial in applying the leak-before-break (LBB) concept to pipeline design in nuclear power plants. Because of its importance, several programs based on proposed flow models have been developed and used in the nuclear power industry. As the flow models were developed for a homogeneous pipe material, however, some difficulties were encountered in estimating leak rates for bimaterial pipes. In this paper, a flow model is proposed to estimate the leak rate in bimaterial pipes based on the modified Henry–Fauske flow model. In the new flow model, different crack morphology parameters can be considered in the two parts of a flow path. In addition, based on the proposed flow model, a program was developed to estimate the leak rate for a crack with linearly varying cross-sectional area. Using the program, leak rates were calculated for through-thickness cracks with constant or linearly varying cross-sectional areas in a bimaterial pipe. The leak rate results were then compared and discussed in comparison with the results for a homogeneous pipe. The effects of the crack morphology parameters and the variation in cross-sectional area on the leak rate were examined and discussed.

  7. Fast Rate Estimation for RDO Mode Decision in HEVC

    Directory of Open Access Journals (Sweden)

    Maxim P. Sharabayko

    2014-12-01

    The recent H.265/HEVC video compression standard is able to provide two times higher compression efficiency compared to the current industrial standard, H.264/AVC. However, coding complexity has also increased. The main bottleneck of the compression process is the rate-distortion optimization (RDO) stage, as it involves numerous sequential syntax-based binary arithmetic coding (SBAC) loops. In this paper, we present an entropy-based RDO estimation technique for H.265/HEVC compression, instead of the common approach based on the SBAC. Our RDO implementation reduces RDO complexity, at the cost of an average bit rate overhead of 1.54%. At the same time, elimination of the SBAC from the RDO estimation reduces block interdependencies, thus providing an opportunity for the development of a compression system with parallel processing of multiple blocks of a video frame.

  8. Study of the effect of a "Riser Blowout" on the drilling of oil wells in deep water

    OpenAIRE

    Heitor Rodrigues de Paula Lima

    1991-01-01

    Abstract: This dissertation presents an analysis of the fluid pressure behavior that occurs during the particular accident in the drilling of a subsea oil well known as a "Riser Blowout" (*). The model used in this study is based on the solution of the system of partial differential equations that govern the upward flow of the two-phase drilling fluid/gas mixture within the annular space between the drill string and the marine riser. The pressure losses due to the acceleration of the...

  9. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.

  10. Probabilistic estimation of residential air exchange rates for ...

    Science.gov (United States)

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping to reduce error in estimates of air pollutant exposure. Published in the Journal of
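
    In its basic form, the LBL infiltration model referred to above converts an effective leakage area into an airflow driven by a stack term (indoor-outdoor temperature difference) and a wind term, and the AER follows from dividing by the house volume. The sketch below is that textbook form with window-opening adjustments omitted; the stack and wind coefficients and the house parameters are illustrative placeholders, not the study's inputs.

```python
import math

def lbl_air_exchange_rate(leakage_area_cm2, volume_m3, delta_t_k, wind_m_s,
                          stack_coeff=0.000145, wind_coeff=0.000104):
    """Basic LBL-type infiltration estimate:
    Q [L/s] = A_L [cm^2] * sqrt(Cs * |dT| + Cw * U^2),  AER = 3.6 * Q / V  [1/h].
    Cs and Cw depend on building height, shielding and terrain; the defaults
    here are placeholder values of a typical magnitude."""
    q_l_per_s = leakage_area_cm2 * math.sqrt(stack_coeff * abs(delta_t_k)
                                             + wind_coeff * wind_m_s ** 2)
    return 3.6 * q_l_per_s / volume_m3

# Illustrative single-family house: 700 cm^2 effective leakage area, 350 m^3
# volume, 15 K indoor-outdoor temperature difference, 4 m/s wind speed.
aer = lbl_air_exchange_rate(700.0, 350.0, 15.0, 4.0)
print(f"estimated air exchange rate: {aer:.2f} 1/h")
```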

  11. GARDEC, Estimation of dose-rates reduction by garden decontamination

    International Nuclear Information System (INIS)

    Togawa, Orihiko

    2006-01-01

    1 - Description of program or function: GARDEC estimates the reduction of dose rates by garden decontamination. It provides the effect of different decontamination methods, the depth of soil to be considered, the dose rate before and after decontamination, and the reduction factor. 2 - Methods: This code takes into account three methods of decontamination: (i) digging a garden in a special way, (ii) removal of the upper layer of soil, and (iii) covering with a shielding layer of soil. The dose-rate conversion factor is defined as the external dose rate, in the air, at a given height above the ground from a unit concentration of a specific radionuclide in each soil layer.

  12. Enhancement of leak rate estimation model for corroded cracked thin tubes

    International Nuclear Information System (INIS)

    Chang, Y.S.; Jeong, J.U.; Kim, Y.J.; Hwang, S.S.; Kim, H.P.

    2010-01-01

    During the last couple of decades, much research on structural integrity assessment and leak rate estimation has been carried out to prevent unanticipated catastrophic failures of pressure-retaining nuclear components. However, from the standpoint of leakage integrity, there are still some arguments about predicting the leak rate of cracked components, due primarily to uncertainties attached to various parameters in flow models. The purpose of the present work is to suggest a leak rate estimation method for thin tubes with artificial cracks. In this context, 23 leak rate tests are carried out on laboratory-generated stress corrosion cracked tube specimens subjected to internal pressure. Engineering equations to calculate crack opening displacements are developed from detailed three-dimensional elastic-plastic finite element analyses, and a simplified practical model is then proposed based on these equations as well as the test data. Verification of the proposed method is done by comparing leak rates, and it will enable more reliable design and/or operation of thin tubes.

  13. Effect of selecting a fixed dephosphorylation rate on the estimation of rate constants and rCMRGlu from dynamic [18F] fluorodeoxyglucose/PET data

    International Nuclear Information System (INIS)

    Dhawan, V.; Moeller, J.R.; Strother, S.C.; Evans, A.C.; Rottenberg, D.A.

    1989-01-01

    Several publications have discussed the estimation and physiologic significance of regional [18F]fluorodeoxyglucose (FDG) rate constants and metabolic rates. Most of these studies analyzed dynamic data collected over 45-60 min; three rate constants (k1-k3) and blood volume (Vb) were estimated and the regional cerebral metabolic rate for glucose (rCMRGlu) was subsequently derived using the measured blood glucose value and a regionally invariant value of the lumped constant (LC). The dephosphorylation rate constant (k4) was either neglected, or a fixed value was used in the estimation procedure to obtain the remaining parameters. Comparing the rate constants obtained by different authors using different values of k4 is impossible without knowledge of the effect of selecting different fixed values of k4 (including zero) on the estimated rate constants and rCMRGlu. Based on our analysis of FDG/PET data from nine normal volunteer subjects, we conclude that inclusion of a fixed value for k4, in spite of a scaling effect on the absolute values of model parameters, has no effect on the coefficient of variation (CV) of within- and between-subject parameter estimates and glucose metabolic rates.

  14. Distortion-Rate Bounds for Distributed Estimation Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Nihar Jindal

    2008-03-01

    We deal with centralized and distributed rate-constrained estimation of random signal vectors performed using a network of wireless sensors (encoders) communicating with a fusion center (decoder). For this context, we determine lower and upper bounds on the corresponding distortion-rate (D-R) function. The nonachievable lower bound is obtained by considering centralized estimation with a single sensor which has all observation data available, and by determining the associated D-R function in closed form. Interestingly, this D-R function can be achieved using an estimate-first, compress-afterwards (EC) approach, where the sensor (i) forms the minimum mean-square error (MMSE) estimate for the signal of interest; and (ii) optimally (in the MSE sense) compresses and transmits it to the FC that reconstructs it. We further derive a novel alternating scheme to numerically determine an achievable upper bound of the D-R function for general distributed estimation using multiple sensors. The proposed algorithm tackles an analytically intractable minimization problem while accounting for sensor data correlations. The obtained upper bound is tighter than the one determined by having each sensor perform MSE-optimal encoding independently of the others. Numerical examples indicate that the algorithm performs well and yields D-R upper bounds which are relatively tight with respect to analytical alternatives obtained without taking into account the cross-correlations among sensor data.

  15. Microcephaly Case Fatality Rate Associated with Zika Virus Infection in Brazil: Current Estimates.

    Science.gov (United States)

    Cunha, Antonio José Ledo Alves da; de Magalhães-Barbosa, Maria Clara; Lima-Setta, Fernanda; Medronho, Roberto de Andrade; Prata-Barbosa, Arnaldo

    2017-05-01

    Considering the currently confirmed cases of microcephaly and related deaths associated with Zika virus in Brazil, the estimated case fatality rate is 8.3% (95% confidence interval: 7.2-9.6). However, a third of the reported cases remain under investigation. If the confirmation rates of cases and deaths are the same in the future, the estimated case fatality rate will be as high as 10.5% (95% confidence interval: 9.5-11.7).
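
    The calculation behind a case fatality rate of this kind is a simple proportion with a binomial confidence interval. The sketch below computes a Wilson 95% interval; the counts are illustrative and are not the Brazilian surveillance figures.

```python
import math

def wilson_interval(deaths, cases, z=1.96):
    """Case fatality rate (deaths / cases) with a Wilson score 95% CI."""
    p = deaths / cases
    denom = 1 + z**2 / cases
    center = (p + z**2 / (2 * cases)) / denom
    half = z * math.sqrt(p * (1 - p) / cases + z**2 / (4 * cases**2)) / denom
    return p, center - half, center + half

# Illustrative counts only.
cfr, lo, hi = wilson_interval(deaths=160, cases=1950)
print(f"CFR = {cfr:.1%} (95% CI: {lo:.1%}-{hi:.1%})")
```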

  16. Lidar method to estimate emission rates from extended sources

    Science.gov (United States)

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  17. Preventing Tire Blowout Accidents: A Perspective on Factors Affecting Drivers’ Intention to Adopt Tire Pressure Monitoring System

    Directory of Open Access Journals (Sweden)

    Kai-Ying Chen

    2018-04-01

    The aim of this study is to explore whether risk perception or anticipated regret is responsible for intensifying the participants' intention to adopt a tire pressure monitoring system (TPMS) to prevent a tire-related accident, and whether optimism bias has a moderating effect between risk perception/anticipated regret and intention. With 274 valid questionnaires and a PLS-SEM (partial least squares structural equation modeling) analysis, the results indicate a significant positive relationship between risk perception and intention to adopt TPMS, but not between anticipated regret and intention. The moderating effect of optimism bias on risk perception and anticipated regret is not found in the model. The findings will prove useful for public service advertising campaigns by providing a basis for an understanding of the role of cognitive and emotional factors in tire-blowout accident prevention, thereby increasing the motivation for drivers in Taiwan to take advantage of the protection afforded them by using TPMS.

  18. Comparative risk assessment of spill response options for a deepwater oil well blowout: Part III. Stakeholder engagement.

    Science.gov (United States)

    Walker, Ann Hayward; Scholz, Debra; McPeek, Melinda; French-McCay, Deborah; Rowe, Jill; Bock, Michael; Robinson, Hilary; Wenning, Richard

    2018-05-25

    This paper describes oil spill stakeholder engagement in a recent comparative risk assessment (CRA) project that examined the tradeoffs associated with a hypothetical offshore well blowout in the Gulf of Mexico, with a specific focus on subsea dispersant injection (SSDI) at the wellhead. SSDI is a new technology deployed during the Deepwater Horizon (DWH) oil spill response. Oil spill stakeholders include decision makers, who will consider whether to integrate SSDI into future tradeoff decisions. This CRA considered the tradeoffs associated with three sets of response strategies: (1) no intervention; (2) mechanical recovery, in-situ burning, and surface dispersants; and, (3) SSDI in addition to responses in (2). For context, the paper begins with a historical review of U.S. policy and engagement with oil spill stakeholders regarding dispersants. Stakeholder activities throughout the project involved decision-maker representatives and their advisors to inform the approach and consider CRA utility in future oil spill preparedness. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Redefinition and global estimation of basal ecosystem respiration rate

    Science.gov (United States)

    Yuan, W.; Luo, Y.; Li, X.; Liu, S.; Yu, G.; Zhou, T.; Bahn, M.; Black, A.; Desai, A.R.; Cescatti, A.; Marcolla, B.; Jacobs, C.; Chen, J.; Aurela, M.; Bernhofer, C.; Gielen, B.; Bohrer, G.; Cook, D.R.; Dragoni, D.; Dunn, A.L.; Gianelle, D.; Grünwald, T.; Ibrom, A.; Leclerc, M.Y.; Lindroth, A.; Liu, H.; Marchesini, L.B.; Montagnani, L.; Pita, G.; Rodeghiero, M.; Rodrigues, A.; Starr, G.; Stoy, Paul C.

    2011-01-01

    Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR largely due to the lack of a functional description for BR. In this study, we redefined BR to be ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data, from 79 research sites located at latitudes ranging from ∼3°S to ∼70°N. Results showed that mean annual ER rate closely matches ER rate at mean annual temperature. Incorporation of site-specific BR into the global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature can be considered as BR in empirical models. A strong correlation was found between the mean annual ER and mean annual gross primary production (GPP). Consequently, GPP, which is typically more accurately modeled, can be used to estimate BR. A light use efficiency GPP model (i.e., EC-LUE) was applied to estimate global GPP, BR and ER with input data from MERRA (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate resolution Imaging Spectroradiometer). The global ER was 103 Pg C yr⁻¹, with the highest respiration rate over tropical forests and the lowest value in dry and high-latitude areas.

  20. Comparing avian and bat fatality rate estimates among North American wind energy projects

    Energy Technology Data Exchange (ETDEWEB)

    Smallwood, Shawn

    2011-07-01

    Wind energy development has expanded rapidly, and so have concerns over bird and bat impacts caused by wind turbines. To assess and compare impacts due to collisions, investigators use a common metric, fatalities/MW/year, but estimates of fatality rates have come from various wind turbine models, tower heights, environments, fatality search methods, and analytical methods. To improve comparability and assess large-scale impacts, I applied a common set of assumptions and methods to data in fatality monitoring reports to estimate fatality rates of birds and bats at 71 wind projects across North America (52 outside the Altamont Pass Wind Resource Area, APWRA). The data were from wind turbines of 27 sizes (range 0.04-3.00 MW) and 28 tower heights (range 18.5-90 m), and searched at 40 periodic intervals (range 1-90 days) and out to 20 distances from turbines (range 30-126 m). Estimates spanned the years 1982 to 2010, and involved 1-1,345 turbines per unique combination of project, turbine size, tower height, and search methodology. I adjusted fatality rates for search detection rates averaged from 425 detection trials, and for scavenger removal rates based on 413 removal trials. I also adjusted fatality rates for turbine tower height and maximum search radius, based on logistic functions fit to cumulative counts of carcasses that were detected at 1-m distance intervals from the turbine. For each tower height, I estimated the distance at which cumulative carcass counts reached an asymptote, and for each project I calculated the proportion of fatalities likely not found due to the maximum search radius being short of the model-predicted distance asymptote. I used the same estimator in all cases. I estimated mean fatalities/MW/year among North American wind projects at 12.6 bats (80% CI: 8.1-17.1) and 11.1 birds (80% CI: 9.5-12.7), including 1.6 raptors (80% CI: 1.3-2.0), and excluding the Altamont Pass I estimated fatality rates at 17.2 bats (80% CI: 9
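
    The common adjustment underlying fatalities/MW/year estimates is to inflate the raw carcass count by the searcher detection probability, the carcass persistence (scavenger removal) probability and the fraction of fatalities expected to fall inside the searched area, then normalize by installed capacity and monitoring duration. The sketch below shows that arithmetic with invented inputs; it is not the specific estimator used in this study.

```python
def fatality_rate_per_mw_year(carcasses_found, detection_prob, persistence_prob,
                              searched_fraction, installed_mw, years_monitored):
    """Adjusted fatality rate:
    F = C / (p_detect * p_persist * f_searched) / (MW * years)."""
    adjusted_fatalities = carcasses_found / (detection_prob * persistence_prob
                                             * searched_fraction)
    return adjusted_fatalities / (installed_mw * years_monitored)

# Invented monitoring results for a hypothetical 50 MW project over 1 year.
rate = fatality_rate_per_mw_year(carcasses_found=60, detection_prob=0.6,
                                 persistence_prob=0.7, searched_fraction=0.8,
                                 installed_mw=50.0, years_monitored=1.0)
print(f"adjusted fatality rate: {rate:.1f} fatalities/MW/year")
```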

  1. Real time risk analysis of kick detection: Testing and validation

    International Nuclear Information System (INIS)

    Islam, Rakibul; Khan, Faisal; Venkatesan, Ramchandran

    2017-01-01

    Oil and gas development is moving into harsh and remote locations where the highest level of safety is required. A blowout is one of the most feared accidents in oil and gas development projects. The main objective of this paper is to test and validate the kick detection component of a blowout risk assessment model using uniquely developed experimental results. Kick detection is a major part of the blowout risk assessment model. The accuracy and timeliness of kick detection depend on the monitoring of multiple downhole parameters such as downhole pressure, fluid density, fluid conductivity and mass flow rate. In the present study these four parameters are considered in different logical combinations to assess the occurrence of a kick and the associated blowout risk. The assessed results are compared against the experimental observations. It is observed that simultaneous monitoring of mass flow rate combined with any one of the three other parameters provides the most reliable detection of kick and potential blowout likelihood. The current work presents the framework for a dynamic risk assessment and management model. Upon successful testing of this approach at the pilot and field levels, it could provide a paradigm shift in drilling safety. - Highlights: • A novel dynamic risk model of kick detection and blowout prediction. • Testing and validation of the risk model. • Application of the dynamic risk model.

  2. A Bayesian framework to estimate diversification rates and their variation through time and space

    Directory of Open Access Journals (Sweden)

    Silvestro Daniele

    2011-10-01

    Background: Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results: We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate by analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions: Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling.

  3. Analytical methods of leakage rate estimation from a containment under a LOCA

    International Nuclear Information System (INIS)

    Chun, M.H.

    1981-01-01

    Three of the most prominent maximum flow rate formulas are identified from many existing models. Outlines of the three limiting mass flow rate models are given, along with computational procedures to estimate the approximate amount of fission products released from a containment to the environment for a given characteristic hole size for containment-isolation failure and given containment pressure and temperature under a loss of coolant accident. Sample calculations are performed using the critical ideal gas flow rate model and Moody's graphs for the maximum two-phase flow rates, and the results are compared with the values obtained from the mass leakage rate formula of the CONTEMPT-LT code for a converging nozzle and sonic flow. It is shown that the critical ideal gas flow rate formula gives results almost comparable to those obtained from Moody's model. It is also found that a more conservative approach to estimating the leakage rate from a containment under a LOCA is to use the maximum ideal gas flow rate equation rather than the mass leakage rate formula of CONTEMPT-LT. (author)
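
    The critical ideal gas flow rate model mentioned above is the standard choked-flow expression for an ideal gas discharging through an opening when the downstream pressure is below the critical ratio. The sketch below evaluates that textbook formula; the hole size, discharge coefficient and containment conditions are illustrative values, not taken from the paper.

```python
import math

def choked_ideal_gas_flow(p0_pa, t0_k, area_m2, gamma=1.4, molar_mass=0.029,
                          discharge_coeff=0.9):
    """Critical (choked) ideal-gas mass flow through an opening:
    mdot = Cd * A * P0 * sqrt(gamma * M / (R * T0))
           * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1)))."""
    r_universal = 8.314  # J/(mol K)
    factor = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return (discharge_coeff * area_m2 * p0_pa
            * math.sqrt(gamma * molar_mass / (r_universal * t0_k)) * factor)

# Illustrative post-LOCA containment atmosphere (0.4 MPa, 400 K, air-like gas)
# leaking through a 1 cm^2 opening.
mdot = choked_ideal_gas_flow(p0_pa=4.0e5, t0_k=400.0, area_m2=1.0e-4)
print(f"maximum leakage rate: {mdot:.3f} kg/s")
```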

  4. Estimates of outcrossing rates in Moringa oleifera using Amplified ...

    African Journals Online (AJOL)

    The mating system in plant populations is influenced by genetic and environmental factors. Proper estimates of the outcrossing rates are often required for planning breeding programmes, conservation and management of tropical trees. However, although Moringa oleifera is adapted to a mixed mating system, the proportion ...

  5. A constrained polynomial regression procedure for estimating the local False Discovery Rate

    Directory of Open Access Journals (Sweden)

    Broët Philippe

    2007-06-01

    Background: In the context of genomic association studies, for which a large number of statistical tests are performed simultaneously, the local False Discovery Rate (lFDR), which quantifies the evidence of a specific gene association with a clinical or biological variable of interest, is a relevant criterion for taking into account the multiple testing problem. The lFDR not only allows an inference to be made for each gene through its specific value, but also an estimate of Benjamini-Hochberg's False Discovery Rate (FDR) for subsets of genes. Results: In the framework of estimating procedures without any distributional assumption under the alternative hypothesis, a new and efficient procedure for estimating the lFDR is described. The results of a simulation study indicated good performance for the proposed estimator in comparison to four published ones. The five different procedures were applied to real datasets. Conclusion: A novel and efficient procedure for estimating the lFDR was developed and evaluated.

  6. Estimating rate of occurrence of rare events with empirical bayes: A railway application

    International Nuclear Information System (INIS)

    Quigley, John; Bedford, Tim; Walls, Lesley

    2007-01-01

    Classical approaches to estimating the rate of occurrence of events perform poorly when data are few. Maximum likelihood estimators result in overly optimistic point estimates of zero for situations where there have been no events. Alternative empirical-based approaches have been proposed based on median estimators or non-informative prior distributions. While these alternatives offer an improvement over point estimates of zero, they can be overly conservative. Empirical Bayes procedures offer an unbiased approach through pooling data across different hazards to support stronger statistical inference. This paper considers the application of empirical Bayes to high-consequence, low-frequency events, where estimates are required for risk mitigation decision support, such as demonstrating that risk is 'as low as reasonably possible'. A summary of empirical Bayes methods is given and the choices of estimation procedures to obtain interval estimates are discussed. The approaches illustrated within the case study are based on the estimation of the rate of occurrence of train derailments within the UK. The usefulness of empirical Bayes within this context is discussed.
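
    One concrete form of this pooling is the gamma-Poisson empirical Bayes model: a gamma prior for the underlying event rate is fitted to the pooled counts and exposures, and each hazard's posterior mean shrinks its crude rate toward the pooled estimate, so hazards with zero observed events still receive a nonzero rate. The method-of-moments prior fit below is one simple option and not necessarily the procedure used in the paper; the counts and exposures are invented.

```python
import numpy as np

# Invented data: events observed and exposure (e.g. train-years) per hazard.
events = np.array([0, 1, 0, 6, 0, 0, 2, 0])
exposure = np.array([12.0, 9.0, 15.0, 20.0, 7.0, 11.0, 18.0, 10.0])

# Crude rates and a simple method-of-moments fit of a gamma(alpha, beta) prior.
crude = events / exposure
mean_rate = events.sum() / exposure.sum()
var_rate = np.var(crude, ddof=1)
# Subtract the average Poisson sampling variance to approximate between-hazard
# variance; floor at a small positive value to keep the prior proper.
between_var = max(var_rate - np.mean(mean_rate / exposure), 1e-6)
alpha = mean_rate**2 / between_var
beta = mean_rate / between_var

# Gamma-Poisson conjugacy: posterior mean rate for hazard i.
posterior_mean = (events + alpha) / (exposure + beta)
for i, (e, t, r) in enumerate(zip(events, exposure, posterior_mean)):
    print(f"hazard {i}: {e} events / {t:.0f} units -> EB rate {r:.3f}")
```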

  7. Individual differences in rate of encoding predict estimates of visual short-term memory capacity (K).

    Science.gov (United States)

    Jannati, Ali; McDonald, John J; Di Lollo, Vincent

    2015-06-01

    The capacity of visual short-term memory (VSTM) is commonly estimated by K scores obtained with a change-detection task. Contrary to common belief, K may be influenced not only by capacity but also by the rate at which stimuli are encoded into VSTM. Experiment 1 showed that, contrary to earlier conclusions, estimates of VSTM capacity obtained with a change-detection task are constrained by temporal limitations. In Experiment 2, we used change-detection and backward-masking tasks to obtain separate within-subject estimates of K and of rate of encoding, respectively. A median split based on rate of encoding revealed significantly higher K estimates for fast encoders. Moreover, a significant correlation was found between K and the estimated rate of encoding. The present findings raise the prospect that the reported relationships between K and such cognitive concepts as fluid intelligence may be mediated not only by VSTM capacity but also by rate of encoding. (c) 2015 APA, all rights reserved.
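
    For reference, K scores from change-detection tasks are commonly computed with Cowan's formula K = N x (hit rate - false-alarm rate), where N is the set size; whether Cowan's or Pashler's variant applies depends on the probe design. A minimal sketch with invented response counts follows.

```python
def cowan_k(set_size, hits, misses, false_alarms, correct_rejections):
    """Cowan's K for a single-probe change-detection task:
    K = N * (hit rate - false alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

# Invented counts from one participant at set size 6.
k = cowan_k(set_size=6, hits=70, misses=30, false_alarms=15, correct_rejections=85)
print(f"estimated VSTM capacity K = {k:.2f} items")
```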

  8. Estimating the effect of a rare time-dependent treatment on the recurrent event rate.

    Science.gov (United States)

    Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E

    2018-05-30

    In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Study on the Leak Rate Estimation of SG Tubes and Residual Stress Estimation based on Plastic Deformation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Jin; Chang, Yoon Suk; Lee, Dock Jin; Lee, Tae Rin; Choi, Shin Beom; Jeong, Jae Uk; Yeum, Seung Won [Sungkyunkwan University, Seoul (Korea, Republic of)

    2009-02-15

    In this research project, a leak rate estimation model was developed for steam generator tubes with through-wall cracks. The modelling was based on leak data from 23 tube specimens. Also, a finite element analysis procedure was developed for residual stress calculation of a dissimilar metal weld in a bottom mounted instrumentation. The effect of geometric variables related to the residual stress in the penetration weld part was investigated using the developed analysis procedure. The key subjects dealt with in this research are: 1. Development of a leak rate estimation model for steam generator tubes with through-wall cracks 2. Development of a program which can perform the structural and leakage integrity evaluation for steam generator tubes 3. Development of an analysis procedure for bottom mounted instrumentation weld residual stress 4. Analysis of the effects of geometric variables on weld residual stress. It is anticipated that the technologies developed in this study are applicable to integrity estimation of steam generator tubes and weld parts in NPPs.

  10. Environmental isotope balance of Lake Kinneret as a tool in evaporation rate estimation

    International Nuclear Information System (INIS)

    Lewis, S.

    1979-01-01

    The balance of environmental isotopes in Lake Kinneret has been used to obtain an independent estimate of the mean monthly evaporation rate. Direct calculation was precluded by the inadequacy of the isotope data in uniquely representing the system behaviour throughout the annual cycle. The approach adopted uses an automatic algorithm to seek an objective best fit of the isotope balance model to measured oxygen-18 data by optimizing the evaporation rate as a parameter. To this end, evaporation is described as a periodic function with two parameters. The sensitivity of the evaporation rate estimates to parameter uncertainty and data errors is stressed. Error analysis puts confidence limits on the estimates obtained. Projected improvements in data collection and analysis show that a significant reduction in uncertainty can be realized. Relative to energy balance estimates, currently obtainable data result in about 30% uncertainty. The most optimistic scenario would yield about 15% relative uncertainty. (author)

  11. Parameter estimations in predictive microbiology: Statistically sound modelling of the microbial growth rate.

    Science.gov (United States)

    Akkermans, Simen; Logist, Filip; Van Impe, Jan F

    2018-04-01

    When building models to describe the effect of environmental conditions on the microbial growth rate, parameter estimations can be performed either with a one-step method, i.e., directly on the cell density measurements, or in a two-step method, i.e., via the estimated growth rates. The two-step method is often preferred due to its simplicity. The current research demonstrates that the two-step method is, however, only valid if the correct data transformation is applied and a strict experimental protocol is followed for all experiments. Based on a simulation study and a mathematical derivation, it was demonstrated that the logarithm of the growth rate should be used as a variance stabilizing transformation. Moreover, the one-step method leads to a more accurate estimation of the model parameters and a better approximation of the confidence intervals on the estimated parameters. Therefore, the one-step method is preferred and the two-step method should be avoided. Copyright © 2017. Published by Elsevier Ltd.

  12. Estimating the Effects of Exchange Rate Volatility on Export Volumes

    OpenAIRE

    Wang, Kai-Li; Barrett, Christopher B.

    2007-01-01

    This paper takes a new empirical look at the long-standing question of the effect of exchange rate volatility on international trade flows by studying the case of Taiwan's exports to the United States from 1989-1998. In particular, we employ sectoral-level, monthly data and an innovative multivariate GARCH-M estimator with corrections for leptokurtic errors. This estimator allows for the possibility that traders' forward-looking contracting behavior might condition the way in which exchange r...

  13. Estimation of time-varying growth, uptake and excretion rates from dynamic metabolomics data.

    Science.gov (United States)

    Cinquemani, Eugenio; Laroute, Valérie; Cocaign-Bousquet, Muriel; de Jong, Hidde; Ropers, Delphine

    2017-07-15

    Technological advances in metabolomics have made it possible to monitor the concentration of extracellular metabolites over time. From these data, it is possible to compute the rates of uptake and excretion of the metabolites by a growing cell population, providing precious information on the functioning of intracellular metabolism. The computation of the rate of these exchange reactions, however, is difficult to achieve in practice for a number of reasons, notably noisy measurements, correlations between the concentration profiles of the different extracellular metabolites, and discontinuities in the profiles due to sudden changes in metabolic regime. We present a method for precisely estimating time-varying uptake and excretion rates from time-series measurements of extracellular metabolite concentrations, specifically addressing all of the above issues. The estimation problem is formulated in a regularized Bayesian framework and solved by a combination of extended Kalman filtering and smoothing. The method is shown to improve upon methods based on spline smoothing of the data. Moreover, when applied to two actual datasets, the method recovers known features of overflow metabolism in Escherichia coli and Lactococcus lactis, and provides evidence for acetate uptake by L. lactis after glucose exhaustion. The results raise interesting perspectives for further work on rate estimation from measurements of intracellular metabolites. The Matlab code for the estimation method is available for download at https://team.inria.fr/ibis/rate-estimation-software/ , together with the datasets. eugenio.cinquemani@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  14. Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-09-01

    Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.

  16. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    Science.gov (United States)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of fractional area above a threshold. The second method is an extension of this idea, where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by
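
    The threshold idea can be made concrete under an assumed distributional form. The sketch below (an illustration, not the authors' procedure) assumes a lognormal conditional rain-rate distribution with a known mean-to-standard-deviation ratio, in which case the fractional area above a threshold fixes the whole distribution and hence the mean rain rate.

      import numpy as np
      from scipy.stats import norm

      def mean_rain_from_fraction(frac_above, threshold, cv):
          """Recover the conditional mean rain rate from the fractional area with
          rain rate above `threshold`, assuming a lognormal distribution whose
          coefficient of variation `cv` (= std/mean) is known."""
          sigma = np.sqrt(np.log(1.0 + cv**2))        # lognormal shape from the CV
          # P(R > threshold) = frac  =>  (ln t - mu)/sigma = Phi^-1(1 - frac)
          mu = np.log(threshold) - sigma * norm.ppf(1.0 - frac_above)
          return np.exp(mu + 0.5 * sigma**2)          # lognormal mean

      # self-check with synthetic rain rates (mm/h)
      rng = np.random.default_rng(0)
      mu_true, sigma_true = 0.5, 0.9
      rain = rng.lognormal(mu_true, sigma_true, size=100_000)
      cv = np.sqrt(np.exp(sigma_true**2) - 1.0)
      thr = 5.0
      frac = np.mean(rain > thr)
      print(mean_rain_from_fraction(frac, thr, cv), rain.mean())  # should roughly agree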

  17. Lichen forage ingestion rates of free-roaming caribou estimated with fallout cesium-137

    International Nuclear Information System (INIS)

    Hanson, W.C.; Whicker, F.W.; Lipscomb, J.F.

    1975-01-01

    Lichen forage ingestion rates of free-roaming caribou herds in northern Alaska during 1963 to 1970 were estimated by applying a two-compartment, eight-parameter cesium-137 kinetics model to measured fallout ¹³⁷Cs concentrations in lichen and caribou. Estimates for winter equilibrium periods (January to April) for each year ranged from 3.7 to 6.9 kg dry weight lichens per day for adult female caribou. Further refinement of these estimates was obtained by calculating probabilistic distributions of intake rates by stochastic processes based upon the mean and standard error intervals of the eight parameters during 1965 and 1968. A computer program generated 1,000 randomly sampled values within each of the eight parameter distributions. Results substantiate the contention that lichen forage ingestion rates by free-roaming caribou are significantly greater than previously held.

  18. Equations to Estimate Creatinine Excretion Rate : The CKD Epidemiology Collaboration

    NARCIS (Netherlands)

    Ix, Joachim H.; Wassel, Christina L.; Stevens, Lesley A.; Beck, Gerald J.; Froissart, Marc; Navis, Gerjan; Rodby, Roger; Torres, Vicente E.; Zhang, Yaping (Lucy); Greene, Tom; Levey, Andrew S.

    Background and objectives Creatinine excretion rate (CER) indicates timed urine collection accuracy. Although equations to estimate CER exist, their bias and precision are untested and none simultaneously include age, sex, race, and weight. Design, setting, participants, & measurements Participants

  19. An Analytical Method for Jack-Up Riser’s Fatigue Life Estimation

    Directory of Open Access Journals (Sweden)

    Fengde Wang

    2018-01-01

    Full Text Available In order to determine whether a special sea area and its sea state are available for the jack-up riser with surface blowout preventers, an analytical method is presented to estimate the jack-up riser’s wave loading fatigue life in this study. In addition, an approximate formula is derived to compute the random wave force spectrum of the small-scale structures. The results show that the response of jack-up riser is a narrow band random vibration. The infinite water depth dispersion relation between wavenumber and wave frequency can be used to calculate the wave force spectrum of small-scale structures. The riser’s response mainly consists of the additional displacement response. The fatigue life obtained by the formula proposed by Steinberg is less than that of the Bendat method.

  20. Comparative Risk Assessment of spill response options for a deepwater oil well blowout: Part 1. Oil spill modeling.

    Science.gov (United States)

    French-McCay, Deborah; Crowley, Deborah; Rowe, Jill J; Bock, Michael; Robinson, Hilary; Wenning, Richard; Walker, Ann Hayward; Joeckel, John; Nedwed, Tim J; Parkerton, Thomas F

    2018-05-31

    Oil spill model simulations of a deepwater blowout in the Gulf of Mexico De Soto Canyon, assuming no intervention and various response options (i.e., subsea dispersant injection SSDI, in addition to mechanical recovery, in-situ burning, and surface dispersant application) were compared. Predicted oil fate, amount and area of surfaced oil, and exposure concentrations in the water column above potential effects thresholds were used as inputs to a Comparative Risk Assessment to identify response strategies that minimize long-term impacts. SSDI reduced human and wildlife exposure to volatile organic compounds; dispersed oil into a large water volume at depth; enhanced biodegradation; and reduced surface water, nearshore and shoreline exposure to floating oil and entrained/dissolved oil in the upper water column. Tradeoffs included increased oil exposures at depth. However, since organisms are less abundant below 200 m, results indicate that overall exposure of valued ecosystem components was minimized by use of SSDI.

  1. ESR dating of elephant teeth and radiation dose rate estimation in soil

    International Nuclear Information System (INIS)

    Taisoo Chong; Ohta, Hiroyuki; Nakashima, Yoshiyuki; Iida, Takao; Saisho, Hideo

    1989-01-01

    Chemical analysis of ²³⁸U, ²³²Th and ⁴⁰K in the dentine as well as enamel of elephant tooth fossil has been carried out in order to estimate the internal absorbed dose rate of the specimens, which was estimated to be (39±4) mrad/y on the assumption of an early uptake model of radionuclides. The external radiation dose rate in the soil, including the contribution from cosmic rays, was also estimated to be (175±18) mrad/y with the help of γ-ray spectroscopic techniques applied to the soil samples in which the specimens were buried. The ⁶⁰Co γ-ray equivalent accumulated dose of (2±0.2)×10⁴ rad for the tooth enamel gave an ''ESR age'' of (9±2)×10⁴ y, which falls in the geologically estimated range between 3×10⁴ and 30×10⁴ y before the present. (author)
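
    As a quick consistency check of the reported figures, assuming the internal and external dose rates simply add:

      # ESR age = accumulated dose / total dose rate; values as reported in the record.
      accumulated_dose_rad = 2e4           # (2 ± 0.2) x 10^4 rad
      internal_rate = 39e-3                # 39 mrad/y  -> rad/y
      external_rate = 175e-3               # 175 mrad/y -> rad/y
      age_years = accumulated_dose_rad / (internal_rate + external_rate)
      print(f"{age_years:.2e} years")      # ~9.3e4 y, consistent with (9 ± 2) x 10^4 y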

  2. Estimation in adults of the glomerular filtration rate in [99mTc] DTPA renography - the rate constant method

    International Nuclear Information System (INIS)

    Carlsen, Ove

    2004-01-01

    The purpose of this study was to design an alternative and robust method for estimation of glomerular filtration rate (GFR) in [99mTc]-diethylenetriaminepentaacetic acid ([99mTc]-DTPA) renography with a reliability not significantly lower than that of the conventional Gates' method. Methods: The method is based on renographies lasting 40 min in which regions of interest (ROIs) are manually created over selected parts of certain blood pools (e.g. heart, lungs, spleen, and liver). For each ROI the corresponding time-activity curve (TAC) was generated, decay corrected and fitted with a monoexponential in the time interval 10 to 40 min post-injection. The rate constant (in min⁻¹) of the monoexponential fit was denoted BETA. Following an iterative procedure comprising usually 5-10 manually created ROIs, the monoexponential fit with the maximum rate constant (BETAmax) was used for estimation of GFR. Results: In a patient material of 54 adult subjects in whom GFR was determined with multiple- or one-sample techniques with [51Cr]-ethylenediaminetetraacetic acid ([51Cr]-EDTA), the regression curve of standard GFR (GFRstd, i.e. GFR adjusted to 1.73 m² body surface area) showed a close, non-linear relationship with BETAmax, with a correlation coefficient of 95%. The standard errors of estimate (SEE) were 6.6, 10.6 and 16.8 for GFRstd equal to 30, 60, and 120 ml/(min·1.73 m²), respectively. The corresponding SEE values for almost the same patient material using Gates' method were 8.4, 11.9, and 16.8 ml/(min·1.73 m²). Conclusions: The alternative rate constant method yields estimates of GFRstd with SEE values equal to or slightly smaller than those of Gates' method. The two methods provide statistically uncorrelated estimates of GFRstd. Therefore, pooled estimates of GFRstd can be calculated with SEE values approximately 1.41 times smaller than those mentioned above. The reliabilities of the pooled estimate of GFRstd separately and of the multiple samples method
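
    A minimal sketch of the core fitting step as described, on synthetic time-activity curves (the calibration from BETAmax to GFRstd established in the study is not reproduced here):

      import numpy as np
      from scipy.optimize import curve_fit

      def monoexp(t, a, beta):
          return a * np.exp(-beta * t)

      def beta_max(time_min, tacs):
          """Fit A*exp(-beta*t) to each decay-corrected time-activity curve over
          10-40 min post-injection and return the largest rate constant (min^-1)."""
          win = (time_min >= 10) & (time_min <= 40)
          betas = []
          for tac in tacs:
              (a, beta), _ = curve_fit(monoexp, time_min[win], tac[win],
                                       p0=(tac[win][0], 0.01))
              betas.append(beta)
          return max(betas)

      # synthetic example: three blood-pool ROIs with different clearance constants
      t = np.linspace(0, 40, 81)
      rng = np.random.default_rng(2)
      tacs = [100 * np.exp(-b * t) + rng.normal(0, 1, t.size) for b in (0.008, 0.012, 0.015)]
      print(beta_max(t, tacs))   # close to 0.015 min^-1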

  3. Estimation of Circadian Body Temperature Rhythm Based on Heart Rate in Healthy, Ambulatory Subjects.

    Science.gov (United States)

    Sim, Soo Young; Joo, Kwang Min; Kim, Han Byul; Jang, Seungjin; Kim, Beomoh; Hong, Seungbum; Kim, Sungwan; Park, Kwang Suk

    2017-03-01

    Core body temperature is a reliable marker for circadian rhythm. As characteristics of the circadian body temperature rhythm change during diverse health problems, such as sleep disorders and depression, body temperature monitoring is often used in clinical diagnosis and treatment. However, the use of current thermometers in circadian rhythm monitoring is impractical in daily life. As heart rate is a physiological signal relevant to thermoregulation, we investigated the feasibility of heart rate monitoring for estimating the circadian body temperature rhythm. Various heart rate parameters and core body temperature were simultaneously acquired in 21 healthy, ambulatory subjects during their routine life. The performance of regression analysis and the extended Kalman filter on daily body temperature and circadian indicator (mesor, amplitude, and acrophase) estimation was evaluated. For daily body temperature estimation, mean R-R interval (RRI), mean heart rate (MHR), or normalized MHR provided a mean root mean square error of approximately 0.40 °C with both techniques. For mesor estimation, regression analysis showed better performance than the extended Kalman filter. However, the extended Kalman filter, combined with RRI or MHR, provided better accuracy in terms of amplitude and acrophase estimation. We suggest that this noninvasive and convenient method for estimating the circadian body temperature rhythm could reduce discomfort during body temperature monitoring in daily life. This, in turn, could facilitate more clinical studies based on the circadian body temperature rhythm.
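
    The circadian indicators named above (mesor, amplitude, acrophase) are commonly obtained with a cosinor fit. The sketch below shows that regression step on synthetic data; it illustrates the indicator definitions only, not the paper's heart-rate-to-temperature model.

      import numpy as np

      def cosinor(t_hours, temperature, period=24.0):
          """Least-squares cosinor fit T(t) = M + A*cos(2*pi*t/period - phi).
          Returns mesor M, amplitude A and acrophase phi (radians)."""
          w = 2 * np.pi * t_hours / period
          X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
          m, b, c = np.linalg.lstsq(X, temperature, rcond=None)[0]
          return m, np.hypot(b, c), np.arctan2(c, b)

      # synthetic daily core-temperature rhythm sampled every 10 minutes
      t = np.arange(0, 48, 1 / 6)
      rng = np.random.default_rng(3)
      temp = 36.8 + 0.4 * np.cos(2 * np.pi * t / 24 - 1.2) + rng.normal(0, 0.1, t.size)
      print(cosinor(t, temp))   # roughly (36.8, 0.4, 1.2)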

  4. Organic carbon sedimentation rates in Asian mangrove coastal ecosystems estimated by 210PB chronology

    International Nuclear Information System (INIS)

    Tateda, Y.; Wattayakorn, G.; Nhan, D.D.; Kasuya, Y.

    2004-01-01

    Estimating the organic carbon balance of mangrove coastal ecosystems is important for calculating the Asian coastal carbon budget and fluxes used in global carbon cycle modelling, which is a powerful tool for predicting future greenhouse gas effects and for evaluating preferences among countermeasures. In particular, the organic carbon accumulation rate in mangrove ecosystems has been reported to be an important carbon sink, comparable to boreal peat accumulation. For estimating organic carbon accumulation rates on a 10³-year scale in mangrove coastal ecosystems, ¹⁴C is used as a long-term chronological tracer and is useful in pristine mangrove forest reserves. For mangrove plantations in coastal areas, ²¹⁰Pb is suitable for decadal-scale estimation because of its half-life, although ²¹⁰Pb chronology may be affected by bio- and physical turbation, an effect that is largely offset at the 10³-year scale; this is a particular concern in Asian mangrove ecosystems, where anthropogenic physical disturbance by coastal fisheries is vigorous. In this paper, we studied the organic carbon and ²¹⁰Pb accumulation rates in subtropical mangrove coastal ecosystems in Japan, Vietnam and Thailand, with ⁷Be analyses to confirm that the above turbation effects on organic carbon accumulation were negligible. We conclude that ²¹⁰Pb is applicable for estimating organic carbon accumulation rates in these ecosystems even where physical or bioturbation is expected. The measured organic carbon accumulation rates using ²¹⁰Pb in the mangrove coastal ecosystems of Japan, Vietnam and Thailand were 0.067-4.0 t-C ha⁻¹ y⁻¹. (author)

  5. How to efficiently obtain accurate estimates of flower visitation rates by pollinators

    NARCIS (Netherlands)

    Fijen, Thijs P.M.; Kleijn, David

    2017-01-01

    Regional declines in insect pollinators have raised concerns about crop pollination. Many pollinator studies use visitation rate (pollinators/time) as a proxy for the quality of crop pollination. Visitation rate estimates are based on observation durations that vary significantly between studies.

  6. Frequency, outcome and prognostic factors of carotid blowout syndrome after hypofractionated re-irradiation of head and neck cancer using CyberKnife: A multi-institutional study

    International Nuclear Information System (INIS)

    Yamazaki, Hideya; Ogita, Mikio; Kodani, Naohiro; Nakamura, Satoakai; Inoue, Hiroshi; Himei, Kengo; Kotsuma, Tadayuki; Yoshida, Ken; Yoshioka, Yasuo; Yamashita, Koichi; Udono, Hiroki

    2013-01-01

    Purpose: Re-irradiation has attracted attention as a potential therapy for recurrent head and neck tumors. However, carotid blowout syndrome (CBS) has become a serious complication of re-irradiation because of the associated life-threatening toxicity. Determining the characteristics of CBS is therefore important. We conducted a multi-institutional study. Methods and patients: Head and neck carcinoma patients (n = 381) were treated with 484 re-irradiation sessions at 7 Japanese CyberKnife institutions between 2000 and 2010. Results: Of these, 32 (8.4%) developed CBS, which proved fatal in most cases: the median survival time after CBS onset was 0.1 month, and the 1-year survival rate was 37.5%. The median duration between re-irradiation and CBS onset was 5 months (range, 0–69 months). Older age, skin invasion, and necrosis/infection were identified as statistically significant risk factors after CBS by univariate analysis. The presence of skin invasion at the time of treatment, found only in postoperative cases, was the only statistically significant prognostic factor after CBS in multivariate analysis. The 1-year survival rate for the group without skin invasion was 42%, whereas no patient with skin invasion survived more than 4 months (0% at 1 year, p = 0.0049). Conclusions: Careful attention should be paid to the occurrence of CBS if the tumor is located adjacent to the carotid artery. The presence of skin invasion at CBS onset is an ominous sign of lethal consequences.

  7. Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability

    Science.gov (United States)

    Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko

    In this paper, we propose an algorithm to estimate sleep quality from heart rate variability using chaos analysis. Polysomnography (PSG) is the conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect, estimating sleep quality from multiple channels. However, the recording process requires a lot of time and a controlled measurement environment, and analyzing PSG data is hard work because the huge amount of sensed data must be evaluated manually. At the same time, people increasingly make mistakes or cause accidents because of loss of regular sleep and of homeostasis. A simple home system for checking one's own sleep is therefore required, and an estimation algorithm for such a system needs to be developed. We therefore propose an algorithm that estimates sleep quality from heart rate variability alone, which can be measured with a simple sensor such as a pressure sensor or an infrared sensor in an uncontrolled environment, by experimentally finding the relationship between chaos indices and sleep quality. A system including the estimation algorithm can inform the user of the patterns and quality of daily sleep, so that the user can arrange his or her schedule in advance, pay more attention based on the sleep results, or consult a doctor.

  8. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using...

  9. Estimating Effective Subsidy Rates of Student Aid Programs

    OpenAIRE

    Stacey H. CHEN

    2008-01-01

    Every year millions of high school students and their parents in the US are asked to fill out complicated financial aid application forms. However, few studies have estimated the responsiveness of government financial aid schemes to changes in financial needs of the students. This paper identifies the effective subsidy rate (ESR) of student aid, as defined by the coefficient of financial needs in the regression of financial aid. The ESR measures the proportion of subsidy of student aid under ...

  10. Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey

    OpenAIRE

    Chae, Hyeseon; Min, Kyungdoo; Youn, kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk

    2014-01-01

    Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers’ Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. ...

  11. Estimating the Exchange Rate Pass-Through to Prices in Mexico

    OpenAIRE

    Josué Fernando Cortés Espada

    2013-01-01

    This paper estimates the magnitude of the exchange rate pass-through to consumer prices in Mexico. Moreover, it analyzes if the pass-through dynamics have changed in recent years. In particular, it uses a methodology that generates results consistent with the hierarchy implicit in the cpi. The results suggest that the exchange rate pass-through to the general price level is low and not statistically significant. However, the pass-through is positive and significant for goods prices. Furthermo...

  12. Estimating rates of local species extinction, colonization and turnover in animal communities

    Science.gov (United States)

    Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.

    1998-01-01

    Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have been used recently to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time; rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts as well as on basic questions about community dynamics.

  13. Simultaneous use of mark-recapture and radiotelemetry to estimate survival, movement, and capture rates

    Science.gov (United States)

    Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.

    2000-01-01

    Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (± SE). Survival rates of banded and radio-marked individuals were not different (alpha hat(S_radioed, S_banded) = log(S hat_radioed / S hat_banded) = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (psi hat = 0.911 ± 0.020; alpha hat(psi11, psi22) = 0.0161 ± 0.047), and recapture rates (p hat = 0.097 ± 0.016) of banded and radio-marked individuals were not different (alpha hat(p_radioed, p_banded) = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] ...); for inference from this model, study designs should seek a minimum of 25 animals of each marking type observed (marked or observed via telemetry) in each time period and geographic stratum.

  14. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  15. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
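
    A toy simulation in the spirit of these two records (not their data or published matrices) makes the sampling-variance effect visible: vital rates estimated from n individuals per stage are plugged into a two-stage projection matrix, and the bias of the dominant eigenvalue lambda is tracked as n grows. The matrix structure and rates below are assumptions for illustration.

      import numpy as np

      def lambda_from_rates(s_juv, s_adult, fecundity):
          """Dominant eigenvalue of a 2-stage (juvenile/adult) projection matrix."""
          A = np.array([[0.0, fecundity],
                        [s_juv, s_adult]])
          return np.max(np.real(np.linalg.eigvals(A)))

      rng = np.random.default_rng(4)
      true_s_juv, true_s_adult, fec = 0.3, 0.5, 1.8       # low-survival case
      lam_true = lambda_from_rates(true_s_juv, true_s_adult, fec)

      for n in (10, 25, 100, 1000):                       # individuals sampled per stage
          lams = []
          for _ in range(2000):
              s_j = rng.binomial(n, true_s_juv) / n       # estimated vital rates
              s_a = rng.binomial(n, true_s_adult) / n
              lams.append(lambda_from_rates(s_j, s_a, fec))
          print(n, np.mean(lams) - lam_true)              # bias shrinks as n grows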

  16. Annual survival rate estimate of satellite transmitter–marked eastern population greater sandhill cranes

    Science.gov (United States)

    Fronczak, David L.; Andersen, David E.; Hanna, Everett E.; Cooper, Thomas R.

    2015-01-01

    Several surveys have documented the increasing population size and geographic distribution of Eastern Population greater sandhill cranes Grus canadensis tabida since the 1960s. Sport hunting of this population of sandhill cranes started in 2012 following the provisions of the Eastern Population Sandhill Crane Management Plan. However, there are currently no published estimates of Eastern Population sandhill crane survival rate that can be used to inform harvest management. As part of two studies of Eastern Population sandhill crane migration, we deployed solar-powered global positioning system platform transmitting terminals on Eastern Population sandhill cranes (n = 42) at key concentration areas from 2009 to 2012. We estimated an annual survival rate for Eastern Population sandhill cranes from data resulting from monitoring these cranes by using the known-fates model in the MARK program. Estimated annual survival rate for adult Eastern Population sandhill cranes was 0.950 (95% confidence interval = 0.885–0.979) during December 2009–August 2014. All fatalities (n = 5) occurred after spring migration in late spring and early summer. We were unable to determine cause of death for crane fatalities in our study. Our survival rate estimate will be useful when combined with other population parameters such as the population index derived from the U.S. Fish and Wildlife Service fall survey, harvest, and recruitment rates to assess the effects of harvest on population size and trend and evaluate the effectiveness of management strategies.

  17. A new approach to estimate ice dynamic rates using satellite observations in East Antarctica

    Directory of Open Access Journals (Sweden)

    B. Kallenberg

    2017-05-01

    Full Text Available Mass balance changes of the Antarctic ice sheet are of significant interest due to its sensitivity to climatic changes and its contribution to changes in global sea level. While regional climate models successfully estimate mass input due to snowfall, it remains difficult to estimate the amount of mass loss due to ice dynamic processes. It has often been assumed that changes in ice dynamic rates only need to be considered when assessing long-term ice sheet mass balance; however, 2 decades of satellite altimetry observations reveal that the Antarctic ice sheet changes much more dynamically than previously expected. Despite available estimates of ice dynamic rates obtained from radar altimetry, information about ice sheet changes due to changes in ice dynamics is still limited, especially in East Antarctica. Without understanding ice dynamic rates, it is not possible to properly assess changes in ice sheet mass balance and surface elevation or to develop ice sheet models. In this study we investigate the possibility of estimating ice sheet changes due to ice dynamic rates by removing modelled rates of surface mass balance, firn compaction, and bedrock uplift from satellite altimetry and gravity observations. With similar rates of ice discharge acquired from two different satellite missions we show that it is possible to obtain an approximation of the rate of change due to ice dynamics by combining altimetry and gravity observations. Thus, surface elevation changes due to surface mass balance, firn compaction, and ice dynamic rates can be modelled and correlated with observed elevation changes from satellite altimetry.

  18. Estimating marginal CO2 emissions rates for national electricity systems

    International Nuclear Information System (INIS)

    Hawkes, A.D.

    2010-01-01

    The carbon dioxide (CO₂) emissions reduction afforded by a demand-side intervention in the electricity system is typically assessed by means of an assumed grid emissions rate, which measures the CO₂ intensity of electricity not used as a result of the intervention. This emissions rate is called the 'marginal emissions factor' (MEF). Accurate estimation of MEFs is crucial for performance assessment because their application leads to decisions regarding the relative merits of CO₂ reduction strategies. This article contributes to formulating the principles by which MEFs are estimated, highlighting the strengths and weaknesses in existing approaches, and presenting an alternative based on the observed behaviour of power stations. The case of Great Britain is considered, demonstrating an MEF of 0.69 kgCO₂/kWh for 2002-2009, with error bars at ±10%. This value could reduce to 0.6 kgCO₂/kWh over the next decade under planned changes to the underlying generation mix, and could further reduce to approximately 0.51 kgCO₂/kWh before 2025 if all power stations commissioned pre-1970 are replaced by their modern counterparts. Given that these rates are higher than commonly applied system-average or assumed 'long term marginal' emissions rates, it is concluded that maintenance of an improved understanding of MEFs is valuable to better inform policy decisions.
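
    The paper derives MEFs from the observed behaviour of power stations; a common simplified proxy (used here only to illustrate the concept, not the author's exact formulation) regresses changes in total system emissions on changes in total demand, the slope being the marginal factor. The synthetic data below are assumptions.

      import numpy as np

      def marginal_emissions_factor(demand_mwh, emissions_kg):
          """Slope of delta-emissions vs. delta-demand (kgCO2 per MWh)."""
          dd = np.diff(demand_mwh)
          de = np.diff(emissions_kg)
          return np.sum(dd * de) / np.sum(dd * dd)     # least squares through the origin

      # synthetic half-hourly system data with a marginal plant at ~0.7 kgCO2/kWh
      rng = np.random.default_rng(5)
      demand = 30_000 + 5_000 * np.sin(np.linspace(0, 20 * np.pi, 2000)) \
               + rng.normal(0, 300, 2000)               # MWh per settlement period
      emissions = 700 * demand + rng.normal(0, 2e5, 2000)   # kg, i.e. 0.7 kg per kWh marginal
      print(marginal_emissions_factor(demand, emissions) / 1000, "kgCO2/kWh")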

  19. Constrained least squares methods for estimating reaction rate constants from spectroscopic data

    NARCIS (Netherlands)

    Bijlsma, S.; Boelens, H.F.M.; Hoefsloot, H.C.J.; Smilde, A.K.

    2002-01-01

    Model errors, experimental errors and instrumental noise influence the accuracy of reaction rate constant estimates obtained from spectral data recorded in time during a chemical reaction. In order to improve the accuracy, which can be divided into the precision and bias of reaction rate constant

  20. Rates of convergence and asymptotic normality of curve estimators for ergodic diffusion processes

    NARCIS (Netherlands)

    J.H. van Zanten (Harry)

    2000-01-01

    textabstractFor ergodic diffusion processes, we study kernel-type estimators for the invariant density, its derivatives and the drift function. We determine rates of convergence and find the joint asymptotic distribution of the estimators at different points.

  1. Robust efficient estimation of heart rate pulse from video

    Science.gov (United States)

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse heart rate. Various experiments with different cameras, different illumination condition, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices both in accuracy and sensitivity. PMID:24761294
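
    One plausible reading of the log-space pixel quotient (a sketch of the general idea only, not the published algorithm or its skin-interaction model) is the log-ratio of the mean skin intensity to a slowly varying baseline, whose dominant spectral peak in the cardiac band gives the pulse rate. The synthetic trace and all constants below are assumptions.

      import numpy as np

      def heart_rate_bpm(mean_intensity, fps):
          """Estimate pulse rate from a per-frame mean skin-intensity trace.
          Signal = log(I_t / I_baseline): a log-space quotient that removes slow
          illumination drift; the rate is the FFT peak between 0.7 and 3 Hz."""
          win = int(fps)
          baseline = np.convolve(mean_intensity, np.ones(win) / win, mode="same")
          s = np.log(mean_intensity / baseline)
          s -= s.mean()
          freqs = np.fft.rfftfreq(s.size, d=1.0 / fps)
          spec = np.abs(np.fft.rfft(s))
          band = (freqs > 0.7) & (freqs < 3.0)          # 42-180 bpm
          return 60.0 * freqs[band][np.argmax(spec[band])]

      # synthetic 30 fps trace: a 72 bpm pulse riding on slow illumination drift
      fps = 30.0
      t = np.arange(0, 20, 1 / fps)
      rng = np.random.default_rng(6)
      intensity = 120 * (1 + 0.02 * np.sin(2 * np.pi * 0.05 * t)) \
                  * (1 + 0.005 * np.sin(2 * np.pi * 1.2 * t)) + rng.normal(0, 0.2, t.size)
      print(heart_rate_bpm(intensity, fps))             # ~72 bpm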

  2. Using ²¹⁰Pb measurements to estimate sedimentation rates on river floodplains.

    Science.gov (United States)

    Du, P; Walling, D E

    2012-01-01

    Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10-10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by
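
    Several models exist for converting an excess ²¹⁰Pb profile into a sedimentation rate; the sketch below implements only the simple constant-initial-concentration (CIC) model on a synthetic core, as one illustration of the kind of calculation compared in the study.

      import numpy as np

      PB210_LAMBDA = np.log(2) / 22.3          # decay constant, 1/yr (half-life 22.3 yr)

      def cic_sedimentation_rate(depth_cm, unsupported_pb210):
          """CIC model: ln C(z) = ln C0 - (lambda/r) z, so r = -lambda / slope."""
          slope, _ = np.polyfit(depth_cm, np.log(unsupported_pb210), 1)
          return -PB210_LAMBDA / slope          # cm per year

      # synthetic core: 0.5 cm/yr accretion, C0 = 120 Bq/kg excess 210Pb
      rng = np.random.default_rng(7)
      z = np.arange(1, 30, 2.0)                 # sample depths (cm)
      c = 120 * np.exp(-PB210_LAMBDA * z / 0.5) * rng.lognormal(0, 0.05, z.size)
      print(cic_sedimentation_rate(z, c))       # ~0.5 cm/yr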

  3. Nuclear reactor component populations, reliability data bases, and their relationship to failure rate estimation and uncertainty analysis

    International Nuclear Information System (INIS)

    Martz, H.F.; Beckman, R.J.

    1981-12-01

    Probabilistic risk analyses are used to assess the risks inherent in the operation of existing and proposed nuclear power reactors. In performing such risk analyses the failure rates of various components which are used in a variety of reactor systems must be estimated. These failure rate estimates serve as input to fault trees and event trees used in the analyses. Component failure rate estimation is often based on relevant field failure data from different reliability data sources such as LERs, NPRDS, and the In-Plant Data Program. Various statistical data analysis and estimation methods have been proposed over the years to provide the required estimates of the component failure rates. This report discusses the basis and extent to which statistical methods can be used to obtain component failure rate estimates. The report is expository in nature and focuses on the general philosophical basis for such statistical methods. Various terms and concepts are defined and illustrated by means of numerous simple examples
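
    As a minimal illustration of the kind of estimate such data bases support (not the report's own methodology), the sketch below treats failures as a Poisson process and gives both a classical chi-square interval and a conjugate gamma posterior for the failure rate; the prior and the example counts are assumptions.

      import numpy as np
      from scipy.stats import chi2, gamma

      def failure_rate_ci(n_failures, exposure_hours, conf=0.90):
          """Poisson MLE and two-sided chi-square interval for a failure rate."""
          n, T = n_failures, exposure_hours
          a = (1 - conf) / 2
          lo = chi2.ppf(a, 2 * n) / (2 * T) if n > 0 else 0.0
          hi = chi2.ppf(1 - a, 2 * (n + 1)) / (2 * T)
          return n / T, (lo, hi)

      def failure_rate_posterior(n_failures, exposure_hours, a0=0.5, b0=0.0):
          """Conjugate update: gamma(a0, b0) prior -> gamma(a0 + n, b0 + T) posterior.
          (a0=0.5, b0=0 is the Jeffreys prior; the prior choice is an assumption.)"""
          return gamma(a=a0 + n_failures, scale=1.0 / (b0 + exposure_hours))

      mle, ci = failure_rate_ci(n_failures=3, exposure_hours=2.0e6)
      post = failure_rate_posterior(3, 2.0e6)
      print(mle, ci)                        # per-hour rate and 90% interval
      print(post.mean(), post.interval(0.90))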

  4. Estimation of the rate of energy production of rat mast cells in vitro

    DEFF Research Database (Denmark)

    Johansen, Torben

    1983-01-01

    Rat mast cells were treated with glycolytic and respiratory inhibitors. The rate of adenosine triphosphate depletion of cells incubated with both types of inhibitors and the rate of lactate produced in the presence of antimycin A and glucose were used to estimate the rate of oxidative and glycolytic

  5. Neural estimation of kinetic rate constants from dynamic PET-scans

    DEFF Research Database (Denmark)

    Fog, Torben L.; Nielsen, Lars Hupfeldt; Hansen, Lars Kai

    1994-01-01

    A feedforward neural net is trained to invert a simple three compartment model describing the tracer kinetics involved in the metabolism of [18F]fluorodeoxyglucose in the human brain. The network can estimate rate constants from positron emission tomography sequences and is about 50 times faster ...

  6. Cross sectional efficient estimation of stochastic volatility short rate models

    NARCIS (Netherlands)

    Danilov, Dmitri; Mandal, Pranab K.

    2002-01-01

    We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, the EKF in this situation leads to inconsistent

  7. Fetal QRS detection and heart rate estimation: a wavelet-based approach

    International Nuclear Information System (INIS)

    Almeida, Rute; Rocha, Ana Paula; Gonçalves, Hernâni; Bernardes, João

    2014-01-01

    Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. (paper)

  8. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  9. On researching erosion-corrosion wear in pipelines: the rate and residual lifetime estimation

    International Nuclear Information System (INIS)

    Baranenko, V.I.; Yanchenko, Yu.A.; Gulina, O.M.; Dokukin, D.A.

    2010-01-01

    To support a normative document on calculating the erosion-corrosion wear (ECW) rate and residual lifetime of pipelines, ECW regularities for pearlitic-steel NPP pipelines were studied. Estimates of the efficiency of statistical procedures for treating inspection data are presented, and the influence of the piping inspection scheme on the ECW rate and residual lifetime estimates is demonstrated. The simplified scheme is valid only in the case of complete information; its use under data uncertainty leads to substantial overestimation of the residual lifetime.

  10. Three-Axis Attitude Estimation With a High-Bandwidth Angular Rate Sensor

    Science.gov (United States)

    Bayard, David S.; Green, Joseph J.

    2013-01-01

    A continuing challenge for modern instrument pointing control systems is to meet the increasingly stringent pointing performance requirements imposed by emerging advanced scientific, defense, and civilian payloads. Instruments such as adaptive optics telescopes, space interferometers, and optical communications make unprecedented demands on precision pointing capabilities. A cost-effective method was developed for increasing the pointing performance for this class of NASA applications. The solution was to develop an attitude estimator that fuses star tracker and gyro measurements with a high-bandwidth angular rotation sensor (ARS). An ARS is a rate sensor whose bandwidth extends well beyond that of the gyro, typically up to 1,000 Hz or higher. The most promising ARS sensor technology is based on a magnetohydrodynamic concept, and has recently become available commercially. The key idea is that the sensor fusion of the star tracker, gyro, and ARS provides a high-bandwidth attitude estimate suitable for supporting pointing control with a fast-steering mirror or other type of tip/tilt correction for increased performance. The ARS is relatively inexpensive and can be bolted directly next to the gyro and star tracker on the spacecraft bus. The high-bandwidth attitude estimator fuses an ARS sensor with a standard three-axis suite comprised of a gyro and star tracker. The estimation architecture is based on a dual-complementary filter (DCF) structure. The DCF takes a frequency- weighted combination of the sensors such that each sensor is most heavily weighted in a frequency region where it has the lowest noise. An important property of the DCF is that it avoids the need to model disturbance torques in the filter mechanization. This is important because the disturbance torques are generally not known in applications. This property represents an advantage over the prior art because it overcomes a weakness of the Kalman filter that arises when fusing more than one rate
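
    A single-axis, two-sensor complementary blend conveys the frequency-weighting idea behind the dual-complementary filter (a toy sketch under assumed noise characteristics, not the flight mechanization described above): the gyro dominates below a crossover frequency and the ARS above it, with the two weights summing to one at every frequency.

      import numpy as np

      def complementary_blend(gyro_rate, ars_rate, fs, f_cross=10.0):
          """Blend two rate measurements: gyro below the crossover frequency,
          ARS above it, using complementary one-pole low/high-pass filters."""
          alpha = np.exp(-2 * np.pi * f_cross / fs)     # one-pole low-pass coefficient
          blended = np.empty_like(gyro_rate)
          lp_gyro, lp_ars = gyro_rate[0], ars_rate[0]
          for i, (g, a) in enumerate(zip(gyro_rate, ars_rate)):
              lp_gyro = alpha * lp_gyro + (1 - alpha) * g   # low-passed gyro
              lp_ars = alpha * lp_ars + (1 - alpha) * a
              blended[i] = lp_gyro + (a - lp_ars)           # plus high-passed ARS
          return blended

      # synthetic: a slow slew plus a 50 Hz jitter that the band-limited gyro misses
      fs = 1000.0
      t = np.arange(0, 2, 1 / fs)
      rng = np.random.default_rng(8)
      truth = 0.01 * np.sin(2 * np.pi * 0.5 * t) + 1e-3 * np.sin(2 * np.pi * 50 * t)
      gyro = 0.01 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 5e-5, t.size)
      ars = truth + rng.normal(0, 5e-4, t.size) + 2e-3      # wideband but biased
      est = complementary_blend(gyro, ars, fs)
      rms = lambda e: np.sqrt(np.mean(e**2))
      print(rms(est - truth), rms(ars - truth))             # blend removes the ARS bias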

  11. Estimation of component failure rates for PSA on nuclear power plants 1982-1997

    International Nuclear Information System (INIS)

    Kirimoto, Yukihiro; Matsuzaki, Akihiro; Sasaki, Atsushi

    2001-01-01

    Probabilistic safety assessment (PSA) of nuclear power plants has been studied for many years by the Japanese industry. The PSA methodology has been improved such that PSAs were performed for all commercial LWRs and used in examining accident management. On the other hand, most of the component failure rate data in these PSAs were taken from U.S. databases. The Nuclear Information Center (NIC) of the Central Research Institute of Electric Power Industry (CRIEPI) serves utilities by providing safety- and reliability-related information on the operation and maintenance of nuclear power plants, and by evaluating plant performance and incident trends. NIC therefore started a research study on estimating major component failure rates at the request of the utilities in 1988. As a result, we estimated the hourly failure rates of 47 component types and the demand failure rates of 15 component types. The set of domestic component reliability data from 1982 to 1991 for 34 LWRs was evaluated by a group of PSA experts in Japan at the Nuclear Safety Research Association (NSRA) in 1995 and 1996, and the evaluation report was issued in March 1997. This document describes the revised component failure rates calculated by our re-estimation for 49 Japanese LWRs from 1982 to 1997. (author)

  12. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    International Nuclear Information System (INIS)

    Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu

    2017-01-01

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurement. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. Accurate a priori ratio accelerates the analysis process, which obtains satisfactory results with only limited number of measurements, otherwise it needs more measurements to generate reasonable estimations. The suppression of negative estimation effectively improves the performance, especially for the situation with poor a priori information, where it is more prone to the generation of negative values.

  13. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiaole, E-mail: zhangxiaole10@outlook.com [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany); Institute of Public Safety Research, Department of Engineering Physics, Tsinghua University, Beijing, 100084 (China); Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu [Institute for Nuclear and Energy Technologies, Karlsruhe Institute of Technology, Karlsruhe, D-76021 (Germany)

    2017-03-05

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurement. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. Accurate a priori ratio accelerates the analysis process, which obtains satisfactory results with only limited number of measurements, otherwise it needs more measurements to generate reasonable estimations. The suppression of negative estimation effectively improves the performance, especially for the situation with poor a priori information, where it is more prone to the generation of negative values.

  14. Direct estimates of unemployment rate and capacity utilization in macroeconometric models

    Energy Technology Data Exchange (ETDEWEB)

    Klein, L R [Univ. of Pennsylvania, Philadelphia; Su, V

    1979-10-01

    The problem of measuring resource-capacity utilization as a factor in overall economic efficiency is examined, and a tentative solution is offered. A macro-econometric model is applied to the aggregate production function by linking unemployment rate and capacity utilization rate. Partial- and full-model simulations use Wharton indices as a filter and produce direct estimates of unemployment rates. The simulation paths of durable-goods industries, which are more capital-intensive, are found to be more sensitive to business cycles than the nondurable-goods industries. 11 references.

  15. Cross sectional efficient estimation of stochastic volatility short rate models

    NARCIS (Netherlands)

    Danilov, Dmitri; Mandal, Pranab K.

    2001-01-01

    We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, as indicated by de Jong (2000), the EKF in this

  16. Spacecraft Angular Rates Estimation with Gyrowheel Based on Extended High Gain Observer

    Directory of Open Access Journals (Sweden)

    Xiaokun Liu

    2016-04-01

    Full Text Available A gyrowheel (GW) is a kind of electronic electric-mechanical servo system which can be applied to a spacecraft attitude control system (ACS) as both an actuator and a sensor simultaneously. In order to solve the problem of sensing two-dimensional spacecraft angular rates while the GW outputs three-dimensional control torque, this paper proposes a method based on an extended high gain observer (EHGO) and the derived GW mathematical model to estimate the spacecraft angular rates when the GW rotor is working at large angles. For this purpose, the GW dynamic equation is first derived with the Lagrange equations of the second kind, and the relationship between the measurable and unmeasurable variables is established. Then, the EHGO is designed to estimate and calculate spacecraft angular rates with the GW, and the stability of the designed EHGO is proven with a Lyapunov function. Moreover, considering the engineering application, the effect of measurement noise in the tilt angle sensors on the estimation accuracy of the EHGO is analyzed. Finally, numerical simulations are performed to illustrate the validity of the method proposed in this paper.

  17. Optimization of hierarchical 3DRS motion estimators for picture rate conversion

    OpenAIRE

    Heinrich, A.; Bartels, C.L.L.; Vleuten, van der, R.J.; Cordes, C.N.; Haan, de, G.

    2010-01-01

    There is a continuous pressure to lower the implementation complexity and improve the quality of motion-compensated picture rate conversion methods. Since the concept of hierarchy can be advantageously applied to many motion estimation methods, we have extended and improved the current state-of-the-art motion estimation method in this field, 3-Dimensional Recursive Search (3DRS), with this concept. We have explored the extensive parameter space and present an analysis of the importance and in...

  18. Estimation of mortality rates in stage-structured population

    CERN Document Server

    Wood, Simon N

    1991-01-01

    The stated aims of the Lecture Notes in Biomathematics allow for work that is "unfinished or tentative". This volume is offered in that spirit. The problem addressed is one of the classics of statistical ecology, the estimation of mortality rates from stage-frequency data, but in tackling it we found ourselves making use of ideas and techniques very different from those we expected to use, and in which we had no previous experience. Specifically we drifted towards consideration of some rather specific curve and surface fitting and smoothing techniques. We think we have made some progress (otherwise why publish?), but are acutely aware of the conceptual and statistical clumsiness of parts of the work. Readers with sufficient expertise to be offended should regard the monograph as a challenge to do better. The central theme in this book is a somewhat complex algorithm for mortality estimation (detailed at the end of Chapter 4). Because of its complexity, the job of implementing the method is intimidating. Any r...

  19. Inverse problem of estimating transient heat transfer rate on external wall of forced convection pipe

    International Nuclear Information System (INIS)

    Chen, W.-L.; Yang, Y.-C.; Chang, W.-J.; Lee, H.-L.

    2008-01-01

    In this study, a conjugate gradient method based inverse algorithm is applied to estimate the unknown space and time dependent heat transfer rate on the external wall of a pipe system using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown heat transfer rate; hence, the procedure is classified as function estimation in the inverse calculation. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the space and time dependent heat transfer rate can be obtained for the test case considered in this study

  20. Cluster-collision frequency. II. Estimation of the collision rate

    International Nuclear Information System (INIS)

    Amadon, A.S.; Marlow, W.H.

    1991-01-01

    Gas-phase cluster-collision rates, including effects of cluster morphology and long-range intermolecular forces, are calculated. Identical pairs of icosahedral or dodecahedral carbon tetrachloride clusters of 13, 33, and 55 molecules in two different relative orientations were discussed in the preceding paper [Phys. Rev. A 43, 5483 (1991)]: long-range interaction energies were derived based upon (i) exact calculations of the iterated, or many-body, induced-dipole interaction energies for the clusters in two fixed relative orientations; and (ii) bulk, or continuum descriptions (Lifshitz--van der Waals theory), of spheres of corresponding masses and diameters. In this paper, collision rates are calculated according to an exact description of the rates for small spheres interacting via realistic potentials. Utilizing the interaction energies of the preceding paper, several estimates of the collision rates are given by treating the discrete clusters in fixed relative orientations, by computing rotationally averaged potentials for the discrete clusters, and by approximating the clusters as continuum spheres. For the discrete, highly symmetric clusters treated here, the rates using the rotationally averaged potentials closely approximate the fixed-orientation rates and the values of the intercluster potentials for cluster surface separations under 2 A have negligible effect on the overall collision rates. While the 13-molecule cluster-collision rate differs by 50% from the rate calculated as if the cluster were bulk matter, the two larger cluster-collision rates differ by less than 15% from the macroscopic rates, thereby indicating the transition of microscopic to macroscopic behavior

  1. Estimation of evaporation rates over the Arabian Sea from Satellite data

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, M.V.; RameshBabu, V.; Rao, L.V.G.; Sastry, J.S.

    Utilizing both the SAMIR brightness temperatures of Bhaskara 2 and GOSSTCOMP charts of NOAA satellite series, the evaporation rates over the Arabian Sea for June 1982 are estimated through the bulk aerodynamic method. The spatial distribution...
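
    For readers unfamiliar with the bulk aerodynamic method mentioned above, the sketch below evaluates the standard bulk formula E = rho_air * C_E * U * (q_s - q_a); the transfer coefficient and the humidity and wind values are illustrative assumptions, not the SAMIR/GOSSTCOMP retrievals used in the record.

```python
# Bulk aerodynamic estimate of the evaporation rate, converted to mm/day.
def evaporation_mm_per_day(wind_speed, q_sea, q_air,
                           rho_air=1.2, c_e=1.3e-3, rho_water=1000.0):
    """Evaporation rate in mm/day from bulk variables.

    wind_speed : wind speed at reference height (m/s)
    q_sea      : saturation specific humidity at the sea surface temperature (kg/kg)
    q_air      : specific humidity of air at reference height (kg/kg)
    """
    flux = rho_air * c_e * wind_speed * (q_sea - q_air)   # kg m^-2 s^-1
    return flux / rho_water * 1000.0 * 86400.0            # water depth, mm/day

# Illustrative monsoon-like values give roughly 7.5 mm/day.
print(evaporation_mm_per_day(wind_speed=8.0, q_sea=0.025, q_air=0.018))
```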

  2. Estimation of leak rate through circumferential cracks in pipes in nuclear power plants

    Directory of Open Access Journals (Sweden)

    Jai Hak Park

    2015-04-01

    The leak before break (LBB) concept is widely used in designing pipe lines in nuclear power plants. According to the concept, the amount of leaking liquid from a pipe should be more than the minimum detectable leak rate of a leak detection system before catastrophic failure occurs. Therefore, accurate estimation of the leak rate is important to evaluate the validity of the LBB concept in pipe line design. In this paper, a program was developed to estimate the leak rate through circumferential cracks in pipes in nuclear power plants using the Henry–Fauske flow model and the modified Henry–Fauske flow model. By using the developed program, the leak rate was calculated for a circumferential crack in a sample pipe, and the effect of the flow model on the leak rate was examined. Treating the crack morphology parameters as random variables, the statistical behavior of the leak rate was also examined. As a result, it was found that the crack morphology parameters have a strong effect on the leak rate and that the statistical behavior of the leak rate can be simulated using normally distributed crack morphology parameters.
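
    The statistical part of the study above (crack morphology parameters treated as random variables) can be illustrated with a small Monte Carlo sketch. Note that a generic single-phase orifice relation stands in for the Henry–Fauske two-phase model, and every numerical value below is an assumption made only for the example.

```python
import numpy as np

# Monte Carlo propagation of normally distributed crack morphology parameters
# into a leak-rate distribution, using a simple orifice-type relation
# m_dot = Cd * A * sqrt(2 * rho * dP) in place of the Henry-Fauske model.
rng = np.random.default_rng(0)
n = 10_000

dp = 15.0e6                                       # pressure differential (Pa), illustrative
rho = 740.0                                       # upstream fluid density (kg/m^3), illustrative
crack_area = rng.normal(3.0e-6, 0.3e-6, n)        # crack opening area (m^2)
roughness = rng.normal(40e-6, 8e-6, n)            # surface roughness (m)

# Discharge coefficient degraded by roughness (illustrative morphology effect).
cd = np.clip(0.95 - 2.0e3 * roughness, 0.3, 0.95)
leak = cd * crack_area * np.sqrt(2.0 * rho * dp)  # mass leak rate (kg/s)

print("mean leak rate %.3f kg/s, 5th-95th percentile: %.3f-%.3f kg/s"
      % (leak.mean(), *np.percentile(leak, [5, 95])))
```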

  3. Estimation of evapotranspiration rate in irrigated lands using stable isotopes

    Science.gov (United States)

    Umirzakov, Gulomjon; Windhorst, David; Forkutsa, Irina; Brauer, Lutz; Frede, Hans-Georg

    2013-04-01

    Agriculture in the Aral Sea basin is the main consumer of water resources, and under current agricultural management practices inefficient water use causes huge losses of freshwater resources. There is therefore great potential to save water and reach a more efficient water use in irrigated areas, and research is required to reveal the mechanisms of hydrological fluxes in irrigated areas. This paper focuses on the estimation of evapotranspiration, which is one of the crucial components in the water balance of irrigated lands. Our main objective is to estimate the rate of evapotranspiration on irrigated lands and to partition evapotranspiration into evaporation and transpiration using stable isotope measurements. Experiments were conducted in irrigated areas with two different soil types (sandy and sandy loam) in the Ferghana Valley (Uzbekistan). Soil samples were collected during the vegetation period. The soil water from these samples was extracted via a cryogenic extraction method and analyzed for the isotopic ratios of the water isotopes (2H and 18O) using a laser spectroscopy method (DLT 100, Los Gatos USA). Evapotranspiration rates were estimated with the isotope mass balance method. The evapotranspiration results obtained using the isotope mass balance method are compared with the results of a Catchment Modelling Framework 1D model applied in the same area over the same period.

  4. Dose rate estimation of the Tohoku hynobiid salamander, Hynobius lichenatus, in Fukushima.

    Science.gov (United States)

    Fuma, Shoichi; Ihara, Sadao; Kawaguchi, Isao; Ishikawa, Takahiro; Watanabe, Yoshito; Kubota, Yoshihisa; Sato, Youji; Takahashi, Hiroyuki; Aono, Tatsuo; Ishii, Nobuyoshi; Soeda, Haruhi; Matsui, Kumi; Une, Yumi; Minamiya, Yukio; Yoshida, Satoshi

    2015-05-01

    The radiological risks to the Tohoku hynobiid salamanders (class Amphibia), Hynobius lichenatus due to the Fukushima Dai-ichi Nuclear Power Plant accident were assessed in Fukushima Prefecture, including evacuation areas. Aquatic egg clutches (n = 1 for each sampling date and site; n = 4 in total), overwintering larvae (n = 1-5 for each sampling date and site; n = 17 in total), and terrestrial juveniles or adults (n = 1 or 3 for each sampling date and site; n = 12 in total) of H. lichenatus were collected from the end of April 2011 to April 2013. Environmental media such as litter (n = 1-5 for each sampling date and site; n = 30 in total), soil (n = 1-8 for each sampling date and site; n = 31 in total), water (n = 1 for each sampling date and site; n = 17 in total), and sediment (n = 1 for each sampling date and site; n = 17 in total) were also collected. Activity concentrations of (134)Cs + (137)Cs were 1.9-2800, 0.13-320, and 0.51-220 kBq (dry kg) (-1) in the litter, soil, and sediment samples, respectively, and were 0.31-220 and <0.29-40 kBq (wet kg)(-1) in the adult and larval salamanders, respectively. External and internal absorbed dose rates to H. lichenatus were calculated from these activity concentration data, using the ERICA Assessment Tool methodology. External dose rates were also measured in situ with glass dosimeters. There was agreement within a factor of 2 between the calculated and measured external dose rates. In the most severely contaminated habitat of this salamander, a northern part of Abukuma Mountains, the highest total dose rates were estimated to be 50 and 15 μGy h(-1) for the adults and overwintering larvae, respectively. Growth and survival of H. lichenatus was not affected at a dose rate of up to 490 μGy h(-1) in the previous laboratory chronic gamma-irradiation experiment, and thus growth and survival of this salamander would not be affected, even in the most severely contaminated habitat in Fukushima Prefecture. However, further

  5. Estimated glomerular filtration rate in patients with type 2 diabetes mellitus

    Directory of Open Access Journals (Sweden)

    Paula Caitano Fontela

    2014-12-01

    Objective: to estimate the glomerular filtration rate using the Cockcroft-Gault (CG), Modification of Diet in Renal Disease (MDRD), and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations, and serum creatinine, in the screening of reduced renal function in patients with type 2 diabetes (T2DM) enrolled in the Family Health Strategy (ESF), a Brazilian federal health-care program. Methods: a cross-sectional descriptive and analytical study was conducted. The protocol consisted of sociodemographics, physical examination and biochemical tests. Renal function was analyzed through serum creatinine and the glomerular filtration rate (GFR) estimated according to the CG, MDRD and CKD-EPI equations, available on the websites of the Brazilian Nephrology Society (SBN) and the NKF. Results: 146 patients aged 60.9±8.9 years were evaluated; 64.4% were women. The prevalence of serum creatinine >1.2 mg/dL was 18.5% and GFR <60 mL/min/1.73m2 totaled 25.3, 36.3 and 34.2% when evaluated by the CG, MDRD and CKD-EPI equations, respectively. Diabetic patients with reduced renal function were older, had a longer time since T2DM diagnosis, higher systolic blood pressure and higher levels of fasting glucose, compared to diabetics with normal renal function. Creatinine showed a strong negative correlation with the glomerular filtration rate estimated using the CG, MDRD and CKD-EPI equations (-0.64, -0.87 and -0.89, respectively). Conclusion: the prevalence of individuals with reduced renal function based on serum creatinine was lower, reinforcing the need to follow the recommendations of the SBN and the National Kidney Disease Education Program (NKDEP) in estimating the value of the glomerular filtration rate as a complement to the results of serum creatinine to better assess the renal function of patients.
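
    The three estimating equations named above are well documented; a minimal sketch of them follows, with serum creatinine in mg/dL, age in years and weight in kg. The example patient values are illustrative, and the study's own implementation may include unit conversions or calibration details not shown here.

```python
# Sketch of the CKD-EPI 2009, IDMS-traceable 4-variable MDRD, and Cockcroft-Gault
# equations. CKD-EPI and MDRD return mL/min/1.73 m^2; Cockcroft-Gault returns mL/min.
def ckd_epi_2009(scr, age, female, black=False):
    kappa, alpha = (0.7, -0.329) if female else (0.9, -0.411)
    gfr = 141 * min(scr / kappa, 1) ** alpha * max(scr / kappa, 1) ** -1.209 * 0.993 ** age
    return gfr * (1.018 if female else 1.0) * (1.159 if black else 1.0)

def mdrd(scr, age, female, black=False):
    gfr = 175 * scr ** -1.154 * age ** -0.203
    return gfr * (0.742 if female else 1.0) * (1.212 if black else 1.0)

def cockcroft_gault(scr, age, female, weight_kg):
    crcl = (140 - age) * weight_kg / (72 * scr)        # creatinine clearance, mL/min
    return crcl * (0.85 if female else 1.0)

scr, age, female, weight = 1.1, 65, True, 70.0         # illustrative patient
print("CKD-EPI: %.0f  MDRD: %.0f  (mL/min/1.73 m^2)   CG: %.0f mL/min"
      % (ckd_epi_2009(scr, age, female), mdrd(scr, age, female),
         cockcroft_gault(scr, age, female, weight)))
```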

  6. Estimating fish swimming metrics and metabolic rates with accelerometers: the influence of sampling frequency.

    Science.gov (United States)

    Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J

    2018-06-21

    Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than those required to estimate ODBA, swimming speed and metabolic rate. While optimal sampling frequency probably varies among species due to tail-beat frequency and swimming style, this study provides a reference point with a medium body-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
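
    A minimal sketch of the sampling-frequency effect described above is given below: ODBA is computed as the summed absolute dynamic acceleration after removing a running-mean static component, and the same synthetic signal is progressively downsampled. The signal, window length and tail-beat frequency are assumptions, not bonefish data.

```python
import numpy as np

# ODBA from a tri-axial accelerometer signal: subtract a running-mean (static)
# component per axis, then sum the absolute dynamic components.
def odba(acc, fs, window_s=2.0):
    """Mean overall dynamic body acceleration (g) from an (n, 3) array."""
    w = max(1, int(window_s * fs))
    kernel = np.ones(w) / w
    static = np.vstack([np.convolve(acc[:, i], kernel, mode="same") for i in range(3)]).T
    return np.abs(acc - static).sum(axis=1).mean()

fs_full = 25                        # Hz, full-rate logger
t = np.arange(0, 60, 1.0 / fs_full)
tailbeat = 3.0                      # Hz, assumed tail-beat frequency
acc = np.column_stack([
    0.2 * np.sin(2 * np.pi * tailbeat * t),       # sway
    0.05 * np.sin(2 * np.pi * 2 * tailbeat * t),  # surge
    1.0 + 0.02 * np.random.randn(t.size),         # heave plus gravity
])

for fs in (25, 5, 1):               # progressively coarser sampling
    step = fs_full // fs
    print(f"{fs:>2} Hz -> ODBA = {odba(acc[::step], fs):.3f} g")
```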

  7. Estimates of Annual Soil Loss Rates in the State of São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    Grasiela de Oliveira Rodrigues Medeiros

    ABSTRACT: Soil is a natural resource that has been affected by human pressures beyond its renewal capacity. For this reason, large agricultural areas that were productive have been abandoned due to soil degradation, mainly caused by the erosion process. The objective of this study was to apply the Universal Soil Loss Equation to generate more recent estimates of soil loss rates for the state of São Paulo using a database with information at medium resolution (30 m). The results showed that many areas of the state have high (critical) levels of soil degradation due to the predominance of consolidated human activities, especially sugarcane cultivation and pasture use. The average estimated rate of soil loss is 30 Mg ha-1 yr-1, and 59 % of the area of the state (except for water bodies and urban areas) had estimated rates above 12 Mg ha-1 yr-1, considered as the average tolerance limit in the literature. The average rates of soil loss in areas with annual agricultural crops, semi-perennial agricultural crops (sugarcane), and permanent agricultural crops were 118, 78, and 38 Mg ha-1 yr-1, respectively. The state of São Paulo requires attention to conservation of soil resources, since most soils led to estimates beyond the tolerance limit.
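
    Since the study applies the Universal Soil Loss Equation, a minimal sketch of that equation, A = R * K * LS * C * P, is shown below. The factor values are illustrative placeholders, not the São Paulo database values.

```python
# Universal Soil Loss Equation: annual soil loss A = R * K * LS * C * P.
def usle(R, K, LS, C, P):
    """Annual soil loss A (Mg ha^-1 yr^-1).

    R  : rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
    K  : soil erodibility (Mg h MJ^-1 mm^-1)
    LS : slope length and steepness factor (dimensionless)
    C  : cover-management factor (dimensionless)
    P  : support practice factor (dimensionless)
    """
    return R * K * LS * C * P

# Sugarcane on a moderately erodible soil (illustrative factor values):
print(usle(R=6700, K=0.025, LS=1.2, C=0.30, P=1.0))   # roughly 60 Mg ha^-1 yr^-1
```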

  8. A comparison of small-area hospitalisation rates, estimated morbidity and hospital access.

    Science.gov (United States)

    Shulman, H; Birkin, M; Clarke, G P

    2015-11-01

    Published data on hospitalisation rates tend to reveal marked spatial variations within a city or region. Such variations may simply reflect corresponding variations in need at the small-area level. However, they might also be a consequence of poorer accessibility to medical facilities for certain communities within the region. To help answer this question it is important to compare these variable hospitalisation rates with small-area estimates of need. This paper first maps hospitalisation rates at the small-area level across the region of Yorkshire in the UK to show the spatial variations present. Then the Health Survey of England is used to explore the characteristics of persons with heart disease, using chi-square and logistic regression analysis. Using the most significant variables from this analysis the authors build a spatial microsimulation model of morbidity for heart disease for the Yorkshire region. We then compare these estimates of need with the patterns of hospitalisation rates seen across the region.

  9. Estimating an exchange rate between the EQ-5D-3L and ASCOT.

    Science.gov (United States)

    Stevens, Katherine; Brazier, John; Rowen, Donna

    2018-06-01

    The aim was to estimate an exchange rate between EQ-5D-3L and the Adult Social Care Outcome Tool (ASCOT) using preference-based mapping via common time trade-off (TTO) valuations. EQ-5D and ASCOT are useful for examining cost-effectiveness within the health and social care sectors, respectively, but there is a policy need to understand overall benefits and compare across sectors to assess relative value for money. Standard statistical mapping is unsuitable since it relies on conceptual overlap of the measures but EQ-5D and ASCOT have different conceptualisations of quality of life. We use a preference-based mapping approach to estimate the exchange rate using common TTO valuations for both measures. A sample of health states from each measure was valued using TTO by 200 members of the UK adult general population. Regression analyses are used to generate separate equations between EQ-5D-3L and ASCOT values using their original value set and TTO values elicited here. These are solved as simultaneous equations to estimate the relationship between EQ-5D-3L and ASCOT. The relationship for moving from ASCOT to EQ-5D-3L is a linear transformation with an intercept of -0.0488 and gradient of 0.978. This enables QALY gains generated by ASCOT and EQ-5D to be compared across different interventions. This paper estimated an exchange rate between ASCOT and EQ-5D-3L using a preference-based mapping approach that does not compromise the descriptive systems of the two measures. This contributes to the development of preference-based mapping through the use of TTO as the common metric used to estimate the exchange rate between measures.
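
    The preference-based mapping reduces to simple algebra once the two TTO regressions are in hand: equate TTO = a_e + b_e * EQ5D with TTO = a_a + b_a * ASCOT and solve for EQ5D in terms of ASCOT. The coefficients in the sketch below are assumptions chosen only to illustrate the algebra; they roughly reproduce the reported intercept and gradient but are not the fitted regressions from the paper.

```python
# Solve the pair of common-TTO regressions as simultaneous equations to obtain the
# linear "exchange rate" EQ5D = intercept + gradient * ASCOT.
def exchange_rate(a_e, b_e, a_a, b_a):
    """Return (intercept, gradient) of EQ5D expressed in terms of ASCOT."""
    gradient = b_a / b_e
    intercept = (a_a - a_e) / b_e
    return intercept, gradient

# Illustrative regression coefficients (not the paper's fitted values):
print(exchange_rate(a_e=0.05, b_e=0.90, a_a=0.006, b_a=0.88))
```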

  10. Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model

    International Nuclear Information System (INIS)

    Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.

    2012-01-01

    The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications including high speed milling using Terfenol-D actuators. There exist a variety of frameworks for characterizing rate-dependent hysteresis including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.

  11. Effects of systematic sampling on satellite estimates of deforestation rates

    International Nuclear Information System (INIS)

    Steininger, M K; Godoy, F; Harper, G

    2009-01-01

    Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
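
    The sampling-error comparison described above can be mimicked on a synthetic map: draw a systematic sample of cells, scale the sample mean up to a total, and compare the confidence interval with the error against the full-coverage truth. The random field, spacings and the simple random sampling style standard error are all assumptions of this sketch, not the FRA design or the Amazonian data.

```python
import numpy as np

# Systematic sampling of a synthetic deforestation "map" and the resulting
# confidence interval versus the actual error against the full-coverage total.
rng = np.random.default_rng(7)
truth = rng.gamma(shape=0.3, scale=10.0, size=(600, 600))   # deforested ha per cell

def systematic_estimate(grid, spacing):
    sample = grid[::spacing, ::spacing]              # one sample unit per block
    n = sample.size
    mean, sd = sample.mean(), sample.std(ddof=1)
    total_hat = mean * grid.size                     # scale up to the full area
    se_total = sd / np.sqrt(n) * grid.size           # SRS-style standard error
    return total_hat, 1.96 * se_total

true_total = truth.sum()
for spacing in (10, 20, 40):
    est, ci = systematic_estimate(truth, spacing)
    print(f"spacing {spacing:>2}: estimate {est:10.0f} +- {ci:9.0f}, "
          f"actual error {100 * abs(est - true_total) / true_total:4.1f}%")
```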

  12. Using administrative data to estimate graduation rates: Challenges, Proposed solutions and their pitfalls.

    Directory of Open Access Journals (Sweden)

    Joydeep Roy

    2008-06-01

    In recent years there has been a renewed interest in understanding the levels and trends in high school graduation in the U.S. A large and influential literature has argued that the “true” high school graduation rate remains at an unsatisfactory level, and that the graduation rates for minorities (Blacks and Hispanics) are alarmingly low. In this paper we take a closer look at the different measures of high school graduation which have recently been proposed and which yield such low estimates of graduation rates. We argue that the nature of the variables in the Common Core of Data, the dataset maintained by the U.S. Department of Education that is the main source for all of the new measures, requires caution in calculating graduation rates, and that the adjustments that have been proposed often impart significant downward bias to the estimates.

  13. A combined telemetry - tag return approach to estimate fishing and natural mortality rates of an estuarine fish

    Science.gov (United States)

    Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.

    2009-01-01

    A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.

  14. Estimation of permafrost thawing rates in a sub-arctic catchment using recession flow analysis

    Directory of Open Access Journals (Sweden)

    S. W. Lyon

    2009-05-01

    Permafrost thawing is likely to change the flow pathways taken by water as it moves through arctic and sub-arctic landscapes. The location and distribution of these pathways directly influence the carbon and other biogeochemical cycling in northern latitude catchments. While permafrost thawing due to climate change has been observed in the arctic and sub-arctic, direct observations of permafrost depth are difficult to perform at scales larger than a local scale. Using recession flow analysis, it may be possible to detect and estimate the rate of permafrost thawing based on a long-term streamflow record. We demonstrate the application of this approach to the sub-arctic Abiskojokken catchment in northern Sweden. Based on recession flow analysis, we estimate that permafrost in this catchment may be thawing at an average rate of about 0.9 cm/yr during the past 90 years. This estimated thawing rate is consistent with direct observations of permafrost thawing rates, ranging from 0.7 to 1.3 cm/yr over the past 30 years in the region.

  15. Commercial Discount Rate Estimation for Efficiency Standards Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-04-13

    Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards is a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
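
    The role of the discount rate in the LCC calculation can be seen from a minimal present-value sketch; the first-cost increment, annual savings, lifetime and candidate rates below are illustrative assumptions, not DOE rulemaking inputs.

```python
# Present value of a stream of annual energy cost savings under different discount
# rates, compared against an assumed first-cost increase of the efficient unit.
def present_value(annual_saving, years, rate):
    return sum(annual_saving / (1.0 + rate) ** t for t in range(1, years + 1))

first_cost_increase = 400.0        # extra purchase cost ($), illustrative
annual_saving = 60.0               # annual energy cost savings ($), illustrative
lifetime = 15                      # equipment lifetime (years), illustrative

for rate in (0.03, 0.05, 0.07, 0.10):
    npv = present_value(annual_saving, lifetime, rate) - first_cost_increase
    print(f"discount rate {rate:.0%}: LCC savings = ${npv:7.2f}")
```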

  16. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  17. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  18. Estimation of Anaerobic Debromination Rate Constants of PBDE Pathways Using an Anaerobic Dehalogenation Model.

    Science.gov (United States)

    Karakas, Filiz; Imamoglu, Ipek

    2017-04-01

    This study aims to estimate anaerobic debromination rate constants (k_m) of PBDE pathways using previously reported laboratory soil data. The k_m values of pathways are estimated by modifying a previously developed model, referred to as the Anaerobic Dehalogenation Model. Debromination activities published in the literature in terms of bromine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The range of estimated k_m values is between 0.0003 and 0.0241 d^-1. The median and maximum of the k_m values are found to be comparable to the few available biologically confirmed rate constants published in the literature. The estimated k_m values can be used as input to numerical fate and transport models for a better and more detailed investigation of the fate of individual PBDEs in contaminated sediments. Various remediation scenarios such as monitored natural attenuation or bioremediation with bioaugmentation can be handled in a more quantitative manner with the help of the k_m values estimated in this study.
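
    As a hedged illustration of what a first-order debromination rate constant k_m represents, the sketch below fits C(t) = C0 * exp(-k_m * t) to a synthetic congener time series by linear regression on log concentrations; the data points are made up, not the laboratory soil data used in the study.

```python
import numpy as np

# First-order kinetics: ln C(t) = ln C0 - k_m * t, so a straight-line fit of
# ln C against time gives -k_m as the slope.
t_days = np.array([0.0, 30.0, 60.0, 120.0, 180.0])
conc = np.array([100.0, 65.0, 45.0, 21.0, 9.5])      # e.g. ng/g, synthetic values

slope, intercept = np.polyfit(t_days, np.log(conc), 1)
k_m = -slope
print("k_m = %.4f per day, half-life = %.0f days" % (k_m, np.log(2) / k_m))
```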

  19. Estimation of physical work load by statistical analysis of the heart rate in a conveyor-belt worker.

    Science.gov (United States)

    Kontosic, I; Vukelić, M; Pancić, M; Kunisek, J

    1994-12-01

    Physical work load was estimated in a female conveyor-belt worker in a bottling plant. Estimation was based on continuous measurement and on calculation of average heart rate values in three-minute and one-hour periods and during the total measuring period. The thermal component of the heart rate was calculated by means of the corrected effective temperature, for the one-hour periods. The average heart rate at rest was also determined. The work component of the heart rate was calculated by subtraction of the resting heart rate and the heart rate measured at 50 W, using a regression equation. The average estimated gross energy expenditure during the work was 9.6 +/- 1.3 kJ/min corresponding to the category of light industrial work. The average estimated oxygen uptake was 0.42 +/- 0.06 L/min. The average performed mechanical work was 12.2 +/- 4.2 W, i.e. the energy expenditure was 8.3 +/- 1.5%.

  20. Optimization of hierarchical 3DRS motion estimators for picture rate conversion

    NARCIS (Netherlands)

    Heinrich, A.; Bartels, C.L.L.; Vleuten, van der R.J.; Cordes, C.N.; Haan, de G.

    2010-01-01

    There is a continuous pressure to lower the implementation complexity and improve the quality of motion-compensated picture rate conversion methods. Since the concept of hierarchy can be advantageously applied to many motion estimation methods, we have extended and improved the current

  1. Estimation of the prevalence and rate of acute transfusion reactions occurring in Windhoek, Namibia

    Science.gov (United States)

    Meza, Benjamin P.L.; Lohrke, Britta; Wilkinson, Robert; Pitman, John P.; Shiraishi, Ray W.; Bock, Naomi; Lowrance, David W.; Kuehnert, Matthew J.; Mataranyika, Mary; Basavaraju, Sridhar V.

    2014-01-01

    Background Acute transfusion reactions are probably common in sub-Saharan Africa, but transfusion reaction surveillance systems have not been widely established. In 2008, the Blood Transfusion Service of Namibia implemented a national acute transfusion reaction surveillance system, but substantial under-reporting was suspected. We estimated the actual prevalence and rate of acute transfusion reactions occurring in Windhoek, Namibia. Methods The percentage of transfusion events resulting in a reported acute transfusion reaction was calculated. Actual percentages and rates of acute transfusion reactions per 1,000 transfused units were estimated by reviewing patients’ records from six hospitals, which transfuse >99% of all blood in Windhoek. Patients’ records for 1,162 transfusion events occurring between 1st January – 31st December 2011 were randomly selected. Clinical and demographic information were abstracted and Centers for Disease Control and Prevention National Healthcare Safety Network criteria were applied to categorize acute transfusion reactions. Results From January 1 – December 31, 2011, there were 3,697 transfusion events (involving 10,338 blood units) in the selected hospitals. Eight (0.2%) acute transfusion reactions were reported to the surveillance system. Of the 1,162 transfusion events selected, medical records for 785 transfusion events were analysed, and 28 acute transfusion reactions were detected, of which only one had also been reported to the surveillance system. An estimated 3.4% (95% confidence interval [CI]: 2.3–4.4) of transfusion events in Windhoek resulted in an acute transfusion reaction, with an estimated rate of 11.5 (95% CI: 7.6–14.5) acute transfusion reactions per 1,000 transfused units. Conclusion The estimated actual rate of acute transfusion reactions is higher than the rate reported to the national haemovigilance system. Improved surveillance and interventions to reduce transfusion-related morbidity and mortality

  2. In search of thermogenic methane in groundwater in the Netherlands, with emphasis on the location of a historic gas well blowout

    Science.gov (United States)

    Schout, G.; Griffioen, J.; Hassanizadeh, S. M.; Hartog, N.

    2017-12-01

    Similar to the US, the Netherlands has a long history of oil & gas production, with around 2500 onshore hydrocarbon wells drilled since the late 1930s. While conventional reserves are diminishing, a governmental moratorium was put in place on shale gas exploration and production until 2023, in part due to concerns about its effects on groundwater quality. To investigate the industry's historic and potential future impact on groundwater quality in the country, a study was carried out to assess (i) baseline methane concentrations and their origin and (ii) the natural connectivity of deeper gas-bearing layers with the shallower groundwater systems. Through data mining, a dataset consisting of 12,200 groundwater analyses with methane concentrations was assembled. Furthermore, 25 additional samples were collected at targeted locations and analysed for dissolved gas molecular and isotopic composition. Methane concentrations are positively skewed, with median, mean and maximum concentrations of 0.28, 2.17 and 120 mg/L, respectively. No correlation between methane concentrations and distance to hydrocarbon wells or faults is observed. In general, concentrations cannot be readily explained by factors such as the depth, geographic location, host formation and depositional environment. Thermogenic methane was first encountered at several hundred meters depth, below thick successions of marine Paleogene and Neogene clays that are present throughout the country and impede vertical flow. All methane encountered above these formations was found to be biogenic in origin, with one notable exception - a sample taken at the site of a catastrophic gas well blowout that occurred in 1965 near the village of Sleen. Combined, these findings suggest that thermogenic methane does not naturally occur in Dutch shallow groundwater and its presence can be used as an indicator of anthropogenic gas leakage. The unique Sleen blowout site was selected for a detailed investigation of the long-term effects of

  3. A quasi-independence model to estimate failure rates

    International Nuclear Information System (INIS)

    Colombo, A.G.

    1988-01-01

    The use of a quasi-independence model to estimate failure rates is investigated. Gate valves of nuclear plants are considered, and two qualitative covariates are taken into account: plant location and reactor system. Independence between the two covariates and an exponential failure model are assumed. The failure rate of the components of a given system and plant is assumed to be a constant, but it may vary from one system to another and from one plant to another. This leads to the analysis of a contingency table. A particular feature of the model is the different operating time of the components in the various cells which can also be equal to zero. The concept of independence of the covariates is then replaced by that of quasi-independence. The latter definition, however, is used in a broader sense than usual. Suitable statistical tests are discussed and a numerical example illustrates the use of the method. (author)
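
    One common way to operationalize the quasi-independence idea described above is a multiplicative Poisson model with operating time as exposure, fitted by alternating closed-form updates. The sketch below uses made-up plant-by-system counts and hours (including empty cells), and is only an assumed stand-in for the author's exact formulation.

```python
import numpy as np

# Multiplicative failure-rate model: failures_ij ~ Poisson(alpha_i * beta_j * T_ij),
# where i indexes plants, j indexes systems and T_ij is operating time (zeros allowed).
# Alternating maximum-likelihood updates: alpha_i = sum_j n_ij / sum_j beta_j T_ij, etc.
failures = np.array([[3, 1, 0],
                     [5, 2, 1],
                     [0, 0, 0]])                 # plant x system failure counts (made up)
T = np.array([[1.2e5, 0.8e5, 0.0],
              [2.0e5, 1.0e5, 0.5e5],
              [0.0,   0.0,   0.0]])              # operating hours, with empty cells

alpha = np.ones(failures.shape[0])               # plant effects
beta = np.ones(failures.shape[1])                # system effects
for _ in range(200):
    denom_a = T @ beta
    alpha = np.where(denom_a > 0, failures.sum(axis=1) / np.maximum(denom_a, 1e-30), 0.0)
    denom_b = T.T @ alpha
    beta = np.where(denom_b > 0, failures.sum(axis=0) / np.maximum(denom_b, 1e-30), 0.0)

rate = np.outer(alpha, beta)                     # estimated failure rate per hour
print(np.round(rate * 1e6, 2))                   # failures per million operating hours
```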

  4. Estimating glomerular filtration rate in a population-based study

    Directory of Open Access Journals (Sweden)

    Anoop Shankar

    2010-07-01

    Background: Glomerular filtration rate (GFR)-estimating equations are used to determine the prevalence of chronic kidney disease (CKD) in population-based studies. However, it has been suggested that since the commonly used GFR equations were originally developed from samples of patients with CKD, they underestimate GFR in healthy populations. Few studies have made side-by-side comparisons of the effect of various estimating equations on the prevalence estimates of CKD in a general population sample. Patients and methods: We examined a population-based sample comprising adults from Wisconsin (age, 43–86 years; 56% women). We compared the prevalence of CKD, defined as a GFR of <60 mL/min per 1.73 m2 estimated from serum creatinine, by applying various commonly used equations including the Modification of Diet in Renal Disease (MDRD) equation, the Cockcroft–Gault (CG) equation, and the Mayo equation. We compared the performance of these equations against the CKD definition of cystatin C >1.23 mg/L. Results: We found that the prevalence of CKD varied widely among different GFR equations. Although the prevalence of CKD was 17.2% with the MDRD equation and 16.5% with the CG equation, it was only 4.8% with the Mayo equation. Only 24% of those identified to have GFR in the range of 50–59 mL/min per 1

  5. Accuracy Rates of Sex Estimation by Forensic Anthropologists through Comparison with DNA Typing Results in Forensic Casework.

    Science.gov (United States)

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2016-09-01

    A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases.

  6. ESTIMATION OF THE WANDA GLACIER (SOUTH SHETLANDS) SEDIMENT EROSION RATE USING NUMERICAL MODELLING

    Directory of Open Access Journals (Sweden)

    Kátia Kellem Rosa

    2013-09-01

    Glacial sediment yield results from glacial erosion and is influenced by several factors, including the glacial retreat rate, ice flow velocity and thermal regime. This paper estimates the contemporary subglacial erosion rate and sediment yield of Wanda Glacier (King George Island, South Shetlands), a small temperate glacier that has been retreating for the last decades. We also examine basal sediment evacuation mechanisms by runoff and analyze the glacial erosion processes occurring during subglacial transport. The glacial erosion rate at Wanda Glacier, estimated using a numerical model that considers the sediment evacuated to outlet streams, ice flow velocity, ice thickness and glacier area, is 1.1 ton m yr-1.

  7. A CONSISTENT ESTIMATE FOR THE IMPACT OF SINGAPORE'S EXCHANGE RATE ON COMPETITIVENESS

    OpenAIRE

    JANG PING THIA

    2010-01-01

    Services form a larger part of the Singapore economy. However, it is difficult to analyze the exchange rate impact on services due to the lack of price data. Regression of output or export on exchange rate, while highly intuitive, is likely to suffer from the endogeneity problem since Singapore's exchange rate is used as a counter-cyclical policy tool. This results in inconsistent estimates. I propose a novel approach to overcome these limitations by using Hong Kong as a control for Singapore...

  8. Aniseikonia quantification: error rate of rule of thumb estimation.

    Science.gov (United States)

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation is disturbingly large. This problem is greatly magnified by the time and effort and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  9. Delayed Development of Brain Abscesses Following Stent-Graft Placement in a Head and Neck Cancer Patient Presenting with Carotid Blowout Syndrome

    International Nuclear Information System (INIS)

    Oweis, Yaseen; Gemmete, Joseph J.; Chaudhary, Neeraj; Pandey, Aditya; Ansari, Sameer

    2011-01-01

    We describe the delayed development of intracranial abscesses following emergent treatment with a covered stent-graft for carotid blowout syndrome (CBS) in a patient with head and neck cancer. The patient presented with hemoptysis and frank arterial bleeding through the tracheostomy site. A self-expandable stent-graft was deployed across a small pseudoaneurysm arising from the right common carotid artery (RCCA) and resulted in immediate hemostasis. Three months later, the patient suffered a recurrent hemorrhage. CT of the neck demonstrated periluminal fluid around the caudal aspect of the stent-graft with intraluminal thrombus and a small pseudoaneurysm. Subsequently, the patient underwent a balloon test occlusion study and endovascular sacrifice of the RCCA and right internal carotid artery. MRI of the brain demonstrated at least four ring-enhancing lesions within the right cerebral hemisphere consistent with intracranial abscesses that resolved with broad-spectrum antibiotic coverage.

  10. A Model-Based Bayesian Estimation of the Rate of Evolution of VNTR Loci in Mycobacterium tuberculosis

    Science.gov (United States)

    Aandahl, R. Zachariah; Reyes, Josephine F.; Sisson, Scott A.; Tanaka, Mark M.

    2012-01-01

    Variable numbers of tandem repeats (VNTR) typing is widely used for studying the bacterial cause of tuberculosis. Knowledge of the rate of mutation of VNTR loci facilitates the study of the evolution and epidemiology of Mycobacterium tuberculosis. Previous studies have applied population genetic models to estimate the mutation rate, leading to estimates varying widely from around to per locus per year. Resolving this issue using more detailed models and statistical methods would lead to improved inference in the molecular epidemiology of tuberculosis. Here, we use a model-based approach that incorporates two alternative forms of a stepwise mutation process for VNTR evolution within an epidemiological model of disease transmission. Using this model in a Bayesian framework we estimate the mutation rate of VNTR in M. tuberculosis from four published data sets of VNTR profiles from Albania, Iran, Morocco and Venezuela. In the first variant, the mutation rate increases linearly with respect to repeat numbers (linear model); in the second, the mutation rate is constant across repeat numbers (constant model). We find that under the constant model, the mean mutation rate per locus is (95% CI: ,)and under the linear model, the mean mutation rate per locus per repeat unit is (95% CI: ,). These new estimates represent a high rate of mutation at VNTR loci compared to previous estimates. To compare the two models we use posterior predictive checks to ascertain which of the two models is better able to reproduce the observed data. From this procedure we find that the linear model performs better than the constant model. The general framework we use allows the possibility of extending the analysis to more complex models in the future. PMID:22761563

  11. Incorporation of radiometric tracers in peat and implications for estimating accumulation rates

    Energy Technology Data Exchange (ETDEWEB)

    Hansson, Sophia V., E-mail: sophia.hansson@emg.umu.se [Department of Ecology and Environmental Science, Umeå University, SE-901 87 Umeå (Sweden); Kaste, James M. [Geology Department, The College of William and Mary, Williamsburg, VA 23187 (United States); Olid, Carolina; Bindler, Richard [Department of Ecology and Environmental Science, Umeå University, SE-901 87 Umeå (Sweden)

    2014-09-15

    Accurate dating of peat accumulation is essential for quantitatively reconstructing past changes in atmospheric metal deposition and carbon burial. By analyzing fallout radionuclides {sup 210}Pb, {sup 137}Cs, {sup 241}Am, and {sup 7}Be, and total Pb and Hg in 5 cores from two Swedish peatlands we addressed the consequence of estimating accumulation rates due to downwashing of atmospherically supplied elements within peat. The detection of {sup 7}Be down to 18–20 cm for some cores, and the broad vertical distribution of {sup 241}Am without a well-defined peak, suggest some downward transport by percolating rainwater and smearing of atmospherically deposited elements in the uppermost peat layers. Application of the CRS age–depth model leads to unrealistic peat mass accumulation rates (400–600 g m{sup −2} yr{sup −1}), and inaccurate estimates of past Pb and Hg deposition rates and trends, based on comparisons to deposition monitoring data (forest moss biomonitoring and wet deposition). After applying a newly proposed IP-CRS model that assumes a potential downward transport of {sup 210}Pb through the uppermost peat layers, recent peat accumulation rates (200–300 g m{sup −2} yr{sup −1}) comparable to published values were obtained. Furthermore, the rates and temporal trends in Pb and Hg accumulation correspond more closely to monitoring data, although some off-set is still evident. We suggest that downwashing can be successfully traced using {sup 7}Be, and if this information is incorporated into age–depth models, better calibration of peat records with monitoring data and better quantitative estimates of peat accumulation and past deposition are possible, although more work is needed to characterize how downwashing may vary between seasons or years. - Highlights: • {sup 210}Pb, {sup 137}Cs, {sup 241}Am and {sup 7}Be, and tot-Pb and tot Hg were measured in 5 peat cores. • Two age–depth models were applied resulting in different accumulation rates
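
    For reference, the constant-rate-of-supply (CRS) calculation that the record evaluates can be sketched as follows: ages follow t(z) = ln(A(0)/A(z))/lambda and the mass accumulation rate follows r(z) = lambda * A(z)/C(z), where A(z) is the unsupported 210Pb inventory below depth z and C(z) the unsupported activity concentration. The profile below is synthetic, not the Swedish peat data, and the IP-CRS modification proposed in the paper is not reproduced here.

```python
import numpy as np

# Constant-rate-of-supply (CRS) age-depth and accumulation-rate calculation
# for a synthetic unsupported 210Pb profile.
LAMBDA = np.log(2) / 22.3                      # 210Pb decay constant (1/yr)

depth = np.arange(1, 21)                       # slice bottom depths (cm)
dry_mass = np.full(20, 0.05)                   # dry mass per slice (g/cm^2), synthetic
pb_unsupp = 500.0 * np.exp(-0.25 * depth)      # unsupported 210Pb (Bq/kg), synthetic

# Inventory per slice (Bq/cm^2); Bq/kg * g/cm^2 needs the 1e-3 kg/g factor.
slice_inv = pb_unsupp * dry_mass * 1e-3
total_inv = slice_inv.sum()
below = total_inv - np.cumsum(slice_inv)       # inventory below each slice bottom

ages = np.where(below > 0,
                np.log(total_inv / np.maximum(below, 1e-12)) / LAMBDA, np.inf)
mar = LAMBDA * below / (pb_unsupp * 1e-3)      # mass accumulation rate (g cm^-2 yr^-1)

for z, a, r in zip(depth[:6], ages[:6], mar[:6]):
    print(f"{z:2d} cm: age {a:5.1f} yr, accumulation {r * 1e4:6.1f} g m^-2 yr^-1")
```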

  12. Increasing fMRI sampling rate improves Granger causality estimates.

    Directory of Open Access Journals (Sweden)

    Fa-Hsuan Lin

    Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
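
    The effect of temporal downsampling on Granger causality can be reproduced in miniature with a coupled pair of autoregressive series and a single-lag F test, as sketched below; the coupling, noise and downsampling factors are assumptions, not InI fMRI data or the authors' analysis pipeline.

```python
import numpy as np

# Lag-1 Granger-causality F test on a synthetic coupled AR(1) pair, evaluated at
# progressively coarser sampling to mimic a longer repetition time.
rng = np.random.default_rng(3)

def simulate(n=20000, coupling=0.9, noise=1.0):
    x = np.zeros(n); y = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.8 * x[t - 1] + rng.normal(0, noise)
        y[t] = 0.5 * y[t - 1] + coupling * x[t - 1] + rng.normal(0, noise)
    return x, y

def granger_f(x, y, lag=1):
    """F statistic for 'x Granger-causes y' with a single lag."""
    Y = y[lag:]
    X_r = np.column_stack([np.ones(Y.size), y[:-lag]])              # restricted model
    X_u = np.column_stack([np.ones(Y.size), y[:-lag], x[:-lag]])    # unrestricted model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    dof = Y.size - X_u.shape[1]
    return (rss_r - rss_u) / (rss_u / dof)

x, y = simulate()
for factor in (1, 5, 20):                 # downsample as if the TR were longer
    print(f"downsample x{factor:>2}: F = {granger_f(x[::factor], y[::factor]):9.1f}")
```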

  13. Rating curve estimation using Envisat virtual stations on the main Orinoco river

    Directory of Open Access Journals (Sweden)

    Juan León

    2011-09-01

    Rating curve estimation (the height–discharge relation) at hydrometric stations representing cross-sections of a river is one of hydrometrics’ fundamental tasks, since it allows a river’s average daily flow at that particular section to be deduced. This information is fundamental in any attempt at hydrological modelling. However, the number of hydrological control stations monitoring large hydrological basins has been reduced worldwide. Space hydrology studies during the last five years have shown that satellite radar altimetry can densify the information available from hydrological monitoring networks through the introduction of so-called virtual stations, and that such information can be used jointly with in-situ measured flow records to estimate rating curves at these stations. This study presents the rating curves for 4 Envisat virtual stations located on the main stream of the Orinoco River. Virtual stations’ flows were estimated by using the Muskingum-Cunge 1D model. There was less than 1% error between measured and estimated flows. The methodology also led to deducing the average zero-flow depth; in this case, depths ranging from 11 to 20 meters were found along the 130 km of the Orinoco River represented by the virtual stations considered.
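
    A rating curve of the usual power-law form Q = a * (h - h0)^b can be fitted to paired stage and discharge values by log-linear regression with a grid search over the zero-flow datum h0, as in the hedged sketch below; the data pairs are invented, not Envisat altimetry or Orinoco gaugings, and the study itself derived flows from a Muskingum-Cunge model rather than this simple fit.

```python
import numpy as np

# Fit Q = a * (h - h0)^b: for each trial zero-flow datum h0, regress log Q on
# log(h - h0) and keep the datum with the smallest log-space residual sum of squares.
h = np.array([12.5, 14.0, 15.8, 17.2, 18.9, 20.5, 22.0])          # water level (m)
Q = np.array([8000., 13000., 21000., 29000., 41000., 54000., 68000.])  # discharge (m^3/s)

best = None
for h0 in np.arange(0.0, 12.0, 0.1):
    b, log_a = np.polyfit(np.log(h - h0), np.log(Q), 1)
    resid = np.log(Q) - (log_a + b * np.log(h - h0))
    sse = np.sum(resid ** 2)
    if best is None or sse < best[0]:
        best = (sse, h0, np.exp(log_a), b)

sse, h0, a, b = best
print(f"Q = {a:.1f} * (h - {h0:.1f})^{b:.2f}   (log-space SSE = {sse:.4f})")
```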

  14. Estimation of age-specific rates of reactivation and immune boosting of the varicella zoster virus

    Directory of Open Access Journals (Sweden)

    Isabella Marinelli

    2017-06-01

    Studies into the impact of vaccination against the varicella zoster virus (VZV) have increasingly focused on herpes zoster (HZ), which is believed to be increasing in vaccinated populations with decreasing infection pressure. This idea can be traced back to Hope-Simpson's hypothesis, in which a person's immune status determines the likelihood that he/she will develop HZ. Immunity decreases over time, and can be boosted by contact with a person experiencing varicella (exogenous boosting) or by a reactivation attempt of the virus (endogenous boosting). Here we use transmission models to estimate age-specific rates of reactivation and immune boosting, exogenous as well as endogenous, using zoster incidence data from the Netherlands (2002–2011, n = 7026). The boosting and reactivation rates are estimated with splines, enabling these quantities to be optimally informed by the data. The analyses show that models with high levels of exogenous boosting and estimated or zero endogenous boosting, a constant rate of loss of immunity, and a reactivation rate increasing with age (to more than 5% per year in the elderly) give the best fit to the data. Estimates of the rates of immune boosting and reactivation are strongly correlated. This has important implications, as these parameters determine the fraction of the population with waned immunity. We conclude that independent evidence on rates of immune boosting and reactivation in persons with waned immunity is needed to robustly predict the impact of varicella vaccination on the incidence of HZ.

  15. Modeling and estimating the jump risk of exchange rates: Applications to RMB

    Science.gov (United States)

    Wang, Yiming; Tong, Hanfei

    2008-11-01

    In this paper we propose a new type of continuous-time stochastic volatility model, SVDJ, for the spot exchange rate of RMB and other foreign currencies. In the model, we assume that the change of the exchange rate can be decomposed into two components. One is the normally small-scale innovation driven by the diffusion motion; the other is a large drop or rise engendered by the Poisson counting process. Furthermore, we develop a MCMC method to estimate our model. Empirical results indicate the significant existence of jumps in the exchange rate. Jump components explain a large proportion of the exchange rate change.

  16. Estimated sedimentation rate by radionuclide techniques at Lam Phra Phloeng dam, Northeastern of Thailand

    International Nuclear Information System (INIS)

    Sasimonton Moungsrijun; Kanitha Srisuksawad; Kosit Lorsirirat; Tuangrak Nantawisarakul

    2009-01-01

    The Lam Phra Phloeng dam is located in Nakhon Ratchasima province, northeastern Thailand. Since it was constructed in 1963, the dam has suffered a severe reduction of its water storage capacity, caused by deforestation for agricultural land in the upper catchment. Sediment cores were collected using a gravity corer. Sedimentation rates were estimated from the vertical distribution of unsupported Pb-210 in the sediment cores. Total Pb-210 was determined by measuring Po-210 activities. The Po-210 and Ra-226 activities, measured by alpha and gamma spectrometry, were used to determine the sedimentation rate. The sedimentation rate was estimated using the Constant Initial Concentration (CIC) model as 0.265 g cm-2 y-1 at the dam crest and 0.213 g cm-2 y-1 upstream (Author)

  17. estimated glomerular filtration rate and risk of survival in acute stroke

    African Journals Online (AJOL)

    2014-03-03

    Mar 3, 2014 ... ESTIMATED GLOMERULAR FILTRATION RATE AND RISK OF SURVIVAL IN ACUTE STROKE. E. I. Okaka, MBBS, FWACP, F. A. Imarhiagbe, MBChB, FMCP, F. E. Odiase, MBBS, FMCP, O. C. A. Okoye, MBBS, FWACP,. Department of Medicine, University of Benin Teaching Hospital, Benin City, Nigeria.

  18. On the estimate of the rate constant in the homogeneous dissolution model

    Czech Academy of Sciences Publication Activity Database

    Čupera, Jakub; Lánský, Petr

    2013-01-01

    Roč. 39, č. 10 (2013), s. 1555-1561 ISSN 0363-9045 Institutional support: RVO:67985823 Keywords: dissolution * estimation * rate constant Subject RIV: FR - Pharmacology; Medicinal Chemistry Impact factor: 2.006, year: 2013

  19. Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2015-01-01

    Full Text Available The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models in ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely using the collected maintenance data. Then, a likelihood function of all renewal events is derived based on their occurring probability functions, and the model parameters are calculated with the maximum likelihood function method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples—the filter element and the blowout preventer rubber core—the parameters of the distribution function of the initial failure time and the delay time for each example are estimated, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of the reasonable function inspection interval and will also provide some theoretical models to support the integrity management of equipment or systems.

  20. Simple approximation for estimating centerline gamma absorbed dose rates due to a continuous Gaussian plume

    International Nuclear Information System (INIS)

    Overcamp, T.J.; Fjeld, R.A.

    1987-01-01

    A simple approximation for estimating the centerline gamma absorbed dose rates due to a continuous Gaussian plume was developed. To simplify the integration of the dose integral, this approach makes use of the Gaussian cloud concentration distribution. The solution is expressed in terms of the I1 and I2 integrals which were developed for estimating long-term dose due to a sector-averaged Gaussian plume. Estimates of tissue absorbed dose rates for the new approach and for the uniform cloud model were compared to numerical integration of the dose integral over a Gaussian plume distribution
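
    A brief sketch of the ground-level centerline concentration of a continuous Gaussian plume, i.e., the cloud distribution the dose approximation integrates over; the dose integrals (I1, I2) themselves are not reproduced, and the power-law dispersion coefficients are purely illustrative.

    import numpy as np

    def centerline_ground_concentration(q, u, x, h, a_y=0.08, a_z=0.06, b=0.9):
        """Ground-level, centerline concentration (y = z = 0) of a continuous
        Gaussian plume released at effective height h with source strength q
        (Bq/s) and wind speed u (m/s), with reflection at the ground.
        Dispersion parameters use an illustrative power law sigma = a * x**b."""
        sigma_y = a_y * x**b
        sigma_z = a_z * x**b
        return (q / (np.pi * u * sigma_y * sigma_z)) * np.exp(-h**2 / (2 * sigma_z**2))

    print(centerline_ground_concentration(q=1.0e9, u=3.0, x=1000.0, h=50.0))  # Bq/m^3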

  1. Dynamic Data-Driven Prediction of Lean Blowout in a Swirl-Stabilized Combustor

    Directory of Open Access Journals (Sweden)

    Soumalya Sarkar

    2015-09-01

    Full Text Available This paper addresses dynamic data-driven prediction of lean blowout (LBO) phenomena in confined combustion processes, which are prevalent in many physical applications (e.g., land-based and aircraft gas-turbine engines). The underlying concept is built upon pattern classification and is validated for LBO prediction with time series of chemiluminescence sensor data from a laboratory-scale swirl-stabilized dump combustor. The proposed method of LBO prediction makes use of the theory of symbolic dynamics, where (finite-length) time series data are partitioned to produce symbol strings that, in turn, generate a special class of probabilistic finite state automata (PFSA). These PFSA, called D-Markov machines, have a deterministic algebraic structure and their states are represented by symbol blocks of length D or less, where D is a positive integer. The D-Markov machines are constructed in two steps: (i) state splitting, i.e., the states are split based on their information contents, and (ii) state merging, i.e., two or more states (of possibly different lengths) are merged together to form a new state without any significant loss of the embedded information. The modeling complexity (e.g., number of states) of a D-Markov machine model is observed to be drastically reduced as the combustor approaches LBO. An anomaly measure, based on Kullback-Leibler divergence, is constructed to predict the proximity of LBO. The problem of LBO prediction is posed in a pattern classification setting and the underlying algorithms have been tested on experimental data at different extents of fuel-air premixing and fuel/air ratio. It is shown that, over a wide range of fuel-air premixing, D-Markov machines with D > 1 perform better as predictors of LBO than those with D = 1.
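
    A compact sketch of the symbolic-dynamics idea for D = 1: partition a chemiluminescence-like time series into symbols, estimate the state-transition (PFSA) probabilities, and score the departure from a reference condition with a Kullback-Leibler-type divergence. This is a generic illustration on synthetic data, not the state splitting/merging D-Markov construction of the paper.

    import numpy as np

    def max_entropy_edges(signal, n_symbols=4):
        """Bin edges giving (approximately) equal-probability symbols."""
        return np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])

    def transition_matrix(symbols, n_symbols=4, eps=1e-6):
        """Estimate D = 1 PFSA state-transition probabilities from a symbol string."""
        counts = np.full((n_symbols, n_symbols), eps)
        for a, b in zip(symbols[:-1], symbols[1:]):
            counts[a, b] += 1.0
        return counts / counts.sum(axis=1, keepdims=True)

    def anomaly_measure(p_ref, p_test):
        """Row-averaged Kullback-Leibler divergence between transition matrices."""
        return float(np.mean(np.sum(p_test * np.log(p_test / p_ref), axis=1)))

    rng = np.random.default_rng(1)
    nominal = rng.normal(size=5000)                        # stable-combustion surrogate
    near_lbo = rng.normal(size=5000) * (1 + 0.5 * np.sin(np.linspace(0, 60, 5000)))

    edges = max_entropy_edges(nominal)                     # partition fixed on reference data
    p_nom = transition_matrix(np.digitize(nominal, edges))
    p_new = transition_matrix(np.digitize(near_lbo, edges))
    print(anomaly_measure(p_nom, p_new))                   # grows as dynamics drift from nominal

    In the actual method, the drop in modeling complexity and the growth of the divergence-based anomaly measure as the combustor approaches LBO are what drive the prediction.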

  2. Estimation of shutdown heat generation rates in GHARR-1 due to ...

    African Journals Online (AJOL)

    The decay power of fission products and the residual fission power generated after shutdown of the Ghana Research Reactor-1 (GHARR-1) by a reactivity insertion accident were estimated by solving the decay and residual heat equations. A Matlab program was developed to simulate the heat generation rates by fission product ...

  3. Diversity Dynamics in Nymphalidae Butterflies: Effect of Phylogenetic Uncertainty on Diversification Rate Shift Estimates

    Science.gov (United States)

    Peña, Carlos; Espeland, Marianne

    2015-01-01

    The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910

  4. Diversity dynamics in Nymphalidae butterflies: effect of phylogenetic uncertainty on diversification rate shift estimates.

    Directory of Open Access Journals (Sweden)

    Carlos Peña

    Full Text Available The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution.

  5. Use of Pyranometers to Estimate PV Module Degradation Rates in the Field: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Vignola, Frank; Peterson, Josh; Kessler, Rich; Mavromatakis, Fotis; Dooraghi, Mike; Sengupta, Manajit

    2016-08-01

    This paper describes a methodology that uses relative measurements to estimate the degradation rates of PV modules in the field. The importance of calibration and cleaning is illustrated. Using relative comparisons cuts in half the number of years of field measurements needed to determine degradation rates.
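
    A short sketch of the relative-comparison idea: regress the ratio of PV output to pyranometer irradiance against time and read the degradation rate from the trend. The data are synthetic, and the preprint's calibration and cleaning corrections are not modelled.

    import numpy as np

    def degradation_rate_per_year(t_years, pv_power_kw, irradiance_wm2):
        """Fit a linear trend to the performance ratio PV power / irradiance and
        return its slope as a fraction of the initial ratio per year."""
        ratio = pv_power_kw / irradiance_wm2
        slope, intercept = np.polyfit(t_years, ratio, 1)
        return slope / intercept          # relative change per year (negative = degradation)

    # Synthetic 5-year record with a true -0.8 %/yr degradation rate
    rng = np.random.default_rng(0)
    t = np.linspace(0, 5, 5 * 365)
    irr = 800 + 100 * rng.standard_normal(t.size)
    pv = 0.2 * irr * (1 - 0.008 * t) * (1 + 0.01 * rng.standard_normal(t.size))
    print(degradation_rate_per_year(t, pv, irr))     # ~ -0.008, i.e. -0.8 %/yr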

  6. Estimation of longitudinal force, lateral vehicle speed and yaw rate for four-wheel independent driven electric vehicles

    Science.gov (United States)

    Chen, Te; Xu, Xing; Chen, Long; Jiang, Haobing; Cai, Yingfeng; Li, Yong

    2018-02-01

    Accurate estimation of longitudinal force, lateral vehicle speed and yaw rate is of great significance to torque allocation and stability control for four-wheel independent driven electric vehicles (4WID-EVs). A fusion method is proposed to estimate the longitudinal force, lateral vehicle speed and yaw rate for 4WID-EVs. The electric driving wheel model (EDWM) is introduced into the longitudinal force estimation, the longitudinal force observer (LFO) is first designed based on an adaptive high-order sliding-mode observer (HSMO), and the convergence of the LFO is analyzed and proved. Based on the estimated longitudinal force, an estimation strategy is then presented in which the strong tracking filter (STF) is used to estimate lateral vehicle speed and yaw rate simultaneously. Finally, co-simulation via CarSim and Matlab/Simulink is carried out to demonstrate the effectiveness of the proposed method. The performance of the LFO in practice is verified by experiments on a chassis dynamometer bench.

  7. Mutation frequencies in male mice and the estimation of genetic hazards of radiation in men: (specific-locus mutations/dose-rate effect/doubling dose/risk estimation)

    International Nuclear Information System (INIS)

    Russell, W.L.; Kelly, E.M.

    1982-01-01

    Estimation of the genetic hazards of ionizing radiation in men is based largely on the frequency of transmitted specific-locus mutations induced in mouse spermatogonial stem cells at low radiation dose rates. The publication of new data on this subject has permitted a fresh review of all the information available. The data continue to show no discrepancy from the interpretation that, although mutation frequency decreases markedly as dose rate is decreased from 90 to 0.8 R/min (1 R = 2.6 × 10^-4 coulombs/kg), there seems to be no further change below 0.8 R/min over the range from that dose rate to 0.0007 R/min. Simple mathematical models are used to compute: (a) a maximum likelihood estimate of the induced mutation frequency at the low dose rates, and (b) a maximum likelihood estimate of the ratio of this to the mutation frequency at high dose rates in the range of 72 to 90 R/min. In the application of these results to the estimation of genetic hazards of radiation in man, the former value can be used to calculate a doubling dose - i.e., the dose of radiation that induces a mutation frequency equal to the spontaneous frequency. The doubling dose based on the low-dose-rate data compiled here is 110 R. The ratio of the mutation frequency at low dose rate to that at high dose rate is useful when it becomes necessary to extrapolate from experimental determinations, or from human data, at high dose rates to the expected risk at low dose rates. The ratio derived from the present analysis is 0.33.

  8. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
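
    A baseline sketch of time-delay estimation by cross-correlating the two microphone signals; this is the conventional full-rate approach rather than the paper's sparse room-impulse-response reconstruction via orthogonal clustering, and the impulsive source is synthetic.

    import numpy as np

    def tde_crosscorr(x1, x2, fs):
        """Estimate the delay (seconds) of x2 relative to x1 from the peak of
        their full cross-correlation."""
        corr = np.correlate(x2, x1, mode="full")
        lag = int(np.argmax(corr)) - (len(x1) - 1)
        return lag / fs

    # Synthetic impulsive source recorded by two microphones, 8 kHz sampling
    fs = 8000
    rng = np.random.default_rng(0)
    impulse = np.exp(-np.arange(200) / 20.0)           # decaying impulsive waveform
    x1 = np.zeros(4000); x1[1000:1200] = impulse
    x2 = np.zeros(4000); x2[1023:1223] = impulse       # arrives 23 samples (~2.9 ms) later
    x1 += 0.01 * rng.standard_normal(4000)
    x2 += 0.01 * rng.standard_normal(4000)
    print(tde_crosscorr(x1, x2, fs))                   # ~0.0029 s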

  9. Development of dose rate estimation system for FBR maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Iizawa, Katsuyuki [Japan Nuclear Cycle Development Inst., Tsuruga Head Office, International Cooperation and Technology Development Center, Tsuruga, Fukui (Japan); Takeuchi, Jun; Yoshikawa, Satoru [Hitachi Engineering Company, Ltd., Hitachi, Ibaraki (Japan); Urushihara, Hiroshi [Ibaraki Hitachi Information Service Co., Ltd., Omika, Ibaraki (Japan)

    2001-09-01

    During maintenance activities on the primary sodium cooling system of an FBR, personnel radiation exposure arises mainly from the presence of radioactive corrosion products (CP). A CP behavior analysis code, PSYCHE, and a radiation shielding calculation code, QAD-CG, have been developed and applied to investigate the possible reduction of radiation exposure of workers. In order to make these evaluation methods more accessible to plant engineers, the user interface of the codes has been improved and an integrated system, including visualization of the calculated gamma-ray radiation dose-rate map, has been developed. The system has been verified by evaluating the distribution of the radiation dose-rate within the Monju primary heat transport system cells from the estimated saturated CP deposition and distribution which would be present following about 20 cycles of full power operation. (author)

  10. Development of dose rate estimation system for FBR maintenance

    International Nuclear Information System (INIS)

    Iizawa, Katsuyuki; Takeuchi, Jun; Yoshikawa, Satoru; Urushihara, Hiroshi

    2001-01-01

    During maintenance activities on the primary sodium cooling system of an FBR, personnel radiation exposure arises mainly from the presence of radioactive corrosion products (CP). A CP behavior analysis code, PSYCHE, and a radiation shielding calculation code, QAD-CG, have been developed and applied to investigate the possible reduction of radiation exposure of workers. In order to make these evaluation methods more accessible to plant engineers, the user interface of the codes has been improved and an integrated system, including visualization of the calculated gamma-ray radiation dose-rate map, has been developed. The system has been verified by evaluating the distribution of the radiation dose-rate within the Monju primary heat transport system cells from the estimated saturated CP deposition and distribution which would be present following about 20 cycles of full power operation. (author)

  11. Estimating blue whale skin isotopic incorporation rates and baleen growth rates: Implications for assessing diet and movement patterns in mysticetes

    Science.gov (United States)

    Busquets-Vass, Geraldine; Newsome, Seth D.; Calambokidis, John; Serra-Valente, Gabriela; Jacobsen, Jeff K.; Aguíñiga-García, Sergio; Gendron, Diane

    2017-01-01

    Stable isotope analysis in mysticete skin and baleen plates has been repeatedly used to assess diet and movement patterns. Accurate interpretation of isotope data depends on understanding isotopic incorporation rates for metabolically active tissues and growth rates for metabolically inert tissues. The aim of this research was to estimate isotopic incorporation rates in blue whale skin and baleen growth rates by using natural gradients in baseline isotope values between oceanic regions. Nitrogen (δ15N) and carbon (δ13C) isotope values of blue whale skin and potential prey were analyzed from three foraging zones (Gulf of California, California Current System, and Costa Rica Dome) in the northeast Pacific from 1996–2015. We also measured δ15N and δ13C values along the lengths of baleen plates collected from six blue whales stranded in the 1980s and 2000s. Skin was separated into three strata: basale, externum, and sloughed skin. A mean (±SD) skin isotopic incorporation rate of 163±91 days was estimated by fitting a generalized additive model of the seasonal trend in δ15N values of skin strata collected in the Gulf of California and the California Current System. A mean (±SD) baleen growth rate of 15.5±2.2 cm y-1 was estimated by using seasonal oscillations in δ15N values from three whales. These oscillations also showed that individual whales have a high fidelity to distinct foraging zones in the northeast Pacific across years. The absence of oscillations in δ15N values of baleen sub-samples from three male whales suggests these individuals remained within a specific zone for several years prior to death. δ13C values of both whale tissues (skin and baleen) and potential prey were not distinct among foraging zones. Our results highlight the importance of considering tissue isotopic incorporation and growth rates when studying migratory mysticetes and provide new insights into the individual movement strategies of blue whales. PMID:28562625

  12. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

    Directory of Open Access Journals (Sweden)

    Eleanor S Devenish Nelson

    Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
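
    A minimal sketch of the resample-and-project idea: draw vital rates from distributions centred on their point estimates, build a Leslie matrix for each draw, and take the dominant eigenvalue as the population growth rate, reading a confidence interval off the resulting distribution. The two-age-class, red-fox-like vital rates and uncertainties below are illustrative placeholders, not the paper's values or its likelihood-based inference.

    import numpy as np

    def growth_rate(fert, surv):
        """Dominant eigenvalue (lambda) of a Leslie matrix with age-specific
        fertilities 'fert' and survival rates 'surv'."""
        n = len(fert)
        L = np.zeros((n, n))
        L[0, :] = fert
        L[np.arange(1, n), np.arange(n - 1)] = surv[:-1]
        L[-1, -1] += surv[-1]                  # survivors remain in the last age class
        return np.max(np.real(np.linalg.eigvals(L)))

    rng = np.random.default_rng(42)
    draws = []
    for _ in range(2000):
        fert = rng.normal([0.8, 1.6], [0.1, 0.2])              # illustrative means and SEs
        surv = np.clip(rng.normal([0.45, 0.55], [0.05, 0.05]), 0, 1)
        draws.append(growth_rate(np.clip(fert, 0, None), surv))

    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"lambda = {np.mean(draws):.3f}  (95% CI {lo:.3f}-{hi:.3f})")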

  13. Wavelet denoising method; application to the flow rate estimation for water level control

    International Nuclear Information System (INIS)

    Park, Gee Young; Park, Jin Ho; Lee, Jung Han; Kim, Bong Soo; Seong, Poong Hyun

    2003-01-01

    The wavelet transform decomposes a signal into time- and frequency-domain components, and it is well known that a noise-corrupted signal can be reconstructed or estimated when a proper denoising method is applied within the wavelet transform. Among the wavelet denoising methods proposed to date, the wavelets of Mallat and Zhong can best reconstruct a pure transient signal from a highly corrupted one. However, there has been no systematic way of discriminating the original signal from the noise in a dyadic wavelet transform. In this paper, a systematic method is proposed for noise discrimination, which could be implemented easily in a digital system. To demonstrate the potential role of the wavelet denoising method in the nuclear field, the method is applied to the steam or feedwater flow rate estimation of the secondary loop. The configuration of the steam generator (S/G) water level control system is proposed for incorporating the wavelet denoising method in estimating the flow rate at low operating powers.
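
    A brief sketch of wavelet-threshold denoising applied to a noisy flow-rate-like signal, using the PyWavelets package and the universal soft threshold; this is generic discrete-wavelet denoising, not the Mallat-Zhong dyadic transform or the noise-discrimination rule proposed in the paper.

    import numpy as np
    import pywt   # PyWavelets

    def wavelet_denoise(signal, wavelet="db4", level=4):
        """Soft-threshold the detail coefficients with the universal threshold
        sigma * sqrt(2 ln N), with sigma estimated from the finest scale."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(len(signal)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]

    # Synthetic "flow rate" step transient buried in measurement noise
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2048)
    true_flow = 50 + 10 * (t > 4)                      # step change at t = 4
    noisy = true_flow + 3 * rng.standard_normal(t.size)
    print(np.abs(wavelet_denoise(noisy) - true_flow).mean())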

  14. Estimating the Effective Lower Bound for the Czech National Bank's Policy Rate

    OpenAIRE

    Kolcunova, Dominika; Havranek, Tomas

    2018-01-01

    The paper focuses on the estimation of the effective lower bound for the Czech National Bank's policy rate. The effective lower bound is determined by the value below which holding and using cash would be more convenient than holding deposits with negative yields. This bound is approximated based on the storage, insurance and transportation costs of cash and on the costs associated with the loss of the convenience of cashless payments, complemented with the estimate based on interest charges, which p...

  15. Comparing facility-level methane emission rate estimates at natural gas gathering and boosting stations

    Directory of Open Access Journals (Sweden)

    Timothy L. Vaughn

    2017-11-01

    Full Text Available Coordinated dual-tracer, aircraft-based, and direct component-level measurements were made at midstream natural gas gathering and boosting stations in the Fayetteville shale (Arkansas, USA). On-site component-level measurements were combined with engineering estimates to generate comprehensive facility-level methane emission rate estimates ("study on-site estimates", SOE) comparable to tracer and aircraft measurements. Combustion slip (unburned fuel entrained in compressor engine exhaust), which was calculated based on 111 recent measurements of representative compressor engines, accounts for an estimated 75% of cumulative SOEs at gathering stations included in comparisons. Measured methane emissions from regenerator vents on glycol dehydrator units were substantially larger than predicted by modelling software; the contribution of dehydrator regenerator vents to the cumulative SOE would increase from 1% to 10% if based on direct measurements. Concurrent measurements at 14 normally operating facilities show relative agreement between tracer and SOE, but indicate that tracer measurements estimate lower emissions (regression of tracer to SOE = 0.91, 95% CI = 0.83–0.99, R2 = 0.89). Tracer and SOE 95% confidence intervals overlap at 11/14 facilities. Contemporaneous measurements at six facilities suggest that aircraft measurements estimate higher emissions than SOE. Aircraft and study on-site estimate 95% confidence intervals overlap at 3/6 facilities. The average facility-level emission rate (FLER) estimated by tracer measurements in this study is 17–73% higher than a prior national study by Marchese et al.

  16. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma prior are used for failure rate estimation, and Binomial data with beta prior are used for failure probability per demand estimation. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well if not better than the alternative more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
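
    A small sketch of the gamma-Poisson, prior-moment-matching step behind parametric empirical Bayes failure-rate estimation: fit a gamma prior to the spread of plant-specific rates and shrink each plant's estimate toward the pooled mean. The moment matching here is the textbook version rather than necessarily the exact PREB formulation, and the failure counts and exposure times are illustrative.

    import numpy as np

    def gamma_poisson_eb(failures, exposure_hours):
        """Fit a gamma prior to plant-specific failure rates by moment matching,
        then return the posterior mean rate (alpha + x_i) / (beta + T_i) for each
        plant (shrinkage toward the pooled rate)."""
        x, T = np.asarray(failures, float), np.asarray(exposure_hours, float)
        rates = x / T
        pooled = x.sum() / T.sum()
        # between-plant variance, corrected for Poisson sampling noise
        prior_var = max(rates.var(ddof=1) - np.mean(pooled / T), 1e-12)
        alpha = pooled**2 / prior_var
        beta = pooled / prior_var
        return (alpha + x) / (beta + T)

    # Illustrative data: failures and operating hours for five plants
    x = [0, 1, 2, 0, 5]
    T = [8760, 17520, 8760, 26280, 8760]
    print(gamma_poisson_eb(x, T))   # zero-failure plants still get a nonzero, shrunken rate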

  17. Glomerular filtration rate estimated from the uptake phase of 99mTc-DTPA renography in chronic renal failure

    DEFF Research Database (Denmark)

    Petersen, L J; Petersen, J R; Talleruphuus, U

    1999-01-01

    The purpose of the study was to compare the estimation of glomerular filtration rate (GFR) from 99mTc-DTPA renography with that estimated from the renal clearance of 51Cr-EDTA, creatinine and urea.

  18. Localised photoplethysmography imaging for heart rate estimation of pre-term infants in the clinic

    Science.gov (United States)

    Chaichulee, Sitthichok; Villarroel, Mauricio; Jorge, João; Arteta, Carlos; Green, Gabrielle; McCormick, Kenny; Zisserman, Andrew; Tarassenko, Lionel

    2018-02-01

    Non-contact vital-sign estimation allows the monitoring of physiological parameters (such as heart rate, respiratory rate, and peripheral oxygen saturation) without contact electrodes or sensors. Our recent work has demonstrated that a convolutional neural network (CNN) can be used to detect the presence of a patient and segment the patient's skin area for vital-sign estimation, thus enabling the automatic continuous monitoring of vital signs in a hospital environment. In a study approved by the local Research Ethical Committee, we made video recordings of pre-term infants nursed in a Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford, UK. We extended the CNN model to detect the head, torso and diaper of the infants. We extracted multiple photoplethysmographic imaging (PPGi) signals from each body part, analysed their signal quality, and compared them with the PPGi signal derived from the entire skin area. Our results demonstrated the benefits of estimating heart rate combined from multiple regions of interest using data fusion. In the test dataset, we achieved a mean absolute error of 2.4 beats per minute for 80% (31.1 hours) from a total recording time of 38.5 hours for which both reference heart rate and video data were valid.

  19. Effect of sociality and season on gray wolf (Canis lupus) foraging behavior: implications for estimating summer kill rate.

    Science.gov (United States)

    Metz, Matthew C; Vucetich, John A; Smith, Douglas W; Stahler, Daniel R; Peterson, Rolf O

    2011-03-01

    Understanding how kill rates vary among seasons is required to understand predation by vertebrate species living in temperate climates. Unfortunately, kill rates are only rarely estimated during summer. For several wolf packs in Yellowstone National Park, we used pairs of collared wolves living in the same pack and the double-count method to estimate the probability of attendance (PA) for an individual wolf at a carcass. PA quantifies an important aspect of social foraging behavior (i.e., the cohesiveness of foraging). We used PA to estimate summer kill rates for packs containing GPS-collared wolves between 2004 and 2009. Estimated rates of daily prey acquisition (edible biomass per wolf) decreased from 8.4±0.9 kg (mean ± SE) in May to 4.1±0.4 kg in July. Failure to account for PA would have resulted in underestimating kill rate by 32%. PA was 0.72±0.05 for large ungulate prey and 0.46±0.04 for small ungulate prey. To assess seasonal differences in social foraging behavior, we also evaluated PA during winter for VHF-collared wolves between 1997 and 2009. During winter, PA was 0.95±0.01. PA was not influenced by prey size but was influenced by wolf age and pack size. Our results demonstrate that seasonal patterns in the foraging behavior of social carnivores have important implications for understanding their social behavior and estimating kill rates. Synthesizing our findings with previous insights suggests that there is important seasonal variation in how and why social carnivores live in groups. Our findings are also important for applications of GPS collars to estimate kill rates. Specifically, because the factors affecting the PA of social carnivores likely differ between seasons, kill rates estimated through GPS collars should account for seasonal differences in social foraging behavior.

  20. The importance of ingestion rates for estimating food quality and energy intake.

    Science.gov (United States)

    Schülke, Oliver; Chalise, Mukesh K; Koenig, Andreas

    2006-10-01

    Testing ecological or socioecological models in primatology often requires estimates of individual energy intake. It is a well established fact that the nutrient content (and hence the energy content) of primate food items is highly variable. The second variable in determining primate energy intake, i.e., the ingestion rate, has often been ignored, and few studies have attempted to estimate the relative importance of the two predictors. In the present study individual ingestion rates were measured in two ecologically very different populations of Hanuman langurs (Semnopithecus entellus) at Jodhpur, India, and Ramnagar, Nepal. Protein and soluble sugar concentrations in 50 and 100 food items, respectively, were measured using standardized methods. Variation in ingestion rates (grams of dry matter per minute) was markedly greater among food items than among langur individuals in both populations, but did not differ systematically among food item categories defined according to plant part and age. General linear models (GLMs) with ingestion rate, protein, and soluble sugar content explained 40-80% of the variation in energy intake rates (kJ/min). The relative importance of ingestion rates was either similar (Ramnagar) or much greater (Jodhpur) than the role of sugar and/or protein content in determining the energy intake rates of different items. These results may impact socioecological studies of variation in individual energy budgets, investigations of food choice in relation to chemical composition or sensory characteristics, and research into habitat preferences that measures habitat quality in terms of abundance of important food sources. We suggest a definition of food quality that includes not only the amount of valuable food contents (energy, vitamins, and minerals) and the digestibility of different foods, but also the rate at which the food can be harvested and processed. Such an extended definition seems necessary because time may constrain primates when

  1. Experimental estimation of mutation rates in a wheat population with a gene genealogy approach.

    Science.gov (United States)

    Raquin, Anne-Laure; Depaulis, Frantz; Lambert, Amaury; Galic, Nathalie; Brabant, Philippe; Goldringer, Isabelle

    2008-08-01

    Microsatellite markers are extensively used to evaluate genetic diversity in natural or experimental evolving populations. Their high degree of polymorphism reflects their high mutation rates. Estimates of the mutation rates are therefore necessary when characterizing diversity in populations. As a complement to the classical experimental designs, we propose to use experimental populations, where the initial state is entirely known and some intermediate states have been thoroughly surveyed, thus providing a short timescale estimation together with a large number of cumulated meioses. In this article, we derived four original gene genealogy-based methods to assess mutation rates with limited bias due to relevant model assumptions incorporating the initial state, the number of new alleles, and the genetic effective population size. We studied the evolution of genetic diversity at 21 microsatellite markers, after 15 generations in an experimental wheat population. Compared to the parents, 23 new alleles were found in generation 15 at 9 of the 21 loci studied. We provide evidence that they arose by mutation. Corresponding estimates of the mutation rates ranged from 0 to 4.97 × 10^-3 per generation (i.e., year). Sequences of several alleles revealed that length polymorphism was only due to variation in the core of the microsatellite. Among different microsatellite characteristics, both the motif repeat number and an independent estimation of the Nei diversity were correlated with the novel diversity. Despite a reduced genetic effective size, global diversity at microsatellite markers increased in this population, suggesting that microsatellite diversity should be used with caution as an indicator in biodiversity conservation issues.

  2. Supporting information for the estimation of plutonium oxide leak rates through very small apertures

    International Nuclear Information System (INIS)

    Schwendiman, L.C.

    1977-01-01

    Information is presented from which an estimate can be made of the release of plutonium oxide from shipping containers. The leak diameter is estimated from gas leak tests of the container and an estimate is made of gas leak rate as a function of pressure over the time of interest in the accident. These calculations are limited in accuracy because of assumptions regarding leak geometry and the basic formulations of hydrodynamic flow for the assumed conditions. Sonic flow is assumed to be the limiting gas flow rate. Particles leaking from the air space above the powder will be limited by the low availability of particles due to rapid settling, the very limited driving force (pressure buildup) during the first minute, and the deposition in the leak channel. Equations are given to estimate deposition losses. Leaks of particles occurring below the level of the bulk powder will be limited by mechanical interference when leaks are of dimension smaller than particle sizes present. Some limiting cases can be calculated. When the leak dimension is large compared to the particle sizes present, maximum particle releases can be estimated, but will be very conservative

  3. ESTIMATING SPECIATION AND EXTINCTION RATES FROM DIVERSITY DATA AND THE FOSSIL RECORD

    NARCIS (Netherlands)

    Etienne, Rampal S.; Apol, M. Emile F.

    Understanding the processes that underlie biodiversity requires insight into the evolutionary history of the taxa involved. Accurate estimation of speciation, extinction, and diversification rates is a prerequisite for gaining this insight. Here, we develop a stochastic birth-death model of

  4. An Empirical Comparison between Two Recursive Filters for Attitude and Rate Estimation of Spinning Spacecraft

    Science.gov (United States)

    Harman, Richard R.

    2006-01-01

    The advantages of inducing a constant spin rate on a spacecraft are well known. A variety of science missions have used this technique as a relatively low cost method for conducting science. Starting in the late 1970s, NASA focused on building spacecraft using 3-axis control as opposed to the single-axis control mentioned above. Considerable effort was expended toward sensor and control system development, as well as the development of ground systems to independently process the data. As a result, spinning spacecraft development and their resulting ground system development stagnated. In the 1990s, shrinking budgets made spinning spacecraft an attractive option for science. The attitude requirements for recent spinning spacecraft are more stringent and the ground systems must be enhanced in order to provide the necessary attitude estimation accuracy. Since spinning spacecraft (SC) typically have no gyroscopes for measuring attitude rate, any new estimator would need to rely on the spacecraft dynamics equations. One estimation technique that utilized the SC dynamics and has been used successfully in 3-axis gyro-less spacecraft ground systems is the pseudo-linear Kalman filter algorithm. Consequently, a pseudo-linear Kalman filter has been developed which directly estimates the spacecraft attitude quaternion and rate for a spinning SC. Recently, a filter using Markley variables was developed specifically for spinning spacecraft. The pseudo-linear Kalman filter has the advantage of being easier to implement but estimates the quaternion which, due to the relatively high spinning rate, changes rapidly for a spinning spacecraft. The Markley variable filter is more complicated to implement but, being based on the SC angular momentum, estimates parameters which vary slowly. This paper presents a comparison of the performance of these two filters. Monte-Carlo simulation runs will be presented which demonstrate the advantages and disadvantages of both filters.

  5. An estimator for the relative entropy rate of path measures for stochastic differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Opper, Manfred, E-mail: manfred.opper@tu-berlin.de

    2017-02-01

    We address the problem of estimating the relative entropy rate (RER) for two stochastic processes described by stochastic differential equations. For the case where the drift of one process is known analytically, but one has only observations from the second process, we use a variational bound on the RER to construct an estimator.

  6. Empirical estimation of astrophysical photodisintegration rates of 106Cd

    Science.gov (United States)

    Belyshev, S. S.; Kuznetsov, A. A.; Stopani, K. A.

    2017-09-01

    It has been noted in previous experiments that the ratio between the photoneutron and photoproton disintegration channels of 106Cd might be considerably different from predictions of statistical models. The thresholds of these reactions differ by several MeV and the total astrophysical rate of photodisintegration of 106Cd, which is mostly produced in photonuclear reactions during the p-process nucleosynthesis, might be noticeably different from the calculated value. In this work the bremsstrahlung beam of a 55.6 MeV microtron and the photon activation technique are used to measure yields of photonuclear reaction products on isotopically enriched cadmium targets. The obtained results are compared with predictions of statistical models. The experimental yields are used to estimate photodisintegration reaction rates on 106Cd, which are then used in nuclear network calculations to examine the effects of uncertainties on the produced abundances of p-nuclei.

  7. An estimation of the exchange rate pass-through to prices in Mexico

    OpenAIRE

    Josué Fernando Cortés Espada

    2013-01-01

    This paper estimates the magnitude of the exchange rate pass-through to consumer prices in Mexico. Moreover, it analyzes if the pass-through dynamics have changed in recent years. In particular, it uses a methodology that generates results consistent with the hierarchy implicit in the CPI. The results suggest that the exchange rate pass-through to the general price level is low and not statistically significant. However, the pass-through is positive and significant for goods prices. Furthermo...

  8. Estimation of construction and demolition waste using waste generation rates in Chennai, India.

    Science.gov (United States)

    Ram, V G; Kalidindi, Satyanarayana N

    2017-06-01

    A large amount of construction and demolition waste is being generated owing to rapid urbanisation in Indian cities. A reliable estimate of construction and demolition waste generation is essential to create awareness about this stream of solid waste among the government bodies in India. However, the required data to estimate construction and demolition waste generation in India are unavailable or not explicitly documented. This study proposed an approach to estimate construction and demolition waste generation using waste generation rates and demonstrated it by estimating construction and demolition waste generation in Chennai city. The demolition waste generation rates of primary materials were determined through regression analysis using waste generation data from 45 case studies. Materials, such as wood, electrical wires, doors, windows and reinforcement steel, were found to be salvaged and sold on the secondary market. Concrete and masonry debris were dumped in either landfills or unauthorised places. The total quantity of construction and demolition debris generated in Chennai city in 2013 was estimated to be 1.14 million tonnes. The proportion of masonry debris was found to be 76% of the total quantity of demolition debris. Construction and demolition debris forms about 36% of the total solid waste generated in Chennai city. A gross underestimation of construction and demolition waste generation in some earlier studies in India has also been shown. The methodology proposed could be utilised by government bodies, policymakers and researchers to generate reliable estimates of construction and demolition waste in other developing countries facing similar challenges of limited data availability.
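
    A tiny sketch of the waste-generation-rate calculation such studies apply: multiply an activity quantity (here, demolished floor area) by per-material generation rates and sum. The rates and area below are placeholders, not the regression coefficients estimated from the 45 Chennai case studies.

    # Illustrative per-material demolition waste generation rates (tonnes per m^2
    # of demolished floor area) and a year's demolished area -- placeholder values.
    GENERATION_RATES = {"masonry debris": 0.45, "concrete debris": 0.30,
                        "wood": 0.02, "steel": 0.03}

    def estimate_cd_waste(demolished_area_m2, rates=GENERATION_RATES):
        """Return per-material and total construction & demolition waste (tonnes)."""
        per_material = {m: r * demolished_area_m2 for m, r in rates.items()}
        return per_material, sum(per_material.values())

    per_material, total = estimate_cd_waste(demolished_area_m2=1.2e6)
    print(total, per_material)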

  9. Control rod blow out protection system

    International Nuclear Information System (INIS)

    Dietrich, J.R.; Flinn, W.S.; Groves, M.D.

    1979-01-01

    An absorber element blow-out protection system for a nuclear reactor having a pressure vessel through which a coolant is circulated. The blow-out protection system includes devices which hydraulically couple groups of guide tubes in which the absorber elements move, the coupling devices limiting coolant lift flow rate to a level commensurate with the raising of individual absorber elements but insufficient to raise a plurality of absorber elements simultaneously

  10. Estimation of Oil Production Rates in Reservoirs Exposed to Focused Vibrational Energy

    KAUST Repository

    Jeong, Chanseok; Kallivokas, Loukas F.; Huh, Chun; Lake, Larry W.

    2014-01-01

    the production rate of remaining oil from existing oil fields. To date, there are few theoretical studies on estimating how much bypassed oil within an oil reservoir could be mobilized by such vibrational stimulation. To fill this gap, this paper presents a

  11. Improved estimates of external gamma dose rates in the environs of Hinkley Point Power Station

    International Nuclear Information System (INIS)

    Macdonald, H.F.; Thompson, I.M.G.

    1988-07-01

    The dominant source of external gamma dose rates at centres of population within a few kilometres of Hinkley Point Power Station is the routine discharge of 41-Ar from the 'A' station magnox reactors. Earlier estimates of the 41-Ar radiation dose rates were based upon measured discharge rates, combined with calculations using standard plume dispersion and cloud-gamma integration models. This report presents improved dose estimates derived from environmental gamma dose rate measurements made at distances up to about 1 km from the site, thus minimising the degree of extrapolation introduced in estimating dose rates at locations up to a few kilometres from the site. In addition, results from associated chemical tracer measurements and wind tunnel simulations covering distances up to about 4 km from the station are outlined. These provide information on the spatial distribution of the 41-Ar plume during the initial stages of its dispersion, including effects due to plume buoyancy and momentum and behaviour under light wind conditions. In addition to supporting the methodology used for the 41-Ar dose calculations, this information is also of generic interest in the treatment of a range of operational and accidental releases from nuclear power station sites and will assist in the development and validation of existing environmental models. (author)

  12. Joint sensor location/power rating optimization for temporally-correlated source estimation

    KAUST Repository

    Bushnaq, Osama M.

    2017-12-22

    The optimal sensor selection for scalar state parameter estimation in wireless sensor networks is studied in the paper. A subset of N candidate sensing locations is selected to measure a state parameter and send the observation to a fusion center via wireless AWGN channel. In addition to selecting the optimal sensing location, the sensor type to be placed in these locations is selected from a pool of T sensor types such that different sensor types have different power ratings and costs. The sensor transmission power is limited based on the amount of energy harvested at the sensing location and the type of the sensor. The Kalman filter is used to efficiently obtain the MMSE estimator at the fusion center. Sensors are selected such that the MMSE estimator error is minimized subject to a prescribed system budget. This goal is achieved using convex relaxation and greedy algorithm approaches.
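
    A compact sketch of the greedy half of the selection problem: for a scalar parameter observed through AWGN channels, each candidate (location, sensor type) pair contributes an effective inverse-variance "information" at some cost, and a greedy pass adds the best information-per-cost candidate until the budget is spent. The Kalman filter and convex relaxation machinery of the paper are not reproduced, and all numbers are illustrative.

    import numpy as np

    def greedy_select(information, cost, budget):
        """Pick candidate sensors greedily by information gained per unit cost,
        subject to a total budget; returns the chosen indices and the resulting
        MMSE error variance 1 / (total information)."""
        order = np.argsort(-np.asarray(information) / np.asarray(cost))
        chosen, spent, total_info = [], 0.0, 1.0   # prior information = 1 (illustrative)
        for i in order:
            if spent + cost[i] <= budget:
                chosen.append(int(i))
                spent += cost[i]
                total_info += information[i]
        return chosen, 1.0 / total_info

    # Effective information (inverse noise variance after the AWGN channel) and
    # cost for six candidate (location, sensor-type) pairs -- placeholder values.
    info = [4.0, 2.5, 1.0, 3.0, 0.8, 2.0]
    cost = [3.0, 1.0, 0.5, 2.5, 0.4, 1.5]
    print(greedy_select(info, cost, budget=4.0))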

  13. Can we estimate bacterial growth rates from ribosomal RNA content?

    Energy Technology Data Exchange (ETDEWEB)

    Kemp, P.F.

    1995-12-31

    Several studies have demonstrated a strong relationship between the quantity of RNA in bacterial cells and their growth rate under laboratory conditions. It may be possible to use this relationship to provide information on the activity of natural bacterial communities, and in particular on growth rate. However, if this approach is to provide reliably interpretable information, the relationship between RNA content and growth rate must be well-understood. In particular, a requisite of such applications is that the relationship must be universal among bacteria, or alternately that the relationship can be determined and measured for specific bacterial taxa. The RNA-growth rate relationship has not been used to evaluate bacterial growth in field studies, although RNA content has been measured in single cells and in bulk extracts of field samples taken from coastal environments. These measurements have been treated as probable indicators of bacterial activity, but have not yet been interpreted as estimators of growth rate. The primary obstacle to such interpretations is a lack of information on biological and environmental factors that affect the RNA-growth rate relationship. In this paper, the available data on the RNA-growth rate relationship in bacteria will be reviewed, including hypotheses regarding the regulation of RNA synthesis and degradation as a function of growth rate and environmental factors; i.e. the basic mechanisms for maintaining RNA content in proportion to growth rate. An assessment of the published laboratory and field data, the current status of this research area, and some of the remaining questions will be presented.

  14. Experimental design and estimation of growth rate distributions in size-structured shrimp populations

    International Nuclear Information System (INIS)

    Banks, H T; Davis, Jimena L; Ernstberger, Stacey L; Hu, Shuhua; Artimovich, Elena; Dhar, Arun K

    2009-01-01

    We discuss inverse problem results for problems involving the estimation of probability distributions using aggregate data for growth in populations. We begin with a mathematical model describing variability in the early growth process of size-structured shrimp populations and discuss a computational methodology for the design of experiments to validate the model and estimate the growth-rate distributions in shrimp populations. Parameter-estimation findings using experimental data from experiments so designed for shrimp populations cultivated at Advanced BioNutrition Corporation are presented, illustrating the usefulness of mathematical and statistical modeling in understanding the uncertainty in the growth dynamics of such populations

  15. A Comparison of Evidence-Based Estimates and Empirical Benchmarks of the Appropriate Rate of Use of Radiation Therapy in Ontario

    International Nuclear Information System (INIS)

    Mackillop, William J.; Kong, Weidong; Brundage, Michael; Hanna, Timothy P.; Zhang-Salomons, Jina; McLaughlin, Pierre-Yves; Tyldesley, Scott

    2015-01-01

    Purpose: Estimates of the appropriate rate of use of radiation therapy (RT) are required for planning and monitoring access to RT. Our objective was to compare estimates of the appropriate rate of use of RT derived from mathematical models, with the rate observed in a population of patients with optimal access to RT. Methods and Materials: The rate of use of RT within 1 year of diagnosis (RT1Y) was measured in the 134,541 cases diagnosed in Ontario between November 2009 and October 2011. The lifetime rate of use of RT (RTLIFETIME) was estimated by the multicohort utilization table method. Poisson regression was used to evaluate potential barriers to access to RT and to identify a benchmark subpopulation with unimpeded access to RT. Rates of use of RT were measured in the benchmark subpopulation and compared with published evidence-based estimates of the appropriate rates. Results: The benchmark rate for RT1Y, observed under conditions of optimal access, was 33.6% (95% confidence interval [CI], 33.0%-34.1%), and the benchmark for RTLIFETIME was 41.5% (95% CI, 41.2%-42.0%). Benchmarks for RTLIFETIME for 4 of 5 selected sites and for all cancers combined were significantly lower than the corresponding evidence-based estimates. Australian and Canadian evidence-based estimates of RTLIFETIME for 5 selected sites differed widely. RTLIFETIME in the overall population of Ontario was just 7.9% short of the benchmark but 20.9% short of the Australian evidence-based estimate of the appropriate rate. Conclusions: Evidence-based estimates of the appropriate lifetime rate of use of RT may overestimate the need for RT in Ontario

  16. Deterministic estimation of crack growth rates in steels in LWR coolant environments

    International Nuclear Information System (INIS)

    Macdonald, D.D.; Lu, P.C.; Urquidi-Macdonald, M.

    1995-01-01

    In this paper, the authors extend the coupled environment fracture model (CEFM) for intergranular stress corrosion cracking (IGSCC) of sensitized Type 304SS in light water reactor heat transport circuits by incorporating steel corrosion, the oxidation of hydrogen, and the reduction of hydrogen peroxide, in addition to the reduction of oxygen (as in the original CEFM), as charge transfer reactions occurring on the external surfaces. Additionally, the authors have incorporated a theoretical approach for estimating the crack tip strain rate, and the authors have included a void nucleation model to account for ductile failure at very negative potentials. The key concept of the CEFM is that coupling between the internal and external environments, and the need to conserve charge, are the physical and mathematical constraints that determine the rate of crack advance. The model provides rational explanations for the effects of oxygen, hydrogen peroxide, hydrogen, conductivity, stress intensity, and flow velocity on the rate of crack growth in sensitized Type 304 in simulated LWR in-vessel environments. They propose that the CEFM can serve as the basis of a deterministic method for estimating component life times

  17. Estimating the Number of Heterosexual Persons in the United States to Calculate National Rates of HIV Infection.

    Directory of Open Access Journals (Sweden)

    Amy Lansky

    Full Text Available This study estimated the proportions and numbers of heterosexuals in the United States (U.S.) to calculate rates of heterosexually acquired human immunodeficiency virus (HIV) infection. Quantifying the burden of disease can inform effective prevention planning and resource allocation. Heterosexuals were defined as males and females who ever had sex with an opposite-sex partner and excluded those with other HIV risks: persons who ever injected drugs and males who ever had sex with another man. We conducted a meta-analysis using data from 3 national probability surveys that measured lifetime (ever) sexual activity and injection drug use among persons aged 15 years and older to estimate the proportion of heterosexuals in the United States population. We then applied the proportion of heterosexual persons to census data to produce population size estimates. National HIV infection rates among heterosexuals were calculated using surveillance data (cases attributable to heterosexual contact) in the numerators and the heterosexual population size estimates in the denominators. Adult and adolescent heterosexuals comprised an estimated 86.7% (95% confidence interval: 84.1%-89.3%) of the U.S. population. The estimate for males was 84.1% (CI: 81.2%-86.9%) and for females was 89.4% (95% CI: 86.9%-91.8%). The HIV diagnosis rate for 2013 was 5.2 per 100,000 heterosexuals and the rate of persons living with diagnosed HIV infection in 2012 was 104 per 100,000 heterosexuals aged 13 years or older. Rates of HIV infection were >20 times as high among black heterosexuals compared to white heterosexuals, indicating considerable disparity. Rates among heterosexual men demonstrated higher disparities than overall population rates for men. The best available data must be used to guide decision-making for HIV prevention. HIV rates among heterosexuals in the U.S. are important additions to cost effectiveness and other data used to make critical decisions about resources for

  18. Glomerular Filtration Rate Estimation in Renal and Non-Renal Solid Organ Transplantation

    DEFF Research Database (Denmark)

    Hornum, Mads; Feldt-Rasmussen, Bo

    2017-01-01

    Following transplantation (TX) of both renal and non-renal organs, a large proportion of patients have renal dysfunction. There are multiple causes for this. Chronic nephrotoxicity and high doses of calcineurin inhibitors are important factors. Preoperative and perioperative factors like...... or estimates of renal function in these patients, in order to accurately and safely dose immunosuppressive medication and perform and adjust the treatment and prophylaxis of renal dysfunction. This is a short overview and discussion of relevant studies and possible caveats of estimated glomerular filtration...... rate methods for use in renal and non-renal TX....

  19. Estimating Exchange Rate Exposure over Various Return Horizons: Focusing on Major Countries in East Asia

    Directory of Open Access Journals (Sweden)

    Jeong Wook Lee

    2016-12-01

    Full Text Available In this paper, we estimate the exchange rate exposure, indicating the effect of exchange rate movements on firm values, for a sample of 1,400 firms in seven East Asian countries. The exposure estimates based on various exchange rate variables, return horizons and a control variable are compared. A key result from our analysis is that the long-term effect of exchange rate movements on firm values is greater than the short-term effect. We also find very similar results when using other exchange rate variables, such as the U.S. dollar exchange rate. Second, we add exchange rate volatility as a control variable and find that the extent of exposure is not much changed. Third, we examine the changes in exposure to exchange rate volatility as the return horizon increases. The proportion of firms with significant exposure increases with the return horizon. Interestingly, exposure increases faster with the return horizon for volatility than for the exchange rate itself. Taken as a whole, our findings suggest that the so-called "exposure puzzle" may be a matter of the methodology used to measure exposure.
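
    A short sketch of the standard exposure regression behind estimates of this kind: regress firm returns on the exchange-rate return (controlling for the market return) at a given return horizon; the coefficient on the exchange-rate return is the exposure. The return series below are simulated placeholders.

    import numpy as np

    def exposure_beta(firm_ret, fx_ret, mkt_ret):
        """OLS of firm returns on exchange-rate and market returns; returns the
        exchange-rate exposure coefficient."""
        X = np.column_stack([np.ones_like(fx_ret), fx_ret, mkt_ret])
        coef, *_ = np.linalg.lstsq(X, firm_ret, rcond=None)
        return coef[1]

    def horizon_returns(prices, h):
        """Non-overlapping h-period log returns from a price/level series."""
        return np.diff(np.log(prices[::h]))

    rng = np.random.default_rng(0)
    n = 2520                                            # ~10 years of daily data
    fx = np.cumprod(1 + 0.002 * rng.standard_normal(n))
    mkt = np.cumprod(1 + 0.01 * rng.standard_normal(n))
    firm = np.cumprod(1 + 0.004 * rng.standard_normal(n)) * fx**0.3   # true exposure ~0.3

    for h in (1, 21, 252):                              # daily, monthly, yearly horizons
        print(h, exposure_beta(horizon_returns(firm, h),
                               horizon_returns(fx, h),
                               horizon_returns(mkt, h)))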

  20. Estimates of deep drainage rates at the U.S. Department of Energy Pantex Plant, Amarillo, Texas

    International Nuclear Information System (INIS)

    Fayer, M.J.; Richmond, M.C.; Wigmosta, M.S.; Kelley, M.E.

    1998-04-01

    In FY 1996, the Pacific Northwest National Laboratory (PNNL) provided technical assistance to Battelle Columbus Operations (BCO) in their ongoing assessment of contaminant migration at the Pantex Plant in Amarillo, Texas. The objective of this report is to calculate deep drainage rates at the Pantex Plant. These deep drainage rates may eventually be used to predict contaminant loading to the underlying unconfined aquifer for the Pantex Plant Baseline Risk Assessment. These rates will also be used to support analyses of remedial activities involving surface alterations or the subsurface injection or withdrawal of liquids or gases. The scope of this report is to estimate deep drainage rates for the major surface features at the Pantex Plant, including ditches and playas, natural grassland, dryland crop rotation, unvegetated soil, and graveled surfaces. Areas such as Pantex Lake that are outside the main plant boundaries were not included in the analysis. All estimates were derived using existing data or best estimates; no new data were collected. The modeling framework used to estimate the rates is described to enable future correlations, improvements, and enhancements. The scope of this report includes only data gathered during FY 1996. However, a current review of the data gathered on weather, soil, plants, and other information in the time period since did not reveal anything that would significantly alter the results presented in this report

  1. Estimation of social discount rate for Lithuania

    Directory of Open Access Journals (Sweden)

    Vilma Kazlauskiene

    2016-09-01

    Full Text Available Purpose of the article: The paper analyses the problematics of estimating the social discount rate (SDR). The SDR is a critical parameter of cost-benefit analysis, which allows calculating the present value of the costs and benefits of public sector investment projects. An incorrect choice of the SDR can lead to the realisation of an ineffective public project or, conversely, to the rejection of a cost-effective one. The relevance of this problem is determined by ongoing discussions and differing viewpoints of scientists on the most appropriate approach to determining the SDR, and by the absence of a methodically grounded SDR at the national level in Lithuania. Methodology/methods: The research is performed by analysis and systematization of the scientific and methodical literature, and by time series and regression analysis. Scientific aim: The aim of the article is to calculate the SDR based on the statistical data of Lithuania. Findings: The analysis of methods of SDR determination, as well as of the research performed by foreign researchers, indicates that the social rate of time preference (SRTP) approach is the most appropriate. The SDR calculated by the SRTP approach best reflects the main purpose of public investment projects, i.e. to enhance the benefit to society. The analysis of SDR determination practice in foreign countries shows that the SDR should not be universal for all states. Each country should calculate the SDR based on its own data and apply it to the assessment of public projects. Conclusions: The SDR calculated for Lithuania using the SRTP approach varies between 3.5% and 4.3%. Although this is lower than the 5% offered by the European Commission, the rate is based on the statistical data of Lithuania and should be used for the assessment of national public projects. Applicatio
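
    The SRTP approach is commonly operationalised with the Ramsey formula, SDR = delta + mu * g (pure time preference plus the elasticity of marginal utility of consumption times per-capita consumption growth). A minimal sketch with purely illustrative inputs, not the paper's Lithuanian estimates:

      def srtp(delta, mu, g):
          """Ramsey-type social rate of time preference: SDR = delta + mu * g."""
          return delta + mu * g

      # Illustrative values only: 1% pure time preference, elasticity of
      # marginal utility 1.2, long-run per-capita consumption growth 2.5%.
      print(srtp(0.01, 1.2, 0.025))  # -> 0.04, i.e. about 4%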

  2. Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey

    Science.gov (United States)

    2014-01-01

    Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers’ Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. Results We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45–0.45) compared to males. However, the odds of injury among farmers aged 50–59 (OR: 1.53, 95% CI = 1.46–1.60), 60–69 (OR: 1.45, 95% CI = 1.39–1.51), and ≥70 (OR: 1.94, 95% CI = 1.86–2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Conclusions Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required. PMID:24808945

  3. Estimated rate of agricultural injury: the Korean Farmers' Occupational Disease and Injury Survey.

    Science.gov (United States)

    Chae, Hyeseon; Min, Kyungdoo; Youn, Kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk

    2014-01-01

    This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. The first Korean Farmers' Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45-0.45) compared to males. However, the odds of injury among farmers aged 50-59 (OR: 1.53, 95% CI = 1.46-1.60), 60-69 (OR: 1.45, 95% CI = 1.39-1.51), and ≥70 (OR: 1.94, 95% CI = 1.86-2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population is required.

  4. Measurement of the incorporation rates of four amino acids into proteins for estimating bacterial production.

    Science.gov (United States)

    Servais, P

    1995-03-01

    In aquatic ecosystems, [(3)H]thymidine incorporation into bacterial DNA and [(3)H]leucine incorporation into proteins are usually used to estimate bacterial production. The incorporation rates of four amino acids (leucine, tyrosine, lysine, alanine) into proteins of bacteria were measured in parallel on natural freshwater samples from the basin of the river Meuse (Belgium). Comparison of the incorporation into proteins and into the total macromolecular fraction showed that these different amino acids were incorporated at more than 90% into proteins. From incorporation measurements at four subsaturated concentrations (range, 2-77 nM), the maximum incorporation rates were determined. Strong correlations (r > 0.91 for all the calculated correlations) were found between the maximum incorporation rates of the different tested amino acids over a range of two orders of magnitude of bacterial activity. Bacterial production estimates were calculated using theoretical and experimental conversion factors. The productions calculated from the incorporation rates of the four amino acids were in good concordance, especially when the experimental conversion factors were used (slope range, 0.91-1.11, and r > 0.91). This study suggests that the incorporation of various amino acids into proteins can be used to estimate bacterial production.
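
    Bacterial production is then obtained by multiplying the measured incorporation rate by a conversion factor. A minimal sketch assuming the common theoretical leucine-to-carbon factor of about 1.55 kg C per mol of leucine (the study derives and prefers experimental factors instead):

      def bacterial_production(leu_uptake_nmol_per_l_h, cf_kg_c_per_mol=1.55):
          """Convert a leucine incorporation rate (nmol L^-1 h^-1) into
          bacterial carbon production (ug C L^-1 h^-1). The default factor of
          1.55 kg C per mol leucine is the common theoretical value; the study
          uses experimentally determined factors."""
          mol_per_l_h = leu_uptake_nmol_per_l_h * 1e-9
          return mol_per_l_h * cf_kg_c_per_mol * 1e9   # kg C -> ug C

      print(bacterial_production(20.0))  # 20 nmol L^-1 h^-1 -> 31 ug C L^-1 h^-1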

  5. Burst Format Design for Optimum Joint Estimation of Doppler-Shift and Doppler-Rate in Packet Satellite Communications

    Directory of Open Access Journals (Sweden)

    Luca Giugno

    2007-05-01

    Full Text Available This paper considers the problem of optimizing the burst format of packet transmission to perform enhanced-accuracy estimation of Doppler-shift and Doppler-rate of the carrier of the received signal, due to relative motion between the transmitter and the receiver. Two novel burst formats that minimize the Doppler-shift and the Doppler-rate Cramér-Rao bounds (CRBs) for the joint estimation of carrier phase/Doppler-shift and of the Doppler-rate are derived, and a data-aided (DA) estimation algorithm suitable for each optimal burst format is presented. Performance of the newly derived estimators is evaluated by analysis and by simulation, showing that such algorithms attain their relevant CRBs with very low complexity, so that they can be directly embedded into new-generation digital modems for satellite communications at low SNR.
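
    A hedged sketch of how such bounds can be evaluated for a candidate burst format: assuming a constant-amplitude pilot signal with phase theta + 2*pi*nu*nT + pi*alpha*(nT)^2 observed in complex AWGN, the Fisher information matrix for (theta, nu, alpha) is 2*SNR*G'G, where G stacks the phase derivatives at the pilot positions. This is a generic CRB computation, not the paper's optimized formats or its DA estimators; the burst layouts below are only examples.

      import numpy as np

      def crb_phase_doppler_rate(pilot_positions, T, snr_linear):
          """CRBs on (phase, Doppler-shift, Doppler-rate) for pilots at the
          given sample positions, symbol period T (s) and linear SNR."""
          t = np.asarray(pilot_positions, dtype=float) * T
          # Jacobian of the phase with respect to (theta, nu, alpha)
          G = np.column_stack([np.ones_like(t), 2 * np.pi * t, np.pi * t**2])
          fim = 2.0 * snr_linear * (G.T @ G)      # Fisher information matrix
          return np.diag(np.linalg.inv(fim))      # parameter variances

      # Compare a contiguous preamble with a split preamble/postamble burst.
      contiguous = np.arange(64)
      split = np.concatenate([np.arange(32), np.arange(480, 512)])
      print(crb_phase_doppler_rate(contiguous, 1e-5, 10.0))
      print(crb_phase_doppler_rate(split, 1e-5, 10.0))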

  6. Fiscal 1998 geothermal development promotion research. Survey in Akinomiya area (N9-AY-3 short-term blowout test report); 1998 nendo chinetsu kaihatsu sokushin chosa. Akinomiya chiiki chosa (N9-AY-3 tanki funshutsu shiken hokokusho)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    As part of the fiscal 1998 geothermal development promotion survey in the Akinomiya area, this test was carried out to obtain the fumarolic gas characteristics and fluid properties of the N9-AY-3 test well by the air reaction method. Blowout hot water was reinjected into the N8-AY-1 test well, drilled in fiscal 1996, to confirm its reinjection capacity. The discharge test showed a steam rate of 35.0 t/h and a hot water rate of 160.8 t/h (wellhead pressure equivalent) at a wellhead secondary valve opening of 33.7% and a wellhead pressure of 15.3 kg/cm²G. The total discharged hot water volume of 30,214 m³ is equivalent to nearly 320 times the well volume. The hot water showed pH 7.7-8.6, an electric conductivity of 3,960-7,070 µS/cm, and a chloride ion concentration of 1,050 mg/l. The slightly alkaline Na-Cl type hot water indicated a geochemical temperature of 283°C. The gas concentration in the steam, with CO₂ as the main component, was 0.12-0.19 vol%. Blowout hot water was continuously reinjected into the N8-AY-1 well from a pit by pump to confirm its reinjection capacity; hot water was reinjected at 110 t/h without any problems. (NEDO)

  7. Estimate of the global-scale joule heating rates in the thermosphere due to time mean currents

    International Nuclear Information System (INIS)

    Roble, R.G.; Matsushita, S.

    1975-01-01

    An estimate of the global-scale joule heating rates in the thermosphere is made based on derived global equivalent overhead electric current systems in the dynamo region during geomagnetically quiet and disturbed periods. The equivalent total electric field distribution is calculated from Ohm's law. The global-scale joule heating rates are calculated for various monthly average periods in 1965. The calculated joule heating rates maximize at high latitudes in the early evening and postmidnight sectors. During geomagnetically quiet times the daytime joule heating rates are considerably lower than heating by solar EUV radiation. However, during geomagnetically disturbed periods the estimated joule heating rates increase by an order of magnitude and can locally exceed the solar EUV heating rates. The results show that joule heating is an important and at times the dominant energy source at high latitudes. However, the global mean joule heating rates calculated near solar minimum are generally small compared to the global mean solar EUV heating rates. (auth)
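
    For reference, when expressed with the height-integrated Pedersen conductivity, the Joule heating obtained from Ohm's law reduces to Q_J = Sigma_P * E^2 per unit area. A trivial sketch with illustrative numbers (not the paper's derived current systems or fields):

      def joule_heating_rate(sigma_p, e_field):
          """Height-integrated Joule heating rate (W m^-2) from a
          height-integrated Pedersen conductivity sigma_p (S) and an electric
          field magnitude e_field (V m^-1): Q_J = Sigma_P * E^2."""
          return sigma_p * e_field**2

      # Illustrative: Sigma_P = 10 S, E = 50 mV/m  ->  Q_J = 0.025 W m^-2
      print(joule_heating_rate(10.0, 0.05))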

  8. Effect of sociality and season on gray wolf (Canis lupus) foraging behavior: implications for estimating summer kill rate.

    Directory of Open Access Journals (Sweden)

    Matthew C Metz

    Full Text Available BACKGROUND: Understanding how kill rates vary among seasons is required to understand predation by vertebrate species living in temperate climates. Unfortunately, kill rates are only rarely estimated during summer. METHODOLOGY/PRINCIPAL FINDINGS: For several wolf packs in Yellowstone National Park, we used pairs of collared wolves living in the same pack and the double-count method to estimate the probability of attendance (PA) for an individual wolf at a carcass. PA quantifies an important aspect of social foraging behavior (i.e., the cohesiveness of foraging). We used PA to estimate summer kill rates for packs containing GPS-collared wolves between 2004 and 2009. Estimated rates of daily prey acquisition (edible biomass per wolf) decreased from 8.4±0.9 kg (mean ± SE) in May to 4.1±0.4 kg in July. Failure to account for PA would have resulted in underestimating kill rate by 32%. PA was 0.72±0.05 for large ungulate prey and 0.46±0.04 for small ungulate prey. To assess seasonal differences in social foraging behavior, we also evaluated PA during winter for VHF-collared wolves between 1997 and 2009. During winter, PA was 0.95±0.01. PA was not influenced by prey size but was influenced by wolf age and pack size. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that seasonal patterns in the foraging behavior of social carnivores have important implications for understanding their social behavior and estimating kill rates. Synthesizing our findings with previous insights suggests that there is important seasonal variation in how and why social carnivores live in groups. Our findings are also important for applications of GPS collars to estimate kill rates. Specifically, because the factors affecting the PA of social carnivores likely differ between seasons, kill rates estimated through GPS collars should account for seasonal differences in social foraging behavior.

  9. Erratum Haldane and the first estimates of the human mutation rate

    Indian Academy of Sciences (India)

    Published on the Web: 1 December 2008. Erratum. Haldane and the first estimates of the human mutation rate. (A commentary on J.B.S. Haldane 1935 J. Genet. 31, 317–326; reprinted in volume 83, 235–244 as a J. Genet. classic). Michael W. Nachman. J. Genet. 83, 231–233. Page 1, right column, para 1, line 6 from ...

  10. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    Science.gov (United States)

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a large number of multiplication and addition operations for the transform block sizes of order 4, 8, 16, and 32 and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stage. For rate and distortion estimation at the CU level, two orthogonal matrices of size 4×4 and 8×8, newly designed in a butterfly structure with only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed by using a pseudoentropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for HW-friendly implementation of
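
    For orientation, the kind of addition-only butterfly such schemes rely on looks like the following 4-point Walsh-Hadamard transform (a generic WHT sketch; the paper's conversion matrices from WHT to integer-DCT coefficients are not reproduced here):

      def wht4(x):
          """4-point Walsh-Hadamard transform via a two-stage butterfly using
          only additions and subtractions, i.e. the low-complexity transform
          used as a stand-in for the DCT when estimating distortion."""
          a = [x[0] + x[2], x[1] + x[3], x[0] - x[2], x[1] - x[3]]      # stage 1
          return [a[0] + a[1], a[0] - a[1], a[2] + a[3], a[2] - a[3]]   # stage 2

      print(wht4([1, 2, 3, 4]))  # [10, -2, -4, 0]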

  11. Estimation of Leak Rate from the Emergency Pump Well in L-Area Complex Basin

    International Nuclear Information System (INIS)

    Duncan, A

    2005-01-01

    This report provides an estimate of the leak rate from the emergency pump well in L-basin that is to be expected during an off-normal event. This estimate is based on expected shrinkage of the engineered grout (i.e., controlled low strength material) used to fill the emergency pump well and the header pipes that provide the dominant leak path from the basin to the lower levels of the L-Area Complex. The estimate will be used to provide input into the operating safety basis to ensure that the water level in the basin will remain above a certain minimum level. The minimum basin water level is specified to ensure adequate shielding for personnel and maintain the "as low as reasonably achievable" concept of radiological exposure. The need for the leak rate estimate arises from the existence of a gap between the fill material and the header pipes, which penetrate the basin wall and would be the primary leak path in the event of a breach in those pipes. The gap between the pipe and fill material was estimated based on a full-scale demonstration pour that was performed and examined. Leak tests were performed on full-scale pipes as a part of this examination. Leak rates were measured to be on the order of 0.01 gallons/minute for completely filled pipe (vertically positioned) and 0.25 gallons/minute for partially filled pipe (horizontally positioned). This measurement was for water at 16 feet of head pressure and with minimal corrosion or biofilm present. The effect of the grout fill on the inside surface biofilm of the pipes is the subject of a previous memorandum

  12. Prediction of cardiovascular outcome by estimated glomerular filtration rate and estimated creatinine clearance in the high-risk hypertension population of the VALUE trial.

    Science.gov (United States)

    Ruilope, Luis M; Zanchetti, Alberto; Julius, Stevo; McInnes, Gordon T; Segura, Julian; Stolt, Pelle; Hua, Tsushung A; Weber, Michael A; Jamerson, Ken

    2007-07-01

    Reduced renal function is predictive of poor cardiovascular outcomes but the predictive value of different measures of renal function is uncertain. We compared the value of estimated creatinine clearance, using the Cockcroft-Gault formula, with that of estimated glomerular filtration rate (GFR), using the Modification of Diet in Renal Disease (MDRD) formula, as predictors of cardiovascular outcome in 15 245 high-risk hypertensive participants in the Valsartan Antihypertensive Long-term Use Evaluation (VALUE) trial. For the primary end-point, the three secondary end-points and for all-cause death, outcomes were compared for individuals with baseline estimated creatinine clearance and estimated GFR below versus at or above 60 ml/min using hazard ratios and 95% confidence intervals. Coronary heart disease, left ventricular hypertrophy, age, sex and treatment effects were included as covariates in the model. For each end-point considered, the risk in individuals with poor renal function at baseline was greater than in those with better renal function. Estimated creatinine clearance (Cockcroft-Gault) was significantly predictive only of all-cause death [hazard ratio = 1.223, 95% confidence interval (CI) = 1.076-1.390; P = 0.0021] whereas estimated GFR was predictive of all outcomes except stroke. Hazard ratios (95% CIs) for estimated GFR included: primary cardiac end-point, 1.497 (1.332-1.682), P < 0.0001, and all-cause death, 1.231 (1.098-1.380), P = 0.0004. These results indicate that estimated glomerular filtration rate calculated with the MDRD formula is more informative than estimated creatinine clearance (Cockcroft-Gault) in the prediction of cardiovascular outcomes.
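
    The two renal-function estimates being compared are standard formulas; a sketch is given below. The 186 coefficient is the original 4-variable MDRD equation (175 is used with IDMS-traceable creatinine assays); the trial's exact implementation details are not reproduced here.

      def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
          """Estimated creatinine clearance (ml/min), Cockcroft-Gault."""
          ccl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
          return ccl * 0.85 if female else ccl

      def mdrd_gfr(age, scr_mg_dl, female, black):
          """Estimated GFR (ml/min/1.73 m^2), 4-variable MDRD equation."""
          gfr = 186.0 * scr_mg_dl**-1.154 * age**-0.203
          if female:
              gfr *= 0.742
          if black:
              gfr *= 1.212
          return gfr

      print(cockcroft_gault(65, 80, 1.2, female=False))    # ~69 ml/min
      print(mdrd_gfr(65, 1.2, female=False, black=False))  # ~65 ml/min/1.73 m^2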

  13. Estimates of wave decay rates in the presence of turbulent currents

    Energy Technology Data Exchange (ETDEWEB)

    Thais, L. [Universite des Sciences et Technologies de Lille, URA-CNRS 1441, Villenauve d' Ascq (France). Lab. de Mecanique; Chapalain, G. [Universite des Sciences et Technologies de Lille, URA-CNRS 8577, Villenauve d' Ascq (France). Sedimentologie et Geodynamique; Klopman, G. [Albatros Flow Research, Vollenhove (Netherlands); Simons, R.R. [University College, London (United Kingdom). Civil and Environmental Engineering; Thomas, G.P. [University College, Cork (Ireland). Dept. of Mathematical Physics

    2001-06-01

    A full-depth numerical model solving the free surface flow induced by linear water waves propagating with collinear vertically sheared turbulent currents is presented. The model is used to estimate the wave amplitude decay rate in combined wave current flows. The decay rates are compared with data collected in wave flumes by Kemp and Simons [J Fluid Mech, 116 (1982) 227; 130 (1983) 73] and Mathisen and Madsen [J Geophys Res, 101 (C7) (1996) 16,533]. We confirm the main experimental finding of Kemp and Simons that waves propagating downstream are less damped, and waves propagating upstream significantly more damped than waves on fluid at rest. A satisfactory quantitative agreement is found for the decay rates of waves propagating upstream, whereas not more than a qualitative agreement has been observed for waves propagating downstream. Finally, some wave decay rates in the presence of favourable and adverse currents are provided in typical field conditions. (Author)

  14. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    Science.gov (United States)

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV-infected person seeks a test for HIV during a particular time interval, given that no previous positive test has been obtained prior to the start of that interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Dynamic study of ocular movement with MR imaging in orbital blow-out fracture

    International Nuclear Information System (INIS)

    Aibara, Ryuichi; Kawakita, Seiji; Matsumoto, Yasushi; Sadamoto, Masanori; Yumoto, Eiji.

    1996-01-01

    Operative indications for orbital blow-out fracture (OBF) remain controversial. One of the major sources of this controversy is that an accurate diagnosis of ocular movement disturbances cannot be made by conventional procedures such as the Hess screen test, traction test, or CT scan. Disturbances in ocular movement resulting from OBF can occur not only with entrapment of the extraocular muscle but also with intraorbital bleeding, edema, and/or a variety of other unclear factors. To obtain a more accurate diagnosis and to assist in the choice of treatment, ocular movement was examined using orbital 'cine mode' MR imaging. MR images were obtained in multiple phases of vertical and horizontal ocular movements by using the 'fast SE' capabilities of the SIERRA, GE-YMS MR scanner (1.5 Tesla, superconductive). The fixed eye method was applied to two normal volunteers and to patients with 'pure' OBF. Five marks for binocular fixation were affixed to the inner wall of the gantry: one at the primary position and four at secondary positions. While keeping the subject's eye focused on each of these marks for about 30 sec, MR images (head coil) of the axial view and bilateral oblique sagittal view along the optic nerve were acquired. In the normal volunteers, a good demonstration of smooth movement of the eyeball, extraocular muscles, and the optic nerve could be obtained. In the OBF patients, it was clearly observed that the disturbance in ocular movement was caused by poor extension of the external ocular muscles, specifically the inferior rectus muscle in the orbital floor fracture, and the internal rectus muscle in the medial wall fracture. These observations suggested that dynamic orbital imaging with MR would be extremely valuable in the assessment of disturbances of ocular movement in OBF. (author)

  16. Indirectly estimated absolute lung cancer mortality rates by smoking status and histological type based on a systematic review

    International Nuclear Information System (INIS)

    Lee, Peter N; Forey, Barbara A

    2013-01-01

    National smoking-specific lung cancer mortality rates are unavailable, and studies presenting estimates are limited, particularly by histology. This hinders interpretation. We attempted to rectify this by deriving estimates indirectly, combining data from national rates and epidemiological studies. We estimated study-specific absolute mortality rates and variances by histology and smoking habit (never/ever/current/former) based on relative risk estimates derived from studies published in the 20th century, coupled with WHO mortality data for age 70–74 for the relevant country and period. Studies with populations grossly unrepresentative nationally were excluded. 70–74 was chosen based on analyses of large cohort studies presenting rates by smoking and age. Variations by sex, period and region were assessed by meta-analysis and meta-regression. 148 studies provided estimates (Europe 59, America 54, China 22, other Asia 13), 54 providing estimates by histology (squamous cell carcinoma, adenocarcinoma). For all smoking habits and lung cancer types, mortality rates were higher in males, the excess less evident for never smokers. Never smoker rates were clearly highest in China, and showed some increasing time trend, particularly for adenocarcinoma. Ever smoker rates were higher in parts of Europe and America than in China, with the time trend very clear, especially for adenocarcinoma. Variations by time trend and continent were clear for current smokers (rates being higher in Europe and America than Asia), but less clear for former smokers. Models involving continent and trend explained much variability, but non-linearity was sometimes seen (with rates lower in 1991–99 than 1981–90), and there was regional variation within continent (with rates in Europe often high in UK and low in Scandinavia, and higher in North than South America). The indirect method may be questioned, because of variations in definition of smoking and lung cancer type in the

  17. Indirectly estimated absolute lung cancer mortality rates by smoking status and histological type based on a systematic review

    Science.gov (United States)

    2013-01-01

    Background National smoking-specific lung cancer mortality rates are unavailable, and studies presenting estimates are limited, particularly by histology. This hinders interpretation. We attempted to rectify this by deriving estimates indirectly, combining data from national rates and epidemiological studies. Methods We estimated study-specific absolute mortality rates and variances by histology and smoking habit (never/ever/current/former) based on relative risk estimates derived from studies published in the 20th century, coupled with WHO mortality data for age 70–74 for the relevant country and period. Studies with populations grossly unrepresentative nationally were excluded. 70–74 was chosen based on analyses of large cohort studies presenting rates by smoking and age. Variations by sex, period and region were assessed by meta-analysis and meta-regression. Results 148 studies provided estimates (Europe 59, America 54, China 22, other Asia 13), 54 providing estimates by histology (squamous cell carcinoma, adenocarcinoma). For all smoking habits and lung cancer types, mortality rates were higher in males, the excess less evident for never smokers. Never smoker rates were clearly highest in China, and showed some increasing time trend, particularly for adenocarcinoma. Ever smoker rates were higher in parts of Europe and America than in China, with the time trend very clear, especially for adenocarcinoma. Variations by time trend and continent were clear for current smokers (rates being higher in Europe and America than Asia), but less clear for former smokers. Models involving continent and trend explained much variability, but non-linearity was sometimes seen (with rates lower in 1991–99 than 1981–90), and there was regional variation within continent (with rates in Europe often high in UK and low in Scandinavia, and higher in North than South America). Conclusions The indirect method may be questioned, because of variations in definition of smoking and

  18. A simplified 137Cs transport model for estimating erosion rates in undisturbed soil

    International Nuclear Information System (INIS)

    Zhang Xinbao; Long Yi; He Xiubin; Fu Jiexiong; Zhang Yunqi

    2008-01-01

    137Cs is an artificial radionuclide with a half-life of 30.12 years that was released into the environment as a result of atmospheric testing of thermonuclear weapons, primarily during the 1950s-1970s, with the maximum rate of 137Cs fallout from the atmosphere in 1963. 137Cs fallout is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil when it reaches the ground, mostly with precipitation. Its subsequent redistribution is associated with movements of the soil or sediment particles. The 137Cs nuclide tracing technique has been used for the assessment of soil losses on both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the 137Cs depth distribution in the profile, where the maximum 137Cs occurs in the surface horizon and decreases exponentially with depth. The model implied that the total amount of 137Cs fallout deposited on the earth's surface in 1963 and the 137Cs profile shape have not changed with time. The model has been widely used for the assessment of soil losses on undisturbed land. However, temporal variations of the 137Cs depth distribution in undisturbed soils after its deposition on the ground, due to downward transport processes, are not considered in the previous simple profile-shape model. Thus, the soil losses are overestimated by the model. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the 137Cs transport process in the eroded soil profile, make some simplifications to the model, and develop a method to estimate the soil erosion rate more expediently. To compare the soil erosion rates calculated by the simple profile-shape model and the simple transport model, the soil losses related to different 137Cs loss proportions of the reference inventory at the Kaixian site of the
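
    A sketch of the simple profile-shape model for undisturbed soil that the paper sets out to improve, assuming the usual exponential depth distribution with relaxation mass depth h0; the parameter values in the example are illustrative only, not those for the Kaixian site:

      import math

      def erosion_rate_profile_shape(A, A_ref, h0, sample_year, fallout_year=1963):
          """Soil erosion rate (t ha^-1 yr^-1) for undisturbed soil from a
          measured 137Cs inventory A (Bq m^-2), the reference inventory A_ref
          (Bq m^-2) and the profile-shape parameter h0 (kg m^-2), assuming the
          exponential depth distribution A(x) = A_ref * (1 - exp(-x / h0))."""
          if not 0 < A < A_ref:
              raise ValueError("expected an eroding point with 0 < A < A_ref")
          h = h0 * math.log(A_ref / A)         # eroded mass depth, kg m^-2
          years = sample_year - fallout_year
          return 10.0 * h / years              # kg m^-2 -> t ha^-1, per year

      print(erosion_rate_profile_shape(A=1800.0, A_ref=2400.0, h0=4.0, sample_year=2005))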

  19. Estimates of the rate and distribution of fitness effects of spontaneous mutation in Saccharomyces cerevisiae

    NARCIS (Netherlands)

    Zeyl, C.; Visser, de J.A.G.M.

    2001-01-01

    The per-genome, per-generation rate of spontaneous mutation affecting fitness (U) and the mean fitness cost per mutation (s) are important parameters in evolutionary genetics, but have been estimated for few species. We estimated U and sh (the heterozygous effect of mutations) for two diploid yeast

  20. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    Science.gov (United States)

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/ Contact: olivier.gascuel@lirmm.fr Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/

  1. Improving the rainfall rate estimation in the midstream of the Heihe River Basin using raindrop size distribution

    Directory of Open Access Journals (Sweden)

    G. Zhao

    2011-03-01

    Full Text Available During the intensive observation period of the Watershed Allied Telemetry Experimental Research (WATER), a total of 1074 raindrop size distributions were measured by the Parsivel disdrometer, the latest state-of-the-art optical laser instrument. Because of the limited observational data for the Qinghai-Tibet Plateau, such modelling has not been well established there. We used the raindrop size distributions to improve the rain rate estimator of meteorological radar in order to obtain more accurate rain rate data in this area. We obtained the relationship between the terminal velocity of a raindrop and its diameter D (mm): v(D) = 4.67 D^0.53. Then four types of estimators for X-band polarimetric radar are examined. The simulation results show that the classical estimator R(ZH) is most sensitive to variations in DSD and that the estimator R(KDP, ZH, ZDR) is the best estimator for estimating the rain rate. An X-band polarimetric radar (714XDP) is used for verifying these estimators. The lowest sensitivity of the rain rate estimator R(KDP, ZH, ZDR) to variations in DSD can be explained by the following facts. The difference in the forward-scattering amplitudes at horizontal and vertical polarizations, which contributes to KDP, is proportional to the 3rd power of the drop diameter. On the other hand, the backscatter cross-section, which contributes to ZH, is proportional to the 6th power of the drop diameter. Because the rain rate R is proportional to the 3.57th power of the drop diameter, KDP is less sensitive to DSD variations than ZH.
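
    The estimators compared are power-law fits to the disdrometer-derived DSDs. A sketch with placeholder coefficients (Marshall-Palmer-like for R(ZH), typical literature values for R(KDP)) and an assumed multiplicative form for the combined estimator; the paper's fitted values and exact functional forms are not reproduced here.

      import numpy as np

      def r_from_zh(zh_dbz, a=0.0365, b=0.625):
          """R(ZH): rain rate (mm/h) from horizontal reflectivity (dBZ)."""
          zh_linear = 10.0 ** (zh_dbz / 10.0)          # mm^6 m^-3
          return a * zh_linear ** b

      def r_from_kdp(kdp, c=19.6, d=0.82):
          """R(KDP): rain rate (mm/h) from specific differential phase (deg/km)."""
          return c * np.abs(kdp) ** d * np.sign(kdp)

      def r_from_kdp_zh_zdr(kdp, zh_dbz, zdr_db, coeffs=(28.0, 0.95, -0.02, -0.36)):
          """R(KDP, ZH, ZDR): an assumed multiplicative power-law form,
          R = c0 * KDP^c1 * 10**(c2 * ZH_dBZ) * 10**(c3 * ZDR_dB)."""
          c0, c1, c2, c3 = coeffs
          return c0 * np.abs(kdp) ** c1 * 10.0 ** (c2 * zh_dbz) * 10.0 ** (c3 * zdr_db)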

  2. Urological disorders in chronic kidney disease in children cohort: clinical characteristics and estimation of glomerular filtration rate.

    Science.gov (United States)

    Dodson, Jennifer L; Jerry-Fluker, Judith V; Ng, Derek K; Moxey-Mims, Marva; Schwartz, George J; Dharnidharka, Vikas R; Warady, Bradley A; Furth, Susan L

    2011-10-01

    Urological disorders are the most common cause of pediatric chronic kidney disease. We determined the characteristics of children with urological disorders and assessed the agreement of the newly developed bedside glomerular filtration rate estimating formula with measured glomerular filtration rate in 586 patients in the Chronic Kidney Disease in Children study. The Chronic Kidney Disease in Children study is a prospective, observational cohort of children recruited from 48 sites in the United States and Canada. Eligibility requirements include age 1 to 16 years and estimated glomerular filtration rate by original Schwartz formula 30 to 90 ml/min/1.73 m². Baseline demographics, clinical variables and glomerular filtration rate were assessed. Bland-Altman analysis was conducted to assess agreement between estimated and measured glomerular filtration rates. Of the 586 participants with at least 1 glomerular filtration rate measurement 348 (59%) had an underlying urological diagnosis (obstructive uropathy in 118, aplastic/hypoplastic/dysplastic kidneys in 104, reflux in 87 and other condition in 39). Among these patients median age was 9 years, duration of chronic kidney disease was 7 years and age at first visit with a urologist was less than 1 year. Of the patients 67% were male, 67% were white and 21% had a low birth weight. Median height was in the 24th percentile. Median glomerular filtration rate as measured by iohexol plasma disappearance was 44.8 ml/min/1.73 m². Median glomerular filtration rate as estimated by the Chronic Kidney Disease in Children bedside equation was 44.3 ml/min/1.73 m² (bias = -0.5, 95% CI -1.7 to 0.7, p = 0.44). Underlying urological causes of chronic kidney disease were present in 59% of study participants. These children were diagnosed early in life, and many had low birth weight and growth delay. There is good agreement between the newly developed Chronic Kidney Disease in Children estimating equations and measured
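
    The bedside equation referred to is the CKiD "bedside Schwartz" formula; a one-line sketch:

      def bedside_schwartz_gfr(height_cm, scr_mg_dl):
          """CKiD bedside Schwartz estimate of GFR (ml/min/1.73 m^2):
          eGFR = 0.413 * height (cm) / serum creatinine (mg/dl)."""
          return 0.413 * height_cm / scr_mg_dl

      print(bedside_schwartz_gfr(130.0, 1.2))  # ~44.7 ml/min/1.73 m^2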

  3. Carotid blowout syndrome in pharyngeal cancer patients treated by hypofractionated stereotactic re-irradiation using CyberKnife: A multi-institutional matched-cohort analysis

    International Nuclear Information System (INIS)

    Yamazaki, Hideya; Ogita, Mikio; Himei, Kengo; Nakamura, Satoaki; Kotsuma, Tadayuki; Yoshida, Ken; Yoshioka, Yasuo

    2015-01-01

    Background and purpose: Although reirradiation has attracted attention as a potential therapy for recurrent head and neck tumors with the advent of modern radiotherapy, severe late toxicity such as carotid blowout syndrome (CBOS) limits its potential. The aim of this study was to identify the risk factors of CBOS after hypofractionated stereotactic radiotherapy (SBRT). Methods and patients: We conducted a matched-pair design examination of pharyngeal cancer patients treated by CyberKnife reirradiation in four institutes. Twelve cases with CBOS were matched against 60 cases without CBOS. Prognostic factors for CBOS were analyzed and a risk classification model was constructed. Results: The median prescribed radiation dose was 30 Gy in 5 fractions with CyberKnife SBRT after 60 Gy/30 fractions of previous radiotherapy. The median duration between reirradiation and CBOS onset was 5 months (range, 0–69 months). CBOS cases showed a median survival time of 5.5 months compared to 22.8 months for non-CBOS cases (1-year survival rate, 36% vs. 72%; p = 0.003). Univariate analysis identified an angle of carotid invasion of >180°, the presence of ulceration, planning treatment volume, and irradiation to lymph node areas as statistically significant predisposing factors for CBOS. Only patients with carotid invasion of >180° developed CBOS (12/50, 24%), whereas no patient with tumor involvement less than a half semicircle around the carotid artery developed CBOS (0/22, 0%, p = 0.03). Multivariate Cox hazard model analysis revealed that the presence of ulceration and irradiation to lymph nodes were statistically significant predisposing factors. Thus, we constructed a CBOS risk classification system: CBOS index = (summation of risk factors; carotid invasion >180°, presence of ulceration, lymph node area irradiation). This system sufficiently separated the risk groups. Conclusion: The presence of ulceration and lymph node irradiation are risk factors of CBOS. The CBOS index

  4. Real-time data for estimating a forward-looking interest rate rule of the ECB

    Directory of Open Access Journals (Sweden)

    Tilman Bletzinger

    2017-12-01

    Full Text Available The purpose of the data presented in this article is to use it in ex post estimations of interest rate decisions by the European Central Bank (ECB), as is done by Bletzinger and Wieland (2017) [1]. The data is of quarterly frequency from 1999 Q1 until 2013 Q2 and consists of the ECB's policy rate, inflation rate, real output growth and potential output growth in the euro area. To account for forward-looking decision making in the interest rate rule, the data consists of expectations about future inflation and output dynamics. While potential output is constructed based on data from the European Commission's annual macro-economic database, inflation and real output growth are taken from two different sources both provided by the ECB: the Survey of Professional Forecasters and projections made by ECB staff. Careful attention was given to the publication date of the collected data to ensure a real-time dataset only consisting of information which was available to the decision makers at the time of the decision. Keywords: Interest rate rule estimation, Real-time data, Forward-looking data
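
    A minimal sketch of the kind of ex post estimation the dataset is built for: a forward-looking rule with interest-rate smoothing fitted by OLS. This is a generic specification with hypothetical variable names, not Bletzinger and Wieland's exact rule or estimator.

      import numpy as np

      def estimate_forward_looking_rule(i, pi_exp, gap_exp):
          """Fit i_t = c + rho*i_{t-1} + beta*E_t[pi] + gamma*E_t[gap] + e_t
          on aligned quarterly series (expectations from SPF / ECB staff
          projections, as in the dataset). Returns smoothing and implied
          long-run responses."""
          i = np.asarray(i, dtype=float)
          pi_exp = np.asarray(pi_exp, dtype=float)
          gap_exp = np.asarray(gap_exp, dtype=float)
          y = i[1:]
          X = np.column_stack([np.ones(len(y)), i[:-1], pi_exp[1:], gap_exp[1:]])
          c, rho, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
          return {"rho": rho,
                  "beta_longrun": beta / (1.0 - rho),
                  "gamma_longrun": gamma / (1.0 - rho)}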

  5. Estimation of build up of dose rate on U3O8 product drum

    International Nuclear Information System (INIS)

    Pandey, J.P.N.; Shinde, A.M.; Deshpande, M.D.

    2008-01-01

    In a fuel reprocessing plant, plutonium oxide and uranium oxide (U3O8) are the products. Approximately 180 kg of U3O8 is filled into a stainless steel drum and sealed firmly before storage. In PHWRs, natural uranium (UO2) is used as fuel. In natural uranium, thorium-232 is present as an impurity at a level of a few tens of ppm. During irradiation in power reactors, 232U is formed from 232Th by nuclear reactions. Natural decay of 232U leads to the formation of 208Tl. As time passes, the buildup of 208Tl increases the dose rate on the drum containing U3O8. It is essential to estimate this buildup of dose rate considering the external radiological hazards involved during U3O8 drum handling, transportation and fuel fabrication. This paper describes the calculation of the dose rate on the drum in future years using the MCNP code. The dose rate calculation considers the decay of fission product activity remaining as contamination in the product and the buildup of 208Tl from 232U. Some measured values of the dose rate on a U3O8 drum are given for comparison with the dose rates estimated with the MCNP code. (author)
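
    The 208Tl-driven dose-rate buildup follows the ingrowth of 228Th from 232U, which the two-member Bateman equation describes; 208Tl then follows 228Th in secular equilibrium through the short-lived chain (with roughly a 36% branching fraction at 212Bi). A sketch of that ingrowth term alone, with half-lives from standard nuclide tables; the residual fission-product contribution considered in the paper is not modelled here.

      import numpy as np

      LN2 = np.log(2.0)
      T_HALF_U232_Y = 68.9      # years
      T_HALF_TH228_Y = 1.912    # years

      def th228_ingrowth_activity(a_u232_bq, t_years):
          """228Th activity (Bq) grown in from an initially pure 232U activity
          a_u232_bq after t_years (two-member Bateman equation)."""
          lam1 = LN2 / T_HALF_U232_Y
          lam2 = LN2 / T_HALF_TH228_Y
          n1_0 = a_u232_bq / lam1
          n2 = n1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t_years)
                                              - np.exp(-lam2 * t_years))
          return lam2 * n2

      for t in (1, 2, 5, 10):
          print(t, th228_ingrowth_activity(1.0, t))  # relative to 1 Bq of 232U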

  6. Estimation of uranium resources by life-cycle or discovery-rate models: a critique

    International Nuclear Information System (INIS)

    Harris, D.P.

    1976-10-01

    This report was motivated primarily by M. A. Lieberman's "United States Uranium Resources: An Analysis of Historical Data" (Science, April 30). His conclusion that only 87,000 tons of U3O8 resources recoverable at a forward cost of $8/lb remain to be discovered is criticized. It is shown that there is no theoretical basis for selecting the exponential or any other function for the discovery rate. Some of the economic (productivity, inflation) and data issues involved in the analysis of undiscovered, recoverable U3O8 resources based on discovery rates of $8 reserves are discussed. The problem of the ratio of undiscovered $30 resources to undiscovered $8 resources is considered. It is concluded that: all methods for the estimation of unknown resources must employ a model of some form of the endowment-exploration-production complex, but every model is a simplification of the real world, and every estimate is intrinsically uncertain. The life-cycle model is useless for the appraisal of undiscovered, recoverable U3O8, and the discovery rate model underestimates these resources.

  7. Estimating Collisionally-Induced Escape Rates of Light Neutrals from Early Mars

    Science.gov (United States)

    Gacesa, M.; Zahnle, K. J.

    2016-12-01

    Collisions of atmospheric gases with hot oxygen atoms constitute an important non-thermal mechanism of escape of light atomic and molecular species at Mars. In this study, we present revised theoretical estimates of non-thermal escape rates of neutral O, H, He, and H2 based on recent atmospheric density profiles obtained from the NASA Mars Atmosphere and Volatile Evolution (MAVEN) mission and related theoretical models. As primary sources of hot oxygen, we consider dissociative recombination of O2+ and CO2+ molecular ions. We also consider hot oxygen atoms energized in primary and secondary collisions with energetic neutral atoms (ENAs) produced in charge-exchange of solar wind H+ and He+ ions with atmospheric gases [1,2]. Scattering of hot oxygen and atmospheric species of interest is modeled using fully-quantum reactive scattering formalism [3]. This approach allows us to construct distributions of vibrationally and rotationally excited states and predict the products' emission spectra. In addition, we estimate formation rates of excited, translationally hot hydroxyl molecules in the upper atmosphere of Mars. The escape rates are calculated from the kinetic energy distributions of the reaction products using an enhanced 1D model of the atmosphere for a range of orbital and solar parameters. Finally, by considering different scenarios, we estimate the influence of these escape mechanisms on the evolution of Mars's atmosphere throughout previous epochs and their impact on the atmospheric D/H ratio. M.G.'s research was supported by an appointment to the NASA Postdoctoral Program at the NASA Ames Research Center, administered by Universities Space Research Association under contract with NASA. [1] N. Lewkow and V. Kharchenko, "Precipitation of Energetic Neutral Atoms and Escape Fluxes induced from the Mars Atmosphere", Astroph. J., 790, 98 (2014) [2] M. Gacesa, N. Lewkow, and V. Kharchenko, "Non-thermal production and escape of OH from the upper atmosphere of Mars", arXiv:1607

  8. Estimation of groundwater flow rate using the decay of 222Rn in a well

    International Nuclear Information System (INIS)

    Hamada, Hiromasa

    1999-01-01

    A method of estimating groundwater flow rate using the decay of 222Rn in a well was investigated. Field application revealed that infiltrated water (i.e., precipitation, pond water and irrigation water) accelerated groundwater flow. In addition, the depth at which groundwater was influenced by surface water was determined. The velocity of groundwater in a test well was estimated to be of the order of 10^-6 cm s^-1, based on the ratio of 222Rn concentration in groundwater before and after it flowed into the well. This method is applicable for monitoring of groundwater flow rate where the velocity in a well is from 10^-5 to 10^-6 cm s^-1
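
    A sketch of the underlying principle under strong simplifying assumptions (a fully mixed well at steady state, with the radon carried in by inflow balancing decay plus outflow, and an illustrative geometry in which the screened length cancels out). This is not the paper's full treatment, which also addresses the influence of surface water and depth.

      import math

      RN222_HALF_LIFE_S = 3.82 * 86400.0
      LAMBDA_RN = math.log(2.0) / RN222_HALF_LIFE_S     # s^-1

      def groundwater_velocity_in_well(c_aquifer, c_well, well_diameter_cm):
          """From Q*C_aquifer = Q*C_well + lambda*V*C_well and V/A = (pi/4)*d,
          the velocity through the well is
              v = lambda * (pi/4) * d * C_well / (C_aquifer - C_well)."""
          ratio = c_well / (c_aquifer - c_well)
          return LAMBDA_RN * math.pi / 4.0 * well_diameter_cm * ratio   # cm s^-1

      # 5 cm bore, well water at half the aquifer concentration:
      print(groundwater_velocity_in_well(10000.0, 5000.0, 5.0))  # ~8e-6 cm/s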

  9. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    Science.gov (United States)

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model, that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up for new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time varying liver extraction based on both C-peptide and insulin measurements.

  10. Estimating morbidity rates from electronic medical records in general practice: evaluation of a grouping system.

    NARCIS (Netherlands)

    Biermans, M.C.J.; Verheij, R.A.; Bakker, D.H. de; Zielhuis, G.A.; Vries Robbé, P.F. de

    2008-01-01

    Objectives: In this study, we evaluated the internal validity of EPICON, an application for grouping ICPC-coded diagnoses from electronic medical records into episodes of care. These episodes are used to estimate morbidity rates in general practice. Methods: Morbidity rates based on EPICON were

  11. A comparative review of estimates of the proportion unchanged genes and the false discovery rate

    Directory of Open Access Journals (Sweden)

    Broberg Per

    2005-08-01

    Full Text Available Abstract Background In the analysis of microarray data one generally produces a vector of p-values that for each gene give the likelihood of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion unchanged and the false discovery rate (FDR) and how to make inferences based on these concepts. Six published methods for estimating the proportion of unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value is illustrated with examples. Five published estimates of the FDR and one new one are presented and tested. Implementations in R code are available. Results A simulation model based on the distribution of real microarray data plus two real data sets were used to assess the methods. The proposed alternative methods for estimating the proportion unchanged fared very well, and gave evidence of low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating FDR showed a varying performance, and were sometimes misleading. The new method had a very low error. Conclusion The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options to choose a suitable method for any particular experiment. The article advocates the use of the conjoint information
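
    For concreteness, one widely used estimator of the proportion unchanged and the derived q-values can be sketched as follows (a Storey-type tail estimate and a step-up q-value computation, representative of the class of methods reviewed, not the article's own proposals):

      import numpy as np

      def pi0_estimate(pvals, lam=0.5):
          """Storey-type estimate of the proportion of unchanged genes:
          pi0 = #{p > lam} / (m * (1 - lam))."""
          p = np.asarray(pvals, dtype=float)
          return min(1.0, np.mean(p > lam) / (1.0 - lam))

      def q_values(pvals, pi0=None):
          """q-values: for each gene, the minimum FDR at which it would be
          called changed, q_(i) = min_{j >= i} pi0 * m * p_(j) / j."""
          p = np.asarray(pvals, dtype=float)
          m = len(p)
          if pi0 is None:
              pi0 = pi0_estimate(p)
          order = np.argsort(p)
          q_sorted = pi0 * m * p[order] / np.arange(1, m + 1)
          q_sorted = np.minimum.accumulate(q_sorted[::-1])[::-1]  # monotone
          q = np.empty(m)
          q[order] = q_sorted
          return q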

  12. An estimation of the domain of attraction and convergence rate for Hopfield continuous feedback neural networks

    International Nuclear Information System (INIS)

    Cao Jinde

    2004-01-01

    In this Letter, the domain of attraction of memory patterns and exponential convergence rate of the network trajectories to memory patterns for Hopfield continuous associative memory are estimated by means of matrix measure and comparison principle. A new estimation is given for the domain of attraction of memory patterns and exponential convergence rate. These results can be used for the evaluation of fault-tolerance capability and the synthesis procedures for Hopfield continuous feedback associative memory neural networks
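
    The matrix-measure ingredient of such estimates can be sketched as follows; the statement ||x(t)|| <= exp(mu(A)*t) * ||x(0)|| holds for the linear system dx/dt = A x, so a negative measure yields an exponential convergence rate bound. This is a generic illustration, not the Letter's specific comparison-principle construction for Hopfield networks.

      import numpy as np

      def matrix_measure(A, norm="inf"):
          """Logarithmic norms (matrix measures):
             mu_inf(A) = max_i ( a_ii + sum_{j != i} |a_ij| )
             mu_1(A)   = max_j ( a_jj + sum_{i != j} |a_ij| )
             mu_2(A)   = largest eigenvalue of (A + A^T) / 2."""
          A = np.asarray(A, dtype=float)
          if norm == "inf":
              off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
              return np.max(np.diag(A) + off)
          if norm == "1":
              off = np.sum(np.abs(A), axis=0) - np.abs(np.diag(A))
              return np.max(np.diag(A) + off)
          return np.max(np.linalg.eigvalsh((A + A.T) / 2.0))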

  13. Endovascular treatment of radiation-induced carotid blowout syndrome. Report of two cases

    International Nuclear Information System (INIS)

    Adachi, Akihiko; Kobayashi, Eiichi; Watanabe, Yoshiyuki; Yoneyama, Tomoko S.; Hayasaka, Michihiro; Suzuki, Homare; Okamoto, Yoshitaka; Saeki, Naokatsu

    2011-01-01

    Carotid Blowout Syndrome (CBS), or Carotid Artery Rupture (CAR), is a delayed complication with potentially fatal consequences occurring after radiotherapy for head and neck tumors. In this report we describe two patients who received endovascular treatment for severe hemorrhagic CBS developing 36 and 2 years, respectively, after radiotherapy. Both patients survived and responded positively to treatment. Case 1 was an 80-year-old woman found with minor hemorrhage near the bifurcation of the common carotid artery, 36 years after neck irradiation. She experienced frequent hemorrhagic events during the following years. Six years after the initial discovery of bleeding, she experienced massive hemorrhage, lapsed into shock, and was admitted to an Emergency Room. Connective tissue around the carotid artery was largely exposed due to a neck skin defect. After hemorrhage was halted by manual compression, transient hemostasis was achieved with coil embolization of the aneurysm presumed to be the source of bleeding. Recurrent hemorrhage developed two weeks later with unraveled coil mass extrusion. Parent artery occlusion was performed by endovascular trapping, achieving permanent hemostasis. Case 2 presented with massive nasal bleeding originating from the petrous segment of the internal carotid artery, 2 years after having been treated with heavy particle irradiation for olfactory neuroblastoma. Ischemic tolerance was confirmed by a balloon occlusion test. Based on previous experience, the bleeding was immediately halted by endovascular trapping. Both patients were subsequently discharged, free of new neurological symptoms. Emergency hemostatic treatment is required in CBS with severe hemorrhage. However, within irradiation fields, temporary embolization devices rarely lead to complete resolution. This is due to the deteriorated condition of the vascular wall, which is incapable of enduring the expansion power of coils, stents or balloons. Bypass grafting is also

  14. Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout.

    Science.gov (United States)

    Das, Anup; Pradhapan, Paruthi; Groenendaal, Willemijn; Adiraju, Prathyusha; Rajan, Raj Thilak; Catthoor, Francky; Schaafsma, Siebren; Krichmar, Jeffrey L; Dutt, Nikil; Van Hoof, Chris

    2018-03-01

    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike train and using this to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy, yet significantly low energy footprint, leading to an extended battery-life of wearable devices. We validated our approach with CARLsim, a GPU accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Estimation of water erosion rates using RUSLE3D in Alicante province (Spain)

    OpenAIRE

    Garcia Rodríguez, Jose Luis; Giménez Suárez, Martín Cruz; Arraiza Bermudez-Cañete, Maria Paz

    2015-01-01

    The purpose of this study was the estimation of current and potential water erosion rates in Alicante Province using RUSLE3D (Revised Universal Soil Loss Equation-3D) model with Geographical Information System (GIS) support by request from the Valencia Waste Energy Use. RUSLE3D uses a new methodology for topographic factor estimation (LS factor) based on the impact of flow convergence allowing better assessment of sediment distribution detached by water erosion. In RUSLE3D equation, the effec...
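
    A sketch of the RUSLE-type computation with an upslope-contributing-area LS factor of the kind RUSLE3D uses; the exponents and the 22.13 m / 0.0896 normalisations are common literature values, not the parameters calibrated for Alicante, and the factor units follow the input data.

      import math

      def ls_factor_3d(upslope_area_m2, cell_width_m, slope_deg, m=0.4, n=1.3):
          """Topographic factor from specific catchment area (upslope area per
          unit contour width): LS = (A / 22.13)^m * (sin(beta) / 0.0896)^n."""
          spec_area = upslope_area_m2 / cell_width_m
          beta = math.radians(slope_deg)
          return (spec_area / 22.13) ** m * (math.sin(beta) / 0.0896) ** n

      def rusle_soil_loss(R, K, LS, C, P):
          """Mean annual soil loss A = R * K * LS * C * P (commonly t ha^-1 yr^-1)."""
          return R * K * LS * C * P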

  16. On the estimation of failure rates for living PSAs in the presence of model uncertainty

    International Nuclear Information System (INIS)

    Arsenis, S.P.

    1994-01-01

    The estimation of failure rates of heterogeneous Poisson components from data on times operated to failures is reviewed. Particular emphasis is given to the lack of knowledge on the form of the mixing distribution or population variability curve. A new nonparametric empirical Bayes estimator is proposed which generalizes the estimator of Robbins to different times of observation for the components. The behavior of the estimator is discussed by reference to two samples typically drawn from the CEDB, a component event database designed and operated by the Ispra JRC.
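
    Illustrative sketch (not part of the record): the classic Robbins nonparametric empirical Bayes estimator that the paper generalizes, written here for the simple case of equal observation times; the failure counts are hypothetical.

      from collections import Counter

      def robbins_estimator(counts):
          # Robbins' estimator for Poisson counts with a common observation time:
          #   E[lambda | X = x] ~= (x + 1) * f(x + 1) / f(x),
          # where f is the empirical frequency of the observed counts.  The paper's
          # contribution (handling unequal observation times) is not reproduced here.
          n = len(counts)
          freq = Counter(counts)
          f = lambda x: freq.get(x, 0) / n
          return [(x + 1) * f(x + 1) / f(x) for x in counts]

      # Hypothetical failure counts for ten components
      print(robbins_estimator([0, 0, 1, 0, 2, 1, 0, 3, 1, 0]))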

  17. State-Space Dynamic Model for Estimation of Radon Entry Rate, based on Kalman Filtering

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Jílek, K.

    2007-01-01

    Roč. 98, - (2007), s. 285-297 ISSN 0265-931X Grant - others:GA SÚJB JC_11/2006 Institutional research plan: CEZ:AV0Z10300504 Keywords : air ventilation rate * radon entry rate * state-space modeling * extended Kalman filter * maximum likelihood estimation * prediction error decomposition Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.963, year: 2007

  18. ESTIMATION OF THE WANDA GLACIER (SOUTH SHETLANDS) SEDIMENT EROSION RATE USING NUMERICAL MODELLING

    OpenAIRE

    Kátia Kellem Rosa; Rosemary Vieira; Jefferson Cardia Simões

    2013-01-01

    Glacial sediment yield results from glacial erosion and is influenced by several factors including glacial retreat rate, ice flow velocity and thermal regime. This paper estimates the contemporary subglacial erosion rate and sediment yield of Wanda Glacier (King George Island, South Shetlands). This work also examines basal sediment evacuation mechanisms by runoff and glacial erosion processes during the subglacial transport. This is a small temperate glacier that has been retreating for the l...

  19. A new balance formula to estimate new particle formation rate: reevaluating the effect of coagulation scavenging

    Directory of Open Access Journals (Sweden)

    R. Cai

    2017-10-01

    Full Text Available A new balance formula to estimate new particle formation rate is proposed. It is derived from the aerosol general dynamic equation in the discrete form and then converted into an approximately continuous form for analyzing data from new particle formation (NPF) field campaigns. The new formula corrects the underestimation of the coagulation scavenging effect that occurred in the previously used formulae. It also clarifies the criteria for determining the upper size bound in measured aerosol size distributions for estimating new particle formation rate. An NPF field campaign was carried out from 7 March to 7 April 2016 in urban Beijing, and a diethylene glycol scanning mobility particle spectrometer equipped with a miniature cylindrical differential mobility analyzer was used to measure aerosol size distributions down to ∼ 1 nm. Eleven typical NPF events were observed during this period. Measured aerosol size distributions from 1 nm to 10 µm were used to test the new formula and the formulae widely used in the literature. The previously used formulae that perform well in a relatively clean atmosphere in which nucleation intensity is not strong were found to underestimate the comparatively high new particle formation rate in urban Beijing because of their underestimation or neglect of the coagulation scavenging effect. The coagulation sink term is the governing component of the estimated formation rate in the observed NPF events in Beijing, and coagulation among newly formed particles contributes a large fraction to the coagulation sink term. Previously reported formation rates in Beijing and in other locations with intense NPF events might be underestimated because the coagulation scavenging effect was not fully considered; e.g., estimated formation rates of 1.5 nm particles in this campaign using the new formula are 1.3–4.3 times those estimated using the formula neglecting coagulation among particles in the nucleation mode.
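
    For orientation only (this is the widely used Kulmala-type balance estimate, not the corrected discrete formula derived in the article): the formation rate J is obtained from the observed rate of change of the number concentration plus the coagulation-scavenging and growth losses out of the size range. All numbers below are hypothetical.

      def formation_rate(dN_dt, N, coag_sink, growth_rate_nm_h, size_range_nm):
          # J = dN/dt + CoagS * N + (GR / d_dp) * N
          # N: number concentration in the size range (cm^-3), CoagS: coagulation
          # sink (s^-1), GR: growth rate (nm/h), d_dp: width of the size range (nm).
          growth_loss = (growth_rate_nm_h / 3600.0) / size_range_nm * N
          return dN_dt + coag_sink * N + growth_loss

      # Hypothetical values for an urban NPF event
      J = formation_rate(dN_dt=0.5, N=8.0e3, coag_sink=2.0e-3,
                         growth_rate_nm_h=3.0, size_range_nm=1.5)
      print(f"J ~ {J:.1f} cm^-3 s^-1")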

  20. Estimates of Radiation Dose Rates Near Large Diameter Sludge Containers in T Plant

    CERN Document Server

    Himes, D A

    2002-01-01

    Dose rates in T Plant canyon during the handling and storage of large diameter storage containers of K Basin sludge were estimated. A number of different geometries were considered from which most operational situations of interest can be constructed.

  1. An estimation of reactor thermal power uncertainty using UFM-based feedwater flow rate in nuclear power plants

    International Nuclear Information System (INIS)

    Byung Ryul Jung; Ho Cheol Jang; Byung Jin Lee; Se Jin Baik; Woo Hyun Jang

    2005-01-01

    Most pressurized water reactors (PWRs) use venturi meters (VMs) to measure the feedwater (FW) flow rate to the steam generator in the calorimetric measurement, which is used in the reactor thermal power (RTP) estimation. However, measurement drift has been experienced due to anomalies on the venturi meter (generally called venturi meter fouling). VM fouling tends to increase the measured pressure drop across the meter, which results in an indicated increase in feedwater flow rate. As a result, the reactor thermal power is overestimated and the actual reactor power must be reduced to remain within the regulatory limits. To overcome this fouling problem, the Ultrasonic Flow Meter (UFM) has recently been gaining attention in the measurement of the feedwater flow rate. This paper presents the applicability of a UFM-based feedwater flow rate in the estimation of reactor thermal power uncertainty. The FW and RTP uncertainties are compared in terms of sensitivities between the VM- and UFM-based feedwater flow rates. Data from typical Optimized Power Reactor 1000 (OPR1000) plants are used to estimate the uncertainty. (authors)
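
    Illustrative sketch (not the plant calculation): a first-order uncertainty propagation through a simplified calorimetric power balance shows why a lower feedwater-flow uncertainty translates almost directly into a lower RTP uncertainty. The flow and enthalpy uncertainties below are hypothetical placeholders for venturi-class and UFM-class instruments.

      import numpy as np

      def rtp_and_uncertainty(w_fw, h_steam, h_fw, sig_w_frac, sig_hs, sig_hf):
          # Simplified calorimetric power P = W_fw * (h_steam - h_fw); real
          # calculations also include blowdown, losses and pump heat.
          P = w_fw * (h_steam - h_fw)                     # MW for kg/s and MJ/kg
          sigma_P = np.sqrt(((h_steam - h_fw) * sig_w_frac * w_fw) ** 2
                            + (w_fw * sig_hs) ** 2 + (w_fw * sig_hf) ** 2)
          return P, sigma_P

      for label, sig_w in [("venturi-class", 0.007), ("UFM-class", 0.003)]:
          P, sP = rtp_and_uncertainty(w_fw=1600.0, h_steam=2.77, h_fw=0.95,
                                      sig_w_frac=sig_w, sig_hs=0.003, sig_hf=0.002)
          print(f"{label}: P = {P:.0f} MW, sigma = {sP:.1f} MW ({100 * sP / P:.2f}%)")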

  2. Do central banks respond to exchange rate movements? Some new evidence from structural estimation

    OpenAIRE

    Wei Dong

    2013-01-01

    This paper investigates the impact of exchange rate movements on the conduct of monetary policy in Australia, Canada, New Zealand and the United Kingdom. We develop and estimate a structural general equilibrium two-sector model with sticky prices and wages and limited exchange rate pass-through. Different specifications for the monetary policy rule and the real exchange rate process are examined. The results indicate that the Reserve Bank of Australia, the Bank of Canada and the Bank of Engla...

  3. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    Science.gov (United States)

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  4. A variational technique to estimate snowfall rate from coincident radar, snowflake, and fall-speed observations

    Science.gov (United States)

    Cooper, Steven J.; Wood, Norman B.; L'Ecuyer, Tristan S.

    2017-07-01

    Estimates of snowfall rate as derived from radar reflectivities alone are non-unique. Different combinations of snowflake microphysical properties and particle fall speeds can conspire to produce nearly identical snowfall rates for given radar reflectivity signatures. Such ambiguities can result in retrieval uncertainties on the order of 100-200 % for individual events. Here, we use observations of particle size distribution (PSD), fall speed, and snowflake habit from the Multi-Angle Snowflake Camera (MASC) to constrain estimates of snowfall derived from Ka-band ARM zenith radar (KAZR) measurements at the Atmospheric Radiation Measurement (ARM) North Slope Alaska (NSA) Climate Research Facility site at Barrow. MASC measurements of microphysical properties with uncertainties are introduced into a modified form of the optimal-estimation CloudSat snowfall algorithm (2C-SNOW-PROFILE) via the a priori guess and variance terms. Use of the MASC fall speed, MASC PSD, and CloudSat snow particle model as base assumptions resulted in retrieved total accumulations with a -18 % difference relative to nearby National Weather Service (NWS) observations over five snow events. The average error was 36 % for the individual events. Use of different but reasonable combinations of retrieval assumptions resulted in estimated snowfall accumulations with differences ranging from -64 to +122 % for the same storm events. Retrieved snowfall rates were particularly sensitive to assumed fall speed and habit, suggesting that in situ measurements can help to constrain key snowfall retrieval uncertainties. More accurate knowledge of these properties dependent upon location and meteorological conditions should help refine and improve ground- and space-based radar estimates of snowfall.

  5. Something from nothing: Estimating consumption rates using propensity scores, with application to emissions reduction policies.

    Directory of Open Access Journals (Sweden)

    Nicholas Bardsley

    Full Text Available Consumption surveys often record zero purchases of a good because of a short observation window. Measures of distribution are then precluded and only mean consumption rates can be inferred. We show that Propensity Score Matching can be applied to recover the distribution of consumption rates. We demonstrate the method using the UK National Travel Survey, in which c.40% of motorist households purchase no fuel. Estimated consumption rates are plausible judging by households' annual mileages, and highly skewed. We apply the same approach to estimate CO2 emissions and outcomes of a carbon cap or tax. Reliance on means apparently distorts analysis of such policies because of skewness of the underlying distributions. The regressiveness of a simple tax or cap is overstated, and redistributive features of a revenue-neutral policy are understated.
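
    Illustrative sketch of the general idea on synthetic data (not the National Travel Survey): estimate a propensity score for recording a purchase during the short observation window, then impute each zero-purchase household's consumption rate from its nearest positively observed neighbour on that score, recovering a full distribution rather than only a mean.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500
      X = rng.normal(size=(n, 3))            # hypothetical covariates (standardized)
      latent_rate = np.exp(0.5 + 0.6 * X[:, 0] + 0.3 * X[:, 1])   # true weekly rate
      purchased = rng.random(n) < 1 / (1 + np.exp(-(0.2 + X[:, 1])))
      y = np.where(purchased, latent_rate, 0.0)    # zeros from the short window

      # 1. Propensity of recording a purchase given covariates
      ps = LogisticRegression().fit(X, purchased).predict_proba(X)[:, 1]

      # 2. Impute zero-purchase households from the nearest purchaser on the score
      donors = np.where(purchased)[0]
      imputed = y.copy()
      for i in np.where(~purchased)[0]:
          imputed[i] = y[donors[np.argmin(np.abs(ps[donors] - ps[i]))]]

      print("mean over purchasers only:", round(y[purchased].mean(), 2))
      print("PSM-imputed mean / 90th pct:",
            round(imputed.mean(), 2), "/", round(np.quantile(imputed, 0.9), 2))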

  6. Rate of formation of neutron stars in the galaxy estimated from stellar statistics

    International Nuclear Information System (INIS)

    Endal, A.S.

    1979-01-01

    Stellar statistics and stellar evolution models can be used to estimate the rate of formation of neutron stars in the Galaxy. A recent analysis by Hills suggests that the mean interval between neutron-star births is greater than 27 years. This is incompatible with estimates based on pulsar statistics. However, a closer examination of the stellar data shows that Hills' result is incorrect. A mean interval between neutron-star births as short as 4 years is consistent with (though certainly not required by) stellar evolution theory.

  7. Estimating Reaction Rate Coefficients Within a Travel-Time Modeling Framework

    Energy Technology Data Exchange (ETDEWEB)

    Gong, R [Georgia Institute of Technology; Lu, C [Georgia Institute of Technology; Luo, Jian [Georgia Institute of Technology; Wu, Wei-min [Stanford University; Cheng, H. [Stanford University; Criddle, Craig [Stanford University; Kitanidis, Peter K. [Stanford University; Gu, Baohua [ORNL; Watson, David B [ORNL; Jardine, Philip M [ORNL; Brooks, Scott C [ORNL

    2011-03-01

    A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics.
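
    Illustrative sketch with synthetic breakthrough curves (not the Oak Ridge data): for a pulse injection and advection-dominant transport, the reactive BTC is the conservative BTC attenuated by exp(-k*tau) for a first-order reaction, so k can be fitted directly from the two curves without characterizing the aquifer.

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0.5, 30.0, 60)                      # days since injection
      # Synthetic conservative BTC (lognormal-shaped) and reactive BTC with k = 0.12/d
      f_cons = np.exp(-(np.log(t) - np.log(8.0)) ** 2 / (2 * 0.6 ** 2)) / (t * 0.6 * np.sqrt(2 * np.pi))
      noise = 1 + 0.03 * np.random.default_rng(1).normal(size=t.size)
      f_react = f_cons * np.exp(-0.12 * t) * noise

      # Reactive BTC = conservative BTC * exp(-k * travel time); fit k
      model = lambda tau, k: f_cons * np.exp(-k * tau)
      (k_hat,), _ = curve_fit(model, t, f_react, p0=[0.05])
      print(f"estimated first-order rate coefficient: {k_hat:.3f} per day")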

  8. Estimating the effect of treatment rate changes when treatment benefits are heterogeneous: antibiotics and otitis media.

    Science.gov (United States)

    Park, Tae-Ryong; Brooks, John M; Chrischilles, Elizabeth A; Bergus, George

    2008-01-01

    Contrast methods to assess the health effects of a treatment rate change when treatment benefits are heterogeneous across patients. Antibiotic prescribing for children with otitis media (OM) in Iowa Medicaid is the empirical example. Instrumental variable (IV) and linear probability model (LPM) are used to estimate the effect of antibiotic treatments on cure probabilities for children with OM in Iowa Medicaid. Local area physician supply per capita is the instrument in the IV models. Estimates are contrasted in terms of their ability to make inferences for patients whose treatment choices may be affected by a change in population treatment rates. The instrument was positively related to the probability of being prescribed an antibiotic. LPM estimates showed a positive effect of antibiotics on OM patient cure probability while IV estimates showed no relationship between antibiotics and patient cure probability. Linear probability model estimation yields the average effects of the treatment on patients that were treated. IV estimation yields the average effects for patients whose treatment choices were affected by the instrument. As antibiotic treatment effects are heterogeneous across OM patients, our estimates from these approaches are aligned with clinical evidence and theory. The average estimate for treated patients (higher severity) from the LPM model is greater than estimates for patients whose treatment choices are affected by the instrument (lower severity) from the IV models. Based on our IV estimates it appears that lowering antibiotic use in OM patients in Iowa Medicaid did not result in lost cures.

  9. Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora

    Directory of Open Access Journals (Sweden)

    Ryosuke Takahira

    2016-10-01

    Full Text Available One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubt regarding a correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. Thus, we conclude that the entropy rates of human languages are positive but approximately 20% smaller than without extrapolation. Although the entropy rate estimates depend on the script kind, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg’s hypothesis.
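
    Illustrative sketch with made-up numbers: the extrapolation step fits an ansatz to per-symbol compression rates h(n) at increasing text lengths and reads off the limit. The simple Hilberg-type power-law form is shown here; the article's point is that a stretched-exponential ansatz fits with smaller error.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical compression-based estimates (bits/character) at lengths n
      n = np.array([1e5, 3e5, 1e6, 3e6, 1e7, 3e7, 1e8])
      h_obs = np.array([2.31, 2.10, 1.93, 1.80, 1.70, 1.63, 1.58])

      def hilberg_ansatz(n, h_inf, A, beta):
          # h(n) = h_inf + A * n**(beta - 1); h_inf is the extrapolated entropy rate
          return h_inf + A * n ** (beta - 1.0)

      params, _ = curve_fit(hilberg_ansatz, n, h_obs, p0=[1.3, 5.0, 0.8], maxfev=10000)
      print(f"extrapolated entropy rate (n -> infinity): {params[0]:.2f} bits/character")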

  10. Estimation of exposure to 222Rn from the excretion rates of 210Pb

    International Nuclear Information System (INIS)

    Holtzman, R.B.; Rundo, J.

    1981-01-01

    A model is proposed with which estimates of exposure to 222 Rn and its daughter products may be made from urinary excretion rates of 210 Pb. It is assumed that 20% of all the 210 Pb inhaled reaches the blood and that 50% of the endogenous excretion is through the urine. The estimates from the model are compared with the results of measurements on a subject residing in a house with high levels of radon. Whole body radioactivity and excretion data were consistent with the model, but the estimates of exposure (WL) were higher than those measured with an Environmental Working Level Monitor.

  11. Gamma exposure rate estimation in irradiation facilities of nuclear research reactors

    International Nuclear Information System (INIS)

    Daoud, Adrian

    2009-01-01

    There are experimental situations in the nuclear field in which dose estimates for energy-dependent radiation fields are required. Nuclear research reactors provide such fields under normal operation and through the radioactive disintegration of fission products and the activation of structural materials. In such situations, it is necessary to know the gamma exposure rate to which the different materials under experimentation are subject. Delayed-reading detectors are usually used for this purpose. Direct evaluation with portable monitors is not always possible, because in some facilities entering with such devices is impracticable and unsafe. Moreover, these devices only provide information about the location where the measurement was performed, not about the temporal and spatial fluctuations of the radiation fields. In this work a direct evaluation method was developed for the in-situ gamma exposure rate in the irradiation facilities of the RA-1 reactor. The method is also applicable to any similar installation and can readily be complemented by delayed evaluations. On the other hand, it is well known that the residual effect of radiation modifies some properties of the organic materials used in reactors, such as density, colour, viscosity and oxidation level, among others. In such cases, a correct dosimetric evaluation makes it possible to estimate, in service, how long a material will retain its properties. This evaluation is useful, for instance, when applied to lubricating oils for the primary circuit pumps in nuclear power plants, thus minimizing waste generation. In this work the elements required to estimate the in-situ time- and space-integrated dose are also established for a gamma-irradiated sample in an irradiation channel of a nuclear facility with zero neutron flux. (author)

  12. Basic Technology and Clinical Applications of the Updated Model of Laser Speckle Flowgraphy to Ocular Diseases

    Directory of Open Access Journals (Sweden)

    Tetsuya Sugiyama

    2014-08-01

    Full Text Available Laser speckle flowgraphy (LSFG) allows for quantitative estimation of blood flow in the optic nerve head (ONH), choroid and retina, utilizing the laser speckle phenomenon. The basic technology and clinical applications of LSFG-NAVI, the updated model of LSFG, are summarized in this review. For developing a commercial version of LSFG, the special area sensor was replaced by an ordinary charge-coupled device camera. In LSFG-NAVI, the mean blur rate (MBR) has been introduced as a new parameter. Compared to the original LSFG model, LSFG-NAVI demonstrates a better spatial resolution of the blood flow map of the human ocular fundus. The observation area is 24 times larger than that of the original system. The analysis software can separately calculate MBRs in the blood vessels and tissues (capillaries) of an entire ONH, and the measurements have good reproducibility. The absolute values of MBR in the ONH have been shown to linearly correlate with the capillary blood flow. Analysis of the MBR pulse waveform provides parameters including skew, blowout score, blowout time, rising and falling rates, flow acceleration index, acceleration time index, and resistivity index for comparing different eyes. Recently, there has been an increasing number of reports on the clinical applications of LSFG-NAVI to ocular diseases, including glaucoma, retinal and choroidal diseases.

  13. Estimates of the Tempo-adjusted Total Fertility Rate in Western and Eastern Germany, 1955-2008

    Directory of Open Access Journals (Sweden)

    Marc Luy

    2011-09-01

    Full Text Available In this article we present estimates of the tempo-adjusted total fertility rate in Western and Eastern Germany from 1955 to 2008. Tempo adjustment of the total fertility rate (TFR) requires data on the annual number of births by parity and age of the mother. Since official statistics do not provide such data for Western Germany, or for Eastern Germany from 1990 onwards, we used alternative data sources which include these specific characteristics. The combined picture of conventional TFR and tempo-adjusted TFR* provides interesting information about the trends in period fertility in Western and Eastern Germany, above all with regard to the differences between the two regions and the enormous extent of tempo effects in Eastern Germany during the 1990s. Compared to corresponding data for populations from other countries, our estimates of the tempo-adjusted TFR* for Eastern and Western Germany show plausible trends. Nevertheless, it is important to note that the estimates of the tempo-adjusted total fertility rate presented in this paper should not be seen as being on the level of or equivalent to official statistics since they are based on different kinds of data with different degrees of quality.
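
    For reference, tempo adjustment of this kind is usually based on the Bongaarts-Feeney formula, which inflates each parity-specific TFR by the annual shift in that parity's mean age at childbearing; the sketch below uses hypothetical inputs for a single calendar year.

      def tempo_adjusted_tfr(tfr_by_parity, mac_shift_by_parity):
          # Bongaarts-Feeney adjustment: TFR* = sum_i TFR_i / (1 - r_i), where r_i is
          # the annual change in the mean age at childbearing for parity i.
          return sum(t / (1.0 - r) for t, r in zip(tfr_by_parity, mac_shift_by_parity))

      tfr_i = [0.65, 0.45, 0.20]     # hypothetical parity-specific TFRs (parities 1, 2, 3+)
      r_i = [0.10, 0.08, 0.05]       # hypothetical annual shifts in mean age (years/year)
      print(f"TFR = {sum(tfr_i):.2f}, tempo-adjusted TFR* = {tempo_adjusted_tfr(tfr_i, r_i):.2f}")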

  14. Flame stability and heat transfer analysis of methane-air mixtures in catalytic micro-combustors

    International Nuclear Information System (INIS)

    Chen, Junjie; Song, Wenya; Xu, Deguang

    2017-01-01

    Highlights: • The mechanisms of heat and mass transfer for loss of stability were elucidated. • Stability diagrams were constructed and design recommendations were made. • Flame characteristics were examined to determine extinction and blowout limits. • Heat loss greatly affects extinction whereas wall materials greatly affect blowout. • Radiation causes the flame to shift downstream. - Abstract: The flame stability and heat transfer characteristics of methane-air mixtures in catalytic micro-combustors were studied, using a two-dimensional computational fluid dynamics (CFD) model with detailed chemistry and transport. The effects of wall thermal conductivity, surface emissivity, fuel, flow velocity, and equivalence ratio were explored to provide guidelines for optimal design. Furthermore, the underlying mechanisms of heat and mass transfer for loss of flame stability were elucidated. Finally, stability diagrams were constructed and design recommendations were made. It was found that the heat loss strongly affects extinction, whereas the wall thermal conductivity greatly affects blowout. The presence of homogeneous chemistry extends blowout limits, especially for inlet velocities higher than 6 m/s. Increasing transverse heat transfer rate reduces stability, whereas increasing transverse mass transfer rate improves stability. Surface radiation behaves similarly to the heat conduction within the walls, but opposite trends are observed. High emissivity causes the flame to shift downstream. Methane exhibits much broader blowout limits. For a combustor with gap size of 0.8 mm, a residence time higher than 3 ms is required to prevent breakthrough, and inlet velocities lower than 0.8 m/s are the most desirable operation regime. Further increase of the wall thermal conductivity beyond 80 W/(m·K) could not yield an additional increase in stability.

  15. Contraceptive failure rates: new estimates from the 1995 National Survey of Family Growth.

    Science.gov (United States)

    Fu, H; Darroch, J E; Haas, T; Ranjit, N

    1999-01-01

    Unintended pregnancy remains a major public health concern in the United States. Information on pregnancy rates among contraceptive users is needed to guide medical professionals' recommendations and individuals' choices of contraceptive methods. Data were taken from the 1995 National Survey of Family Growth (NSFG) and the 1994-1995 Abortion Patient Survey (APS). Hazards models were used to estimate method-specific contraceptive failure rates during the first six months and during the first year of contraceptive use for all U.S. women. In addition, rates were corrected to take into account the underreporting of induced abortion in the NSFG. Corrected 12-month failure rates were also estimated for subgroups of women by age, union status, poverty level, race or ethnicity, and religion. When contraceptive methods are ranked by effectiveness over the first 12 months of use (corrected for abortion underreporting), the implant and injectables have the lowest failure rates (2-3%), followed by the pill (8%), the diaphragm and the cervical cap (12%), the male condom (14%), periodic abstinence (21%), withdrawal (24%) and spermicides (26%). In general, failure rates are highest among cohabiting and other unmarried women, among those with an annual family income below 200% of the federal poverty level, among black and Hispanic women, among adolescents and among women in their 20s. For example, adolescent women who are not married but are cohabiting experience a failure rate of about 31% in the first year of contraceptive use, while the 12-month failure rate among married women aged 30 and older is only 7%. Black women have a contraceptive failure rate of about 19%, and this rate does not vary by family income; in contrast, overall 12-month rates are lower among Hispanic women (15%) and white women (10%), but vary by income, with poorer women having substantially greater failure rates than more affluent women. Levels of contraceptive failure vary widely by method, as well as by

  16. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    Science.gov (United States)

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777

  17. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    Directory of Open Access Journals (Sweden)

    M. D. Peláez-Coca

    2013-01-01

    Full Text Available A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration.

  18. Environmental radioactivity in the UK: the airborne geophysical view of dose rate estimates

    International Nuclear Information System (INIS)

    Beamish, David

    2014-01-01

    This study considers UK airborne gamma-ray data obtained through a series of high spatial resolution, low altitude surveys over the past decade. The ground concentrations of the naturally occurring radionuclides Potassium, Thorium and Uranium are converted to air absorbed dose rates and these are used to assess terrestrial exposure levels from both natural and technologically enhanced sources. The high resolution airborne information is also assessed alongside existing knowledge from soil sampling and ground-based measurements of exposure levels. The surveys have sampled an extensive number of the UK lithological bedrock formations and the statistical information provides examples of low dose rate lithologies (the formations that characterise much of southern England) to the highest sustained values associated with granitic terrains. The maximum dose rates (e.g. >300 nGy h −1 ) encountered across the sampled granitic terrains are found to vary by a factor of 2. Excluding granitic terrains, the most spatially extensive dose rates (>50 nGy h −1 ) are found in association with the Mercia Mudstone Group (Triassic argillaceous mudstones) of eastern England. Geological associations between high dose rate and high radon values are also noted. Recent studies of the datasets have revealed the extent of source rock (i.e. bedrock) flux attenuation by soil moisture in conjunction with the density and porosity of the temperate latitude soils found in the UK. The presence or absence of soil cover (and associated presence or absence of attenuation) appears to account for a range of localised variations in the exposure levels encountered. The hypothesis is supported by a study of an extensive combined data set of dose rates obtained from soil sampling and by airborne geophysical survey. With no attenuation factors applied, except those intrinsic to the airborne estimates, a bias to high values of between 10 and 15 nGy h −1 is observed in the soil data. A wide range of

  19. Estimation of rate constants of PCB dechlorination reactions using an anaerobic dehalogenation model.

    Science.gov (United States)

    Karakas, Filiz; Imamoglu, Ipek

    2017-02-15

    This study aims to estimate anaerobic dechlorination rate constants (k m ) of reactions of individual PCB congeners using data from four laboratory microcosms set up using sediment from Baltimore Harbor. Pathway k m values are estimated by modifying a previously developed model into the Anaerobic Dehalogenation Model (ADM), which can be applied to any halogenated hydrophobic organic compound (HOC). Improvements such as handling multiple dechlorination activities (DAs) and co-elution of congeners, incorporating constraints, and using a new goodness-of-fit evaluation increased the accuracy, speed and flexibility of ADM. DAs published in the literature in terms of chlorine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The best fit explaining the congener pattern changes was found for pathways of Phylotype DEH10, which has the ability to remove doubly flanked chlorines in the meta and para positions and para-flanked chlorines in the meta position. The range of estimated k m values is 0.0001-0.133 d -1 , with a median comparable to the few available published, biologically confirmed rate constants. Compound-specific modelling studies such as those performed with ADM enable monitoring and prediction of concentration changes, as well as of toxicity, during bioremediation. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Estimation of the dynamics and rate of transmission of classical swine fever (hog cholera) in wild pigs.

    Science.gov (United States)

    Hone, J; Pech, R; Yip, P

    1992-04-01

    Infectious diseases establish in a population of wildlife hosts when the number of secondary infections is greater than or equal to one. To estimate whether establishment will occur requires extensive experience or a mathematical model of disease dynamics and estimates of the parameters of the disease model. The latter approach is explored here. Methods for estimating key model parameters, the transmission coefficient (beta) and the basic reproductive rate (R0), are described using classical swine fever (hog cholera) in wild pigs as an example. The tentative results indicate that an acute infection of classical swine fever will establish in a small population of wild pigs. Data required for estimation of disease transmission rates are reviewed and sources of bias and alternative methods discussed. A comprehensive evaluation of the biases and efficiencies of the methods is needed.

  1. Estimation of the rate and number of underreported deliberate self-poisoning attempts in western Iran in 2015

    Directory of Open Access Journals (Sweden)

    Mehdi Moradinazar

    2017-06-01

    Full Text Available OBJECTIVES Rates of attempted deliberate self-poisoning (DSP) are subject to undercounting, underreporting, and denial of the suicide attempt. In this study, we estimated the rate of underreported DSP, which is the most common method of attempted suicide in Iran. METHODS We estimated the rate and number of unaccounted individuals who attempted DSP in western Iran in 2015 using a truncated count model. In this method, the number of people who attempted DSP but were not referred to any health care centers, n0, was calculated through integrating hospital and forensic data. The crude and age-adjusted rates of attempted DSP were estimated directly using the average population size of the city of Kermanshah and the World Health Organization (WHO) world standard population with and without accounting for underreporting. The Monte Carlo method was used to determine the confidence level. RESULTS The recorded number of people who attempted DSP was estimated by different methods to be in the range of 46.6 to 53.2% of the actual number of individuals who attempted DSP. The rate of underreported cases was higher among women than men and decreased as age increased. The rate of underreported cases decreased as the potency and intensity of toxic factors increased. The highest underreporting rates of 69.9, 51.2, and 21.5% were observed when oil and detergents (International Classification of Diseases, 10th revision [ICD-10] code: X66), medications (ICD-10 code: X60-X64), and agricultural toxins (ICD-10 codes: X68, X69) were used for poisoning, respectively. Crude rates, with and without accounting for underreporting, were estimated by the mixture method as 167.5 per 100,000 persons and 331.7 per 100,000 persons, respectively, which decreased to 129.8 per 100,000 persons and 253.1 per 100,000 persons after adjusting for age on the basis of the WHO world standard population. CONCLUSIONS Nearly half of individuals who attempted DSP were not referred to a hospital for
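
    Illustrative sketch of the truncation idea with made-up counts (the study itself integrates hospital and forensic records): fit a zero-truncated Poisson to the number of recorded contacts per identified case, then use the implied zero-probability to estimate the unrecorded group n0.

      import numpy as np
      from scipy.optimize import brentq

      def estimate_unrecorded(counts):
          # Zero-truncated Poisson: MLE of lambda solves lambda / (1 - exp(-lambda)) = mean,
          # and the unrecorded count is n_obs * p0 / (1 - p0) with p0 = exp(-lambda).
          counts = np.asarray(counts, dtype=float)
          xbar = counts.mean()
          lam = brentq(lambda l: l / (1 - np.exp(-l)) - xbar, 1e-6, 50.0)
          p0 = np.exp(-lam)
          n0 = len(counts) * p0 / (1 - p0)
          return lam, n0, len(counts) + n0

      # Hypothetical per-person numbers of recorded DSP contacts
      lam, n0, total = estimate_unrecorded([1, 1, 2, 1, 1, 3, 1, 2, 1, 1])
      print(f"lambda ~ {lam:.2f}, unrecorded ~ {n0:.1f}, estimated total ~ {total:.1f}")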

  2. Estimating the Backup Reaction Wheel Orientation Using Reaction Wheel Spin Rates Flight Telemetry from a Spacecraft

    Science.gov (United States)

    Rizvi, Farheen

    2013-01-01

    A report describes a model that estimates the orientation of the backup reaction wheel using the reaction wheel spin rates telemetry from a spacecraft. Attitude control via the reaction wheel assembly (RWA) onboard a spacecraft uses three reaction wheels (one wheel per axis) and a backup to accommodate any wheel degradation throughout the course of the mission. The spacecraft dynamics prediction depends upon the correct knowledge of the reaction wheel orientations. Thus, it is vital to determine the actual orientation of the reaction wheels such that the correct spacecraft dynamics can be predicted. The conservation of angular momentum is used to estimate the orientation of the backup reaction wheel from the prime and backup reaction wheel spin rates data. The method is applied in estimating the orientation of the backup wheel onboard the Cassini spacecraft. The flight telemetry from the March 2011 prime and backup RWA swap activity on Cassini is used to obtain the best estimate for the backup reaction wheel orientation.
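
    A much-simplified sketch of the momentum-conservation idea (all numbers hypothetical; spacecraft body rates and external torques are neglected): the momentum shed by the prime wheels during the swap must be absorbed along the backup wheel's axis, so normalizing that vector gives the orientation estimate.

      import numpy as np

      I_wheel = 0.16                                   # kg m^2, hypothetical wheel inertia
      U_prime = np.eye(3)                              # known prime-wheel axes (body frame)
      d_omega_prime = np.array([-12.0, 35.0, -20.0])   # spin-rate changes during swap (rad/s)
      d_omega_backup = 44.0                            # backup wheel spin-rate change (rad/s)

      # Conservation: I_b * d(omega_b) * u_b = -sum_i I_i * d(omega_i) * u_i
      rhs = -(I_wheel * d_omega_prime) @ U_prime
      u_backup = rhs / (I_wheel * d_omega_backup)
      u_backup /= np.linalg.norm(u_backup)             # unit-vector estimate of the axis
      print("estimated backup wheel axis (body frame):", np.round(u_backup, 3))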

  3. Optimized support vector regression for drilling rate of penetration estimation

    Science.gov (United States)

    Bodaghi, Asadollah; Ansari, Hamid Reza; Gholami, Mahsa

    2015-12-01

    In the petroleum industry, drilling optimization involves the selection of operating conditions for achieving the desired depth with the minimum expenditure while requirements of personal safety, environment protection, adequate information of penetrated formations and productivity are fulfilled. Since drilling optimization is highly dependent on the rate of penetration (ROP), estimation of this parameter is of great importance during well planning. In this research, a novel approach called `optimized support vector regression' is employed for making a formulation between input variables and ROP. Algorithms used for optimizing the support vector regression are the genetic algorithm (GA) and the cuckoo search algorithm (CS). Optimization implementation improved the support vector regression performance by virtue of selecting proper values for its parameters. In order to evaluate the ability of optimization algorithms in enhancing SVR performance, their results were compared to the hybrid of pattern search and grid search (HPG) which is conventionally employed for optimizing SVR. The results demonstrated that the CS algorithm achieved further improvement on prediction accuracy of SVR compared to the GA and HPG as well. Moreover, the predictive model derived from back propagation neural network (BPNN), which is the traditional approach for estimating ROP, is selected for comparisons with CSSVR. The comparative results revealed the superiority of CSSVR. This study inferred that CSSVR is a viable option for precise estimation of ROP.
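
    Illustrative sketch on synthetic data: the core step is tuning the SVR hyperparameters with a global search; the paper uses genetic and cuckoo-search algorithms, for which a plain random search stands in below.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)
      # Hypothetical drilling features (e.g. WOB, RPM, flow rate, bit wear) and ROP
      X = rng.normal(size=(200, 4))
      y = 5.0 + 2.0 * X[:, 0] + 1.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(0, 0.5, 200)

      best_score, best_params = -np.inf, None
      for _ in range(60):                       # random search over (C, epsilon, gamma)
          params = dict(C=10 ** rng.uniform(-1, 3), epsilon=10 ** rng.uniform(-3, 0),
                        gamma=10 ** rng.uniform(-3, 1))
          score = cross_val_score(SVR(kernel="rbf", **params), X, y, cv=5,
                                  scoring="neg_mean_squared_error").mean()
          if score > best_score:
              best_score, best_params = score, params

      print("best hyperparameters:", {k: round(v, 4) for k, v in best_params.items()})
      print("cross-validated MSE:", round(-best_score, 3))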

  4. Estimation of human core temperature from sequential heart rate observations

    International Nuclear Information System (INIS)

    Buller, Mark J; Tharion, William J; Cheuvront, Samuel N; Montain, Scott J; Kenefick, Robert W; Castellani, John; Latzka, William A; Hoyt, Reed W; Roberts, Warren S; Richter, Mark; Jenkins, Odest Chadwicke

    2013-01-01

    Core temperature (CT) in combination with heart rate (HR) can be a good indicator of impending heat exhaustion for occupations involving exposure to heat, heavy workloads, and wearing protective clothing. However, continuously measuring CT in an ambulatory environment is difficult. To address this problem we developed a model to estimate the time course of CT using a series of HR measurements as a leading indicator using a Kalman filter. The model was trained using data from 17 volunteers engaged in a 24 h military field exercise (air temperatures 24–36 °C, and 42%–97% relative humidity and CTs ranging from 36.0–40.0 °C). Validation data from laboratory and field studies (N = 83) encompassing various combinations of temperature, hydration, clothing, and acclimation state were examined using the Bland–Altman limits of agreement (LoA) method. We found our model had an overall bias of −0.03 ± 0.32 °C and that 95% of all CT estimates fall within ±0.63 °C (>52 000 total observations). While the model for estimating CT is not a replacement for direct measurement of CT (literature comparisons of esophageal and rectal methods average LoAs of ±0.58 °C) our results suggest it is accurate enough to provide practical indication of thermal work strain for use in the work place. (paper)
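
    Minimal scalar Kalman-filter sketch of the approach (the observation mapping and noise terms below are hypothetical placeholders, not the published model coefficients): core temperature is propagated as a random walk and corrected from each heart-rate observation.

      import numpy as np

      def estimate_ct_from_hr(hr_series, ct0=37.0, hr0=80.0, slope=32.0, q=0.001, r=5.0):
          # State: core temperature CT (random walk, process variance q per step).
          # Observation: HR = hr0 + slope * (CT - 37) + noise (s.d. r).  Placeholder values.
          ct, p = ct0, 0.0
          out = []
          for hr in hr_series:
              p += q                                         # time update
              k = p * slope / (slope ** 2 * p + r ** 2)      # Kalman gain
              ct += k * (hr - (hr0 + slope * (ct - 37.0)))   # measurement update
              p *= 1.0 - k * slope
              out.append(ct)
          return np.array(out)

      # Hypothetical minute-by-minute heart rates: rest, ramping work in the heat, steady state
      hr = np.concatenate([np.full(20, 85.0), np.linspace(85, 155, 60), np.full(40, 155.0)])
      print(np.round(estimate_ct_from_hr(hr)[-5:], 2))       # tail of the estimated CT trace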

  5. Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2014-10-01

    Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors have two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to “purify” multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high frequency noise signals.
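
    Illustrative sketch of the filtering idea with hypothetical thresholds and data: accept a travel-time observation only if it falls within a tolerance band around recently accepted values, and widen the validity window after sampling intervals that returned no Bluetooth matches.

      import numpy as np

      def filter_travel_times(records, base_window=5, tolerance=0.5):
          # records: per-interval travel times in seconds, or None when no match occurred
          accepted = []
          gap = 0                                   # consecutive intervals without a record
          for t in records:
              if t is None:
                  gap += 1
                  continue
              recent = accepted[-(base_window + gap):]      # widen window after data gaps
              if not recent or abs(t - np.median(recent)) <= tolerance * np.median(recent):
                  accepted.append(t)
              gap = 0
          return accepted

      # Hypothetical bicycle travel times; None marks an interval with no Bluetooth match
      obs = [182, 190, None, None, 205, 640, 198, 210, None, 215, 900, 220]
      print(filter_travel_times(obs))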

  6. Rate estimation in partially observed Markov jump processes with measurement errors

    OpenAIRE

    Amrein, Michael; Kuensch, Hans R.

    2010-01-01

    We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...

  7. Estimating implied rates of discount in healthcare decision-making.

    Science.gov (United States)

    West, R R; McNabb, R; Thompson, A G H; Sheldon, T A; Grimley Evans, J

    2003-01-01

    To consider whether implied rates of discounting from the perspectives of individual and society differ, and whether implied rates of discounting in health differ from those implied in choices involving finance or "goods". The study comprised first a review of economics, health economics and social science literature and then an empirical estimate of implied rates of discounting in four fields: personal financial, personal health, public financial and public health, in representative samples of the public and of healthcare professionals. Samples were drawn in the former county and health authority district of South Glamorgan, Wales. The public sample was a representative random sample of men and women, aged over 18 years and drawn from electoral registers. The health professional sample was drawn at random with the cooperation of professional leads to include doctors, nurses, professions allied to medicine, public health, planners and administrators. The literature review revealed few empirical studies in representative samples of the population, few direct comparisons of public with private decision-making and few direct comparisons of health with financial discounting. Implied rates of discounting varied widely and studies suggested that discount rates are higher the smaller the value of the outcome and the shorter the period considered. The relationship between implied discount rates and personal attributes was mixed, possibly reflecting the limited nature of the samples. Although there were few direct comparisons, some studies found that individuals apply different rates of discount to social compared with private comparisons and health compared with financial. The present study also found a wide range of implied discount rates, with little systematic effect of age, gender, educational level or long-term illness. There was evidence, in both samples, that people chose a lower rate of discount in comparisons made on behalf of society than in comparisons made for

  8. Estimation of the production rate of bacteria in the rumen of buffalo calves

    International Nuclear Information System (INIS)

    Singh, U.B.; Verma, D.N.; Varma, A.; Ranjhan, S.K.

    1976-01-01

    The rate of bacterial cell growth in the rumen of buffalo calves has been measured applying the isotope dilution technique by injecting labelled mixed rumen bacteria into the rumen, and expressing the specific radioactivity either per mg dry bacterial cells or per μg DAPA in whole rumen samples. The animals were fed daily about 15-20 kg chopped green maize in 12 equal amounts at 2-h intervals. There was no significant difference in the rate of production of bacteria estimated by either method. (author)

  9. Estimated dose rates to members of the public from external exposure to patients with 131I thyroid treatment

    International Nuclear Information System (INIS)

    Dewji, S.; Bellamy, M.; Leggett, R.; Eckerman, K.; Hertel, N.; Sherbini, S.; Saba, M.

    2015-01-01

    Purpose: Estimated dose rates that may result from exposure to patients who had been administered iodine-131 ( 131 I) as part of medical therapy were calculated. These effective dose rate estimates were compared with simplified assumptions under United States Nuclear Regulatory Commission Regulatory Guide 8.39, which does not consider body tissue attenuation nor time-dependent redistribution and excretion of the administered 131 I. Methods: Dose rates were estimated for members of the public potentially exposed to external irradiation from patients recently treated with 131 I. Tissue attenuation and iodine biokinetics were considered in the patient in a larger comprehensive effort to improve external dose rate estimates. The external dose rate estimates are based on Monte Carlo simulations using the Phantom with Movable Arms and Legs (PIMAL), previously developed by Oak Ridge National Laboratory and the United States Nuclear Regulatory Commission. PIMAL was employed to model the relative positions of the 131 I patient and members of the public in three exposure scenarios: (1) traveling on a bus in a total of six seated or standing permutations, (2) two nursing home cases where a caregiver is seated at 30 cm from the patient’s bedside and a nursing home resident seated 250 cm away from the patient in an adjacent bed, and (3) two hotel cases where the patient and a guest are in adjacent rooms with beds on opposite sides of the common wall, with the patient and guest both in bed and either seated back-to-back or lying head to head. The biokinetic model predictions of the retention and distribution of 131 I in the patient assumed a single voiding of urinary bladder contents that occurred during the trip at 2, 4, or 8 h after 131 I administration for the public transportation cases, continuous first-order voiding for the nursing home cases, and regular periodic voiding at 4, 8, or 12 h after administration for the hotel room cases. Organ specific activities of 131 I

  10. Gambling disorder: estimated prevalence rates and risk factors in Macao.

    Science.gov (United States)

    Wu, Anise M S; Lai, Mark H C; Tong, Kwok-Kit

    2014-12-01

    An excessive, problematic gambling pattern has been regarded as a mental disorder in the Diagnostic and Statistical Manual for Mental Disorders (DSM) for more than 3 decades (American Psychiatric Association [APA], 1980). In this study, its latest prevalence in Macao (one of very few cities with legalized gambling in China and the Far East) was estimated with 2 major changes in the diagnostic criteria, suggested by the 5th edition of DSM (APA, 2013): (a) removing the "Illegal Act" criterion, and (b) lowering the threshold for diagnosis. A random, representative sample of 1,018 Macao residents was surveyed with a phone poll design in January 2013. After the 2 changes were adopted, the present study showed that the estimated prevalence rate of gambling disorder was 2.1% of the Macao adult population. Moreover, the present findings also provided empirical support to the application of these 2 recommended changes when assessing symptoms of gambling disorder among Chinese community adults. Personal risk factors of gambling disorder, namely being male, having low education, a preference for casino gambling, as well as high materialism, were identified.

  11. Analysis of the wind data and estimation of the resultant air concentration rates

    International Nuclear Information System (INIS)

    Hu, Shze Jer; Katagiri, Hiroshi; Kobayashi, Hideo

    1988-09-01

    Statistical analyses and comparisons of the meteorological wind data obtained by propeller and supersonic anemometers during 1987 at the Japan Atomic Energy Research Institute, Tokai, were performed. For wind speeds below 1 m/s, the propeller readings are generally 0.5 m/s lower than the supersonic readings. The resultant average air concentrations and ground-level γ exposure rates due to radioactive releases during normal operation of a nuclear plant are overestimated when calculated using the propeller wind data. As the supersonic anemometer can measure wind speeds accurately down to 0.01 m/s, it should be used to measure low wind speeds. The difference between the average air concentrations and γ exposure rates calculated using the two sets of wind data is due to the influence of low wind speeds during calm conditions. If the number of calm periods is large, the actual low wind speeds and wind directions should be used in the statistical analysis of atmospheric dispersion to give a more accurate and realistic estimate of the air concentrations and γ exposure rates due to normal operation of a nuclear plant. (author). 4 refs, 3 figs, 9 tabs

  12. Comparison of estimated and background subsidence rates in Texas-Louisiana geopressured geothermal areas

    Energy Technology Data Exchange (ETDEWEB)

    Lee, L.M.; Clayton, M.; Everingham, J.; Harding, R.C.; Massa, A.

    1982-06-01

    A comparison of background and potential geopressured geothermal development-related subsidence rates is given. Estimated potential geopressured-related rates at six prospects are presented. The effect of subsidence on the Texas-Louisiana Gulf Coast is examined including the various associated ground movements and the possible effects of these ground movements on surficial processes. The relationships between ecosystems and subsidence, including the capability of geologic and biologic systems to adapt to subsidence, are analyzed. The actual potential for environmental impact caused by potential geopressured-related subsidence at each of four prospects is addressed. (MHR)

  13. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
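
    Illustrative sketch with synthetic data: a Gaussian-process regression over per-window error-rate estimates (simulated here rather than extracted from syndrome data) tracks a drifting physical error rate and predicts it forward, which is the role the protocol assigns to the Gaussian-process algorithm.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(7)
      t = np.linspace(0, 10, 40)[:, None]                    # hours of operation
      true_rate = 8e-4 + 4e-4 * np.sin(0.6 * t.ravel())      # hypothetical drifting error rate
      observed = true_rate + rng.normal(scale=1e-4, size=t.shape[0])

      kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-2)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

      for tf in (10.5, 11.0, 11.5):                          # predict ahead without stopping QEC
          m, s = gp.predict(np.array([[tf]]), return_std=True)
          print(f"t = {tf:4.1f} h: predicted error rate {m[0]:.2e} +/- {s[0]:.1e}")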

  14. Estimation of adjusted rate differences using additive negative binomial regression.

    Science.gov (United States)

    Donoghoe, Mark W; Marschner, Ian C

    2016-08-15

    Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Estimation of human core temperature from sequential heart rate observations.

    Science.gov (United States)

    Buller, Mark J; Tharion, William J; Cheuvront, Samuel N; Montain, Scott J; Kenefick, Robert W; Castellani, John; Latzka, William A; Roberts, Warren S; Richter, Mark; Jenkins, Odest Chadwicke; Hoyt, Reed W

    2013-07-01

    Core temperature (CT) in combination with heart rate (HR) can be a good indicator of impending heat exhaustion for occupations involving exposure to heat, heavy workloads, and wearing protective clothing. However, continuously measuring CT in an ambulatory environment is difficult. To address this problem we developed a model to estimate the time course of CT using a series of HR measurements as a leading indicator using a Kalman filter. The model was trained using data from 17 volunteers engaged in a 24 h military field exercise (air temperatures 24-36 °C, and 42%-97% relative humidity and CTs ranging from 36.0-40.0 °C). Validation data from laboratory and field studies (N = 83) encompassing various combinations of temperature, hydration, clothing, and acclimation state were examined using the Bland-Altman limits of agreement (LoA) method. We found our model had an overall bias of -0.03 ± 0.32 °C and that 95% of all CT estimates fall within ±0.63 °C (>52 000 total observations). While the model for estimating CT is not a replacement for direct measurement of CT (literature comparisons of esophageal and rectal methods average LoAs of ±0.58 °C) our results suggest it is accurate enough to provide practical indication of thermal work strain for use in the work place.

  16. THE ESTIMATION OF STAR FORMATION RATES AND STELLAR POPULATION AGES OF HIGH-REDSHIFT GALAXIES FROM BROADBAND PHOTOMETRY

    International Nuclear Information System (INIS)

    Lee, Seong-Kook; Ferguson, Henry C.; Somerville, Rachel S.; Wiklind, Tommy; Giavalisco, Mauro

    2010-01-01

    We explore methods to improve the estimates of star formation rates and mean stellar population ages from broadband photometry of high-redshift star-forming galaxies. We use synthetic spectral templates with a variety of simple parametric star formation histories to fit broadband spectral energy distributions. These parametric models are used to infer ages, star formation rates, and stellar masses for a mock data set drawn from a hierarchical semi-analytic model of galaxy evolution. Traditional parametric models generally assume an exponentially declining rate of star formation after an initial instantaneous rise. Our results show that star formation histories with a much more gradual rise in the star formation rate are likely to be better templates, and are likely to give better overall estimates of the age distribution and star formation rate distribution of Lyman break galaxies (LBGs). For B- and V-dropouts, we find the best simple parametric model to be one where the star formation rate increases linearly with time. The exponentially declining model overpredicts the age by 100% and 120% for B- and V-dropouts, on average, while for a linearly increasing model, the age is overpredicted by 9% and 16%, respectively. Similarly, the exponential model underpredicts star formation rates by 56% and 60%, while the linearly increasing model underpredicts by 15% and 22%, respectively. For U-dropouts, the models where the star formation rate has a peak (near z ∼ 3) provide the best match: age overprediction is reduced from 110% to 26%, and star formation rate underprediction is reduced from 58% to 22%. We classify different types of star formation histories in the semi-analytic models and show how the biases behave for the different classes. We also provide two-band calibration formulae for stellar mass and star formation rate estimations.

  17. Estimating cumulative soil accumulation rates with in situ-produced cosmogenic nuclide depth profiles

    International Nuclear Information System (INIS)

    Phillips, William M.

    2000-01-01

    A numerical model relating spatially averaged rates of cumulative soil accumulation and hillslope erosion to cosmogenic nuclide distribution in depth profiles is presented. Model predictions are compared with cosmogenic ²¹Ne and AMS radiocarbon data from soils of the Pajarito Plateau, New Mexico. Rates of soil accumulation and hillslope erosion estimated by cosmogenic ²¹Ne are significantly lower than rates indicated by radiocarbon and regional soil-geomorphic studies. The low apparent cosmogenic erosion rates are artifacts of high nuclide inheritance in cumulative soil parent material produced from erosion of old soils on hillslopes. In addition, ²¹Ne profiles produced under conditions of rapid accumulation (>0.1 cm/a) are difficult to distinguish from bioturbated soil profiles. Modeling indicates that while ¹⁰Be profiles will share this problem, both bioturbation and anomalous inheritance can be identified with measurement of in situ-produced ¹⁴C.

  18. Advances in the GRADE approach to rate the certainty in estimates from a network meta-analysis.

    Science.gov (United States)

    Brignardello-Petersen, Romina; Bonner, Ashley; Alexander, Paul E; Siemieniuk, Reed A; Furukawa, Toshi A; Rochwerg, Bram; Hazlewood, Glen S; Alhazzani, Waleed; Mustafa, Reem A; Murad, M Hassan; Puhan, Milo A; Schünemann, Holger J; Guyatt, Gordon H

    2018-01-01

    This article describes conceptual advances of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) working group guidance to evaluate the certainty of evidence (confidence in evidence, quality of evidence) from network meta-analysis (NMA). Application of the original GRADE guidance, published in 2014, in a number of NMAs has resulted in advances that strengthen its conceptual basis and make the process more efficient. This guidance will be useful for systematic review authors who aim to assess the certainty of all pairwise comparisons from an NMA and who are familiar with the basic concepts of NMA and the traditional GRADE approach for pairwise meta-analysis. Two principles of the original GRADE NMA guidance are that we need to rate the certainty of the evidence for each pairwise comparison within a network separately and that in doing so we need to consider both the direct and indirect evidence. We present, discuss, and illustrate four conceptual advances: (1) consideration of imprecision is not necessary when rating the direct and indirect estimates to inform the rating of NMA estimates, (2) there is no need to rate the indirect evidence when the certainty of the direct evidence is high and the contribution of the direct evidence to the network estimate is at least as great as that of the indirect evidence, (3) we should not trust a statistical test of global incoherence of the network to assess incoherence at the pairwise comparison level, and (4) in the presence of incoherence between direct and indirect evidence, the certainty of the evidence of each estimate can help decide which estimate to believe. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. NAIRU estimates in a transitional economy with an extremely high unemployment rate: The case of the Republic of Macedonia

    Directory of Open Access Journals (Sweden)

    Trpeski Predrag

    2015-01-01

    Full Text Available The main goal of the paper is to estimate the NAIRU for the Macedonian economy and to discuss the applicability of this indicator. The paper provides time-varying estimates for the period 1998-2012, which are obtained using the Ball and Mankiw (2002) approach, supplemented with the iterative procedure proposed by Ball (2009). The results reveal that the Macedonian NAIRU has a hump-shaped path. The estimation is based on both the LFS unemployment rate and the LFS unemployment rate corrected for employment in the grey economy. The dynamics of the estimated NAIRU underline its ability to capture cyclical imbalances in a national economy.

  20. Development of the town data base: Estimates of exposure rates and times of fallout arrival near the Nevada Test Site

    International Nuclear Information System (INIS)

    Thompson, C.B.; McArthur, R.D.; Hutchinson, S.W.

    1994-09-01

    As part of the U.S. Department of Energy's Off-Site Radiation Exposure Review Project, the time of fallout arrival and the H+12 exposure rate were estimated for populated locations in Arizona, California, Nevada, and Utah that were affected by fallout from one or more nuclear tests at the Nevada Test Site. Estimates of exposure rate were derived from measured values recorded before and after each test by fallout monitors in the field. The estimate for a given location was obtained by retrieving from a data base all measurements made in the vicinity, decay-correcting them to H+12, and calculating an average. Estimates were also derived from maps produced after most events that show isopleths of exposure rate and time of fallout arrival. Both sets of isopleths on these maps were digitized, and kriging was used to interpolate values at the nodes of a 10-km grid covering the pattern. The values at any location within the grid were then estimated from the values at the surrounding grid nodes. Estimates of dispersion (standard deviation) were also calculated. The Town Data Base contains the estimates for all combinations of location and nuclear event for which the estimated mean H+12 exposure rate was greater than three times background. A listing of the data base is included as an appendix. The information was used by other project task groups to estimate the radiation dose that off-site populations and individuals may have received as a result of exposure to fallout from Nevada nuclear tests
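
    As an illustration of decay-correcting field readings to a common H+12 reference, the sketch below applies the textbook t^-1.2 fallout-decay approximation to made-up monitor readings; the project's actual correction procedure may have differed.

```python
# Decay-correcting field exposure-rate readings to H+12 and averaging them.
# Uses the common t^-1.2 fallout-decay approximation; the project's actual
# correction method may have differed, so treat this as illustrative only.
def to_h12(reading_mr_per_h, hours_after_detonation):
    """Scale a reading taken at time t (hours) to its H+12 equivalent."""
    return reading_mr_per_h * (hours_after_detonation / 12.0) ** 1.2

# hypothetical monitor readings: (mR/h, hours after detonation)
readings = [(4.2, 20.0), (2.5, 30.0), (1.1, 48.0)]
h12_values = [to_h12(r, t) for r, t in readings]
print(sum(h12_values) / len(h12_values))   # average H+12 exposure-rate estimate
```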

  1. Estimating the success rate of ovulation and early litter loss rate in the Japanese black bear (Ursus thibetanus japonicus) by examining the ovaries and uteri.

    Science.gov (United States)

    Yamanaka, Atsushi; Yamauchi, Kiyoshi; Tsujimoto, Tsunenori; Mizoguchi, Toshio; Oi, Toru; Sawada, Seigo; Shimozuru, Michito; Tsubota, Toshio

    2011-02-01

    In order to develop a method for estimating the success/failure rates of reproductive processes, especially those of ovulation and neonate nurturing, in the Japanese black bear (Ursus thibetanus japonicus), we examined offspring status, corpora lutea (CLs), placental scars (PSs) and corpora albicantia (CAs) in 159 females (0-23 years old) killed as nuisances on Honshu Island of Japan during 2001-2009. PSs were found to remain in the uterus at least until November of the year of parturition. CA detectability began to decline after September of the year of parturition. Monthly and age-specific proportions of CL-present females revealed that the post-mating season starts in August, and that the age of first ovulation is 4 years. These results indicate that the success rate of ovulation (SRO: the probability that solitary/non-lactating mature females actually succeed in ovulation) can be estimated by calculating the proportion of CL-present females among ≥4-year-old females without PSs captured from August to November; the early litter loss rate (ELLR: the probability that parenting females lose all of their cubs [0-year-old offspring] before mating season) can be estimated by calculating the proportion of CL-present females among those with PSs and CAs captured in August or later. The estimated values of SRO and ELLR were 0.93 (62/67) and 0.27 (6/22), respectively.

  2. An estimation of finger-tapping rates and load capacities and the effects of various factors.

    Science.gov (United States)

    Ekşioğlu, Mahmut; İşeri, Ali

    2015-06-01

    The aim of this study was to estimate the finger-tapping rates and finger load capacities of eight fingers (excluding thumbs) for a healthy adult population and investigate the effects of various factors on tapping rate. Finger-tapping rate, the total number of finger taps per unit of time, can be used as a design parameter of various products and also as a psychomotor test for evaluating patients with neurologic problems. A 1-min tapping task was performed by 148 participants with maximum volitional tempo for each of eight fingers. For each of the tapping tasks, the participant with the corresponding finger tapped the associated key in the standard position on the home row of a conventional keyboard for touch typing. The index and middle fingers were the fastest fingers for both hands, and little fingers the slowest. All dominant-hand fingers, except the little finger, had higher tapping rates than the fastest finger of the nondominant hand. Tapping rate decreased with age, and smokers tapped faster than nonsmokers. Tapping duration and exercise also had a significant effect on tapping rate. Normative data of tapping rates and load capacities of eight fingers were estimated for the adult population. In designs of psychomotor tests that require the use of tapping rate or finger load capacity data, the effects of finger, age, smoking, and tapping duration need to be taken into account. The findings can be used for ergonomic designs requiring finger-tapping capacity and also as a reference in psychomotor tests. © 2015, Human Factors and Ergonomics Society.

  3. A novel multitemporal insar model for joint estimation of deformation rates and orbital errors

    KAUST Repository

    Zhang, Lei; Ding, Xiaoli; Lu, Zhong; Jung, Hyungsup; Hu, Jun; Feng, Guangcai

    2014-01-01

    be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long

  4. Modeling in support of Corridor Resources Old Harry exploratory drilling environmental assessment

    International Nuclear Information System (INIS)

    2011-10-01

    During offshore petroleum activities, oil spills can occur and lead to significant environmental impacts. Corridor Resources Inc. is in the process of obtaining a license for exploratory drilling activities in the Old Harry area, and the aim of this study is to determine the behavior and trajectory of any oil spill from these activities. Two types of spill were studied: sub-sea and surface spills. Modeling was carried out using Cohasset oil from the Scotian Basin, the properties of which are thought to be close to those of Old Harry oil, and the blowout rates were determined using reservoir information. Results showed that subsea blowouts would result in wide and thin surface slicks near the source, while surface blowouts would produce narrow and thick slicks; surface slicks would persist over a 5 km range from the source before dispersing.

  5. Another look at Alborz Nr. 5 in central Iran

    Energy Technology Data Exchange (ETDEWEB)

    Gretener, P.E.

    1982-04-01

    In 1956 one of the biggest blow-outs in the history of the oil industry occurred in central Iran on the Alborz structure. The well blew oil and large quantities of gas at an average rate of 60,000 bpd for 82 days for a total production in excess of 5 million bbl. The drill only nicked the reservoir, with a penetration of 2 in. Good data have been published on this event which makes this blow-out almost unique. The observations indicate that the reservoir is very highly overpressured, that the reservoir rock must have enormous permeability, and the length of the blow-out shows that rapid pressure depletion is not a problem. This indicates that Alborz is a commercial field in a high pressure environment, contrary to the widely held opinion that high pressure reservoirs are noncommercial. 10 references.

  6. Use of virtual reality to estimate radiation dose rates in nuclear plants

    International Nuclear Information System (INIS)

    Augusto, Silas C.; Mol, Antonio C.A.; Jorge, Carlos A.F.; Couto, Pedro M.

    2007-01-01

    Operators in nuclear plants receive radiation doses during several different operation procedures. A training program capable of simulating these operation scenarios will be useful in several ways, helping the planning of operational procedures so as to reduce the doses received by workers and to minimize operation times. It can provide safe virtual operation training, visualization of radiation dose rates, and estimation of doses received by workers. Thus, a virtual reality application based on a free game engine has been adapted to achieve the goals of this project. Simulation results for the Argonauta research reactor of Instituto de Engenharia Nuclear are shown in this paper. A database of dose rate measurements, previously performed by the radiological protection service, has been used to display the dose rate distribution in the region of interest. The application enables the user to walk in the virtual scenario, displaying at all times the dose accumulated by the avatar. (author)
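
    A minimal sketch of how such an application could accumulate the avatar's dose while it moves through a measured dose-rate map; the positions, dose rates, and time step are invented.

```python
# Accumulating the avatar's dose from a measured dose-rate map while it moves.
# Positions, dose rates, and the lookup scheme are all invented for illustration.
positions = [(0, 0), (1, 0), (2, 0), (2, 1)]        # grid cells visited
dose_rate_map = {(0, 0): 0.5, (1, 0): 1.2,          # µSv/h per cell (hypothetical)
                 (2, 0): 3.0, (2, 1): 0.8}
dt_hours = 30.0 / 3600.0                            # 30 s spent in each cell

accumulated = 0.0
for cell in positions:
    accumulated += dose_rate_map.get(cell, 0.0) * dt_hours
print(f"accumulated dose: {accumulated:.4f} µSv")
```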

  7. Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation

    DEFF Research Database (Denmark)

    Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed

    2017-01-01

    Current clinical ultrasound (US) systems are limited to showing blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented. The method is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom (∅ = 8 mm) connected to a flow pump capable of generating time-varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow-rig compared...

  8. Ceilometer-based Rainfall Rate estimates in the framework of VORTEX-SE campaign: A discussion

    Science.gov (United States)

    Barragan, Ruben; Rocadenbosch, Francesc; Waldinger, Joseph; Frasier, Stephen; Turner, Dave; Dawson, Daniel; Tanamachi, Robin

    2017-04-01

    During Spring 2016 the first season of the Verification of the Origins of Rotation in Tornadoes EXperiment-Southeast (VORTEX-SE) was conducted in the Huntsville, AL environs. Foci of VORTEX-SE include the characterization of the tornadic environments specific to the Southeast US as well as societal response to forecasts and warnings. Among several experiments, a research team from Purdue University and from the University of Massachusetts Amherst deployed a mobile S-band Frequency-Modulated Continuous-Wave (FMCW) radar and a co-located Vaisala CL31 ceilometer for a period of eight weeks near Belle Mina, AL. Portable disdrometers (DSDs) were also deployed in the same area by Purdue University, occasionally co-located with the radar and lidar. The NOAA National Severe Storms Laboratory also deployed the Collaborative Lower Atmosphere Mobile Profiling System (CLAMPS) consisting of a Doppler lidar, a microwave radiometer, and an infrared spectrometer. The purpose of these profiling instruments was to characterize the atmospheric boundary layer evolution over the course of the experiment. In this paper we focus on the lidar-based retrieval of rainfall rate (RR) and its limitations using observations from intensive observation periods during the experiment: 31 March and 29 April 2016. Departing from Lewandowski et al., 2009, the RR was estimated by the Vaisala CL31 ceilometer applying the slope method (Kunz and Leeuw, 1993) to invert the extinction caused by the rain. Extinction retrievals are fitted against RR estimates from the disdrometer in order to derive a correlation model that allows us to estimate the RR from the ceilometer in similar situations without a disdrometer permanently deployed. The problem of extinction retrieval is also studied from the perspective of Klett-Fernald-Sasano's (KFS) lidar inversion algorithm (Klett, 1981; 1985), which requires the assumption of an aerosol extinction-to-backscatter ratio (the so-called lidar ratio) and calibration in a

  9. Aspartic acid racemization rate in narwhal (Monodon monoceros) eye lens nuclei estimated by counting of growth layers in tusks

    DEFF Research Database (Denmark)

    Garde, Eva; Heide-Jørgensen, Mads Peter; Ditlevsen, Susanne

    2012-01-01

    Ages of marine mammals have traditionally been estimated by counting dentinal growth layers in teeth. However, this method is difficult to use on narwhals (Monodon monoceros) because of their special tooth structures. Alternative methods are therefore needed. The aspartic acid racemization (AAR) technique has been used in age estimation studies of cetaceans, including narwhals. The purpose of this study was to estimate a species-specific racemization rate for narwhals by regressing aspartic acid D/L ratios in eye lens nuclei against growth layer groups in tusks (n=9). Two racemization rates were
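
    The regression of D/L ratios against tusk-derived ages can be sketched as follows, assuming the common linearisation in which the fitted slope equals twice the racemization rate; the data points are invented and the paper's fitting details differ.

```python
# Regressing eye-lens D/L ratios on tusk growth-layer ages to get a racemization
# rate, following the common linearisation D/L = intercept + 2*k*age (so the
# slope equals 2*k). Data points are invented; the paper's fitting details differ.
import numpy as np

age_from_tusk = np.array([4, 8, 12, 18, 25, 32, 40, 48, 55], dtype=float)  # years
dl_ratio      = np.array([0.020, 0.026, 0.031, 0.040, 0.049, 0.058,
                          0.070, 0.081, 0.090])

slope, intercept = np.polyfit(age_from_tusk, dl_ratio, 1)
k_aar = slope / 2.0
print(f"racemization rate k = {k_aar:.5f} per year, D/L at age 0 = {intercept:.4f}")
```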

  10. A practical way to estimate retail tobacco sales violation rates more accurately.

    Science.gov (United States)

    Levinson, Arnold H; Patnaik, Jennifer L

    2013-11-01

    U.S. states annually estimate retailer propensity to sell adolescents cigarettes, which is a violation of law, by staging a single purchase attempt among a random sample of tobacco businesses. The accuracy of single-visit estimates is unknown. We examined this question using a novel test-retest protocol. Supervised minors attempted to purchase cigarettes at all retail tobacco businesses located in 3 Colorado counties. The attempts observed federal standards: Minors were aged 15-16 years, were nonsmokers, were free of visible tattoos and piercings, and were allowed to enter stores alone or in pairs to purchase a small item while asking for cigarettes and to show or not show genuine identification (ID, e.g., driver's license). Unlike federal standards, stores received a second purchase attempt within a few days unless minors were firmly told not to return. Separate violation rates were calculated for first visits, second visits, and either visit. Eleven minors attempted to purchase cigarettes 1,079 times from 671 retail businesses. One sixth of first visits (16.8%) resulted in a violation; the rate was similar for second visits (15.7%). Considering either visit, 25.3% of businesses failed the test. Factors predictive of violation were whether clerks asked for ID, whether the clerks closely examined IDs, and whether minors included snacks or soft drinks in cigarette purchase attempts. A test-retest protocol for estimating underage cigarette sales detected half again as many businesses in violation as the federally approved one-test protocol. Federal policy makers should consider using the test-retest protocol to increase accuracy and awareness of widespread adolescent access to cigarettes through retail businesses.
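
    Once each store has outcomes for the two visits, the three violation rates reported in the abstract are straightforward to compute, as in this sketch with invented records.

```python
# Per-store sale outcomes from two purchase attempts (True = illegal sale).
# The records are invented; they just show how the three rates are computed.
stores = [
    {"visit1": False, "visit2": False},
    {"visit1": True,  "visit2": False},
    {"visit1": False, "visit2": True},
    {"visit1": False, "visit2": None},   # no second visit was possible
    {"visit1": True,  "visit2": True},
]

first  = [s["visit1"] for s in stores]
second = [s["visit2"] for s in stores if s["visit2"] is not None]
either = [s["visit1"] or bool(s["visit2"]) for s in stores]

print("first-visit rate :", sum(first) / len(first))
print("second-visit rate:", sum(second) / len(second))
print("either-visit rate:", sum(either) / len(either))
```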

  11. A comparison of estimated glomerular filtration rates using Cockcroft-Gault and the Chronic Kidney Disease Epidemiology Collaboration estimating equations in HIV infection

    DEFF Research Database (Denmark)

    Mocroft, A; Nielsen, Lene Ryom; Reiss, P

    2014-01-01

    The aim of this study was to determine whether the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI)- or Cockcroft-Gault (CG)-based estimated glomerular filtration rates (eGFRs) performs better in the cohort setting for predicting moderate/advanced chronic kidney disease (CKD) or end...

  12. Bioavailability of contaminants estimated from uptake rates into soil invertebrates

    International Nuclear Information System (INIS)

    Straalen, N.M. van; Donker, M.H.; Vijver, M.G.; Gestel, C.A.M. van

    2005-01-01

    It is often argued that the concentration of a pollutant inside an organism is a good indicator of its bioavailability; however, we show that the rate of uptake, not the concentration itself, is the superior predictor. In a study on zinc accumulation and toxicity to isopods (Porcellio scaber), the dietary EC50 for the effect on body growth was rather constant and reproducible, while the internal EC50 varied depending on the accumulation history of the animals. From the data, a critical value for zinc accumulation in P. scaber was estimated as 53 μg/g/wk. We review toxicokinetic models applicable to time-series measurements of concentrations in invertebrates. The initial slope of the uptake curve is proposed as an indicator of bioavailability. To apply the dynamic concept of bioavailability in risk assessment, a set of representative organisms should be chosen and standardized protocols developed for exposure assays by which suspect soils can be evaluated. - Sublethal toxicity of zinc to isopods suggests that bioavailability of soil contaminants is best measured by uptake rates, not by body burdens
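
    The proposed use of the initial uptake slope as a bioavailability indicator can be sketched with a one-compartment toxicokinetic fit, assuming first-order uptake and elimination; the exposure concentration and time series are hypothetical.

```python
# Fitting a one-compartment uptake curve C(t) = (k1/k2)*Cw*(1 - exp(-k2*t)) to a
# time series of body concentrations; the initial slope k1*Cw is the proposed
# bioavailability indicator. Exposure concentration and data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

Cw = 50.0                                         # exposure concentration (µg/g food)
t = np.array([0, 1, 2, 3, 4, 6, 8], float)        # weeks
C = np.array([0, 14, 24, 31, 36, 42, 45], float)  # body concentration (µg/g)

def uptake(t, k1, k2):
    return (k1 / k2) * Cw * (1.0 - np.exp(-k2 * t))

(k1, k2), _ = curve_fit(uptake, t, C, p0=[0.3, 0.3])
print(f"k1 = {k1:.3f} /wk, k2 = {k2:.3f} /wk, initial uptake slope = {k1 * Cw:.1f} µg/g/wk")
```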

  13. Dynamic temperature estimation and real time emergency rating of transmission cables

    DEFF Research Database (Denmark)

    Olsen, R. S.; Holboll, J.; Gudmundsdottir, Unnur Stella

    2012-01-01

    enables real time emergency ratings, such that the transmission system operator (TSO) can make well-founded decisions during faults. This includes the capability of producing high-resolution loadability vs. time schedules within a few minutes, such that the TSO can safely control the system. It is found that the calculated temperature estimations are fairly accurate — within 1.5 °C of the finite element method (FEM) simulation to which they are compared — both when looking at the temperature profile (time dependent) and the temperature distribution (geometry dependent). The methodology moreover

  14. Estimating the effects of 17α-ethinylestradiol on stochastic population growth rate of fathead minnows: a population synthesis of empirically derived vital rates

    Science.gov (United States)

    Schwindt, Adam R.; Winkelman, Dana L.

    2016-01-01

    Urban freshwater streams in arid climates are wastewater effluent dominated ecosystems particularly impacted by bioactive chemicals including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L−1 and produced stochastic population growth rates (λS) below 1 at the lowest concentration, indicating potential for population decline. Declines in λS compared to controls were evident in treatments that were lethal to adult males despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λS was most sensitive to the survival of juveniles and female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
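
    A hedged sketch of a stochastic stage-structured projection: vital rates are redrawn each year, the matrix is applied, and the stochastic growth rate is taken as the exponential of the mean log annual growth. The stage structure and rate distributions are invented, not the study's fathead minnow estimates.

```python
# Estimating a stochastic population growth rate (lambda_S) by projecting a small
# stage-structured matrix whose vital rates are redrawn each year. The stages,
# mean rates, and variability below are invented, not the fathead minnow values.
import numpy as np

rng = np.random.default_rng(1)
years, n0 = 5000, np.array([100.0, 20.0, 10.0])          # juveniles, subadults, adults

log_growth = []
n = n0.copy()
for _ in range(years):
    fec  = rng.lognormal(mean=np.log(8.0), sigma=0.3)    # recruits produced per adult per year
    s_j  = rng.beta(5, 45)                               # juvenile survival
    s_sa = rng.beta(20, 20)                              # subadult survival
    s_a  = rng.beta(30, 20)                              # adult survival
    A = np.array([[0.0,  0.0,  fec],
                  [s_j,  0.0,  0.0],
                  [0.0,  s_sa, s_a]])
    n_next = A @ n
    log_growth.append(np.log(n_next.sum() / n.sum()))
    n = n_next / n_next.sum() * 100.0                    # rescale to avoid overflow
print("lambda_S ~", np.exp(np.mean(log_growth)))
```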

  15. Estimated dose rates to members of the public from external exposure to patients with {sup 131}I thyroid treatment

    Energy Technology Data Exchange (ETDEWEB)

    Dewji, S., E-mail: dewjisa@ornl.gov; Bellamy, M.; Leggett, R.; Eckerman, K. [Oak Ridge National Laboratory, 1 Bethel Valley Road, MS-6335, Oak Ridge, Tennessee 37831 (United States); Hertel, N. [Oak Ridge National Laboratory, 1 Bethel Valley Road, MS-6335, Oak Ridge, Tennessee 37831 and Georgia Institute of Technology, 770 State Street, Atlanta, Georgia 30332-0745 (United States); Sherbini, S.; Saba, M. [United States Nuclear Regulatory Commission, Washington, DC 20555-0001 (United States)

    2015-04-15

    Purpose: Estimated dose rates that may result from exposure to patients who had been administered iodine-131 ({sup 131}I) as part of medical therapy were calculated. These effective dose rate estimates were compared with simplified assumptions under United States Nuclear Regulatory Commission Regulatory Guide 8.39, which does not consider body tissue attenuation nor time-dependent redistribution and excretion of the administered {sup 131}I. Methods: Dose rates were estimated for members of the public potentially exposed to external irradiation from patients recently treated with {sup 131}I. Tissue attenuation and iodine biokinetics were considered in the patient in a larger comprehensive effort to improve external dose rate estimates. The external dose rate estimates are based on Monte Carlo simulations using the Phantom with Movable Arms and Legs (PIMAL), previously developed by Oak Ridge National Laboratory and the United States Nuclear Regulatory Commission. PIMAL was employed to model the relative positions of the {sup 131}I patient and members of the public in three exposure scenarios: (1) traveling on a bus in a total of six seated or standing permutations, (2) two nursing home cases where a caregiver is seated at 30 cm from the patient’s bedside and a nursing home resident seated 250 cm away from the patient in an adjacent bed, and (3) two hotel cases where the patient and a guest are in adjacent rooms with beds on opposite sides of the common wall, with the patient and guest both in bed and either seated back-to-back or lying head to head. The biokinetic model predictions of the retention and distribution of {sup 131}I in the patient assumed a single voiding of urinary bladder contents that occurred during the trip at 2, 4, or 8 h after {sup 131}I administration for the public transportation cases, continuous first-order voiding for the nursing home cases, and regular periodic voiding at 4, 8, or 12 h after administration for the hotel room cases. Organ

  16. Theoretical estimation of Photons flow rate Production in quark gluon interaction at high energies

    Science.gov (United States)

    Al-Agealy, Hadi J. M.; Hamza Hussein, Hyder; Mustafa Hussein, Saba

    2018-05-01

    Photons emitted from high-energy collisions in a quark-gluon system have been studied theoretically on the basis of color quantum theory. A simple model for photon emission in a quark-gluon system is investigated. In this model, we use a quantum treatment suited to describing the quark system. The photon production rates are estimated for two systems at different fugacity coefficients. We discuss the behavior of the photon rate and the properties of the quark-gluon system at different photon energies with a Boltzmann model. The dependence of the photon rate on the anisotropy coefficient, strong coupling constant, photon energy, color number, fugacity parameter, thermal energy, and critical energy of the system is also discussed.

  17. Recursive Estimation for Dynamical Systems with Different Delay Rates Sensor Network and Autocorrelated Process Noises

    Directory of Open Access Journals (Sweden)

    Jianxin Feng

    2014-01-01

    Full Text Available The recursive estimation problem is studied for a class of uncertain dynamical systems with different delay rates sensor network and autocorrelated process noises. The process noises are assumed to be autocorrelated across time and the autocorrelation property is described by the covariances between different time instants. The system model under consideration is subject to multiplicative noises or stochastic uncertainties. The sensor delay phenomenon occurs in a random way and each sensor in the sensor network has an individual delay rate which is characterized by a binary switching sequence obeying a conditional probability distribution. By using the orthogonal projection theorem and an innovation analysis approach, the desired recursive robust estimators including recursive robust filter, predictor, and smoother are obtained. Simulation results are provided to demonstrate the effectiveness of the proposed approaches.

  18. Can the cerebral metabolic rate of oxygen be estimated with near-infrared spectroscopy?

    International Nuclear Information System (INIS)

    Boas, D A; Strangman, G; Culver, J P; Hoge, R D; Jasdzewski, G; Poldrack, R A; Rosen, B R; Mandeville, J B

    2003-01-01

    We have measured the changes in oxy-haemoglobin and deoxy-haemoglobin in the adult human brain during a brief finger tapping exercise using near-infrared spectroscopy (NIRS). The cerebral metabolic rate of oxygen (CMRO2) can be estimated from these NIRS data provided certain model assumptions. The change in CMRO2 is related to changes in the total haemoglobin concentration, deoxy-haemoglobin concentration and blood flow. As NIRS does not provide a measure of dynamic changes in blood flow during brain activation, we relied on a Windkessel model that relates dynamic blood volume and flow changes, which has been used previously for estimating CMRO2 from functional magnetic resonance imaging (fMRI) data. Because of the partial volume effect we are unable to quantify the absolute changes in the local brain haemoglobin concentrations with NIRS and thus are unable to obtain an estimate of the absolute CMRO2 change. An absolute estimate is also confounded by uncertainty in the flow-volume relationship. However, the ratio of the flow change to the CMRO2 change is relatively insensitive to these uncertainties. For the finger tapping task, we estimate a most probable flow-consumption ratio ranging from 1.5 to 3 in agreement with previous findings presented in the literature, although we cannot exclude the possibility that there is no CMRO2 change. The large range in the ratio arises from the large number of model parameters that must be estimated from the data. A more precise estimate of the flow-consumption ratio will require better estimates of the model parameters or flow information, as can be provided by combining NIRS with fMRI

  19. An assessment of algorithms to estimate respiratory rate from the electrocardiogram and photoplethysmogram.

    Science.gov (United States)

    Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Clifton, David A; Beale, Richard; Watkinson, Peter J

    2016-04-01

    Over 100 algorithms have been proposed to estimate respiratory rate (RR) from the electrocardiogram (ECG) and photoplethysmogram (PPG). As they have never been compared systematically it is unclear which algorithm performs the best. Our primary aim was to determine how closely algorithms agreed with a gold standard RR measure when operating under ideal conditions. Secondary aims were: (i) to compare algorithm performance with IP, the clinical standard for continuous respiratory rate measurement in spontaneously breathing patients; (ii) to compare algorithm performance when using ECG and PPG; and (iii) to provide a toolbox of algorithms and data to allow future researchers to conduct reproducible comparisons of algorithms. Algorithms were divided into three stages: extraction of respiratory signals, estimation of RR, and fusion of estimates. Several interchangeable techniques were implemented for each stage. Algorithms were assembled using all possible combinations of techniques, many of which were novel. After verification on simulated data, algorithms were tested on data from healthy participants. RRs derived from ECG, PPG and IP were compared to reference RRs obtained using a nasal-oral pressure sensor using the limits of agreement (LOA) technique. 314 algorithms were assessed. Of these, 270 could operate on either ECG or PPG, and 44 on only ECG. The best algorithm had 95% LOAs of −4.7 to 4.7 bpm and a bias of 0.0 bpm when using the ECG, and −5.1 to 7.2 bpm and 1.0 bpm when using PPG. IP had 95% LOAs of −5.6 to 5.2 bpm and a bias of −0.2 bpm. Four algorithms operating on ECG performed better than IP. All high-performing algorithms consisted of novel combinations of time domain RR estimation and modulation fusion techniques. Algorithms performed better when using ECG than PPG. The toolbox of algorithms and data used in this study are publicly available.
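
    The limits-of-agreement comparison used throughout the abstract reduces to a short calculation on paired estimates, sketched here with made-up respiratory-rate values.

```python
# Bland-Altman bias and 95% limits of agreement between algorithm RR estimates
# and reference RR values; the arrays are made-up breaths-per-minute figures.
import numpy as np

rr_algorithm = np.array([14.2, 17.8, 21.0, 12.5, 16.4, 19.9, 15.1])
rr_reference = np.array([14.0, 18.5, 20.2, 13.1, 16.0, 19.0, 15.8])

diff = rr_algorithm - rr_reference
bias = diff.mean()
loa_half_width = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f} bpm, 95% LoA = {bias - loa_half_width:.2f} "
      f"to {bias + loa_half_width:.2f} bpm")
```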

  20. Demographic aspects of Chrysomya megacephala (Diptera, Calliphoridae) adults maintained under experimental conditions: reproductive rate estimates

    OpenAIRE

    Carvalho, Marcelo Henrique de; Von Zuben, Claudio José

    2006-01-01

    The objective of this work was to evaluate some aspects of the populational ecology of Chrysomya megacephala, analyzing demographic aspects of adults kept under experimental conditions. Cages of C. megacephala adults were prepared with four different larval densities (100, 200, 400 and 800). For each cage, two tables were made: one with demographic parameters for the life expectancy estimate at the initial age (e0), and another with the reproductive rate and average reproduction age estimates...

  1. A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets

    OpenAIRE

    Chen, Jie-Hao; Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong

    2017-01-01

    In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear model or re...

  2. Estimating the spread rate of urea formaldehyde adhesive on birch (Betula pendula Roth) veneer using fluorescence

    Science.gov (United States)

    Toni Antikainen; Anti Rohumaa; Christopher G. Hunt; Mari Levirinne; Mark Hughes

    2015-01-01

    In plywood production, human operators find it difficult to precisely monitor the spread rate of adhesive in real-time. In this study, macroscopic fluorescence was used to estimate spread rate (SR) of urea formaldehyde adhesive on birch (Betula pendula Roth) veneer. This method could be an option when developing automated real-time SR measurement for...

  3. Estimating detection rates for the LIGO-Virgo search for gravitational-wave burst counterparts to gamma-ray bursts using inferred local GRB rates

    International Nuclear Information System (INIS)

    Leonor, I; Frey, R; Sutton, P J; Jones, G; Marka, S; Marka, Z

    2009-01-01

    One of the ongoing searches performed using the LIGO-Virgo network of gravitational-wave interferometers is the search for gravitational-wave burst (GWB) counterparts to gamma-ray bursts (GRBs). This type of analysis makes use of GRB time and position information from gamma-ray satellite detectors to trigger the GWB search, and the GWB detection rates possible for such an analysis thus strongly depend on the GRB detection efficiencies of the satellite detectors. Using local GRB rate densities inferred from observations which are found in the science literature, we calculate estimates of the GWB detection rates for different configurations of the LIGO-Virgo network for this type of analysis.

  4. Estimation of uptake rate constants for PCB congeners accumulated by semipermeable membrane devices and brown trout (Salmo trutta)

    Science.gov (United States)

    Meadows, J.C.; Echols, K.R.; Huckins, J.N.; Borsuk, F.A.; Carline, R.F.; Tillitt, D.E.

    1998-01-01

    The triolein-filled semipermeable membrane device (SPMD) is a simple and effective method of assessing the presence of waterborne hydrophobic chemicals. Uptake rate constants for individual chemicals are needed to accurately relate the amounts of chemicals accumulated by the SPMD to dissolved water concentrations. Brown trout and SPMDs were exposed to PCB-contaminated groundwater in a spring for 28 days to calculate and compare uptake rates of specific PCB congeners by the two matrixes. Total PCB congener concentrations in water samples from the spring were assessed and corrected for estimated total organic carbon (TOC) sorption to estimate total dissolved concentrations. Whole and dissolved concentrations averaged 4.9 and 3.7 μg/L, respectively, during the exposure. Total concentrations of PCBs in fish rose from 0.06 to 118.3 μg/g during the 28-day exposure, while concentrations in the SPMD rose from 0.03 to 203.4 μg/g. Uptake rate constants (k1) estimated for SPMDs and brown trout were very similar, with k1 values for SPMDs ranging from one to two times those of the fish. The pattern of congener uptake by the fish and SPMDs was also similar. The rates of uptake generally increased or decreased with increasing Kow, depending on the assumption of presence or absence of TOC.

  5. Plutonium Discharge Rates and Spent Nuclear Fuel Inventory Estimates for Nuclear Reactors Worldwide

    Energy Technology Data Exchange (ETDEWEB)

    Brian K. Castle; Shauna A. Hoiland; Richard A. Rankin; James W. Sterbentz

    2012-09-01

    This report presents a preliminary survey and analysis of the five primary types of commercial nuclear power reactors currently in use around the world. Plutonium mass discharge rates from the reactors’ spent fuel at reload are estimated based on a simple methodology that is able to use limited reactor burnup and operational characteristics collected from a variety of public domain sources. Selected commercial reactor operating and nuclear core characteristics are also given for each reactor type. In addition to the worldwide commercial reactors survey, a materials test reactor survey was conducted to identify reactors of this type with a significant core power rating. Over 100 material or research reactors with a core power rating >1 MW fall into this category. Fuel characteristics and spent fuel inventories for these material test reactors are also provided herein.

  6. Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter

    International Nuclear Information System (INIS)

    Wirthlin, M J; Harding, A; Takai, H

    2014-01-01

    This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine if the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy-ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rate within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the block RAM (BRAM) memory will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory.
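
    The per-device figures follow from multiplying the per-bit rates by the number of bits of each memory type; the sketch below simply inverts that relation as a consistency check on the quoted numbers.

```python
# Relating the quoted per-bit and per-device upset rates: device_rate = bit_rate * n_bits.
# Solving backwards for n_bits is just a consistency check on the quoted figures.
cfg_bit_rate, cfg_dev_rate = 1.1e-10, 6.85e-3       # upsets/bit/s, upsets/device/s
bram_bit_rate, bram_dev_rate = 9.06e-11, 1.49e-3

implied_cfg_bits = cfg_dev_rate / cfg_bit_rate
implied_bram_bits = bram_dev_rate / bram_bit_rate
print(f"implied configuration bits ~ {implied_cfg_bits:.2e}")
print(f"implied block-RAM bits     ~ {implied_bram_bits:.2e}")
print(f"mean time between config upsets ~ {1 / cfg_dev_rate / 60:.1f} minutes")
```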

  7. Evaluation of Estimating Missed Answers in Conners Adult ADHD Rating Scale (Screening Version)

    Science.gov (United States)

    Ghassemi, Farnaz; Moradi, Mohammad Hassan; Tehrani-Doost, Mehdi

    2010-01-01

    Objective Conners Adult ADHD Rating Scale (CAARS) is among the valid questionnaires for evaluating Attention-Deficit/Hyperactivity Disorder in adults. The aim of this paper is to evaluate the validity of the estimation of missed answers in scoring the screening version of the Conners questionnaire, and to extract its principal components. Method This study was performed on 400 participants. Answer estimation was calculated for each question (assuming the answer was missed), and then a Kruskal-Wallis test was performed to evaluate the difference between the original answer and its estimation. In the next step, principal components of the questionnaire were extracted by means of Principal Component Analysis (PCA). Finally the evaluation of differences in the whole groups was provided using the Multiple Comparison Procedure (MCP). Results Findings indicated that a significant difference existed between the original and estimated answers for some particular questions. However, the results of MCP showed that this estimation, when evaluated in the whole group, did not show a significant difference with the original value in neither of the questionnaire subscales. The results of PCA revealed that there are eight principal components in the CAARS questionnaire. Conclusion The obtained results can emphasize the fact that this questionnaire is mainly designed for screening purposes, and this estimation does not change the results of groups when a question is missed randomly. Notwithstanding this finding, more considerations should be paid when the missed question is a critical one. PMID:22952502

  8. Evaluation of Estimating Missed Answers in Conners Adult ADHD Rating Scale (Screening Version)

    Directory of Open Access Journals (Sweden)

    Vahid Abootalebi

    2010-08-01

    Full Text Available Objective: Conners Adult ADHD Rating Scale (CAARS) is among the valid questionnaires for evaluating Attention-Deficit/Hyperactivity Disorder in adults. The aim of this paper is to evaluate the validity of the estimation of missed answers in scoring the screening version of the Conners questionnaire, and to extract its principal components. Method: This study was performed on 400 participants. Answer estimation was calculated for each question (assuming the answer was missed), and then a Kruskal-Wallis test was performed to evaluate the difference between the original answer and its estimation. In the next step, principal components of the questionnaire were extracted by means of Principal Component Analysis (PCA). Finally the evaluation of differences in the whole groups was provided using the Multiple Comparison Procedure (MCP). Results: Findings indicated that a significant difference existed between the original and estimated answers for some particular questions. However, the results of MCP showed that this estimation, when evaluated in the whole group, did not show a significant difference with the original value in neither of the questionnaire subscales. The results of PCA revealed that there are eight principal components in the CAARS questionnaire. Conclusion: The obtained results can emphasize the fact that this questionnaire is mainly designed for screening purposes, and this estimation does not change the results of groups when a question is missed randomly. Notwithstanding this finding, more considerations should be paid when the missed question is a critical one.

  9. Hydrocarbon composition and concentrations in the Gulf of Mexico sediments in the 3 years following the Macondo well blowout.

    Science.gov (United States)

    Babcock-Adams, Lydia; Chanton, Jeffrey P; Joye, Samantha B; Medeiros, Patricia M

    2017-10-01

    In April of 2010, the Macondo well blowout in the northern Gulf of Mexico resulted in an unprecedented release of oil into the water column at a depth of approximately 1500 m. A time series of surface and subsurface sediment samples were collected to the northwest of the well from 2010 to 2013 for molecular biomarker and bulk carbon isotopic analyses. While no clear trend was observed in subsurface sediments, surface sediments (0-3 cm) showed a clear pattern with total concentrations of n-alkanes, unresolved complex mixture (UCM), and petroleum biomarkers (terpanes, hopanes, steranes) increasing from May to September 2010, peaking in late November 2010, and strongly decreasing in the subsequent years. The peak in hydrocarbon concentrations was corroborated by higher organic carbon contents, more depleted Δ¹⁴C values and biomarker ratios similar to those of the initial MC252 crude oil reported in the literature. These results indicate that at least part of the oil discharged from the accident sedimented to the seafloor in subsequent months, resulting in an apparent accumulation of hydrocarbons on the seabed by the end of 2010. Sediment resuspension and transport or biodegradation may account for the decrease in sedimented oil quantities in the years following the Macondo well spill. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Automating and estimating glomerular filtration rate for dosing medications and staging chronic kidney disease

    Directory of Open Access Journals (Sweden)

    Trinkley KE

    2014-05-01

    Full Text Available Katy E Trinkley,¹ S Michelle Nikels,² Robert L Page II,¹ Melanie S Joy¹ (¹Skaggs School of Pharmacy and Pharmaceutical Sciences; ²School of Medicine, University of Colorado, Aurora, CO, USA). Objective: The purpose of this paper is to serve as a review for primary care providers on the bedside methods for estimating glomerular filtration rate (GFR) for dosing and chronic kidney disease (CKD) staging and to discuss how automated health information technologies (HIT) can enhance clinical documentation of staging and reduce medication errors in patients with CKD. Methods: A nonsystematic search of PubMed (through March 2013) was conducted to determine the optimal approach to estimate GFR for dosing and CKD staging and to identify examples of how automated HITs can improve health outcomes in patients with CKD. Papers known to the authors were included, as were scientific statements. Articles were chosen based on the judgment of the authors. Results: Drug-dosing decisions should be based on the method used in the published studies and package labeling that have been determined to be safe, which is most often the Cockcroft–Gault formula unadjusted for body weight. Although Modification of Diet in Renal Disease is more commonly used in practice for staging, the CKD–Epidemiology Collaboration (CKD–EPI) equation is the most accurate formula for estimating the CKD staging, especially at higher GFR values. Automated HITs offer a solution to the complexity of determining which equation to use for a given clinical scenario. HITs can educate providers on which formula to use and how to apply the formula in a given clinical situation, ultimately improving appropriate medication and medical management in CKD patients. Conclusion: Appropriate estimation of GFR is key to optimal health outcomes. HITs assist clinicians in both choosing the most appropriate GFR estimation formula and in applying the results of the GFR estimation in practice. Key limitations of the
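
    For illustration, the sketch below codes commonly published forms of the two equations discussed: Cockcroft-Gault and the 2009 CKD-EPI creatinine equation (the original race coefficient is omitted here). Coefficients should be verified against current labelling and guidance before any clinical use.

```python
# Commonly published forms of the Cockcroft-Gault and 2009 CKD-EPI creatinine
# equations, shown to illustrate how an automated system could surface both.
# The 2009 race coefficient is omitted; verify all coefficients against current
# labelling/guidance before any clinical use.
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)   # mL/min
    return crcl * 0.85 if female else crcl

def ckd_epi_2009(age, scr_mg_dl, female):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr               # mL/min/1.73 m^2

print(cockcroft_gault(age=70, weight_kg=68, scr_mg_dl=1.4, female=True))
print(ckd_epi_2009(age=70, scr_mg_dl=1.4, female=True))
```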

  11. Using ¹³⁷Cs measurements to estimate soil erosion rates in the Pčinja and South Morava River Basins, southeastern Serbia

    International Nuclear Information System (INIS)

    Petrović, Jelena; Dragović, Snežana; Dragović, Ranko; Đorđević, Milan; Đokić, Mrđan; Zlatković, Bojan; Walling, Desmond

    2016-01-01

    The need for reliable assessments of soil erosion rates in Serbia has directed attention to the potential for using ¹³⁷Cs measurements to derive estimates of soil redistribution rates. Since, to date, this approach has not been applied in southeastern Serbia, a reconnaissance study was undertaken to confirm its viability. The need to take account of the occurrence of substantial Chernobyl fallout was seen as a potential problem. Samples for ¹³⁷Cs measurement were collected from a zone of uncultivated soils in the watersheds of the Pčinja and South Morava Rivers, an area with known high soil erosion rates. Two theoretical conversion models, the profile distribution (PD) model and the diffusion and migration (D&M) model, were used to derive estimates of soil erosion and deposition rates from the ¹³⁷Cs measurements. The estimates of soil redistribution rates derived using the PD and D&M models were found to differ substantially, and this difference was ascribed to the assumptions of the simpler PD model that cause it to overestimate rates of soil loss. The results provided by the D&M model were judged to be more reliable. - Highlights: • ¹³⁷Cs measurements are employed to estimate soil erosion and deposition rates in southeastern Serbia. • Estimates of annual soil loss by the profile distribution (PD) and diffusion and migration (D&M) models differ significantly. • Differences were ascribed to the assumptions of the simpler PD model, which cause it to overestimate rates of soil loss. • The study confirmed the potential for using ¹³⁷Cs measurements to estimate soil erosion rates in Serbia.

  12. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  13. Estimating Finite Rate of Population Increase for Sharks Based on Vital Parameters.

    Directory of Open Access Journals (Sweden)

    Kwang-Ming Liu

    Full Text Available The vital parameter data for 62 stocks, covering 38 species, collected from the literature, including parameters of age, growth, and reproduction, were log-transformed and analyzed using multivariate analyses. Three groups were identified and empirical equations were developed for each to describe the relationships between the predicted finite rates of population increase (λ') and the vital parameters: maximum age (Tmax), age at maturity (Tm), annual fecundity (f/Rc), size at birth (Lb), size at maturity (Lm), and asymptotic length (L∞). Group (1) included species with slow growth rates (0.034 yr⁻¹ < k < 0.103 yr⁻¹) and extended longevity (26 yr < Tmax < 81 yr), e.g., shortfin mako Isurus oxyrinchus, dusky shark Carcharhinus obscurus, etc.; Group (2) included species with fast growth rates (0.103 yr⁻¹ < k < 0.358 yr⁻¹) and short longevity (9 yr < Tmax < 26 yr), e.g., starspotted smoothhound Mustelus manazo, gray smoothhound M. californicus, etc.; Group (3) included late-maturing species (Lm/L∞ ≧ 0.75) with moderate longevity (Tmax < 29 yr), e.g., pelagic thresher Alopias pelagicus, sevengill shark Notorynchus cepedianus. The empirical equation for all data pooled was also developed. The λ' values estimated by these empirical equations showed good agreement with those calculated using conventional demographic analysis. The predictability was further validated by an independent data set of three species. The empirical equations developed in this study not only reduce the uncertainties in estimation but also account for the difference in life history among groups. This method therefore provides an efficient and effective approach to the implementation of precautionary shark management measures.

  14. Exponential convergence rate estimation for uncertain delayed neural networks of neutral type

    International Nuclear Information System (INIS)

    Lien, C.-H.; Yu, K.-W.; Lin, Y.-F.; Chung, Y.-J.; Chung, L.-Y.

    2009-01-01

    The global exponential stability for a class of uncertain delayed neural networks (DNNs) of neutral type is investigated in this paper. Delay-dependent and delay-independent criteria are proposed to guarantee the robust stability of DNNs via LMI and Razumikhin-like approaches. For a given delay, the maximal allowable exponential convergence rate will be estimated. Some numerical examples are given to illustrate the effectiveness of our results. The simulation results reveal significant improvement over the recent results.

  15. Sedimentation rate estimates in Sorsogon Bay, Philippines using 210Pb method

    International Nuclear Information System (INIS)

    Madrid, Jordan F.; Sta. Maria, Efren J.; Olivares, Ryan U.; Aniago, Ryan Joseph; Asa Anie Day DC; Dayaon, Jennyvi P.; Bulos, Adelina DM; Sombrito, Elvira Z.

    2011-01-01

    Sorsogon Bay has experienced a long history of recurring harmful algal blooms over the past few years. In an attempt to establish a chronology of events in the sediment layer, the lead-210 (²¹⁰Pb) dating method has been utilized in estimating sedimentation rates from three selected areas along the bay. Based on the unsupported ²¹⁰Pb data and by applying the Constant Initial Concentration (CIC) model, the calculated sedimentation rates were 0.8, 1.3 and 1.8 cm yr⁻¹ for sediment cores collected near the coastal areas of Castilla (SO-01), Sorsogon City (SO-07) and Cadacan River (SO-03), respectively. High sedimentation rates were measured in sediment cores believed to be affected by frequent volcanic ash releases and in areas near human settlement combined with intensive farming and agricultural activities. The collected sediments exhibited non-uniform down-core values of dry bulk density and moisture content. This variation in measurements may indicate the general quality and composition of the sediment samples, i.e., amount of organic matter and grain size. The calculated sedimentation rates provided an overview of the sedimentation processes and reflect the land-use pattern around the bay, which may help in understanding the history and distribution of materials and nutrient input relative to the occurrence of harmful algal blooms in the sediment columns. (author)
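
    Under the CIC model, unsupported ²¹⁰Pb activity declines exponentially with depth, so the sedimentation rate can be sketched as a regression of log activity on depth; the core data below are invented.

```python
# CIC-model sedimentation rate: regress ln(unsupported Pb-210 activity) on depth;
# the slope equals -lambda/s, so s = -lambda/slope. Core data are invented.
import numpy as np

lam = np.log(2) / 22.3                       # Pb-210 decay constant, 1/yr
depth_cm = np.array([1, 3, 5, 7, 9, 11, 13], float)
unsupported = np.array([95, 72, 55, 41, 31, 24, 18], float)  # Bq/kg

slope, _ = np.polyfit(depth_cm, np.log(unsupported), 1)
sed_rate = -lam / slope                      # cm/yr
print(f"sedimentation rate ~ {sed_rate:.2f} cm/yr")
```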

  16. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin

    2017-01-01

    the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine the transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most tested scenarios these new methods perform similarly to or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models.
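
    For a discrete-time SIR-type model, a closed-form Poisson estimate of the transmission rate is the ratio of total new cases to total exposure, as sketched below with synthetic herd data; the paper's two new methods are not reproduced here.

```python
# Closed-form Poisson estimate of a transmission rate beta from sampled prevalence
# data: new cases in an interval ~ Poisson(beta * S * I * dt / N), so the maximum
# likelihood estimate is sum(cases) / sum(S*I*dt/N). All numbers are synthetic.
import numpy as np

N, dt = 200, 7.0                             # herd size, sampling interval (days)
S = np.array([180, 165, 140, 110])           # susceptibles at the start of each interval
I = np.array([ 10,  22,  40,  60])           # infectious at the start of each interval
new_cases = np.array([15, 25, 30, 25])       # new infections during each interval

exposure = S * I * dt / N
beta_hat = new_cases.sum() / exposure.sum()  # per-day transmission rate
print(f"estimated beta = {beta_hat:.3f} per day")
```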

  17. Genome-Wide Estimates of Transposable Element Insertion and Deletion Rates in Drosophila Melanogaster

    Science.gov (United States)

    Adrion, Jeffrey R.; Song, Michael J.; Schrider, Daniel R.; Hahn, Matthew W.

    2017-01-01

    Knowing the rate at which transposable elements (TEs) insert and delete is critical for understanding their role in genome evolution. We estimated spontaneous rates of insertion and deletion for all known, active TE superfamilies present in a set of Drosophila melanogaster mutation-accumulation (MA) lines using whole genome sequence data. Our results demonstrate that TE insertions far outpace TE deletions in D. melanogaster. We found a significant effect of background genotype on TE activity, with higher rates of insertions in one MA line. We also found significant rate heterogeneity between the chromosomes, with both insertion and deletion rates elevated on the X relative to the autosomes. Further, we identified significant associations between TE activity and chromatin state, and tested for associations between TE activity and other features of the local genomic environment such as TE content, exon content, GC content, and recombination rate. Our results provide the most detailed assessment of TE mobility in any organism to date, and provide a useful benchmark for both addressing theoretical predictions of TE dynamics and for exploring large-scale patterns of TE movement in D. melanogaster and other species. PMID:28338986

  18. State estimation for Markov-type genetic regulatory networks with delays and uncertain mode transition rates

    International Nuclear Information System (INIS)

    Liang Jinling; Lam, James; Wang Zidong

    2009-01-01

    This Letter is concerned with the robust state estimation problem for uncertain time-delay Markovian jumping genetic regulatory networks (GRNs) with SUM logic, where the uncertainties enter into both the network parameters and the mode transition rate. The nonlinear functions describing the feedback regulation are assumed to satisfy the sector-like conditions. The main purpose of the problem addressed is to design a linear estimator to approximate the true concentrations of the mRNA and protein through available measurement outputs. By resorting to the Lyapunov functional method and some stochastic analysis tools, it is shown that if a set of linear matrix inequalities (LMIs) is feasible, the desired state estimator, that can ensure the estimation error dynamics to be globally robustly asymptotically stable in the mean square, exists. The obtained LMI conditions are dependent on both the lower and the upper bounds of the delays. An illustrative example is presented to demonstrate the feasibility of the proposed estimation schemes.

  19. Estimation of the growth curve and heritability of the growth rate for giant panda (Ailuropoda melanoleuca) cubs.

    Science.gov (United States)

    Che, T D; Wang, C D; Jin, L; Wei, M; Wu, K; Zhang, Y H; Zhang, H M; Li, D S

    2015-03-27

    Giant panda cubs have a low survival rate during the newborn and early growth stages. However, the growth and developmental parameters of giant panda cubs during the early lactation stage (from birth to 6 months) are not well known. We examined the growth and development of giant panda cubs using the Chapman growth curve model and estimated the heritability of the maximum growth rate at the early lactation stage. We found that 83 giant panda cubs reached their maximum growth rate at approximately 75-120 days after birth. The body weight of cubs at 75 days was 4285.99 g. Furthermore, we estimated that the heritability of the maximum growth rate was moderate (h² = 0.38). Our study describes the growth and development of giant panda cubs at the early lactation stage and provides valuable growth benchmarks. We anticipate that our results will be a starting point for more detailed research on increasing the survival rate of giant panda cubs. Feeding programs for giant panda cubs need further improvement.
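
    The record does not reproduce the fitted equation, but a Chapman(-Richards)-type curve can be fitted to body-weight data with a standard nonlinear least-squares routine. The sketch below uses simulated weights and an assumed parameterisation, not the panda data or the authors' exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(t, A, k, m):
    """Chapman-Richards form: W(t) = A * (1 - exp(-k*t))**m (an assumed parameterisation)."""
    return A * (1.0 - np.exp(-k * t)) ** m

# Simulated body weights (g) at ages (days); illustrative values, not the panda data.
age = np.array([10.0, 30.0, 60.0, 75.0, 100.0, 130.0, 160.0, 180.0])
weight = np.array([500.0, 1500.0, 3300.0, 4300.0, 6000.0, 7800.0, 9100.0, 9800.0])

(A, k, m), _ = curve_fit(chapman_richards, age, weight, p0=[12000.0, 0.01, 2.0], maxfev=10000)

# For this form the maximum growth rate (inflection point) occurs at t* = ln(m) / k.
t_star = np.log(m) / k
print(f"A = {A:.0f} g, k = {k:.4f}/day, m = {m:.2f}; max growth rate near day {t_star:.0f}")
```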

  20. Strain rates estimated by geodetic observations in the Borborema Province, Brazil

    Science.gov (United States)

    Marotta, Giuliano Sant'Anna; França, George Sand; Monico, João Francisco Galera; Bezerra, Francisco Hilário R.; Fuck, Reinhardt Adolfo

    2015-03-01

    The strain rates for the Borborema Province, located in northeastern Brazil, were estimated in this study. For this purpose, we used GNSS tracking stations with a minimum of two years of data. The data were processed using the software GIPSY, version 6.2, provided by the JPL of the California Institute of Technology. The PPP method was used to process the data using the non-fiducial approach. Satellite orbits and clocks were supplied by the JPL. Absolute phase center offsets and variations for both the receiver and the satellite antennas were applied, together with ambiguity resolution; corrections of the first- and second-order effects of the ionosphere and troposphere models adopting the VMF1 mapping function; a 10° elevation mask; the FES2004 oceanic load model and terrestrial tide WahrK1 PolTid FreqDepLove OctTid. From a multi-annual solution, involving at least 2 years of continuous data, the coordinates and velocities as well as their accuracies were estimated. The strain rates were calculated using Delaunay triangulation and the Finite Element Method. The results show that the velocity direction is predominantly west and north, with maximum variations of 4.0 ± 1.5 mm/year and 4.1 ± 0.5 mm/year for the x and y components, respectively. The highest strain values of extension and contraction were 0.109552 × 10⁻⁶ ± 3.65 × 10⁻¹⁰/year and -0.072838 × 10⁻⁶ ± 2.32 × 10⁻¹⁰/year, respectively. In general, the results show that the highest strain and velocity variation values are located close to the Potiguar Basin, a region that concentrates seismic activity with magnitudes of up to 5.2 mb. We conclude that the contraction direction of strain is consistent with the maximum horizontal stress derived from focal mechanism and breakout data. In addition, we conclude that the largest strain rates occur around the Potiguar Basin, an area already recognized as one of the major sites of seismicity in intraplate South America.

  1. Estimating the incidence reporting rates of new influenza pandemics at an early stage using travel data from the source country.

    Science.gov (United States)

    Chong, K C; Fong, H F; Zee, C Y

    2014-05-01

    During the surveillance of influenza pandemics, underreported data are a public health challenge that complicates the understanding of pandemic threats and can undermine mitigation efforts. We propose a method to estimate incidence reporting rates at early stages of new influenza pandemics using 2009 pandemic H1N1 as an example. Routine surveillance data and statistics of travellers arriving from Mexico were used. Our method incorporates changes in reporting rates such as linearly increasing trends due to the enhanced surveillance. From our results, the reporting rate was estimated at 0·46% during early stages of the pandemic in Mexico. We estimated cumulative incidence in the Mexican population to be 0·7% compared to 0·003% reported by officials in Mexico at the end of April. This method could be useful in estimation of actual cases during new influenza pandemics for policy makers to better determine appropriate control measures.

  2. The Greenville Fault: preliminary estimates of its long-term creep rate and seismic potential

    Science.gov (United States)

    Lienkaemper, James J.; Barry, Robert G.; Smith, Forrest E.; Mello, Joseph D.; McFarland, Forrest S.

    2013-01-01

    Once assumed locked, we show that the northern third of the Greenville fault (GF) creeps at 2 mm/yr, based on 47 yr of trilateration net data. This northern GF creep rate equals its 11-ka slip rate, suggesting a low strain accumulation rate. In 1980, the GF, the easternmost strand of the San Andreas fault system east of San Francisco Bay, produced a Mw5.8 earthquake with a 6-km surface rupture and dextral slip growing to ≥2 cm on cracks over a few weeks. Trilateration shows a 10-cm post-1980 transient slip ending in 1984. Analysis of 2000-2012 crustal velocities on continuous global positioning system stations allows creep rates of ~2 mm/yr on the northern GF, 0-1 mm/yr on the central GF, and ~0 mm/yr on its southern third. Modeled depth ranges of creep along the GF allow 5-25% aseismic release. Greater locking in the southern two thirds of the GF is consistent with paleoseismic evidence there for large late Holocene ruptures. Because the GF lacks large (>1 km) discontinuities likely to arrest higher (~1 m) slip ruptures, we expect full-length (54-km) ruptures to occur that include the northern creeping zone. We estimate sufficient strain accumulation on the entire GF to produce Mw6.9 earthquakes with a mean recurrence of ~575 yr. While the creeping 16-km northern part has the potential to produce a Mw6.2 event in 240 yr, it may rupture in both moderate (1980) and large events. These two-dimensional model estimates of creep rate along the southern GF need verification with small-aperture surveys.

  3. A method for estimating failure rates for low probability events arising in PSA

    International Nuclear Information System (INIS)

    Thorne, M.C.; Williams, M.M.R.

    1995-01-01

    The authors develop a method for predicting failure rates and failure probabilities per event when, over a given test period or number of demands, no failures have occurred. A Bayesian approach is adopted to calculate a posterior probability distribution for the failure rate or failure probability per event subsequent to the test period. This posterior is then used to estimate effective failure rates or probabilities over a subsequent period of time or number of demands. In special circumstances, the authors' results reduce to the well-known rules of thumb, viz. 1/N and 1/T, where N is the number of demands during the failure-free test period and T is the length of the failure-free test period. However, the authors are able to give strict conditions on the validity of these rules of thumb and to improve on them when necessary.
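
    With a conjugate gamma prior, zero failures over an exposure time T give a gamma posterior for the failure rate, from which a point estimate and an upper bound follow directly. The sketch below assumes a Jeffreys-type Gamma(0.5, 0) prior purely for illustration; the paper's exact prior choice is not given in the record.

```python
from scipy.stats import gamma

# Zero failures observed over an exposure time T (e.g. component-hours).
T = 10_000.0          # failure-free test period
a0, b0 = 0.5, 0.0     # Jeffreys-type Gamma(shape, rate) prior (an assumption; improper)

# Posterior for the failure rate lambda is Gamma(a0 + 0 failures, b0 + T) in shape/rate form.
shape, rate = a0 + 0, b0 + T
post_mean = shape / rate
upper95 = gamma.ppf(0.95, a=shape, scale=1.0 / rate)

print(f"Posterior mean failure rate : {post_mean:.2e} per hour")
print(f"95% upper bound             : {upper95:.2e} per hour")
print(f"1/T rule of thumb           : {1.0 / T:.2e} per hour")
```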

  4. Estimation of respiratory rate from thermal videos of preterm infants.

    Science.gov (United States)

    Pereira, Carina Barbosa; Heimann, Konrad; Venema, Boudewijn; Blazek, Vladimir; Czaplik, Michael; Leonhardt, Steffen

    2017-07-01

    Studies have demonstrated that respiratory rate (RR) is a good predictor of patient condition as well as an early marker of patient deterioration and physiological distress. However, it is also referred to as "the neglected vital parameter", mainly due to the shortcomings of current monitoring techniques. Moreover, in preterm infants, the removal of adhesive electrodes causes epidermal stripping, skin disruption and, with them, pain. This paper proposes a new algorithm for estimation of RR in thermal videos of moderate preterm infants. It uses the temperature modulation around the nostrils over the respiratory cycle to extract this vital parameter. To compensate for movement artifacts the approach incorporates a tracking algorithm. In addition, a new reliable and accurate algorithm for robust estimation of local (breath-to-breath) intervals was included. To evaluate the performance of this approach, thermal recordings of four moderate preterm infants were acquired. The results were compared with RR derived from body surface electrocardiography and showed excellent agreement between thermal imaging and the gold standard. On average, the relative error between the two monitoring techniques was 3.42%. In summary, infrared thermography may be a clinically relevant alternative to conventional sensors, due to its high thermal resolution and outstanding characteristics.
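
    The record describes extracting RR from the periodic temperature modulation at the nostrils; the authors' tracking and breath-to-breath algorithm is not given, so the sketch below only shows the generic last step, estimating breath intervals from a band-limited temperature trace with simple peak detection on a synthetic signal.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def respiratory_rate_from_signal(temp_signal, fs):
    """Estimate respiratory rate (breaths/min) from a nostril-region temperature trace."""
    # Band-pass covering plausible neonatal breathing frequencies (~0.3-2 Hz); an assumption.
    b, a = butter(2, [0.3, 2.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, temp_signal)

    # Each exhalation warms the nostril region, producing roughly one peak per breath.
    peaks, _ = find_peaks(filtered, distance=int(0.4 * fs))
    breath_intervals = np.diff(peaks) / fs            # breath-to-breath intervals (s)
    return 60.0 / np.mean(breath_intervals), breath_intervals

# Synthetic example: 1 Hz breathing sampled at 30 frames/s with noise.
fs = 30.0
t = np.arange(0, 60, 1 / fs)
signal = 0.2 * np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.randn(t.size)
rr, intervals = respiratory_rate_from_signal(signal, fs)
print(f"Estimated RR: {rr:.1f} breaths/min from {intervals.size} intervals")
```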

  5. A punctual flux estimator and reactions rates optimization in neutral particles transport calculus by the Monte Carlo method

    International Nuclear Information System (INIS)

    Authier, N.

    1998-12-01

    One of the questions asked in radiation shielding problems is the estimation of the radiation level, in particular to determine the accessibility for working personnel in controlled areas (nuclear power plants, nuclear fuel reprocessing plants) or to study the dose gradients encountered in materials (iron nuclear vessels, medical therapy, electronics in satellites). The flux and reaction rate estimators used in Monte Carlo codes give average values over volumes or on surfaces of the geometrical description of the system. But in certain configurations, point estimates of deposited energy and dose are necessary. The Monte Carlo estimate of the flux at a point of interest is a calculation with unbounded variance; the central limit theorem cannot be applied, so no easy confidence level can be calculated and the convergence rate is very poor. We propose in this study a new solution for the photon flux-at-a-point estimator. The method is based on the 'once more collided flux estimator' developed earlier for neutron calculations. It solves the problem of the unbounded variance and does not add any bias to the estimation. We show, however, that our new sampling schemes, specially developed to treat the anisotropy of photon coherent scattering, are necessary for a good and regular behavior of the estimator. These developments, integrated in the TRIPOLI-4 Monte Carlo code, add the possibility of an unbiased point estimate on media interfaces. (author)

  6. Application of airborne gamma spectrometric survey data to estimating terrestrial gamma-ray dose rates: An example in California

    International Nuclear Information System (INIS)

    Wollenberg, H.A.; Revzan, K.L.; Smith, A.R.

    1992-01-01

    The authors examine the applicability of radioelement data from the National Aerial Radiometric Reconnaissance (NARR) to estimating terrestrial gamma-ray absorbed dose rates, by comparing dose rates calculated from aeroradiometric surveys of U, Th, and K concentrations in 1 x 2 degree quadrangles with dose rates calculated from a radiogeologic data base and the distribution of lithologies in California. Gamma-ray dose rates increase generally from north to south, following lithological trends. Low values of 25-30 nGy/h occur in the northernmost quadrangles, where low-radioactivity basaltic and ultramafic rocks predominate. Dose rates then increase southward due to the preponderance of clastic sediments and basic volcanics of the Franciscan Formation and Sierran metamorphics in north-central and central California, and to the increasing exposure southward of the Sierra Nevada batholith, Tertiary marine sedimentary rocks, intermediate to acidic volcanics, and granitic rocks of the Coast Ranges. High values, up to 100 nGy/h, occur in southeastern California, due primarily to the presence of high-radioactivity Precambrian and pre-Cenozoic metamorphic rocks. Lithology-based estimates of mean dose rates in the quadrangles generally match those from aeroradiometric data, with statewide means of 63 and 60 nGy/h, respectively. These are intermediate between a population-weighted global average of 51 nGy/h and a weighted continental average of 70 nGy/h based on the global distribution of rock types. The concurrence of lithologically and aeroradiometrically determined dose rates in California, with its varied geology and topography encompassing settings representative of the continents, indicates that the NARR data are applicable to estimates of terrestrial absorbed dose rates from natural gamma emitters.
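
    Terrestrial dose rate is commonly computed as a linear combination of the U, Th, and K activity concentrations. The coefficients in the sketch below are widely cited UNSCEAR-style values (nGy/h per Bq/kg) and are an assumption; they may differ from the conversion factors the authors actually used.

```python
# Hedged sketch: absorbed gamma dose rate in air ~1 m above ground from soil radionuclides,
# D (nGy/h) ≈ 0.462*C_U(226Ra) + 0.604*C_Th(232Th) + 0.0417*C_K(40K), activities in Bq/kg.
# Coefficients are commonly cited UNSCEAR-type values, assumed here, not taken from the paper.
COEF_U, COEF_TH, COEF_K = 0.462, 0.604, 0.0417

def terrestrial_dose_rate(c_u, c_th, c_k):
    """Return absorbed gamma dose rate in air (nGy/h) for Bq/kg activity concentrations."""
    return COEF_U * c_u + COEF_TH * c_th + COEF_K * c_k

# Example with hypothetical activity concentrations (Bq/kg).
print(f"{terrestrial_dose_rate(30.0, 40.0, 500.0):.1f} nGy/h")
```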

  7. Evaluation of the outcomes of endovascular management for patients with head and neck cancers and associated carotid blowout syndrome of the external carotid artery

    International Nuclear Information System (INIS)

    Chang, F.-C.; Luo, C.-B.; Lirng, J.-F.; Lin, C.-J.; Wu, H.-M.; Hung, S.-C.; Guo, W.-Y.; Teng, M.M.H.; Chang, C.-Y.

    2013-01-01

    Aim: To evaluate factors related to the technical and haemostatic outcomes of endovascular management in patients with head and neck cancers (HNC) associated with carotid blowout syndrome (CBS) of the external carotid artery (ECA). Materials and methods: Between 2002 and 2011, 34 patients with HNC with CBS involving branches of the ECA underwent endovascular therapy. Treatment included embolization with microparticles, microcoils, or acrylic adhesives. Fisher's exact test was used to examine demographic features, clinical and angiographic severities, and clinical and imaging findings as predictors of endovascular management outcomes. Results: Technical success and immediate haemostasis were achieved in all patients. Technical complications were encountered in one patient (2.9%). Rebleeding occurred in nine patients (26.5%). Angiographic vascular disruption grading from slight (1) to severe (4) revealed that the 18 patients with acute CBS had scores of 2 (2/18, 11.1%), 3 (3/18, 16.7%), and 4 (13/18, 72.2%), while the 16 patients with impending and threatened CBS had scores of 1 (1/16, 6.25%), 2 (5/16, 31.25%), and 3 (10/16, 62.5%; p = 0.0003). For the 25 patients who underwent preprocedural computed tomography (CT)/magnetic resonance imaging (MRI) examinations within 3 months of treatment, the sensitivity, specificity, and kappa values for agreement between clinical and imaging findings were 1, 0.7143, and 0.7826 for recurrent tumours; 0.9091, 0.3333, and 0.2424 for soft-tissue defects; and 0.4737, 0, and 0.4286 for sinus tract/fistula. Conclusion: Endovascular management for patients with CBS of the ECA had high technical success and safety but was associated with high rebleeding rates. We suggest applying aggressive post-procedural follow-up and using preprocedural CT/MRI to enhance the periprocedural diagnosis.

  8. Dose rate estimates from irradiated light-water-reactor fuel assemblies in air

    International Nuclear Information System (INIS)

    Lloyd, W.R.; Sheaffer, M.K.; Sutcliffe, W.G.

    1994-01-01

    It is generally considered that irradiated spent fuel is so radioactive (self-protecting) that it can only be moved and processed with specialized equipment and facilities. However, a small, possibly subnational, group acting in secret with no concern for the environment (other than the reduction of signatures) and willing to incur substantial but not lethal radiation doses, could obtain plutonium by stealing and processing irradiated spent fuel that has cooled for several years. In this paper, we estimate the dose rate at various distances and directions from typical pressurized-water reactor (PWR) and boiling-water reactor (BWR) spent-fuel assemblies as a function of cooling time. Our results show that the dose rate is reduced rapidly for the first ten years after exposure in the reactor, and that it is reduced by a factor of ∼10 (from the one year dose rate) after 15 years. Even for fuel that has cooled for 15 years, a lethal dose (LD50) of 450 rem would be received at 1 m from the center of the fuel assembly after several minutes. However, moving from 1 to 5 m reduces the dose rate by over a factor of 10, and moving from 1 to 10 m reduces the dose rate by about a factor of 50. The dose rates 1 m from the top or bottom of the assembly are considerably less (about 10 and 22%, respectively) than 1 m from the center of the assembly, which is the direction of the maximum dose rate

  9. Inversely estimating the vertical profile of the soil CO2 production rate in a deciduous broadleaf forest using a particle filtering method.

    Science.gov (United States)

    Sakurai, Gen; Yonemura, Seiichiro; Kishimoto-Mo, Ayaka W; Murayama, Shohei; Ohtsuka, Toshiyuki; Yokozawa, Masayuki

    2015-01-01

    Carbon dioxide (CO2) efflux from the soil surface, which is a major source of CO2 from terrestrial ecosystems, represents the total CO2 production at all soil depths. Although many studies have estimated the vertical profile of the CO2 production rate, one of the difficulties in estimating the vertical profile is measuring diffusion coefficients of CO2 at all soil depths in a nondestructive manner. In this study, we estimated the temporal variation in the vertical profile of the CO2 production rate using a data assimilation method, the particle filtering method, in which the diffusion coefficients of CO2 were simultaneously estimated. The CO2 concentrations at several soil depths and CO2 efflux from the soil surface (only during the snow-free period) were measured at two points in a broadleaf forest in Japan, and the data were assimilated into a simple model including a diffusion equation. We found that there were large variations in the pattern of the vertical profile of the CO2 production rate between experiment sites: the peak CO2 production rate was at soil depths around 10 cm during the snow-free period at one site, but the peak was at the soil surface at the other site. Using this method to estimate the CO2 production rate during snow-cover periods allowed us to estimate CO2 efflux during that period as well. We estimated that the CO2 efflux during the snow-cover period (about half the year) accounted for around 13% of the annual CO2 efflux at this site. Although the method proposed in this study does not ensure the validity of the estimated diffusion coefficients and CO2 production rates, the method enables us to more closely approach the "actual" values by decreasing the variance of the posterior distribution of the values.

  10. National, regional, and worldwide estimates of stillbirth rates in 2009 with trends since 1995: a systematic analysis.

    Science.gov (United States)

    Cousens, Simon; Blencowe, Hannah; Stanton, Cynthia; Chou, Doris; Ahmed, Saifuddin; Steinhardt, Laura; Creanga, Andreea A; Tunçalp, Ozge; Balsara, Zohra Patel; Gupta, Shivam; Say, Lale; Lawn, Joy E

    2011-04-16

    Stillbirths do not count in routine worldwide data-collating systems or for the Millennium Development Goals. Two sets of national stillbirth estimates for 2000 produced similar worldwide totals of 3·2 million and 3·3 million, but rates differed substantially for some countries. We aimed to develop more reliable estimates and a time series from 1995 for 193 countries, by increasing input data, using recent data, and applying improved modelling approaches. For international comparison, stillbirth is defined as fetal death in the third trimester (≥1000 g birthweight or ≥28 completed weeks of gestation). Several sources of stillbirth data were identified and assessed against prespecified inclusion criteria: vital registration data; nationally representative surveys; and published studies identified through systematic literature searches, unpublished studies, and national data identified through a WHO country consultation process. For 2009, reported rates were used for 33 countries and model-based estimates for 160 countries. A regression model of log stillbirth rate was developed and used to predict national stillbirth rates from 1995 to 2009. Uncertainty ranges were obtained with a bootstrap approach. The final model included log(neonatal mortality rate) (cubic spline), log(low birthweight rate) (cubic spline), log(gross national income purchasing power parity) (cubic spline), region, type of data source, and definition of stillbirth. Vital registration data from 79 countries, 69 nationally representative surveys from 39 countries, and 113 studies from 42 countries met inclusion criteria. The estimated number of global stillbirths was 2·64 million (uncertainty range 2·14 million to 3·82 million) in 2009 compared with 3·03 million (uncertainty range 2·37 million to 4·19 million) in 1995. Worldwide stillbirth rate has declined by 14·5%, from 22·1 stillbirths per 1000 births in 1995 to 18·9 stillbirths per 1000 births in 2009. In 2009, 76·2% of

  11. New Software for the Fast Estimation of Population Recombination Rates (FastEPRR) in the Genomic Era

    Directory of Open Access Journals (Sweden)

    Feng Gao

    2016-06-01

    Genetic recombination is a very important evolutionary mechanism that mixes parental haplotypes and produces new raw material for organismal evolution. As a result, information on recombination rates is critical for biological research. In this paper, we introduce a new, extremely fast open-source software package (FastEPRR) that uses machine learning to estimate the recombination rate ρ (= 4Ner) from intraspecific DNA polymorphism data. When ρ > 10 and the number of sampled diploid individuals is large enough (≥50), the variance of ρFastEPRR remains slightly smaller than that of ρLDhat. The new estimate ρcomb (calculated by averaging ρFastEPRR and ρLDhat) has the smallest variance of all cases. When estimating ρFastEPRR, the finite-site model was employed to analyze cases with a high rate of recurrent mutations, and an additional method is proposed to consider the effect of variable recombination rates within windows. Simulations encompassing a wide range of parameters demonstrate that different evolutionary factors, such as demography and selection, may not increase the false positive rate of recombination hotspots. Overall, the accuracy of FastEPRR is similar to that of the well-known method LDhat, but it requires far less computation time. Genetic maps for each human population (YRI, CEU, and CHB) extracted from the 1000 Genomes OMNI data set were obtained in less than 3 d using just a single CPU core. The pairwise Pearson correlation coefficient between the ρFastEPRR and ρLDhat maps is very high, ranging between 0.929 and 0.987 at a 5-Mb scale. Considering that sample sizes for these kinds of data are increasing dramatically with advances in next-generation sequencing technologies, FastEPRR (freely available at http://www.picb.ac.cn/evolgen/) is expected to become a widely used tool for establishing genetic maps and studying recombination hotspots in the population genomic era.

  12. Estimating long-run equilibrium real exchange rates: short-lived shocks with long-lived impacts on Pakistan.

    Science.gov (United States)

    Zardad, Asma; Mohsin, Asma; Zaman, Khalid

    2013-12-01

    The purpose of this study is to investigate the factors that affect real exchange rate volatility for Pakistan through a co-integration and error correction model over a 30-year period, i.e., between 1980 and 2010. The study employed autoregressive conditional heteroskedasticity (ARCH), generalized autoregressive conditional heteroskedasticity (GARCH) and Vector Error Correction (VECM) models to estimate the changes in the volatility of the real exchange rate series, while an error correction model was used to determine the short-run dynamics of the system. The study is limited to a few variables, i.e., the productivity differential (real GDP per capita relative to the main trading partner), terms of trade, trade openness and government expenditures, in order to keep the data manageable and robust. The results indicate that the real effective exchange rate (REER) has been volatile around its equilibrium level, while the speed of adjustment is relatively slow. VECM results confirm long-run convergence of the real exchange rate towards its equilibrium level. Results from the ARCH and GARCH estimation show that volatility from real shocks persists, so that shocks die out rather slowly, and lasting misalignment seems to have occurred.

  13. An expert system for estimating production rates and costs for hardwood group-selection harvests

    Science.gov (United States)

    Chris B. LeDoux; B. Gopalakrishnan; R. S. Pabba

    2003-01-01

    As forest managers shift their focus from stands to entire ecosystems alternative harvesting methods such as group selection are being used increasingly. Results of several field time and motion studies and simulation runs were incorporated into an expert system for estimating production rates and costs associated with harvests of group-selection units of various size...

  14. Uncertainty estimation with bias-correction for flow series based on rating curve

    Science.gov (United States)

    Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta

    2014-03-01

    Streamflow discharge constitutes one of the fundamental data sets required to perform water balance studies and develop hydrological models. A rating curve, designed from a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate a complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available, even though it has a significant impact on hydrological modelling. In this paper, we identify discrepancies in the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction, which also takes the form of a power function like the traditional rating curve. To obtain the uncertainty estimate, we propose a further two-sided Box-Cox transformation to stabilize the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to the gauging stations in the Flinders and Gilbert rivers in north-west Queensland, Australia.
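
    The record describes a power-law rating curve, a bias-correcting power term, and a Box-Cox transform of the residuals; the exact formulation is not given, so the sketch below only illustrates the basic ingredients, fitting Q = a(h − h0)^b in log space and Box-Cox transforming the multiplicative residuals, on made-up gaugings.

```python
import numpy as np
from scipy.stats import boxcox

# Hypothetical concurrent stage h (m) and discharge Q (m^3/s) gaugings.
h = np.array([0.6, 0.9, 1.3, 1.8, 2.4, 3.1, 3.9])
Q = np.array([2.1, 6.0, 15.5, 36.0, 72.0, 130.0, 215.0])
h0 = 0.3  # cease-to-flow stage, assumed known here for simplicity

# Fit log(Q) = log(a) + b*log(h - h0) by least squares (the classic rating-curve form).
x = np.log(h - h0)
b, log_a = np.polyfit(x, np.log(Q), 1)
a = np.exp(log_a)
Q_hat = a * (h - h0) ** b

# Box-Cox transform of the (positive) multiplicative residuals to stabilise their
# distribution before building an uncertainty ensemble, loosely following the paper's idea.
residual_ratio = Q / Q_hat
transformed, lam = boxcox(residual_ratio)
print(f"Q = {a:.2f}*(h-{h0})^{b:.2f}; Box-Cox lambda for residual ratios = {lam:.2f}")
```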

  15. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    Science.gov (United States)

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.

  16. Divergence times in Caenorhabditis and Drosophila inferred from direct estimates of the neutral mutation rate.

    Science.gov (United States)

    Cutter, Asher D

    2008-04-01

    Accurate inference of the dates of common ancestry among species forms a central problem in understanding the evolutionary history of organisms. Molecular estimates of divergence time rely on the molecular evolutionary prediction that neutral mutations and substitutions occur at the same constant rate in genomes of related species. This underlies the notion of a molecular clock. Most implementations of this idea depend on paleontological calibration to infer dates of common ancestry, but taxa with poor fossil records must rely on external, potentially inappropriate, calibration with distantly related species. The classic biological models Caenorhabditis and Drosophila are examples of such problem taxa. Here, I illustrate internal calibration in these groups with direct estimates of the mutation rate from contemporary populations that are corrected for interfering effects of selection on the assumption of neutrality of substitutions. Divergence times are inferred among 6 species each of Caenorhabditis and Drosophila, based on thousands of orthologous groups of genes. I propose that the 2 closest known species of Caenorhabditis shared a common ancestor <24 MYA (Caenorhabditis briggsae and Caenorhabditis sp. 5) and that Caenorhabditis elegans diverged from its closest known relatives <30 MYA, assuming that these species pass through at least 6 generations per year; these estimates are much more recent than reported previously with molecular clock calibrations from non-nematode phyla. Dates inferred for the common ancestor of Drosophila melanogaster and Drosophila simulans are roughly concordant with previous studies. These revised dates have important implications for rates of genome evolution and the origin of self-fertilization in Caenorhabditis.

  17. Using (137)Cs measurements to estimate soil erosion rates in the Pčinja and South Morava River Basins, southeastern Serbia.

    Science.gov (United States)

    Petrović, Jelena; Dragović, Snežana; Dragović, Ranko; Đorđević, Milan; Đokić, Mrđan; Zlatković, Bojan; Walling, Desmond

    2016-07-01

    The need for reliable assessments of soil erosion rates in Serbia has directed attention to the potential for using ¹³⁷Cs measurements to derive estimates of soil redistribution rates. Since, to date, this approach has not been applied in southeastern Serbia, a reconnaissance study was undertaken to confirm its viability. The need to take account of the occurrence of substantial Chernobyl fallout was seen as a potential problem. Samples for ¹³⁷Cs measurement were collected from a zone of uncultivated soils in the watersheds of the Pčinja and South Morava Rivers, an area with known high soil erosion rates. Two theoretical conversion models, the profile distribution (PD) model and the diffusion and migration (D&M) model, were used to derive estimates of soil erosion and deposition rates from the ¹³⁷Cs measurements. The estimates of soil redistribution rates derived by using the PD and D&M models were found to differ substantially, and this difference was ascribed to the assumptions of the simpler PD model that cause it to overestimate rates of soil loss. The results provided by the D&M model were judged to be more reliable. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Monitoring renal function in children with Fabry disease: comparisons of measured and creatinine-based estimated glomerular filtration rate

    NARCIS (Netherlands)

    Tøndel, Camilla; Ramaswami, Uma; Aakre, Kristin Moberg; Wijburg, Frits; Bouwman, Machtelt; Svarstad, Einar

    2010-01-01

    Studies on renal function in children with Fabry disease have mainly been done using estimated creatinine-based glomerular filtration rate (GFR). The aim of this study was to compare estimated creatinine-based GFR (eGFR) with measured GFR (mGFR) in children with Fabry disease and normal renal
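
    The record does not state which creatinine-based estimate was used. As a generic illustration only, the bedside Schwartz equation (eGFR ≈ 0.413 × height/SCr) is a common choice for children and is sketched below; it is not necessarily the equation used in this study.

```python
def egfr_bedside_schwartz(height_cm: float, serum_creatinine_mg_dl: float) -> float:
    """Bedside Schwartz estimate of GFR in children (ml/min/1.73 m^2).

    Illustrative only; the constant 0.413 is the published bedside Schwartz
    coefficient and may differ from the equation used in the cited study.
    """
    return 0.413 * height_cm / serum_creatinine_mg_dl

# Example: a 120 cm child with serum creatinine 0.5 mg/dl.
print(f"eGFR ≈ {egfr_bedside_schwartz(120, 0.5):.0f} ml/min/1.73 m^2")
```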

  19. Bayesian estimation of magma supply, storage, and eruption rates using a multiphysical volcano model: Kīlauea Volcano, 2000-2012

    Science.gov (United States)

    Anderson, Kyle R.; Poland, Michael P.

    2016-08-01

    Estimating rates of magma supply to the world's volcanoes remains one of the most fundamental aims of volcanology. Yet, supply rates can be difficult to estimate even at well-monitored volcanoes, in part because observations are noisy and are usually considered independently rather than as part of a holistic system. In this work we demonstrate a technique for probabilistically estimating time-variable rates of magma supply to a volcano through probabilistic constraint on storage and eruption rates. This approach utilizes Bayesian joint inversion of diverse datasets using predictions from a multiphysical volcano model, and independent prior information derived from previous geophysical, geochemical, and geological studies. The solution to the inverse problem takes the form of a probability density function which takes into account uncertainties in observations and prior information, and which we sample using a Markov chain Monte Carlo algorithm. Applying the technique to Kīlauea Volcano, we develop a model which relates magma flow rates with deformation of the volcano's surface, sulfur dioxide emission rates, lava flow field volumes, and composition of the volcano's basaltic magma. This model accounts for effects and processes mostly neglected in previous supply rate estimates at Kīlauea, including magma compressibility, loss of sulfur to the hydrothermal system, and potential magma storage in the volcano's deep rift zones. We jointly invert data and prior information to estimate rates of supply, storage, and eruption during three recent quasi-steady-state periods at the volcano. Results shed new light on the time-variability of magma supply to Kīlauea, which we find to have increased by 35-100% between 2001 and 2006 (from 0.11-0.17 to 0.18-0.28 km3/yr), before subsequently decreasing to 0.08-0.12 km3/yr by 2012. Changes in supply rate directly impact hazard at the volcano, and were largely responsible for an increase in eruption rate of 60-150% between 2001 and
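
    The multiphysical model and joint datasets are beyond a short example, but the sampling machinery described (drawing from a posterior with a Markov chain Monte Carlo algorithm) can be illustrated with a minimal random-walk Metropolis sketch for a single supply-rate parameter and made-up observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "observed" yearly volume changes (km^3/yr) with measurement noise,
# standing in for the joint geodetic/gas/lava-flow constraints in the paper.
obs = np.array([0.20, 0.24, 0.22, 0.19, 0.23])
sigma = 0.03

def log_posterior(supply_rate):
    if supply_rate <= 0:
        return -np.inf
    log_prior = 0.0  # flat prior on positive rates (an assumption)
    log_like = -0.5 * np.sum(((obs - supply_rate) / sigma) ** 2)
    return log_prior + log_like

# Random-walk Metropolis sampler.
samples, current = [], 0.15
current_lp = log_posterior(current)
for _ in range(20_000):
    proposal = current + rng.normal(scale=0.01)
    proposal_lp = log_posterior(proposal)
    if np.log(rng.uniform()) < proposal_lp - current_lp:
        current, current_lp = proposal, proposal_lp
    samples.append(current)

samples = np.array(samples[5_000:])  # discard burn-in
print(f"Posterior supply rate: {samples.mean():.3f} "
      f"[{np.percentile(samples, 2.5):.3f}, {np.percentile(samples, 97.5):.3f}] km^3/yr")
```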

  20. A Systematic Review and Meta-Analysis Estimating the Expected Dropout Rates in Randomized Controlled Trials on Yoga Interventions.

    Science.gov (United States)

    Cramer, Holger; Haller, Heidemarie; Dobos, Gustav; Lauche, Romy

    2016-01-01

    A reasonable estimation of expected dropout rates is vital for adequate sample size calculations in randomized controlled trials (RCTs). Underestimating expected dropout rates increases the risk of false negative results, while overestimating them results in overly large sample sizes, raising both ethical and economic issues. To estimate expected dropout rates in RCTs on yoga interventions, MEDLINE/PubMed, Scopus, IndMED, and the Cochrane Library were searched through February 2014; a total of 168 RCTs were meta-analyzed. The overall dropout rate was 11.42% (95% confidence interval [CI] = 10.11%, 12.73%) in the yoga groups; rates were comparable in usual care and psychological control groups and were slightly higher in exercise control groups (rate = 14.53%; 95% CI = 11.56%, 17.50%; odds ratio = 0.82; 95% CI = 0.68, 0.98; p = 0.03). For RCTs with durations above 12 weeks, dropout rates in yoga groups increased to 15.23% (95% CI = 11.79%, 18.68%). The upper border of the 95% CIs for dropout rates was commonly below 20% regardless of study origin, health condition, gender, age groups, and intervention characteristics; however, it exceeded 40% for studies on HIV patients or heterogeneous age groups. In conclusion, dropout rates can be expected to be less than 15 to 20% for most RCTs on yoga interventions. Yet dropout rates beyond 40% are possible depending on the participants' sociodemographic and health conditions.

  1. Uncertainty in action-value estimation affects both action choice and learning rate of the choice behaviors of rats.

    Science.gov (United States)

    Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu

    2012-04-01

    The estimation of reward outcomes for action candidates is essential for decision making. In this study, we examined whether and how the uncertainty in reward outcome estimation affects the action choice and learning rate. We designed a choice task in which rats selected either the left-poking or right-poking hole and received a reward of a food pellet stochastically. The reward probabilities of the left and right holes were chosen from six settings (high, 100% vs. 66%; mid, 66% vs. 33%; low, 33% vs. 0% for the left vs. right holes, and the opposites) in every 20-549 trials. We used Bayesian Q-learning models to estimate the time course of the probability distribution of action values and tested if they better explain the behaviors of rats than standard Q-learning models that estimate only the mean of action values. Model comparison by cross-validation revealed that a Bayesian Q-learning model with an asymmetric update for reward and non-reward outcomes fit the choice time course of the rats best. In the action-choice equation of the Bayesian Q-learning model, the estimated coefficient for the variance of action value was positive, meaning that rats were uncertainty seeking. Further analysis of the Bayesian Q-learning model suggested that the uncertainty facilitated the effective learning rate. These results suggest that the rats consider uncertainty in action-value estimation and that they have an uncertainty-seeking action policy and uncertainty-dependent modulation of the effective learning rate. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
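
    The record describes Bayesian Q-learning in which both the mean and the variance of each action value are tracked, the variance enters the action-choice rule, and uncertainty modulates the effective learning rate. The sketch below is a generic Gaussian-update variant with an uncertainty-weighted softmax, not the authors' exact (asymmetric-update) model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_actions = 2
mu = np.zeros(n_actions)          # posterior means of action values
var = np.ones(n_actions)          # posterior variances (uncertainty)
obs_noise = 0.25                  # assumed reward observation noise variance
phi = 0.5                         # weight on uncertainty in the choice rule
beta = 5.0                        # inverse temperature
true_p = np.array([0.66, 0.33])   # reward probabilities (one of the paper's settings)

for trial in range(500):
    # Uncertainty-seeking softmax: value bonus proportional to the posterior std.
    score = beta * (mu + phi * np.sqrt(var))
    prob = np.exp(score - score.max())
    prob /= prob.sum()
    a = rng.choice(n_actions, p=prob)

    reward = float(rng.random() < true_p[a])

    # Kalman-style update: the gain (effective learning rate) grows with uncertainty.
    gain = var[a] / (var[a] + obs_noise)
    mu[a] += gain * (reward - mu[a])
    var[a] *= (1.0 - gain)

print("Posterior means:", np.round(mu, 2), "variances:", np.round(var, 3))
```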

  2. Trends in pregnancies and pregnancy rates by outcome: estimates for the United States, 1976-96.

    Science.gov (United States)

    Ventura, S J; Mosher, W D; Curtin, S C; Abma, J C; Henshaw, S

    2000-01-01

    This report presents national estimates of pregnancies and pregnancy rates according to women's age, race, and Hispanic origin, and by marital status, race, and Hispanic origin. Data are presented for 1976-96. Data from the National Survey of Family Growth (NSFG) are used to show information on sexual activity, contraceptive practices, and infertility, as well as women's reports of pregnancy intentions. Tables of pregnancy rates and the factors affecting pregnancy rates are presented and interpreted. Birth data are from the birth-registration system for all births registered in the United States and reported by State health departments to NCHS; abortion data are from The Alan Guttmacher Institute (AGI) and the National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention (CDC); and fetal loss data are from pregnancy history information collected in the NSFG. In 1996 an estimated 6.24 million pregnancies resulted in 3.89 million live births, 1.37 million induced abortions, and 0.98 million fetal losses. The pregnancy rate in 1996 was 104.7 pregnancies per 1,000 women aged 15-44 years, 9 percent lower than in 1990 (115.6), and the lowest recorded since 1976 (102.7). Since 1990 rates have dropped 8 percent for live births, 16 percent for induced abortions, and 4 percent for fetal losses. The teenage pregnancy rate has declined considerably in the 1990's, falling 15 percent from its 1991 high of 116.5 per 1,000 women aged 15-19 years to 98.7 in 1996. Among the factors accounting for this decline are decreased sexual activity, increases in condom use, and the adoption of the injectable and implant contraceptives.

  3. An Online Calculator to Estimate the Impact of Changes in Breastfeeding Rates on Population Health and Costs.

    Science.gov (United States)

    Stuebe, Alison M; Jegier, Briana J; Schwarz, Eleanor Bimla; Green, Brittany D; Reinhold, Arnold G; Colaizy, Tarah T; Bogen, Debra L; Schaefer, Andrew J; Jegier, Jamus T; Green, Noah S; Bartick, Melissa C

    2017-12-01

    We sought to determine the impact of changes in breastfeeding rates on population health. We used a Monte Carlo simulation model to estimate the population-level changes in disease burden associated with marginal changes in rates of any breastfeeding at each month from birth to 12 months of life, and in rates of exclusive breastfeeding from birth to 6 months of life. We used these marginal estimates to construct an interactive online calculator (available at www.usbreastfeeding.org/saving-calc ). The Institutional Review Board of the Cambridge Health Alliance exempted the study. Using our interactive online calculator, we found that a 5% point increase in breastfeeding rates was associated with statistically significant differences in child infectious morbidity for the U.S. population, including otitis media (101,952 cases, 95% confidence interval [CI] 77,929-131,894 cases) and gastrointestinal infection (236,073 cases, 95% CI 190,643-290,278 cases). Associated medical cost differences were $31,784,763 (95% CI $24,295,235-$41,119,548) for otitis media and $12,588,848 ($10,166,203-$15,479,352) for gastrointestinal infection. The state-level impact of attaining Healthy People 2020 goals varied by population size and current breastfeeding rates. Modest increases in breastfeeding rates substantially impact healthcare costs in the first year of life.

  4. Using Pulse Rate in Estimating Workload Evaluating a Load Mobilizing Activity

    Directory of Open Access Journals (Sweden)

    Juan Alberto Castillo

    2014-06-01

    Introduction: The pulse rate is a direct indicator of the state of the cardiovascular system, in addition to being an indirect indicator of the energy expended in performing a task. The pulse of a person is the number of pulses recorded in a peripheral artery per unit time; the pulse appears as a pressure wave moving along the blood vessels, which are flexible, "in large arterial branches, at speeds of 7-10 m/s, and in the small arteries, 15 to 35 m/s". Materials and methods: The aim of this study was to assess heart rate, using the technique of recording the pulse frequency, oxygen consumption and observation of work activity, in the estimation of the workload in a load handling task for three situations: lift/transfer/deposit. Before, during and after the task the pulse rate was recorded for 24 young volunteers (10 women and 14 men) under laboratory conditions. We performed a gesture analysis of the work activity and of the lifting and handling strategies. Results: We observed an increase between initial and final pulse frequency (fp) in both groups and for the two tasks; a difference was also recorded in the increase in heart rate for the 17.5 kg load, and 75% of the participants experienced an increase in fp above 100 beats/min. For 25 kg, recorded values were greater than 114 beats/min, and for 17.5 kg, greater than 128 beats/min. Discussion: The pulse rate method is recommended for its simplicity of use by operational staff, supervisors, managers and industrial engineers not trained in physiology; the method can also be used by industrial hygienists.

  5. Propargyl Recombination: Estimation of the High Temperature, Low Pressure Rate Constant from Flame Measurements

    DEFF Research Database (Denmark)

    Rasmussen, Christian Lund; Skjøth-Rasmussen, Martin Skov; Jensen, Anker

    2005-01-01

    The most important cyclization reaction in hydrocarbon flames is probably the recombination of propargyl radicals. This reaction may, depending on reaction conditions, form benzene, phenyl or fulvene, as well as a range of linear products. A number of rate measurements have been reported for C3H3 + C3H3 at temperatures below 1000 K, while data at high temperature and low pressure can only be obtained from flames. In the present work, an estimate of the rate constant for the reaction at 1400 ± 50 K and 20 Torr is obtained from analysis of the fuel-rich acetylene flame of Westmoreland, Howard…

  6. Can accelerometry data improve estimates of heart rate variability from wrist pulse PPG sensors?*

    Science.gov (United States)

    Kos, Maciej; Li, Xuan; Khaghani-Far, Iman; Gordon, Christine M.; Pavel, Misha; Jimison, Holly B.

    2018-01-01

    A key prerequisite for precision medicine is the ability to assess metrics of human behavior objectively, unobtrusively and continuously. This capability serves as a framework for the optimization of tailored, just-in-time precision health interventions. Mobile unobtrusive physiological sensors, an important prerequisite for realizing this vision, show promise in implementing this quality of physiological data collection. However, first we must trust the collected data. In this paper, we present a novel approach to improving heart rate estimates from wrist pulse photoplethysmography (PPG) sensors. We also discuss the impact of sensor movement on the veracity of collected heart rate data. PMID:29060185

  7. Can accelerometry data improve estimates of heart rate variability from wrist pulse PPG sensors?

    Science.gov (United States)

    Kos, Maciej; Xuan Li; Khaghani-Far, Iman; Gordon, Christine M; Pavel, Misha; Jimison, Holly B

    2017-07-01

    A key prerequisite for precision medicine is the ability to assess metrics of human behavior objectively, unobtrusively and continuously. This capability serves as a framework for the optimization of tailored, just-in-time precision health interventions. Mobile unobtrusive physiological sensors, an important prerequisite for realizing this vision, show promise in implementing this quality of physiological data collection. However, first we must trust the collected data. In this paper, we present a novel approach to improving heart rate estimates from wrist pulse photoplethysmography (PPG) sensors. We also discuss the impact of sensor movement on the veracity of collected heart rate data.
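
    These two records do not reproduce the specific algorithm. One common way accelerometry can improve wrist-PPG heart rate estimates is to discount PPG spectral peaks that coincide with motion peaks, as in the generic sketch below with synthetic signals.

```python
import numpy as np

def hr_from_ppg_with_accel(ppg, accel_mag, fs):
    """Pick the PPG spectral peak after penalising frequencies dominated by motion.

    Generic illustration only; not the algorithm from the cited papers.
    """
    freqs = np.fft.rfftfreq(ppg.size, d=1.0 / fs)
    ppg_spec = np.abs(np.fft.rfft(ppg - ppg.mean()))
    acc_spec = np.abs(np.fft.rfft(accel_mag - accel_mag.mean()))

    band = (freqs >= 0.7) & (freqs <= 3.5)          # plausible HR band, 42-210 bpm
    penalty = 1.0 + acc_spec / (acc_spec.max() + 1e-12)
    score = np.where(band, ppg_spec / penalty, 0.0)
    return 60.0 * freqs[np.argmax(score)]           # beats per minute

# Synthetic test: 1.2 Hz pulse plus a 2.0 Hz arm-swing artifact also seen by the accelerometer.
fs = 50.0
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.8 * np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.random.randn(t.size)
accel = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(t.size)
print(f"Estimated HR: {hr_from_ppg_with_accel(ppg, accel, fs):.0f} bpm")
```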

  8. Intercomparison of techniques for estimation of sedimentation rate in the Sabah and Sarawak coastal waters

    International Nuclear Information System (INIS)

    Zal U'yun Wan Mahmood; Zaharudin Ahmad; Abdul Kadir Ishak; Che Abd Rahim Mohamed

    2011-01-01

    A total of eight sediment cores, 50 cm in length, were taken from the Sabah and Sarawak coastal waters using a gravity corer in 2004 to estimate sedimentation rates using four mathematical models: CIC, Shukla-CIC, CRS and ADE. The average sedimentation rate, calculated from the vertical profile of ²¹⁰Pbex in the sediment cores, ranged from 0.24 to 0.48 cm year⁻¹. The findings also showed that the sedimentation rates derived from the four models were generally in good agreement, with similar or comparable values at some stations. However, statistical analysis using a paired-sample t-test indicated that the CIC model was the most accurate, reliable and suitable technique for determining the sedimentation rate in the coastal area. (author)

  9. Combined methodology for estimating dose rates and health effects from exposure to radioactive pollutants

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Leggett, R.W.; Yalcintas, M.G.

    1980-12-01

    The work described in the report is basically a synthesis of two previously existing computer codes: INREM II, developed at the Oak Ridge National Laboratory (ORNL); and CAIRD, developed by the Environmental Protection Agency (EPA). The INREM II code uses contemporary dosimetric methods to estimate doses to specified reference organs due to inhalation or ingestion of a radionuclide. The CAIRD code employs actuarial life tables to account for competing risks in estimating numbers of health effects resulting from exposure of a cohort to some incremental risk. The combined computer code, referred to as RADRISK, estimates numbers of health effects in a hypothetical cohort of 100,000 persons due to continuous lifetime inhalation or ingestion of a radionuclide. Also briefly discussed in this report is a method of estimating numbers of health effects in a hypothetical cohort due to continuous lifetime exposure to external radiation. This method employs the CAIRD methodology together with dose conversion factors generated by the computer code DOSFACTER, developed at ORNL; these dose conversion factors are used to estimate dose rates to persons due to radionuclides in the air or on the ground surface. The combination of the life table and dosimetric methodologies provides a basis for developing guidelines for the release of radioactive pollutants to the atmosphere, as required by the Clean Air Act Amendments of 1977.

  10. Validating accelerometry estimates of energy expenditure across behaviours using heart rate data in a free-living seabird.

    Science.gov (United States)

    Hicks, Olivia; Burthe, Sarah; Daunt, Francis; Butler, Adam; Bishop, Charles; Green, Jonathan A

    2017-05-15

    Two main techniques have dominated the field of ecological energetics: the heart rate and doubly labelled water methods. Although well established, they are not without their weaknesses, namely expense, intrusiveness and lack of temporal resolution. A new technique has been developed using accelerometers; it uses the overall dynamic body acceleration (ODBA) of an animal as a calibrated proxy for energy expenditure. This method provides high-resolution data without the need for surgery. Significant relationships exist between the rate of oxygen consumption (V̇O2) and ODBA in controlled conditions across a number of taxa; however, it is not known whether ODBA represents a robust proxy for energy expenditure consistently in all natural behaviours, and there have been specific questions over its validity during diving in diving endotherms. Here, we simultaneously deployed accelerometers and heart rate loggers in a wild population of European shags (Phalacrocorax aristotelis). Existing calibration relationships were then used to make behaviour-specific estimates of energy expenditure for each of these two techniques. Compared with heart rate-derived estimates, the ODBA method predicts energy expenditure well during flight and diving behaviour, but overestimates the cost of resting behaviour. We then combined these two datasets to generate a new calibration relationship between ODBA and V̇O2 that accounts for this by being informed by heart rate-derived estimates. Across behaviours we found a good relationship between ODBA and V̇O2. Within individual behaviours, we found useable relationships between ODBA and V̇O2 for flight and resting, and a poor relationship during diving. The error associated with these new calibration relationships mostly originates from the previous heart rate calibration rather than the error associated with the ODBA method. The equations provide tools for understanding how energy constrains ecology across the complex behaviour…

  11. Estimated glomerular filtration rate, chronic kidney disease and antiretroviral drug use in HIV-positive patients

    DEFF Research Database (Denmark)

    Mocroft, Amanda; Kirk, Ole; Reiss, Peter

    2010-01-01

    … with at least three serum creatinine measurements and corresponding body weight measurements from 2004 onwards. METHODS: CKD was defined as either a confirmed (two measurements ≥3 months apart) estimated glomerular filtration rate (eGFR) of 60 ml/min per 1.73 m² or below for persons with baseline eGFR of above ...... cumulative exposure to tenofovir [incidence rate ratio (IRR) per year 1.16, 95% CI 1.06-1.25, P ... increased rate of CKD. Consistent results were observed in wide-ranging sensitivity analyses, although of marginal statistical significance for lopinavir/r. No other antiretroviral drugs were associated with an increased incidence of CKD. CONCLUSION: In this nonrandomized large cohort, increasing exposure...

  12. Dose rate estimates and spatial interpolation maps of outdoor gamma dose rate with geostatistical methods; A case study from Artvin, Turkey

    International Nuclear Information System (INIS)

    Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkin, Halim; Çevik, Uğur

    2015-01-01

    In this study, the performance of geostatistical estimation methods is compared for investigating and mapping natural background radiation using the minimum number of data. Artvin province, which has quite hilly terrain and a wide variety of soils and is located in the northeast of Turkey, was selected as the study area. The outdoor gamma dose rate (OGDR), an important determinant of environmental radioactivity levels, was measured at 204 stations. The spatial structure of OGDR was determined by anisotropic, isotropic and residual variograms. Ordinary kriging (OK) and universal kriging (UK) interpolation estimates were calculated with the help of model parameters obtained from these variograms. In OK, calculations are based only on the positions of the points where samples are taken, whereas in the UK technique, general soil groups and altitude values, which directly affect OGDR, are also included in the calculations. When the two methods are evaluated based on their performance, the UK model (r = 0.88, p < 0.001) gives considerably better results than the OK model (r = 0.64, p < 0.001). In addition, the maps created at the end of the study illustrate that local changes are better reflected by the UK method than by the OK method, and its error variance is lower. - Highlights: • The spatial dispersion of gamma dose rates in Artvin, which possesses some of the roughest terrain in Turkey, was studied. • The performance of different geostatistical methods (OK and UK) for the dispersion of gamma dose rates was compared. • Estimates were calculated for unsampled points using the geostatistical models, and the results were mapped. • The general radiological structure was determined in much less time and at lower cost compared to experimental methods. • When the theoretical methods are evaluated, UK gives more descriptive results than OK.
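
    Ordinary kriging of point dose-rate measurements can be reproduced with standard geostatistics libraries. The sketch below uses the PyKrige package with synthetic coordinates and a spherical variogram, which may differ from the variogram models fitted in the study; universal kriging would additionally supply drift terms such as altitude and soil group.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # assumes the PyKrige package is installed

rng = np.random.default_rng(42)

# Synthetic stand-in for the 204 OGDR stations: coordinates (km) and dose rates (nGy/h).
x = rng.uniform(0, 100, 204)
y = rng.uniform(0, 100, 204)
ogdr = 80 + 0.3 * x - 0.2 * y + rng.normal(0, 5, 204)

# Ordinary kriging with a spherical variogram (an assumed choice for illustration).
ok = OrdinaryKriging(x, y, ogdr, variogram_model="spherical")
grid_x = np.linspace(0, 100, 50)
grid_y = np.linspace(0, 100, 50)
z_pred, z_var = ok.execute("grid", grid_x, grid_y)

print("Predicted OGDR grid:", z_pred.shape, "error variance grid:", z_var.shape)
```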

  13. Estimation of glucose rate of appearance from CGS and subcutaneous insulin delivery in type 1 diabetes

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Al-Matouq, Ali Ahmed

    2017-01-01

    Method and System for providing estimates of the Glucose Rate of Appearance from the intestine (GRA) using continuous glucose sensor (CGS) measurements taken from the subcutaneous tissue of a diabetes patient and the amount of insulin administered.

  14. Complexity Control of Fast Motion Estimation in H.264/MPEG-4 AVC with Rate-Distortion-Complexity optimization

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren; Aghito, Shankar Manuel

    2007-01-01

    A complexity control algorithm for H.264 advanced video coding is proposed. The algorithm can control the complexity of integer inter motion estimation for a given target complexity. The Rate-Distortion-Complexity performance is improved by a complexity prediction model, simple analysis of the past statistics and a control scheme. The algorithm also works well under scene-change conditions. Test results for coding interlaced video (720x576 PAL) are reported.

  15. Kinetics analysis for development of a rate constant estimation model for ultrasonic degradation reaction of methylene blue.

    Science.gov (United States)

    Kobayashi, Daisuke; Honma, Chiemi; Matsumoto, Hideyuki; Takahashi, Tomoki; Kuroda, Chiaki; Otake, Katsuto; Shono, Atsushi

    2014-07-01

    Ultrasound has been used as an advanced oxidation method for wastewater treatment. Sonochemical degradation of organic compounds in aqueous solution occurs by pyrolysis and/or reaction with hydroxyl radicals, and kinetic models of sonochemical degradation have been proposed. However, the effect of ultrasonic frequency on the degradation rate had not been investigated. In our previous study, a simple model for estimating the apparent degradation rate of methylene blue was proposed. In this study, sonochemical degradation of methylene blue was performed at various frequencies. The apparent degradation rate constant was evaluated assuming that sonochemical degradation of methylene blue is a first-order reaction. Specifically, we focused on the effects of ultrasonic frequency and power on the rate constant, and the applicability of our proposed model was demonstrated. Using this approach, the maximum sonochemical degradation rate was observed at 490 kHz, which agrees with a previous investigation of the effect of frequency on the sonochemical efficiency value evaluated by KI oxidation dosimetry. The degradation rate increased with ultrasonic power at every frequency, and a threshold power must be reached for the degradation reaction to progress. The apparent degradation rate constant was inversely proportional to the initial methylene blue concentration. Our proposed model for estimating the apparent degradation rate constant from ultrasonic power and the sonochemical efficiency value also applies over the extended frequency and initial-concentration ranges examined here. Copyright © 2013 Elsevier B.V. All rights reserved.
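
    Because the degradation is treated as first-order, the apparent rate constant can be recovered from concentration-time data by a simple log-linear fit. The sketch below, with made-up data, illustrates the idea only; it is not the authors' analysis or data.

```python
import numpy as np

def first_order_rate_constant(t, c):
    """Apparent first-order rate constant k from ln(C0/C) = k*t,
    estimated as the zero-intercept least-squares slope."""
    t = np.asarray(t, dtype=float)
    y = np.log(c[0] / np.asarray(c, dtype=float))
    return float(np.sum(t * y) / np.sum(t * t))

# Hypothetical methylene blue concentrations (umol/L) over time (min).
t = [0, 10, 20, 30, 60]
c = [10.0, 7.8, 6.1, 4.8, 2.3]
print(f"apparent k = {first_order_rate_constant(t, c):.4f} min^-1")
```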

  16. Basic study on relationship between estimated rate constants and noise in FDG kinetic analysis

    International Nuclear Information System (INIS)

    Kimura, Yuichi; Toyama, Hinako; Senda, Michio.

    1996-01-01

    For accurate estimation of the rate constants in a dynamic ¹⁸F-FDG study, the shape of the estimation function (Φ) is crucial. In this investigation, the relationship between the noise level in the tissue time-activity curve and the shape of the least-squares estimation function (the sum of squared errors between the model prediction and the measured data) was examined for the 3-parameter ¹⁸F-FDG model. In the first simulation, using an actual plasma time-activity curve, true tissue curves were generated from known sets of rate constants in the ranges 0.05 ≤ k1 ≤ 0.15, 0.1 ≤ k2 ≤ 0.2 and 0.01 ≤ k3 ≤ 0.1, in steps of 0.01. This procedure was repeated at various noise levels in the tissue time-activity curve, from 1 to 8% of the maximum tissue activity. In the second simulation, plasma and tissue time-activity curves from a clinical ¹⁸F-FDG dynamic study were used to calculate Φ. In the noise-free case, the global minimum is well separated from neighbouring local minima, so the optimum point was easy to locate. With increasing noise, however, the optimum point became buried among many neighbouring local minima, making it difficult to locate. The optimum point was found within 20% of the convergence point of a standard non-linear optimization method. The shape of Φ for the clinical data was similar to that at a noise level of 3 or 5% in the first simulation. Therefore, a direct search within a region extending 20% from the result of the usual non-linear curve-fitting procedure is recommended for accurate estimation of the rate constants. (author)
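
    The record describes minimizing a sum-of-squares objective Φ over (k1, k2, k3) and then refining the estimate by a direct search within 20% of the curve-fit result. The sketch below shows how such a fit might look for the irreversible two-tissue FDG model; the model form, bounds and grid resolution are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

def fdg_tissue_curve(params, t, cp):
    """Irreversible two-tissue (3-parameter, k4 = 0) FDG model:
    C_T(t) = K1*k3/(k2+k3) * int(Cp) + K1*k2/(k2+k3) * [exp(-(k2+k3)t) conv Cp].
    Assumes a uniform time grid t; convolution and integral are discretized."""
    k1, k2, k3 = params
    dt = t[1] - t[0]
    a = k2 + k3
    conv = np.convolve(cp, np.exp(-a * t))[: t.size] * dt
    integral = np.cumsum(cp) * dt
    return (k1 * k3 / a) * integral + (k1 * k2 / a) * conv

def fit_rate_constants(t, cp, ct, x0=(0.10, 0.15, 0.05)):
    """Least-squares minimization of Phi = sum of squared residuals."""
    resid = lambda p: fdg_tissue_curve(p, t, cp) - ct
    return least_squares(resid, x0, bounds=([1e-4] * 3, [1.0] * 3)).x

def refine_by_direct_search(t, cp, ct, p_fit, n=9):
    """Direct grid search within +/-20% of the curve-fit result, as the
    abstract recommends for noisy tissue curves."""
    grids = [np.linspace(0.8 * p, 1.2 * p, n) for p in p_fit]
    phi = lambda p: float(np.sum((fdg_tissue_curve(p, t, cp) - ct) ** 2))
    best = min(
        ((k1, k2, k3) for k1 in grids[0] for k2 in grids[1] for k3 in grids[2]),
        key=phi,
    )
    return np.asarray(best), phi(best)
```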

  17. Modelling and hardware-in-the-loop simulation of the blowout tract components for passenger compartment air conditioning of motor vehicles; Modellierung und Hardware-in-the-Loop-Simulation der Komponenten des Ausblastraktes zur Kraftfahrzeuginnenraumklimatisierung

    Energy Technology Data Exchange (ETDEWEB)

    Michalek, David

    2009-07-01

    The author investigated the modelling and hardware-in-the-loop simulation of the components of the blowout tract of motor vehicle air conditioning systems. The control and air-conditioning systems are described, from the air entering the vehicle through to the control units and sensors that monitor the state variables. The aim was to analyse the function of the control-unit hardware and software reproducibly in order to save time and cost. The models were verified against available data, and validation criteria were established for the hardware-in-the-loop simulator. For selected operating conditions, the behaviour of the air conditioning control unit in the vehicle was compared with the simulation results and evaluated against the established criteria. (orig.)

  18. A Systematic Review and Meta-Analysis Estimating the Expected Dropout Rates in Randomized Controlled Trials on Yoga Interventions

    Directory of Open Access Journals (Sweden)

    Holger Cramer

    2016-01-01

    A reasonable estimation of expected dropout rates is vital for adequate sample size calculations in randomized controlled trials (RCTs). Underestimating expected dropout rates increases the risk of false negative results, while overestimating them results in overly large sample sizes, raising both ethical and economic issues. To estimate expected dropout rates in RCTs on yoga interventions, MEDLINE/PubMed, Scopus, IndMED, and the Cochrane Library were searched through February 2014; a total of 168 RCTs were meta-analyzed. The overall dropout rate was 11.42% (95% confidence interval [CI] = 10.11%, 12.73%) in the yoga groups; rates were comparable in usual care and psychological control groups and were slightly higher in exercise control groups (rate = 14.53%; 95% CI = 11.56%, 17.50%; odds ratio = 0.82; 95% CI = 0.68, 0.98; p = 0.03). For RCTs with durations above 12 weeks, dropout rates in yoga groups increased to 15.23% (95% CI = 11.79%, 18.68%). The upper bound of the 95% CIs for dropout rates was commonly below 20% regardless of study origin, health condition, gender, age group, and intervention characteristics; however, it exceeded 40% for studies on HIV patients or heterogeneous age groups. In conclusion, dropout rates can be expected to be less than 15 to 20% for most RCTs on yoga interventions, yet dropout rates beyond 40% are possible depending on the participants' sociodemographic and health condition.
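
    The record reports pooled dropout rates with 95% CIs but does not give the meta-analytic model, so the sketch below shows a generic DerSimonian-Laird random-effects pooling of raw proportions as one way such a pooled rate might be computed. The study counts are hypothetical, and in practice a logit or arcsine transform and continuity corrections for zero counts would usually be preferred.

```python
import numpy as np

def pooled_proportion(dropouts, enrolled):
    """DerSimonian-Laird random-effects pooled proportion with 95% CI,
    using inverse-variance weights on the raw proportions.
    Assumes 0 < dropouts < enrolled for every study (no zero cells)."""
    d = np.asarray(dropouts, dtype=float)
    n = np.asarray(enrolled, dtype=float)
    p = d / n
    var = p * (1 - p) / n
    w = 1.0 / var
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)            # Cochran's Q
    k = p.size
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)                     # random-effects weights
    p_re = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se)

# Hypothetical yoga-arm dropout counts from five trials.
print(pooled_proportion([5, 12, 3, 8, 10], [40, 90, 35, 60, 75]))
```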

  19. [Estimating glomerular filtration rate in 2012: what added value for the CKD-EPI equation?].

    Science.gov (United States)

    Delanaye, Pierre; Mariat, Christophe; Moranne, Olivier; Cavalier, Etienne; Flamant, Martin

    2012-07-01

    Measuring or estimating the glomerular filtration rate (GFR) is still considered the best way to assess overall renal function. In 2009, the new Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation was proposed as a better estimator of GFR than the Modification of Diet in Renal Disease (MDRD) study equation, since it underestimates GFR to a lesser degree at higher GFR levels. In this review, we present and discuss in depth the performance of this equation. Based on articles published between 2009 and 2012, the review underlines its advantages, notably a better knowledge of chronic kidney disease prevalence, but also the limitations of this new equation, especially in some specific populations. We finally stress that all these equations are estimations, and nephrologists should remain cautious in their interpretation. Copyright © 2012 Association Société de néphrologie. Published by Elsevier SAS. All rights reserved.
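
    For reference, the 2009 CKD-EPI creatinine equation discussed in this review can be written as a short function. The constants below are those published by Levey et al. (2009); the example inputs are hypothetical, and any clinical use should rely on a validated implementation.

```python
def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation:
    eGFR = 141 * min(Scr/k, 1)^a * max(Scr/k, 1)^-1.209 * 0.993^age
           * 1.018 [if female] * 1.159 [if black]
    with k = 0.7 (female) or 0.9 (male), a = -0.329 (female) or -0.411 (male)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Hypothetical example: a 55-year-old woman with serum creatinine 1.1 mg/dL.
print(round(ckd_epi_2009(1.1, 55, female=True), 1))
```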

  20. A novel recursive Fourier transform for nonuniform sampled signals: application to heart rate variability spectrum estimation.

    Science.gov (United States)

    Holland, Alexander; Aboy, Mateo

    2009-07-01

    We present a novel method to iteratively calculate discrete Fourier transforms for discrete-time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heartbeats, an application particularly well suited to this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST)-based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heartbeats. Our results indicate that the RFT achieves estimation performance comparable to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimation.
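
    The RFT itself is not reproduced in the record; as a sketch of the comparison baseline it describes, the snippet below computes a Lomb-Scargle periodogram of an unevenly sampled RR-interval series with SciPy. The beat-time data and frequency grid are illustrative only.

```python
import numpy as np
from scipy.signal import lombscargle

# Illustrative unevenly spaced beat times (s) and the derived RR intervals (s).
rng = np.random.default_rng(1)
beat_times = np.cumsum(0.8 + 0.05 * rng.standard_normal(300))
rr = np.diff(beat_times)
t = beat_times[1:]                       # time stamp of each RR interval

# Standard HRV frequency range, evaluated on a uniform grid (Hz).
freqs_hz = np.linspace(0.01, 0.5, 500)
ang_freqs = 2.0 * np.pi * freqs_hz       # lombscargle expects angular frequencies

# No interpolation to uniform sampling is needed; centre the series first.
pgram = lombscargle(t, rr - rr.mean(), ang_freqs)
```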