Energy Technology Data Exchange (ETDEWEB)
Nilsen, Thomas
2007-01-15
Risk assessment in relation to possible blowouts of oil and condensate is the main topic of this analysis. The estimated risk is evaluated against criteria for acceptable risk and forms part of the foundation for dimensioning oil spill preparedness. The report aims to contribute to the standardisation of terminology, methodology and documentation for estimates of blowout rates, and thus to simplify communication of the analysis results and strengthen decision makers' trust in them.
Well blowout rates in California Oil and Gas District 4--Update and Trends
Energy Technology Data Exchange (ETDEWEB)
Jordan, Preston D.; Benson, Sally M.
2009-10-01
Well blowouts are one type of event in hydrocarbon exploration and production that generates health, safety, environmental and financial risk. Well blowouts are variously defined as 'uncontrolled flow of well fluids and/or formation fluids from the wellbore' or 'uncontrolled flow of reservoir fluids into the wellbore'. Theoretically this is irrespective of flux rate, and so would include low fluxes, often termed 'leakage'. In practice, such low-flux events are not considered well blowouts. Rather, the term well blowout applies to higher fluxes that rise to attention more acutely, typically on the order of seconds to days after the event commences. It is not unusual for insurance claims for well blowouts to exceed US$10 million. This does not imply that all blowouts are this costly, as claims are likely filed only for the most catastrophic events. Still, insurance against loss of well control is the costliest in the industry. The risk of well blowouts was recently quantified from an assembled database of 102 events occurring in California Oil and Gas District 4 during the period 1991 to 2005, inclusive. This article reviews those findings, updates them to a certain extent and compares them with other well blowout risk study results. It also provides an improved perspective on some of the findings. In short, this update finds that blowout rates remained constant from 2005 to 2008 within the limits of resolution, and that the decline in blowout rates from 1991 to 2005 was likely due to improved industry practice.
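Rates like those quantified from the District 4 database reduce to event counts divided by exposure (well-years or operations), with Poisson uncertainty. A minimal sketch of the calculation with an approximate 95% confidence interval; the exposure figure below is illustrative, not taken from the study:

```python
import math

def poisson_rate(events, exposure):
    """Point estimate and approximate 95% CI for a Poisson event rate."""
    rate = events / exposure
    half = 1.96 * math.sqrt(events)          # normal approximation to the count
    return rate, max(events - half, 0) / exposure, (events + half) / exposure

# 102 blowouts over a hypothetical exposure of 100,000 well-years (illustrative)
rate, lo, hi = poisson_rate(102, 100_000)
print(f"{rate:.5f} per well-year (95% CI {lo:.5f}-{hi:.5f})")
```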
Availability analysis of subsea blowout preventer using Markov model considering demand rate
Directory of Open Access Journals (Sweden)
Sunghee Kim
2014-12-01
The availability of subsea blowout preventers (BOPs) in the Gulf of Mexico Outer Continental Shelf (GoM OCS) is investigated using a Markov method. An updated β-factor model by SINTEF is used for common-cause failures in multiple redundant systems. Coefficient values of failure rates for the Markov model are derived using the β-factor model of the PDS method (reliability of computer-based safety systems, Norwegian acronym). The blind shear ram preventer system of the subsea BOP is modelled with a demand rate to better reflect reality, and Markov models considering the demand rate for one or two components are introduced. Two data sets for the GoM OCS are compared. The results show that three or four pipe ram preventers give similar availabilities, but redundant blind shear ram preventers or annular preventers enhance the availability of the subsea BOP. Sensitivity analysis also shows that the control systems (PODs) and connectors are the components that contribute most to improving the availability of subsea BOPs.
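The availability figures in such studies come from the stationary distribution of a continuous-time Markov model, with common-cause failures split off via a β factor. A simplified sketch for a duplicated (1-out-of-2) component; the failure, repair, and β values are illustrative, and the state model is far coarser than the paper's:

```python
def steady_state(Q):
    """Stationary distribution of a CTMC with generator matrix Q (rows sum to 0)."""
    n = len(Q)
    # Solve pi @ Q = 0 with sum(pi) = 1: transpose Q, replace one equation
    # by the normalization constraint, then Gauss-Jordan eliminate.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    A[-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(n):
            if r != c and A[r][c] != 0.0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
                b[r] -= f * b[c]
    return [b[i] / A[i][i] for i in range(n)]

def availability_1oo2(lam, mu, beta):
    """Steady-state availability of a duplicated component (1-out-of-2) with
    common-cause failures modelled by a beta factor."""
    li, lc = (1 - beta) * lam, beta * lam   # independent / common-cause rates
    # States: 0 = both up, 1 = one up, 2 = both down
    Q = [[-(2 * li + lc), 2 * li,          lc],
         [mu,             -(mu + li + lc), li + lc],
         [0.0,            mu,              -mu]]
    pi = steady_state(Q)
    return pi[0] + pi[1]

print(availability_1oo2(lam=1e-3, mu=0.1, beta=0.1))
```

Raising β degrades availability because a single common cause can defeat both redundant channels at once, which is why the redundancy gains reported for blind shear rams depend on the β model used.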
Energy Technology Data Exchange (ETDEWEB)
Jordan, Preston D.; Benson, Sally M.
2008-05-15
Well blowout rates in oil fields undergoing thermally enhanced recovery (via steam injection) in California Oil and Gas District 4 from 1991 to 2005 were on the order of 1 per 1,000 well construction operations, 1 per 10,000 active wells per year, and 1 per 100,000 shut-in/idle and plugged/abandoned wells per year. This allows some initial inferences about leakage of CO2 via wells, which is considered perhaps the greatest leakage risk for geological storage of CO2. During the study period, 9% of the oil produced in the United States was from District 4, and 59% of this production was via thermally enhanced recovery. There was only one possible blowout from an unknown or poorly located well, despite over a century of well drilling and production activities in the district. The blowout rate declined dramatically during the study period, most likely as a result of increasing experience, improved technology, and/or changes in safety culture. If so, this decline indicates the blowout rate in CO2-storage fields can be significantly minimized both initially and with increasing experience over time. Comparable studies should be conducted in other areas. These studies would be particularly valuable in regions with CO2-enhanced oil recovery (EOR) and natural gas storage.
Risk assessment for SAGD well blowouts
Energy Technology Data Exchange (ETDEWEB)
Worth, D.; Alhanati, F.; Lastiwka, M. [C-FER Technologies, Edmonton, AB (Canada); Crepin, S. [Petrocedeno, Caracas (Venezuela)
2008-10-15
This paper discussed a steam assisted gravity drainage (SAGD) pilot project currently being conducted in Venezuela's Orinoco Belt. A risk assessment was conducted as part of the pilot program in order to evaluate the use of single barrier completions in conjunction with a blowout response plan. The study considered 3 options: (1) an isolated double barrier completion with a downhole safety valve (DHSV) in the production tubing string and a packer in the production casing annulus; (2) a partially isolated completion with no DHSV and a packer in the production casing annulus; and (3) an open single barrier completion with no additional downhole barriers. A reservoir model was used to assess the blowout flowing potential of SAGD well pairs. The probability of a blowout was estimated using fault tree analysis techniques. Risk was determined for various blowout scenarios, including blowouts during normal and workover operations, as well as blowouts through various flow paths. Total risk for each completion scenario was also determined at 3 different time periods within the production life of the well pair. The possible consequences of a blowout were assessed using quantitative consequence models. Results of the study showed that environmental and economic risks were much higher for the open completion technique. Higher risks were also associated with the earlier life of the completion strings. 20 refs., 3 tabs., 19 figs.
Offshore Blowouts, Causes and Trends
Energy Technology Data Exchange (ETDEWEB)
Holand, P.
1996-02-01
The main objective of this doctoral thesis was to establish an improved design basis for offshore installations with respect to blowout risk analyses. The following sub-objectives were defined: (1) establish an offshore blowout database suitable for risk analyses; (2) compare the blowout risk related to loss of lives with the total offshore risk and with risk in other industries; (3) analyse blowouts with respect to parameters that are important for describing and quantifying experienced blowout risk, in order to answer questions such as during which operations blowouts have occurred, their direct causes, and their frequency of occurrence; (4) analyse blowouts with respect to trends. The research strategy applied includes elements from both survey and case study strategies. The data are systematized in the form of a new database developed from the MARINTEK database. Most blowouts in the analysed period occurred during drilling operations. Shallow gas blowouts were more frequent than deep blowouts, and workover blowouts occurred more often than deep development drilling blowouts. Relatively few blowouts occurred during completion, wireline and normal production activities. No significant trend in blowout occurrences as a function of time could be observed, except for completion blowouts, which showed a significantly decreasing trend. There were, however, trends in some parameters important for risk analyses: for example, the ignition probability has decreased and diverter systems have improved. Only 3.5% of the fatalities occurred because of blowouts. 106 refs., 51 figs., 55 tabs.
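Trend analysis of blowout occurrences over time can be sketched with a Laplace trend test on event times: under a constant-rate (homogeneous Poisson) assumption the statistic is approximately standard normal, and negative values indicate a declining rate. The event times below are illustrative, not data from the thesis:

```python
import math

def laplace_trend(times, horizon):
    """Laplace trend statistic for event times observed on (0, horizon).
    Approximately N(0,1) under a constant event rate; U < 0 suggests decline."""
    n = len(times)
    return (sum(times) / n - horizon / 2) / (horizon * math.sqrt(1 / (12 * n)))

# Events clustered early in a 15-year observation window (illustrative times)
early = [0.5, 1, 1.5, 2, 3, 4, 6, 9]
print(laplace_trend(early, 15))   # clearly below -1.96: significant decline
```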
Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research
2011-01-01
The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season were downloaded and processed into a form suitable for the analysis. The model...
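The scoring model described can be sketched as a Poisson rate that multiplies a league baseline by team attack/defence factors and a home-ice factor. All rate values below are illustrative, not the fitted 2008-2009 estimates:

```python
import math

# Illustrative parameters, not fitted values from the article
LEAGUE_RATE = 2.8        # league-average goals per team per game
HOME_ADVANTAGE = 1.05

def expected_goals(attack, opp_defence, home=True):
    """Poisson mean for one team's goals against a given opponent."""
    lam = LEAGUE_RATE * attack * opp_defence
    return lam * HOME_ADVANTAGE if home else lam

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = expected_goals(attack=1.10, opp_defence=0.95)
p_low_scoring = sum(poisson_pmf(k, lam) for k in range(3))
print(p_low_scoring)     # P(home team scores 2 goals or fewer)
```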
Directory of Open Access Journals (Sweden)
Laurence Booth
2015-04-01
Discount rates are essential to applied finance, especially in setting prices for regulated utilities and valuing the liabilities of insurance companies and defined benefit pension plans. This paper reviews the basic building blocks for estimating discount rates. It also examines market risk premiums, as well as what constitutes a benchmark fair or required rate of return, in the aftermath of the financial crisis and the U.S. Federal Reserve's bond-buying program. Some of the results are disconcerting. In Canada, utility and pension regulators responded to the crash in different ways. Utility regulators have not passed on the full impact of low interest rates, so consumers face higher prices than they should, whereas pension regulators have done the opposite and forced some contributors to pay more. In both cases this is opposite to the desired effect of monetary policy, which is to stimulate aggregate demand. A comprehensive survey of global finance professionals carried out last year provides some clues as to where adjustments are needed. In the U.S., the average required equity market return was estimated at 8.0 per cent; Canada's is 7.40 per cent, due to the lower market risk premium and the lower risk-free rate. This paper adds a wealth of historic and survey data to conclude that the ideal base long-term interest rate used in risk premium models should be 4.0 per cent, producing an overall expected market return of 9-10.0 per cent. The same data indicate that allowed returns to utilities are currently too high, while the use of current bond yields in solvency valuations of pension plans and life insurers is unhelpful unless there is a realistic expectation that the plans will soon be terminated.
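A required rate of return of the kind discussed is typically built from a risk-free base rate plus a market risk premium scaled by beta (CAPM). A sketch using the paper's suggested 4.0 per cent base rate and the midpoint of its 9-10 per cent expected market return; the beta values are illustrative:

```python
# CAPM-style required returns; beta values are illustrative assumptions.
RISK_FREE = 0.04          # the paper's suggested base long-term rate
MARKET_RETURN = 0.095     # midpoint of the 9-10% expected market return
MRP = MARKET_RETURN - RISK_FREE    # implied market risk premium

def required_return(beta):
    """Required rate of return for an asset with the given beta."""
    return RISK_FREE + beta * MRP

for beta in (0.5, 1.0, 1.5):
    print(f"beta {beta}: {required_return(beta):.2%}")
```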
Review of flow rate estimates of the Deepwater Horizon oil spill
McNutt, Marcia K.; Camilli, Rich; Crone, Timothy J.; Guthrie, George D.; Hsieh, Paul A.; Ryerson, Thomas B.; Savas, Omer; Shaffer, Frank
2011-01-01
The unprecedented nature of the Deepwater Horizon oil spill required the application of research methods to estimate the rate at which oil was escaping from the well in the deep sea, its disposition after it entered the ocean, and total reservoir depletion. Here, we review what advances were made in scientific understanding of quantification of flow rates during deep sea oil well blowouts. We assess the degree to which a consensus was reached on the flow rate of the well by comparing in situ ...
Computed tomograms of blowout fracture
International Nuclear Information System (INIS)
Ito, Haruhide; Hayashi, Minoru; Shoin, Katsuo; Hwang, Wen-Zern; Yamamoto, Shinjiro; Yonemura, Taizo.
1985-01-01
We studied 18 cases of orbital fractures, excluding optic canal fracture. There were 11 cases of pure blowout fracture and 3 of the impure type. The other 4 cases were orbital fractures without blowout fracture. The cardinal syndromes were diplopia, enophthalmos, and sensory disturbances of the trigeminal nerve in the pure type of blowout fracture. Many cases of the impure type of blowout fracture or of orbital fracture showed black eyes or a swelling of the eyelids which masked enophthalmos. Axial and coronal CT scans demonstrated: 1) the orbital fracture, 2) the degree of enophthalmos, 3) intraorbital soft tissue, such as incarcerated or prolapsed ocular muscles, 4) intraorbital hemorrhage, 5) the anatomical relation of the orbital fracture to the lacrimal canal, the trochlea, and the trigeminal nerve, and 6) the lesions of the paranasal sinus and the intracranial cavity. CT scans play an important role in determining what surgical procedures might best be employed. Pure blowout fractures were classified by CT scans into four types: 1) incarcerating linear fracture, 2) trapdoor fracture, 3) punched-out fracture, and 4) broad fracture. Cases with severe head injury should also be examined for the presence of a blowout fracture. If patients are to return to society, a blowout fracture should be treated as soon as possible. (author)
Meyer, Andreas L S; Wiens, John J
2018-01-01
Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macro-evolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet, no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
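The method-of-moments (MS) estimators referred to compute a net diversification rate directly from clade age and extant species richness. A sketch of the stem-age form with a relative extinction fraction; the example clade is hypothetical:

```python
import math

def net_div_rate_stem(richness, stem_age, eps=0.0):
    """Method-of-moments net diversification rate from stem age and extant
    richness, with relative extinction fraction eps (Magallon-Sanderson form)."""
    return math.log(richness * (1 - eps) + eps) / stem_age

# Hypothetical clade: 100 extant species, 50 Myr stem age
print(net_div_rate_stem(100, 50.0))            # assuming no extinction
print(net_div_rate_stem(100, 50.0, eps=0.9))   # assuming high relative extinction
```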
Review of flow rate estimates of the Deepwater Horizon oil spill
McNutt, Marcia K.; Camilli, Rich; Crone, Timothy J.; Guthrie, George D.; Hsieh, Paul A.; Ryerson, Thomas B.; Savas, Omer; Shaffer, Frank
2012-01-01
The unprecedented nature of the Deepwater Horizon oil spill required the application of research methods to estimate the rate at which oil was escaping from the well in the deep sea, its disposition after it entered the ocean, and total reservoir depletion. Here, we review what advances were made in scientific understanding of quantification of flow rates during deep sea oil well blowouts. We assess the degree to which a consensus was reached on the flow rate of the well by comparing in situ observations of the leaking well with a time-dependent flow rate model derived from pressure readings taken after the Macondo well was shut in for the well integrity test. Model simulations also proved valuable for predicting the effect of partial deployment of the blowout preventer rams on flow rate. Taken together, the scientific analyses support flow rates in the range of ~50,000–70,000 barrels/d, perhaps modestly decreasing over the duration of the oil spill, for a total release of ~5.0 million barrels of oil, not accounting for BP's collection effort. By quantifying the amount of oil at different locations (wellhead, ocean surface, and atmosphere), we conclude that just over 2 million barrels of oil (after accounting for containment) and all of the released methane remained in the deep sea. By better understanding the fate of the hydrocarbons, the total discharge can be partitioned into separate components that pose threats to deep sea vs. coastal ecosystems, allowing responders in future events to scale their actions accordingly.
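The headline figures are mutually consistent as a simple mass balance: a flow rate in the stated range, sustained over the spill duration, yields roughly 5 million barrels. A sketch; the 87-day duration is a commonly cited value and an assumption here, not stated in this abstract:

```python
# Mass-balance check on the review's figures (duration is an assumption).
flow_rate_bbl_per_day = 57_500     # inside the ~50,000-70,000 bbl/d range
duration_days = 87
total_bbl = flow_rate_bbl_per_day * duration_days
print(f"total release ~{total_bbl / 1e6:.1f} million barrels")
```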
Blowout brought under control in Gulf of Mexico
International Nuclear Information System (INIS)
Anon.
1992-01-01
This paper reports that Greenhill Petroleum Corp., Houston, killed a well blowout Oct. 9 and began cleaning up oil spilled into Timbalier Bay off Lafourche Parish, La. Development well No. 250 in Timbalier Bay field blew out Sept. 29 while Blake Drilling and Workover Co., Belle Chasse, La., was trying to recomplete it in a deeper zone. Fire broke out as Boots and Coots Inc., Houston, was positioning control equipment at the wellhead. State and federal oil spill response officials estimated the uncontrolled flow of well No. 250 at 1,400 b/d of oil. Coast Guard officials on Oct. 8 upgraded the blowout to a major spill after deciding that at least 2,500 bbl of oil had gone into the water.
Simulation of a SAGD well blowout using a reservoir-wellbore coupled simulator
Energy Technology Data Exchange (ETDEWEB)
Walter, J.; Vanegas, P.; Cunha, L.B. [Alberta Univ., Edmonton, AB (Canada); Worth, D.J. [C-FER Technologies, Edmonton, AB (Canada); Crepin, S. [Petrocedeno, Caracas (Venezuela)
2008-10-15
Single barrier completion systems are typically used in SAGD projects due to the lack of equipment suitable for high temperature SAGD downhole environments. This study used a wellbore and reservoir coupled thermal simulator tool to investigate the blowout behaviour of a steam assisted gravity drainage (SAGD) well pair when the safety barrier has failed. Fluid flow pressure drop through the wellbore and heat losses between the wellbore and the reservoir were modelled using a discretized wellbore option and a semi-analytical model. The fully coupled mechanistic model accounted for the simultaneous transient pressure and temperature variations along the wellbore and the reservoir. The simulations were used to predict flowing potential and fluid compositions of both wells in a SAGD well pair under various flowing conditions. Blowout scenarios were created for 3 different points in the well pair's life. Three flow paths during the blowout were evaluated for both the production and injection wells. Results of the study were used to conduct a comparative risk assessment between a double barrier and a single barrier completion. The modelling study confirmed that both the injection and production wells had the potential for blowouts lasting significant periods of time, with liquid rates over 50 times the normal production liquid rates. The model successfully predicted the blowout flow potential of the SAGD well pairs. 8 refs., 3 tabs., 18 figs.
Ren, Shaoran; Liu, Yanmin; Gong, Zhiwu; Yuan, Yujie; Yu, Lu; Wang, Yanyong; Xu, Yan; Deng, Junyu
2018-02-01
In this study, we applied a two-phase flow model to simulate water and sand blowout processes when penetrating shallow water flow (SWF) formations during deepwater drilling. We define 'sand' as a pseudo-component with high density and viscosity, which begins to flow with water when a critical pressure difference is attained. We calculated the water and sand blowout rates and analyzed the factors influencing them, including the overpressure of the SWF formation, its zone size, porosity and permeability, and the drilling speed (penetration rate). The obtained data can be used for the quantitative assessment of the potential severity of SWF hazards. The results indicate that the overpressure of the SWF formation and its zone size have significant effects on SWF blowout. A 10% increase in the SWF formation overpressure can result in a more than 90% increase in the cumulative water blowout and a 150% increase in the sand blowout when a typical SWF sediment is drilled. Along with the conventional methods of well flow and pressure control, chemical plugging, and the application of multi-layer casing, water and sand blowouts can be effectively reduced by increasing the penetration rate. As such, increasing the penetration rate can be a useful measure for controlling SWF hazards during deepwater drilling.
The Neutral Interest Rate: Estimates for Chile
Rodrigo Fuentes S; Fabián Gredig U.
2008-01-01
To estimate the neutral real interest rate for Chile, we use a variety of methods that can be classified into three categories: those derived from economic theory, the neutral rate implicit in financial assets, and statistical procedures using macroeconomic data. We conclude that the neutral rate is not constant over time, but it is closely related with—though not equivalent to—the potential GDP growth rate. The application of the different methods yields fairly similar results. The neutral r...
Cladoceran birth and death rates estimates
Gabriel, Wilfried; Taylor, B. E.; Kirsch-Prokosch, Susanne
1987-01-01
1. Birth and death rates of natural cladoceran populations cannot be measured directly. Estimates of these population parameters must be calculated using methods that make assumptions about the form of population growth. These methods generally assume that the population has a stable age distribution. 2. To assess the effect of variable age distributions, we tested six egg ratio methods for estimating birth and death rates with data from thirty-seven laboratory populations of Daphnia puli...
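Egg ratio methods of the kind tested here derive the birth rate from the egg ratio and egg development time (the Edmondson-Paloheimo formulation) and obtain the death rate by subtraction from the realized growth rate, under the stable-age-distribution assumption the abstract notes. A minimal sketch with invented numbers:

```python
import math

def egg_ratio_rates(n0, n1, dt, eggs_per_female, egg_dev_time):
    """Edmondson-Paloheimo egg-ratio estimates of instantaneous birth rate b
    and death rate d, assuming a stable age distribution."""
    r = math.log(n1 / n0) / dt                        # realized growth rate
    b = math.log(1 + eggs_per_female) / egg_dev_time  # birth rate from egg ratio
    return b, b - r                                   # death rate d = b - r

# Invented numbers: population grows 100 -> 120 animals in 2 days,
# 0.5 eggs per female, 3-day egg development time
b, d = egg_ratio_rates(100, 120, 2.0, 0.5, 3.0)
print(b, d)
```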
Flux rope breaking and formation of a rotating blowout jet
Joshi, Navin Chandra; Nishizuka, Naoto; Filippov, Boris; Magara, Tetsuya; Tlatov, Andrey G.
2018-05-01
We analysed a small flux rope eruption converted into a helical blowout jet in a fan-spine configuration using multiwavelength observations taken by the Solar Dynamics Observatory; the event occurred near the limb on 2016 January 9. In our study, we first estimated the fan-spine magnetic configuration with a potential-field calculation and found a sinistral small filament inside it. The filament, along with the flux rope, erupted upwards and interacted with the surrounding fan-spine magnetic configuration, where the flux rope broke in its middle section. We observed compact brightening, flare ribbons, and post-flare loops underneath the erupting filament. The northern section of the flux rope reconnected with the surrounding positive polarity, while the southern section straightened. Next, we observed the untwisting motion of the southern leg, which was transformed into a rotating helical blowout jet. The sign of the helicity of the mini-filament matches that of the rotating jet. This is consistent with recent jet models presented by Adams et al. and Sterling et al. We focused on the fine thread structure of the rotating jet and traced three blobs with speeds of 60-120 km/s, while the radial speed of the jet is ~400 km/s. The untwisting motion of the jet accelerated plasma upwards along the collimated outer spine field lines, and it finally evolved into a narrow coronal mass ejection at a height of ~9 solar radii. On the basis of this detailed analysis, we discuss clear evidence for the scenario of the breaking of the flux rope and the formation of the helical blowout jet in the fan-spine magnetic configuration.
Bayesian estimation of dose rate effectiveness
International Nuclear Information System (INIS)
Arnish, J.J.; Groer, P.G.
2000-01-01
A Bayesian statistical method was used to quantify the effectiveness of high dose rate ¹³⁷Cs gamma radiation at inducing fatal mammary tumours and increasing the overall mortality rate in BALB/c female mice. The Bayesian approach considers both the temporal and dose dependence of radiation carcinogenesis and total mortality. This paper provides the first direct estimation of dose rate effectiveness using Bayesian statistics. This statistical approach provides a quantitative description of the uncertainty of the factor characterising the dose rate in terms of a probability density function. The results show that a fixed dose from ¹³⁷Cs gamma radiation delivered at a high dose rate is more effective at inducing fatal mammary tumours and increasing the overall mortality rate in BALB/c female mice than the same dose delivered at a low dose rate. (author)
Epistemic uncertainties when estimating component failure rate
International Nuclear Information System (INIS)
Jordan Cizelj, R.; Mavko, B.; Kljenak, I.
2000-01-01
A method for estimating a specific component failure rate, based on specific quantitative and qualitative data other than component failures, was developed and is described in this paper. The basis of the method is the Bayesian updating procedure: a prior distribution is selected from a generic database, whereas the likelihood is built using fuzzy logic theory. With the proposed method, the component failure rate estimate is based on a much larger quantity of information than with the classical methods presently used. Consequently, epistemic uncertainties, which are caused by lack of knowledge about a component or phenomenon, are reduced. (author)
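The Bayesian updating procedure at the core of such methods is, in its classical conjugate form, a Gamma-Poisson update of the failure rate; the paper's fuzzy-logic likelihood is a refinement of this idea. A sketch with illustrative prior and data:

```python
def gamma_poisson_update(alpha, beta_hours, failures, exposure_hours):
    """Conjugate Bayesian update for a constant failure rate: a Gamma(alpha,
    beta) prior with a Poisson likelihood gives a Gamma posterior."""
    a = alpha + failures
    b = beta_hours + exposure_hours
    return a, b, a / b                 # posterior shape, rate, mean rate (/h)

# Generic prior equivalent to 2 failures in 1e6 h, then one observed failure
# over 5e5 h of component-specific operation (illustrative numbers)
a, b, mean_rate = gamma_poisson_update(2.0, 1e6, failures=1, exposure_hours=5e5)
print(mean_rate)   # posterior mean failure rate per hour
```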
Organized investigation expedites insurance claims following a blowout
International Nuclear Information System (INIS)
Armstreet, R.
1996-01-01
Various types of insurance policies cover blowouts to different degrees, and a proper understanding of the incident and the coverage can expedite the adjustment process. Every well control incident, and the claim arising therefrom, has a unique set of circumstances which must be analyzed thoroughly. A blowout incident, no matter what size or how severe, can have an emotional impact on all who become involved. Bodily injuries or death of friends and coworkers can result in additional stress following a blowout. Thus, it is important that all parties involved remain mindful of sensitive matters when investigating a blowout. This paper reviews the definition of a blowout based on insurance procedures and claims. It reviews blowout expenses and contractor cost and accepted well control policies. Finally, it reviews the investigation procedures normally followed by an agent and the types of information requested from the operator
Estimation of dose from chromosome aberration rate
International Nuclear Information System (INIS)
Li Deping
1990-01-01
The methods and skills of evaluating dose from correctly scored chromosome aberration rates are presented, supplemented with a corresponding BASIC computer code. The possibility of missing aberrations in some of the current routine scoring methods, and measures for preventing such missed scores, are discussed. The use of a dose-effect relationship with an exposure-time correction factor G in evaluating doses and their confidence intervals, dose estimation in mixed n-γ exposure, and the identification of highly nonuniform acute exposure to low-LET radiation and its dose estimation are discussed in more detail. The difference in estimated dose depending on whether the interaction between sublesions produced by n and γ has been taken into account is examined. In fitting the standard dose-aberration rate curve, proper weighting of experimental points and comparison with commonly accepted values are emphasised, and the coefficient of variation σ_y/y of the aberration rate y as a function of dose and exposure time is given. In appendices I and II, the dose-aberration rate formula is derived from dual action theory and the time variation of sublesions is illustrated; in appendix III, the estimation of dose from scores of two different types of aberrations (or other related scores) is illustrated. Two computer codes are given in appendix IV: one is a simple code, the other a complete code including the fitting of the standard curve, the use of compressed data storage, and the production of simulated 'data' for testing the curve-fitting procedure.
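Dose estimation from an aberration yield typically inverts a linear-quadratic dose-effect relationship, here written with the exposure-time correction factor G mentioned in the abstract. A sketch with illustrative coefficients, not the fitted curve from the report:

```python
import math

def dose_from_yield(y, alpha, beta, g=1.0, c=0.0):
    """Invert the linear-quadratic dose-effect curve Y = c + alpha*D + beta*g*D^2
    for the dose D, where g is the exposure-time correction factor."""
    a, b, cc = beta * g, alpha, c - y
    # Positive root of the quadratic a*D^2 + b*D + cc = 0
    return (-b + math.sqrt(b * b - 4 * a * cc)) / (2 * a)

# Illustrative coefficients (assumptions, not the report's fitted values):
# yield 0.2 aberrations/cell, alpha = 0.03 /Gy, beta = 0.06 /Gy^2, g = 1
print(dose_from_yield(0.2, 0.03, 0.06))   # estimated dose in Gy
```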
The effects of the lodgepole sour gas well blowout on coniferous tree growth
International Nuclear Information System (INIS)
Baker, K.A.
1991-01-01
A dendrochronological study was used to evaluate growth impacts on White Spruce (Picea glauca (Moench) Voss) resulting from the 1982 Lodgepole sour gas well blowout. Stem analysis was evaluated from four ecologically similar monitoring sites located on a 10 kilometre downwind gradient and compared to a control site. Incremental volume was calculated, standardized using running mean filters, and analyzed using one-way ANOVA. Pre- and post-blowout growth trends were analyzed between sites and were also evaluated over a height profile in order to assess growth impact variability within individual trees. Growth reductions at the two sites closest to the wellhead were statistically significant for five post-blowout years. Growth at these condensate-impacted sites was reduced to 9.8% and 38.1% of pre-blowout levels in 1983. Differences in growth reductions reflect a gradient of effects and a dose-response relationship. Recovery of surviving trees has been rapid but is leveling off at approximately 80% of pre-blowout growth. Growth reductions were greater and recovery rates slower than those previously predicted by other authors. Statistically significant differences in height profile growth responses were limited to the upper portions of the trees. Growth rates over a tree height profile ranged from 10% less to 50% more than growth rates observed at 1.3 metres. The analytical methodologies detected and described growth differences over a height profile, but a larger sample size was desirable. As is always the case with catastrophic events, obtaining pre-event baseline data is difficult. The dendrochronological methods described in this paper offer techniques for determining pre-blowout growth and for monitoring impacts and recovery in forested areas.
Estimating Glomerular Filtration Rate in Older People
Directory of Open Access Journals (Sweden)
Sabrina Garasto
2014-01-01
We aimed at reviewing age-related changes in kidney structure and function, methods for estimating kidney function, the impact of reduced kidney function on geriatric outcomes, and the reliability and applicability of equations for estimating glomerular filtration rate (eGFR) in older patients. CKD is associated with various comorbidities and adverse outcomes, such as disability and premature death, in older populations. Creatinine clearance and other methods for estimating kidney function are not easy to apply in older subjects. Thus, an accurate and reliable method for calculating eGFR would be highly desirable for early detection and management of CKD in this vulnerable population. Equations based on serum creatinine, age, race, and gender have been widely used. However, these equations have their own limitations, and no equation seems better than the others in older people. New equations specifically developed for use in older populations, especially those based on serum cystatin C, hold promise. However, further studies are needed before they can be accepted as the reference method to estimate kidney function in older patients in the clinical setting.
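The creatinine-based equations discussed above can be illustrated with the widely published CKD-EPI 2009 creatinine formula. The sketch below omits the race coefficient and is purely illustrative, not clinical software and not an equation endorsed by the review:

```python
def egfr_ckd_epi_2009(scr_mg_dl, age_years, female):
    """CKD-EPI 2009 creatinine equation (race coefficient omitted),
    as widely published; an illustrative sketch, not clinical software.
    Returns eGFR in ml/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if female else egfr

# e.g. a 60-year-old man with serum creatinine 1.0 mg/dl -> roughly 81
value = egfr_ckd_epi_2009(1.0, 60, False)
```

Note how the age term 0.993^age builds the age-related decline directly into the estimate, which is one reason such equations behave differently in very old patients.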
BAYESIAN ESTIMATION OF THERMONUCLEAR REACTION RATES
Energy Technology Data Exchange (ETDEWEB)
Iliadis, C.; Anderson, K. S. [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3255 (United States); Coc, A. [Centre de Sciences Nucléaires et de Sciences de la Matière (CSNSM), CNRS/IN2P3, Univ. Paris-Sud, Université Paris–Saclay, Bâtiment 104, F-91405 Orsay Campus (France); Timmes, F. X.; Starrfield, S., E-mail: iliadis@unc.edu [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1504 (United States)
2016-11-01
The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p, γ)³He, ³He(³He,2p)⁴He, and ³He(α, γ)⁷Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.
Estimation of social discount rate for Lithuania
Directory of Open Access Journals (Sweden)
Vilma Kazlauskiene
2016-09-01
Purpose of the article: The paper analyses the problematics of estimating the social discount rate (SDR). The SDR is a critical parameter of cost-benefit analysis, which allows calculating the present value of the costs and benefits of public sector investment projects. An incorrect choice of the SDR can lead to the realisation of an ineffective public project or, conversely, to the rejection of a cost-effective one. The relevance of this problem is determined by ongoing discussions and differing viewpoints among scientists on the most appropriate approach to determining the SDR, and by the absence of a methodically grounded SDR at the national level in Lithuania. Methodology/methods: The research is performed by scientific and methodical literature analysis, systematization, time series and regression analysis. Scientific aim: The aim of the article is to calculate the SDR based on the statistical data of Lithuania. Findings: The analysis of methods of SDR determination, as well as research performed by foreign researchers, supports the social rate of time preference (SRTP) approach as the most appropriate. The SDR calculated by the SRTP approach best reflects the main purpose of public investment projects, i.e. to enhance social benefit for society. The analysis of SDR determination practice in foreign countries shows that the SDR should not be universal for all states. Each country should calculate the SDR based on its own data and apply it to the assessment of public projects. Conclusions: The calculated SDR for Lithuania using the SRTP approach varies between 3.5% and 4.3%. Although this is lower than the 5% offered by the European Commission, the rate is based on the statistical data of Lithuania and should be used for the assessment of national public projects. Applicatio
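The SRTP approach referred to above is usually operationalised with the Ramsey-type formula SDR = ρ + μ·g (pure time preference plus the elasticity of marginal utility of consumption times per-capita consumption growth). A minimal sketch with hypothetical inputs; the parameter values below are not the paper's estimates:

```python
def srtp(rho, mu, g):
    """Ramsey-type social rate of time preference: SDR = rho + mu * g,
    where rho is the pure rate of time preference, mu the elasticity of
    marginal utility of consumption, and g per-capita consumption growth."""
    return rho + mu * g

# Hypothetical illustrative inputs (not the paper's estimates):
sdr = srtp(rho=0.010, mu=1.2, g=0.025)   # 1.0% + 1.2 * 2.5% = 4.0%
```

The sensitivity of the result to μ and g is the main reason the paper argues each country should estimate the SDR from its own national statistics rather than adopt a universal value.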
19 CFR 159.38 - Rates for estimated duties.
2010-04-01
... TREASURY (CONTINUED) LIQUIDATION OF DUTIES, Conversion of Foreign Currency, § 159.38 Rates for estimated duties. For purposes of calculating estimated duties, the port director shall use the rate or rates... (19 Customs Duties, vol. 2, revised 2010-04-01)
Stability Control of Vehicle Emergency Braking with Tire Blowout
Chen, Qingzhang; Liu, Youhua; Li, Xuezhi
2014-01-01
To maintain stability and slow the vehicle to a safe speed after a tire failure, an emergency automatic braking system with independent intellectual property is developed. After the system receives a tire blowout signal, the automatic braking mode of the vehicle is determined according to the position of the failed tire and the motion state of the vehicle, and a control strategy for resisting the additional yaw torque caused by the blowout and for deceleration is designed to slow down the vehicle to...
Development of 3000 m Subsea Blowout Preventer Experimental Prototype
Cai, Baoping; Liu, Yonghong; Huang, Zhiqian; Ma, Yunpeng; Zhao, Yubin
2017-12-01
A subsea blowout preventer experimental prototype is developed to meet the requirements of operator training. The prototype consists of a hydraulic control system, an electronic control system and a small-sized blowout preventer stack. Both the hydraulic control system and the electronic control system are dual-mode redundant systems: each works independently and can be switched over in the event of a malfunction, which significantly improves the operational reliability of the equipment.
Fluid Mechanics of Lean Blowout Precursors in Gas Turbine Combustors
Directory of Open Access Journals (Sweden)
T. M. Muruganandam
2012-03-01
Understanding of the lean blowout (LBO) phenomenon, along with sensing and control strategies, could enable gas turbine combustor designers to design combustors with wider operability regimes. Sensing of precursor events (temporary extinction-reignition events) based on chemiluminescence emissions from the combustor, assessing the proximity to LBO and using that data for control of LBO has already been achieved. This work describes the fluid-mechanic details of the precursor dynamics and the blowout process based on detailed analysis of near-blowout flame behavior, using simultaneous chemiluminescence and droplet scatter observations. The droplet scatter method marks the regions of cold reactants and thus helps track unburnt mixtures. During a precursor event, it was observed that the flow pattern changes significantly, with a large region of unburnt mixture in the combustor, which subsequently vanishes when a double/single helical vortex structure brings hot products back to the inlet of the combustor. This helical pattern is shown to be characteristic of the next stable flame mode in the longer combustor, stabilized by a double helical vortex breakdown (VBD) mode. It is proposed that random heat-release fluctuations near blowout cause the VBD-based stabilization to shift between VBD modes, causing the observed precursor dynamics in the combustor. A complete description of the evolution of the flame near the blowout limit is presented. The description is consistent with all the earlier observations by the authors of precursor and blowout events.
Sibling rivalry : Estimating cannibalization rates for innovations
van Heerde, H.J.; Srinivasan, S.; Dekimpe, M.G.
2012-01-01
To evaluate the success of a new product, managers need to determine how much of its new demand is due to cannibalizing the company’s other products, rather than drawing from competition or generating primary demand. A new model allows managers to estimate cannibalization effects and to calculate
Estimation of restaurant solid waste generation rates
International Nuclear Information System (INIS)
Heck, H.H.; Major, I.
2002-01-01
Most solid waste utilities try to create a billing schedule that is proportional to solid waste generation rates. This research sought to determine whether the current billing rate structure was appropriate or whether a different rate structure should be implemented. A multiple regression model with forward stepwise addition was developed which accurately predicts weekly solid waste generation rates for restaurants. The model was based on a study of daily solid waste generation at twenty-one different businesses. The weight and volume of solid waste generated were measured daily for two weeks during the winter and two weeks during the summer. Researchers followed the collection truck and measured the volume and weight of the container contents. Data were collected on the following independent variables describing each establishment: weight of waste per collection, volume per collection, container utilization factor, building area, contract hauler's bill, yearly property tax, yearly solid waste tax, average number of collections per week, type of restaurant, modal number of collections per week, storage container size, waste density, number of employees, number of hours open per week, and weekly collection capacity (collections per week times storage container size). Independent variables were added to the regression equation based on their partial correlation coefficient and confidence level. The regression equations developed had correlation coefficients of 0.87 to 1.00, much better than the correlation coefficient (0.84) of an existing model (DeGeare and Ongerth, 1971) and a correlation coefficient of 0.54 based on the current solid waste disposal tax. (author)
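A forward stepwise procedure of the kind described can be sketched as follows. The greedy R²-gain criterion and the simulated data are illustrative stand-ins for the partial-correlation and confidence-level test used in the study:

```python
import numpy as np

def forward_stepwise(X, y, min_gain=0.01):
    """Greedy forward selection: at each step add the predictor that
    most increases R^2, stopping when the gain falls below min_gain.
    A sketch of the procedure described, not the authors' code."""
    n, p = X.shape
    tss = (y - y.mean()) @ (y - y.mean())
    selected, r2_prev = [], 0.0
    while len(selected) < p:
        best_j, best_r2 = None, r2_prev
        for j in range(p):
            if j in selected:
                continue
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            r2 = 1.0 - (resid @ resid) / tss
            if r2 > best_r2:
                best_j, best_r2 = j, r2
        if best_j is None or best_r2 - r2_prev < min_gain:
            break
        selected.append(best_j)
        r2_prev = best_r2
    return selected, r2_prev

# Simulated example: the response depends on columns 0 and 2 only
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = 3 * X[:, 0] + 2 * X[:, 2] + 0.1 * rng.standard_normal(200)
selected, r2 = forward_stepwise(X, y)
```

On this simulated data the procedure picks the two informative predictors and then stops, which mirrors how the study's model ends up with a small subset of the fourteen candidate variables.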
Estimating the tumble rates of galaxy halos
International Nuclear Information System (INIS)
Simonson, G.F.; Tohline, J.E.
1983-01-01
It has previously been demonstrated that cold gas in a static spheroidal galaxy will damp to a preferred plane, in which the angular momentum vector of the gas is aligned with the symmetry axis of the potential, through dissipative processes. We show now that, if the same galaxy rigidly tumbles about a nonsymmetry axis, the preferred orientation of the gas can become a permanently and smoothly warped sheet, in which rings of gas at large radii may be fully orthogonal to those near the galaxy's core. Detailed numerical orbit calculations closely match an analytic prediction made previously for the structure of the warp. This structure depends primarily on the eccentricity, density profile, and tumble rate of the spheroid. We show that the tumble rate can now be determined for a galaxy containing a significantly warped disk. Ordinary observations, used in conjunction with graphs such as those we present, yield at least firm lower limits on the tumble periods of these objects. We have applied this method to the two peculiar systems NGC 5128 and NGC 2685 and found that, if they are prolate systems supporting permanently warped gaseous disks, they must tumble with periods near 5 × 10⁹ yr and 2 × 10⁹ yr, respectively. In a preliminary investigation, we also find that the massive, unseen halos surrounding spiral galaxies must tumble with periods longer than or of the same order as those of the elliptical galaxies.
Estimating forest conversion rates with annual forest inventory data
Paul C. Van Deusen; Francis A. Roesch
2009-01-01
The rate of land-use conversion from forest to nonforest or natural forest to forest plantation is of interest for forest certification purposes and also as part of the process of assessing forest sustainability. Conversion rates can be estimated from remeasured inventory plots in general, but the emphasis here is on annual inventory data. A new estimator is proposed...
On Estimating Marginal Tax Rates for U.S. States
Reed, W. Robert; Rogers, Cynthia L; Skidmore, Mark
2011-01-01
This paper presents a procedure for generating state-specific time-varying estimates of marginal tax rates (MTRs). Most estimates of MTRs follow a procedure developed by Koester and Kormendi (1989) (K&K). Unfortunately, the time-invariant nature of the K&K estimates precludes their use as explanatory variables in panel data studies with fixed effects. Furthermore, the associated MTR estimates are not explicitly linked to statutory tax parameters. Our approach addresses both shortcomings. Usin...
Estimating market probabilities of future interest rate changes
Hlušek, Martin
2002-01-01
The goal of this paper is to estimate the market consensus forecast of future monetary policy development and to quantify the priced-in probability of interest rate changes for different future time horizons. The proposed model uses the current spot money market yield curve and available money market derivative instruments (forward rate agreements, FRAs) and estimates the market probability of interest rate changes up to a 12-month horizon.
Use of Hedonic Prices to Estimate Capitalization Rate
Gaetano Lisi
2015-01-01
In this paper, a model of income capitalization is developed where hedonic prices play a key role in estimating the going-in capitalization rate. Precisely, the hedonic functions for rental and selling prices are introduced into a basic model of income capitalization. From the modified model, it is possible to derive a direct relationship between hedonic prices and capitalization rate. An advantage of the proposed approach is that estimation of the capitalization rate can be made without cons...
Three-Axis Attitude Estimation Using Rate-Integrating Gyroscopes
Crassidis, John L.; Markley, F. Landis
2016-01-01
Traditionally, attitude estimation has been performed using a combination of external attitude sensors and internal three-axis gyroscopes. There are many studies of three-axis attitude estimation using gyros that read angular rates. Rate-integrating gyros measure integrated rates or angular displacements, but three-axis attitude estimation using these types of gyros has not been as fully investigated. This paper derives a Kalman filtering framework for attitude estimation using attitude sensors coupled with rate-integrating gyroscopes. In order to account for correlations introduced by using these gyros, the state vector must be augmented, compared with filters using traditional gyros that read angular rates. Two filters are derived in this paper. The first uses an augmented state-vector form that estimates attitude, gyro biases, and gyro angular displacements. The second ignores correlations, leading to a filter that estimates attitude and gyro biases only. Simulation comparisons are shown for both filters. The work presented in this paper focuses only on attitude estimation using rate-integrating gyros, but it can easily be extended to other applications such as inertial navigation, which estimates attitude and position.
Prolonged decay of molecular rate estimates for metazoan mitochondrial DNA
Directory of Open Access Journals (Sweden)
Martyna Molak
2015-03-01
Evolutionary timescales can be estimated from genetic data using the molecular clock, often calibrated by fossil or geological evidence. However, estimates of molecular rates in mitochondrial DNA appear to scale negatively with the age of the clock calibration. Although such a pattern has been observed in a limited range of data sets, it has not been studied on a large scale in metazoans. In addition, there is uncertainty over the temporal extent of the time-dependent pattern in rate estimates. Here we present a meta-analysis of 239 rate estimates from metazoans, representing a range of timescales and taxonomic groups. We found evidence of time-dependent rates in both coding and non-coding mitochondrial markers, in every group of animals that we studied. The negative relationship between the estimated rate and time persisted across a much wider range of calibration times than previously suggested. This indicates that, over long time frames, purifying selection gives way to mutational saturation as the main driver of time-dependent biases in rate estimates. The results of our study stress the importance of accounting for time-dependent biases in estimating mitochondrial rates, regardless of the timescale over which they are inferred.
Estimating monotonic rates from biological data using local linear regression.
Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R
2017-03-01
Addressing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences.
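The core idea behind local linear regression of rates can be sketched in a few lines. This is a generic tricube-weighted fit in Python, not the LoLinR package's implementation:

```python
import numpy as np

def local_linear_slope(t, y, t0, span=0.5):
    """Tricube-weighted linear fit in a window around t0; returns the
    local slope (the estimated rate at t0). A generic sketch of local
    linear regression, not the LoLinR package's implementation."""
    h = span * (t.max() - t.min())          # window half-width
    u = np.abs(t - t0) / h
    w = np.where(u < 1, (1 - u**3) ** 3, 0.0)   # tricube kernel
    A = np.column_stack([np.ones_like(t), t])
    WA = A * w[:, None]                     # diag(w) @ A
    beta = np.linalg.solve(A.T @ WA, WA.T @ y)
    return beta[1]

# Noisy "rate" data with a true slope of 2.0
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + 0.05 * rng.standard_normal(50)
slope = local_linear_slope(t, y, t0=0.5)
```

Downweighting points far from t0 is what replaces the ad hoc manual truncation of non-linear segments that the abstract criticises.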
Introduction to State Estimation of High-Rate System Dynamics.
Hong, Jonathan; Laflamme, Simon; Dodson, Jacob; Joyce, Bryan
2018-01-13
Engineering systems experiencing high-rate dynamic events, including airbags, debris detection, and active blast protection systems, could benefit from real-time observability for enhanced performance. However, the task of high-rate state estimation is challenging, in particular for real-time applications where the rate of the observer's convergence needs to be in the microsecond range. This paper identifies the challenges of state estimation of high-rate systems and discusses the fundamental characteristics of high-rate systems. A survey of applications and methods for estimators that have the potential to produce accurate estimations for a complex system experiencing highly dynamic events is presented. It is argued that adaptive observers are important to this research. In particular, adaptive data-driven observers are advantageous due to their adaptability and lack of dependence on the system model.
Estimating the encounter rate variance in distance sampling
Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.
2009-01-01
The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias.
The Estimation of the Equilibrium Real Exchange Rate for Romania
Bogdan Andrei Dumitrescu; Vasile Dedu
2009-01-01
This paper aims to estimate the equilibrium real exchange rate for Romania, respectively the real exchange rate consistent with the macroeconomic balance, which is achieved when the economy is operating at full employment and low inflation (internal balance) and has a current account that is sustainable (external balance). This equilibrium real exchange rate is very important for an economy because deviations of the real exchange rate from its equilibrium value can affect the competitiveness ...
Estimating the exceedance probability of rain rate by logistic regression
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
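A logistic regression of exceedance indicators on a covariate, of the kind described, can be fitted by Newton-Raphson. The simulated data and fitting loop below are a generic sketch, not the study's partial-likelihood procedure:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson (IRLS); a generic sketch
    of the modelling idea, not the study's partial-likelihood code."""
    A = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(A @ beta, -30, 30)))
        W = p * (1 - p)                      # IRLS weights
        H = (A.T * W) @ A                    # Fisher information
        beta += np.linalg.solve(H, A.T @ (y - p))
    return beta

# Simulated exceedance indicators driven by one covariate
# (e.g. area-averaged rain rate); true intercept -2, true slope 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 500)
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * x)))
y = (rng.random(500) < p_true).astype(float)
beta = fit_logistic(x, y)
```

The fitted coefficients give P(rain rate exceeds the threshold | covariates) for each pixel, which is exactly the conditional probability the abstract describes.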
Equations to Estimate Creatinine Excretion Rate : The CKD Epidemiology Collaboration
Ix, Joachim H.; Wassel, Christina L.; Stevens, Lesley A.; Beck, Gerald J.; Froissart, Marc; Navis, Gerjan; Rodby, Roger; Torres, Vicente E.; Zhang, Yaping (Lucy); Greene, Tom; Levey, Andrew S.
Background and objectives Creatinine excretion rate (CER) indicates timed urine collection accuracy. Although equations to estimate CER exist, their bias and precision are untested and none simultaneously include age, sex, race, and weight. Design, setting, participants, & measurements Participants
On Improving Convergence Rates for Nonnegative Kernel Density Estimators
Terrell, George R.; Scott, David W.
1980-01-01
To improve the rate of decrease of integrated mean square error for nonparametric kernel density estimators beyond O(n^{-4/5}), we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...
Simplifying cardiovascular risk estimation using resting heart rate.
LENUS (Irish Health Repository)
Cooney, Marie Therese
2010-09-01
Elevated resting heart rate (RHR) is a known, independent cardiovascular (CV) risk factor, but is not included in risk estimation systems, including Systematic COronary Risk Evaluation (SCORE). We aimed to derive risk estimation systems including RHR as an extra variable and assess the value of this addition.
Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation
Directory of Open Access Journals (Sweden)
Hezerul Abdul Karim
2004-09-01
Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castagno 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2012-01-01
Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
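The two catch rate estimators compared above are simple to state: the ROM estimator divides total catch by total effort, while the MOR estimator averages the per-trip rates, optionally excluding short trips. A sketch with hypothetical interview data:

```python
import numpy as np

def ratio_of_means(catch, hours):
    """ROM estimator: total catch divided by total effort."""
    return catch.sum() / hours.sum()

def mean_of_ratios(catch, hours, min_hours=0.0):
    """MOR estimator: average of per-trip catch rates, optionally
    excluding short-duration trips (e.g. min_hours=0.5)."""
    keep = hours > min_hours
    return np.mean(catch[keep] / hours[keep])

# Hypothetical interview data: per-trip catch and trip length (hours)
catch = np.array([0, 1, 2, 0, 4, 1])
hours = np.array([0.5, 1.0, 4.0, 0.5, 8.0, 2.0])
rom = ratio_of_means(catch, hours)        # 8 / 16 = 0.5 fish/h
mor = mean_of_ratios(catch, hours)        # mean of per-trip rates
```

Even on this toy data the two estimators disagree, because MOR gives short, zero-catch trips the same weight as long productive ones; this length-of-stay weighting is the bias mechanism the simulations in the study examine.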
A new approach for estimation of component failure rate
International Nuclear Information System (INIS)
Jordan Cizelj, R.; Kljenak, I.
1999-01-01
In the paper, a formal method for component failure rate estimation is described, proposed for components for which no specific numerical data necessary for probabilistic estimation exist. The framework of the method is the Bayesian updating procedure. A prior distribution is selected from a generic database, whereas the likelihood distribution is assessed from specific data on the component state using principles of fuzzy logic theory. With the proposed method, the component failure rate estimate is based on a much larger quantity of information than with presently used classical methods. (author)
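For a constant failure rate with a conjugate gamma prior, the Bayesian updating framework mentioned above reduces to a closed-form update. The fuzzy-logic construction of the likelihood is not reproduced in this illustrative sketch, and the prior values are hypothetical:

```python
def update_failure_rate(a_prior, b_prior, failures, hours):
    """Conjugate Bayesian update for a constant failure rate:
    Gamma(a, b) prior (b in operating hours) + Poisson likelihood
    -> Gamma(a + k, b + T) posterior. A generic illustration; the
    paper additionally builds the likelihood from component-state
    data with fuzzy-logic principles, which is not reproduced here."""
    a_post = a_prior + failures
    b_post = b_prior + hours
    return a_post, b_post, a_post / b_post   # posterior mean rate

# Hypothetical generic prior with mean 1e-5 /h (a=2, b=2e5 h),
# updated with 1 observed failure in 5e4 operating hours
a, b, mean_rate = update_failure_rate(2.0, 2e5, 1, 5e4)
```

The posterior mean (a + k)/(b + T) shows how the generic prior and the component-specific evidence are blended, which is the sense in which the method uses "a much larger quantity of information" than a classical point estimate.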
Estimating stutter rates for Y-STR alleles
DEFF Research Database (Denmark)
Andersen, Mikkel Meyer; Olofsson, Jill Katharina; Mogensen, Helle Smidt
2011-01-01
Stutter peaks are artefacts that arise during PCR amplification of short tandem repeats. Stutter peaks are especially important in forensic case work with DNA mixtures. The aim of the study was primarily to estimate the stutter rates of the AmpFlSTR Yfiler kit. We found that the stutter rates...
Estimated Interest Rate Rules: Do they Determine Determinacy Properties?
DEFF Research Database (Denmark)
Jensen, Henrik
2011-01-01
I demonstrate that econometric estimations of nominal interest rate rules may tell little, if anything, about an economy's determinacy properties. In particular, correct inference about the interest-rate response to inflation provides no information about determinacy. Instead, it could reveal...
Serum albumin is an important prognostic factor for carotid blowout syndrome
International Nuclear Information System (INIS)
Lu Hsuehju; Chen Kuowei; Chen Minghuang; Tzeng Chenghwai; Chang Peter Muhsin; Yang Muhhwa; Chu Penyuan; Tai Shyhkuan
2013-01-01
Carotid blowout syndrome is a severe complication of head and neck cancer. High mortality and major neurologic morbidity are associated with carotid blowout syndrome with massive bleeding. Prediction of outcomes for carotid blowout syndrome patients is important for clinicians, especially for patients at risk of massive bleeding. Between 1 January 2001 and 31 December 2011, 103 patients with carotid blowout syndrome were enrolled in this study. The patients were divided into groups with and without massive bleeding. Prognostic factors were analysed with proportional hazard (Cox) regressions for carotid blowout syndrome-related prognoses. Survival analyses were based on the time from diagnosis of carotid blowout syndrome to massive bleeding and death. Patients with massive bleeding were more likely to have hypoalbuminemia. Leukocytosis (>1000 cells/μl; P=0.041) and hypoalbuminemia (P=0.010) were important to prognosis. Concurrent chemoradiotherapy (P=0.007), elevated lactate dehydrogenase (>250 U/l; P=0.050), local recurrence (P=0.022) and hypoalbuminemia (P=0.038) were related to poor prognosis in carotid blowout syndrome-related death. In multivariate analysis, best supportive care and hypoalbuminemia were independent factors for carotid blowout syndrome-related massive bleeding (P=0.000) and carotid blowout syndrome-related death (P=0.013), respectively. Best supportive care and serum albumin are important prognostic factors in carotid blowout syndrome. This helps clinicians to evaluate and provide better supportive care for these patients. (author)
Improved air ventilation rate estimation based on a statistical model
International Nuclear Information System (INIS)
Brabec, M.; Jilek, K.
2004-01-01
A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
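The state-space idea can be illustrated with a minimal scalar Kalman filter on tracer-gas decay data. The model, noise levels, and simulated data below are assumptions for illustration, not the paper's semi-parametric model:

```python
import numpy as np

def kalman_ventilation(conc, dt, q=1e-4, r=1e-3, lam0=1.0, p0=1.0):
    """Scalar Kalman filter tracking a (possibly time-varying)
    air-exchange rate lambda [1/h] from tracer-gas concentrations
    during a decay period: log(C_k / C_{k-1}) = -lambda*dt + noise.
    A minimal sketch of the state-space idea, not the paper's model."""
    z = np.diff(np.log(conc))       # per-step log-concentration drops
    lam, p, est = lam0, p0, []
    for zk in z:
        p += q                      # predict: random-walk state
        H = -dt                     # observation model
        K = p * H / (H * p * H + r) # Kalman gain
        lam += K * (zk - H * lam)   # update with innovation
        p *= (1.0 - K * H)
        est.append(lam)
    return np.array(est)

# Simulated decay of a tracer gas at a true rate of 2.0 /h
rng = np.random.default_rng(3)
t = np.arange(0.0, 5.0, 0.1)
conc = 1000.0 * np.exp(-2.0 * t) * (1 + 0.01 * rng.standard_normal(t.size))
est = kalman_ventilation(conc, dt=0.1)
```

Because the state follows a random walk, the same recursion tracks a ventilation rate that drifts during the measurement, which is the flexibility the abstract highlights over constant-rate methods.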
Tyagi, M.; Zulqarnain, M.
2017-12-01
Offshore oil and gas exploration and production operations involve some of the most advanced and challenging technologies of modern times. These technologically complex operations also carry the risk of major accidents, as demonstrated by disasters such as the explosion and fire on the UK production platform Piper Alpha, the loss of the Canadian semi-submersible drilling rig Ocean Ranger, and the explosion and capsizing of the Deepwater Horizon rig in the Gulf of Mexico. By conducting Quantitative Risk Assessment (QRA), the safety of various operations, as well as their associated risks and significance, can be quantitatively estimated over the entire life of an offshore project. In an underground blowout, uncontrolled formation fluids from a higher-pressure formation may charge shallower, lower-pressure formations or may migrate to the sea floor. Consequences of such underground blowouts range from no visible damage at the surface to complete loss of the well, loss of the drilling rig, seafloor subsidence or hydrocarbons discharged to the environment. These blowouts might go unnoticed until the over-pressured sands that result from charging by the higher-pressure reservoir are encountered. Further, the engineering formulas used to estimate fault permeability and thickness are very simple in nature and may add uncertainty to the estimated parameters. In this study, the potential for a deepwater underground blowout is assessed during the drilling life phase of a well in the Popeye-Genesis field reservoir in the Gulf of Mexico, to estimate the time taken to charge a shallower zone to its leak-off test (LOT) value. Parametric simulation results for the selected field case show that for a relatively high permeability (k = 40 mD) fault connecting a deep over-pressured zone to a shallower low-pressure zone of similar reservoir volume, the time to recharge the shallower zone up to its threshold LOT value is about 135 years. If the ratio of the
Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates
Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.
2015-01-01
Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza-associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and its use increased over the study period. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children and were also higher for adults >65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
State estimation for networked control systems using fixed data rates
Liu, Qing-Quan; Jin, Fang
2017-07-01
This paper investigates state estimation for linear time-invariant systems in which sensors and controllers are geographically separated and connected via a bandwidth-limited, errorless communication channel with a fixed data rate. In our quantisation and coding scheme, all plant states are quantised, coded and converted together into a codeword. We present necessary and sufficient conditions on the fixed data rate for observability of such systems, and further develop the data-rate theorem. Our results show that a quantisation and coding scheme ensuring observability of the system exists if the fixed data rate is larger than the lower bound given, which is less conservative than the one in the literature. Furthermore, we examine the effect of disturbances on the state estimation problem in the case of data-rate limitations. Illustrative examples demonstrate the effectiveness of the proposed method.
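The abstract does not reproduce its bound, but the classical data-rate theorem for linear systems gives a point of reference: observability under quantised communication requires a rate exceeding the sum of log2 |λi| over the unstable open-loop eigenvalues. A minimal sketch of that textbook bound (the eigenvalues below are hypothetical, and the paper's refined, less conservative bound is not reproduced here):

```python
import math

def min_data_rate(eigenvalues):
    """Classical data-rate lower bound: sum of log2(|lambda_i|) over the
    unstable open-loop eigenvalues (|lambda_i| > 1); stable modes need no rate."""
    return sum(math.log2(abs(lam)) for lam in eigenvalues if abs(lam) > 1)

# Hypothetical plant with unstable modes 2 and 1.25 and a stable mode 0.5:
rate = min_data_rate([2.0, 1.25, 0.5])
print(rate)  # about 1.32 bits per sample: log2(2) + log2(1.25)
```

A fully stable plant yields a bound of zero, consistent with the intuition that no information is needed to keep the estimation error bounded.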
Estimating the Effects of Exchange Rate Volatility on Export Volumes
Wang, Kai-Li; Barrett, Christopher B.
2007-01-01
This paper takes a new empirical look at the long-standing question of the effect of exchange rate volatility on international trade flows by studying the case of Taiwan's exports to the United States from 1989-1998. In particular, we employ sectoral-level, monthly data and an innovative multivariate GARCH-M estimator with corrections for leptokurtic errors. This estimator allows for the possibility that traders' forward-looking contracting behavior might condition the way in which exchange r...
Automating proliferation rate estimation from Ki-67 histology images
Al-Lahham, Heba Z.; Alomari, Raja S.; Hiary, Hazem; Chaudhary, Vipin
2012-03-01
Breast cancer is the second leading cause of cancer death among women and the most frequently diagnosed female cancer in the US. Proliferation rate estimation (PRE) is one of the prognostic indicators that guide treatment protocols, and it is clinically performed from Ki-67 histopathology images. Automating PRE substantially increases the efficiency of pathologists. Moreover, producing a deterministic and reproducible proliferation rate value is crucial to reduce inter-observer variability. To that end, we propose a fully automated CAD system for PRE from Ki-67 histopathology images. This CAD system follows a model of three steps: image pre-processing, image clustering, and nuclei segmentation and counting, finally followed by PRE. The first step is based on customized color modification and color-space transformation. Then, image pixels are clustered by K-means using features extracted from the images produced by the first step. Finally, nuclei are segmented and counted using global thresholding, mathematical morphology and connected component analysis. Our experimental results on fifty Ki-67-stained histopathology images show significant agreement between our CAD's automated PRE and the gold standard, the latter being an average of two observers' estimates. The paired t-test for the automated and manual estimates gives p = 0.86, 0.45 and 0.8 for the brown nuclei count, blue nuclei count, and proliferation rate, respectively. Thus, our proposed CAD system is as reliable as a pathologist estimating the proliferation rate, and its estimate is reproducible.
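The counting step above yields a brown (Ki-67-positive) and a blue (negative) nuclei count, from which the proliferation rate follows as the positive fraction. A minimal sketch of that final calculation (the counts are made-up illustrative values, not data from the paper):

```python
def proliferation_rate(brown_count, blue_count):
    """Proliferation rate: fraction of Ki-67-positive (brown) nuclei
    among all counted nuclei (brown + blue)."""
    return brown_count / (brown_count + blue_count)

# Hypothetical counts from one image: 140 brown and 260 blue nuclei.
print(f"{proliferation_rate(140, 260):.1%}")  # → 35.0%
```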
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and differing modeling assumptions, especially for the youngest portion of the chain, and they provide little or no quantification of error for the inferred melt flux, making assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering (DiM) techniques. We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads superimposed on wider bathymetric swells, and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation technique) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
Wang, Binbin; Socolofsky, Scott A; Lai, Chris C K; Adams, E Eric; Boufadel, Michel C
2018-06-01
Subsea oil well blowouts and pipeline leaks release oil and gas to the environment through vigorous jets. Predicting the breakup of the released fluids into oil droplets and gas bubbles is critical for predicting the fate of petroleum compounds in the marine water column. To predict the gas bubble size in oil well blowouts and pipeline leaks, we observed and quantified the flow behavior and breakup process of gas for a wide range of orifice diameters and flow rates. Flow behavior at the orifice transitions from pulsing flow to continuous discharge as the jet crosses the sonic point. Breakup dynamics transition from laminar to turbulent at a critical value of the Weber number. Very strong pure gas jets and most gas/liquid co-flowing jets exhibit atomization breakup. Bubble sizes in the atomization regime scale with the jet-to-plume transition length scale and follow -3/5 power-law scaling with a mixture Weber number. Copyright © 2018 Elsevier Ltd. All rights reserved.
Estimating average glandular dose by measuring glandular rate in mammograms
International Nuclear Information System (INIS)
Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru
2003-01-01
The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
Flamingwagon: Athey Wagon braves heat to aid oilpatch blowout response
Energy Technology Data Exchange (ETDEWEB)
Leschart, M.
2002-09-01
Construction of an Athey wagon by Key Safety Ltd at the company's Red Deer, Alberta facilities is announced. At present, the wagon is awaiting only the installation of its fire-prevention system to be ready for action. The last such blowout response equipment was built in Alberta 40 years ago, and when Crestar Energy lost control of a horizontal sour well during recent drilling in the Little Bow area, a unit built in 1954 and housed in a museum at the Leduc No. 1 historic site in Devon, Alberta, had to be pressed into service to deal with the emergency. While Athey wagons are not always essential to the blowout control process, the addition of this new piece of well control safety equipment is welcome news, especially in light of the new knowledge gained, and the innovative processes and procedures developed, by Canadian companies fighting oil field fires in Kuwait after the Gulf War.
Distributions of component failure rates estimated from LER data
International Nuclear Information System (INIS)
Atwood, C.L.
1985-01-01
Past analyses of Licensee Event Report (LER) data have noted that component failure rates vary from plant to plant, and have estimated the distributions by two-parameter gamma distributions. In this study, a more complicated distributional form is considered, a mixture of gammas. This could arise if the plants' failure rates cluster into distinct groups. The method was applied to selected published LER data for diesel generators, pumps, valves, and instrumentation and control assemblies. The improved fits from using a mixture rather than a single gamma distribution were minimal, and not statistically significant. There seem to be two possibilities: either explanatory variables affect the failure rates only in a gradual way, not a qualitative way; or, for estimating individual component failure rates, the published LER data have been analyzed to the limit of resolution. 9 refs
Understanding estimated worker absenteeism rates during an influenza pandemic.
Thanner, Meridith H; Links, Jonathan M; Meltzer, Martin I; Scheulen, James J; Kelen, Gabor D
2011-01-01
Published employee absenteeism estimates during an influenza pandemic range from 10 to 40 percent. The purpose of this study was to estimate daily employee absenteeism through the duration of an influenza pandemic and to determine the relative impact of key variables used to derive the estimates. Using the Centers for Disease Control and Prevention's FluWorkLoss program, the authors estimated the number of absent employees on any given day over the course of a simulated 8-week pandemic wave by using varying attack rates. Employee data from a university with a large academic health system were used. Sensitivity of the program outputs to variation in predictor (inputs) values was assessed. Finally, the authors examined and documented the algorithmic sequence of the program. Using a 35 percent attack rate, a total of 47,270 workdays (or 3.4 percent of all available workdays) would be lost over the course of an 8-week pandemic among a population of 35,026 employees. The highest (peak) daily absenteeism estimate was 5.8 percent (minimum 4.8 percent; maximum 7.4 percent). Sensitivity analysis revealed that varying days missed for nonhospitalized illness had the greatest potential effect on peak absence rate (3.1 to 17.2 percent). Peak absence with 15 and 25 percent attack rates were 2.5 percent and 4.2 percent, respectively. The impact of an influenza pandemic on employee availability may be less than originally thought, even with a high attack rate. These data are generalizable and are not specific to institutions of higher education or medical centers. Thus, these findings provide realistic and useful estimates for influenza pandemic planning for most organizations.
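The headline 3.4 percent figure can be reproduced from the numbers in the abstract, assuming the 8-week wave comprises 5 workdays per week. A quick check:

```python
employees = 35_026
workdays = 8 * 5                 # 8-week pandemic wave, 5 workdays per week
lost_workdays = 47_270

available_workdays = employees * workdays
fraction_lost = lost_workdays / available_workdays
print(f"{fraction_lost:.1%} of all available workdays lost")  # → 3.4% ...
```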
Estimation of flow rates through intergranular stress corrosion cracks
International Nuclear Information System (INIS)
Collier, R.P.; Norris, D.M.
1984-01-01
Experimental studies of critical two-phase water flow, through simulated and actual intergranular stress corrosion cracks, were performed to obtain data to evaluate a leak flow rate model and investigate acoustic transducer effectiveness in detecting and sizing leaks. The experimental program included a parametric study of the effects of crack geometry, fluid stagnation pressure and temperature, and crack surface roughness on leak flow rate. In addition, leak detection, location, and leak size estimation capabilities of several different acoustic transducers were evaluated as functions of leak rate and transducer position. This paper presents flow rate data for several different cracks and fluid conditions. It also presents the minimum flow rate detected with the acoustic sensors and a relationship between acoustic signal strength and leak flow rate
Bayes estimation of the general hazard rate model
International Nuclear Information System (INIS)
Sarhan, A.
1999-01-01
In reliability theory and life testing models, lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where a, b, c are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a, b) based on data from type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is carried out to compare the performance of the Bayes estimators with regression estimators of (a, b). The comparison criterion is the Bayes risk associated with the respective estimator. The influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is also investigated. Estimates for the parameters (a, b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b > 0, are obtained as the special case c = 2.
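As a sketch of the model itself (not of the Bayes estimation, which requires the censored-sample likelihood), the hazard and the implied survival function can be written directly; the parameter values below are illustrative only:

```python
import math

def hazard(t, a, b, c):
    """General hazard rate h(t) = a + b * t**(c - 1), with a, b, c > 0."""
    return a + b * t ** (c - 1)

def survival(t, a, b, c):
    """Survival S(t) = exp(-H(t)), where the cumulative hazard is
    H(t) = a*t + (b/c)*t**c, the integral of h from 0 to t."""
    return math.exp(-(a * t + (b / c) * t ** c))

# The linearly increasing hazard h(t) = a + b*t is the special case c = 2.
# Illustrative parameters (not from the paper): a = 0.1, b = 0.05.
print(round(hazard(3.0, 0.1, 0.05, 2), 6))    # h(3) = 0.1 + 0.05*3 = 0.25
print(round(survival(3.0, 0.1, 0.05, 2), 4))  # exp(-(0.3 + 0.225))
```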
Development and verification of deep-water blowout models
International Nuclear Information System (INIS)
Johansen, Oistein
2003-01-01
Modeling of deep-water releases of gas and oil involves conventional plume theory in combination with thermodynamics and mass transfer calculations. The discharges can be understood in terms of multiphase plumes, where gas bubbles and oil droplets may separate from the water phase of the plume and rise to the surface independently. The gas may dissolve in the ambient water and/or form gas hydrates--a solid state of water resembling ice. All these processes will tend to deprive the plume as such of buoyancy, and in stratified water the plume rise will soon terminate. Slick formation will be governed by the surfacing of individual oil droplets in a depth and time variable current. This situation differs from the conditions observed during oil-and-gas blowouts in shallow and moderate water depths. In such cases, the bubble plume has been observed to rise to the surface and form a strong radial flow that contributes to a rapid spreading of the surfacing oil. The theories and behaviors involved in deepwater blowout cases are reviewed and compared to those for the shallow water blowout cases
Lidar method to estimate emission rates from extended sources
Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...
Estimates of outcrossing rates in Moringa oleifera using Amplified ...
African Journals Online (AJOL)
The mating system in plant populations is influenced by genetic and environmental factors. Proper estimates of outcrossing rates are often required for planning breeding programmes and for the conservation and management of tropical trees. However, although Moringa oleifera is adapted to a mixed mating system, the proportion ...
Cross sectional efficient estimation of stochastic volatility short rate models
Danilov, Dmitri; Mandal, Pranab K.
2001-01-01
We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, as indicated by de Jong (2000), the EKF in this
Cross sectional efficient estimation of stochastic volatility short rate models
Danilov, Dmitri; Mandal, Pranab K.
2002-01-01
We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, the EKF in this situation leads to inconsistent
Estimating Ads’ Click through Rate with Recurrent Neural Network
Directory of Open Access Journals (Sweden)
Chen Qiao-Hong
2016-01-01
With the development of the Internet, online advertising has spread to every corner of the world, and estimation of the ads' click-through rate (CTR) is an important method for improving online advertising revenue. Compared with linear models, nonlinear models can learn much more complex relationships among a large number of nonlinear features, and so improve the accuracy of CTR estimation. The recurrent neural network (RNN) based on Long Short-Term Memory (LSTM) is an improved feedback neural network with a ring structure, and it mitigates the vanishing-gradient problem of the general RNN. Experiments show that the LSTM-based RNN outperforms the linear models and can effectively improve the estimation of the ads' click-through rate.
Estimated erosion rate at the SRP burial ground
International Nuclear Information System (INIS)
Horton, J.H.; Wilhite, E.L.
1978-04-01
The rate of soil erosion at the Savannah River Plant (SRP) burial ground can be calculated by means of the universal soil loss equation. Erosion rates estimated by the equation are more suitable for long-term prediction than those which could be measured with a reasonable effort in field studies. The predicted erosion rate at the SRP burial ground ranges from 0.0007 cm/year under stable forest cover to 0.38 cm/year if farmed with cultivated crops. These values correspond to 170,000 and 320 years, respectively, to expose waste buried 4 ft deep
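The reported times follow directly from burial depth divided by erosion rate (4 ft = 121.92 cm). A quick check, which reproduces both the ~170,000-year and ~320-year figures:

```python
burial_depth_cm = 4 * 30.48      # waste buried 4 ft deep

for cover, rate_cm_per_yr in [("stable forest cover", 0.0007),
                              ("cultivated crops", 0.38)]:
    years = burial_depth_cm / rate_cm_per_yr
    print(f"{cover}: ~{years:,.0f} years to expose the waste")
```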
An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation
Energy Technology Data Exchange (ETDEWEB)
Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle costs may vary in each country, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible to produce reliable cost information when evaluating economic efficiency. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available, so a nuclear fuel cycle cost estimation model is inevitably subject to uncertainty. This paper analyzes the uncertainty arising in a nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with a model that applies the same discount rate throughout, the nuclear fuel cycle cost under a model with a different (lower) back-end discount rate is reduced, because the generation quantity in the denominator of the cost equation is also discounted. That is, if the discount rate is reduced in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, the same-discount-rate model was found to overestimate cost compared with the different-discount-rate model as a whole.
A Pulse Rate Estimation Algorithm Using PPG and Smartphone Camera.
Siddiqui, Sarah Ali; Zhang, Yuan; Feng, Zhiquan; Kos, Anton
2016-05-01
The ubiquitous use of and advancement in built-in smartphone sensors, together with developments in big data processing, have been beneficial in several fields, including healthcare. Among basic vital signs, pulse rate is one of the most important to monitor. A multimedia video stream acquired by the built-in smartphone camera can be used to estimate it. In this paper, an algorithm that uses only the smartphone camera as a sensor to estimate pulse rate from photoplethysmograph (PPG) signals is proposed. The results obtained by the proposed algorithm are compared with the actual pulse rate, and the maximum error found is 3 beats per minute. The standard deviation in percentage error and percentage accuracy is found to be 0.68%, whereas the average percentage error and percentage accuracy are found to be 1.98% and 98.02%, respectively.
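The paper's own signal-processing chain is not detailed in the abstract; as an illustration of the idea, pulse rate can be recovered from a PPG-like brightness trace by counting its dominant peaks. A minimal, hypothetical sketch on a synthetic 72-bpm signal:

```python
import math

def pulse_rate_bpm(brightness, fps):
    """Estimate pulse rate from a PPG-like brightness trace: count local
    maxima above the mean level, then convert peaks per second to beats
    per minute. (Real camera PPG needs filtering; this is a bare sketch.)"""
    mean = sum(brightness) / len(brightness)
    peaks = sum(
        1
        for i in range(1, len(brightness) - 1)
        if brightness[i] > mean
        and brightness[i] > brightness[i - 1]
        and brightness[i] >= brightness[i + 1]
    )
    duration_s = len(brightness) / fps
    return 60.0 * peaks / duration_s

# Synthetic 72-bpm (1.2 Hz) signal sampled at 30 frames/s for 10 s:
fps, bpm = 30, 72
signal = [math.sin(2 * math.pi * (bpm / 60) * (n / fps)) for n in range(fps * 10)]
print(round(pulse_rate_bpm(signal, fps)))  # → 72
```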
A Fast Soft Bit Error Rate Estimation Method
Directory of Open Access Journals (Sweden)
Ait-Idir Tarik
2010-01-01
In a previous publication we suggested a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the standard Monte Carlo (MC) simulation. That method was based on estimating the probability density function (pdf) of soft observed samples, using the kernel method for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model instead. The Expectation-Maximisation algorithm is used to estimate the parameters of this mixture, and the optimal number of Gaussians is computed using mutual information theory. The analytical expression of the BER is then given simply in terms of the estimated parameters of the Gaussian mixture. Simulation results are presented to compare the three methods: Monte Carlo, kernel and Gaussian mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or kernel-aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
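Once the mixture parameters are fitted, the analytical BER reduces to a weighted sum of Gaussian tail probabilities. A sketch for antipodal signalling, where an error on a transmitted '+1' is a soft sample falling below zero (the EM fitting itself is not shown, and the parameters below are illustrative, not taken from the paper):

```python
import math

def ber_from_gaussian_mixture(weights, means, sigmas):
    """Analytical P(soft sample < 0) for a Gaussian mixture fitted to the
    soft outputs of the '+1' symbol: BER = sum_i w_i * Q(mu_i / sigma_i),
    with the Gaussian tail Q computed via erfc."""
    return sum(w * 0.5 * math.erfc(mu / (sigma * math.sqrt(2)))
               for w, mu, sigma in zip(weights, means, sigmas))

# Sanity check with a single component at mu = 2, sigma = 1: the result
# is the textbook tail probability Q(2).
print(round(ber_from_gaussian_mixture([1.0], [2.0], [1.0]), 4))  # → 0.0228
```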
Estimation of the rate of volcanism on Venus from reaction rate measurements
Fegley, Bruce, Jr.; Prinn, Ronald G.
1989-01-01
Laboratory rate data for the reaction between SO2 and calcite to form anhydrite are presented. If this reaction rate represents the SO2 reaction rate on Venus, then all SO2 in the Venusian atmosphere will disappear in 1.9 Myr unless volcanism replenishes the lost SO2. The required volcanism rate, which depends on the sulfur content of the erupted material, is in the range 0.4-11 cu km of magma erupted per year. The Venus surface composition at the Venera 13, 14, and Vega 2 landing sites implies a volcanism rate of about 1 cu km/yr. This geochemically estimated rate can be used to determine if either (or neither) of two discordant geophysically estimated rates is correct. It also suggests that Venus may be less volcanically active than the earth.
Estimating contaminant discharge rates from stabilized uranium tailings embankments
International Nuclear Information System (INIS)
Weber, M.F.
1986-01-01
Estimates of contaminant discharge rates from stabilized uranium tailings embankments are essential in evaluating long-term impacts of tailings disposal on groundwater resources. Contaminant discharge rates are a function of water flux through tailings covers, the mass and distribution of tailings, and the concentrations of contaminants in percolating pore fluids. Simple calculations, laboratory and field testing, and analytical and numerical modeling may be used to estimate water flux through variably-saturated tailings under steady-state conditions, which develop after consolidation and dewatering have essentially ceased. Contaminant concentrations in water discharging from the tailings depend on tailings composition, leachability and solubility of contaminants, geochemical conditions within the embankment, tailings-water interactions, and flux of water through the embankment. These concentrations may be estimated based on maximum reported concentrations, pore water concentrations, extrapolations of column leaching data, or geochemical equilibria and reaction pathway modeling. Attempts to estimate contaminant discharge rates should begin with simple, conservative calculations and progress to more-complicated approaches, as necessary
Rating curve estimation of nutrient loads in Iowa rivers
Stenback, G.A.; Crumpton, W.G.; Schilling, K.E.; Helmers, M.J.
2011-01-01
Accurate estimation of nutrient loads in rivers and streams is critical for many applications, including determination of the sources of nutrient loads in watersheds, evaluation of long-term trends in loads, and estimation of loading to downstream waterbodies. Since nutrient concentrations are in many cases measured on a weekly or monthly frequency, there is a need to estimate concentrations and loads during periods when no data are available. The objectives of this study were to: (i) document the performance of a multiple regression model to predict loads of nitrate and total phosphorus (TP) in Iowa rivers and streams; (ii) determine whether there is any systematic bias in the load prediction estimates for nitrate and TP; and (iii) evaluate streamflow and concentration factors that could affect the load prediction efficiency. A commonly cited rating curve regression is utilized to estimate riverine nitrate and TP loads for rivers in Iowa with watershed areas ranging from 17.4 to over 34,600 km². Forty-nine nitrate and 44 TP datasets, each comprising 5-22 years of approximately weekly to monthly concentrations, were examined. Three nitrate data sets had sample collection frequencies averaging about three samples per week. The accuracy and precision of annual and long-term riverine load prediction was assessed by direct comparison of rating curve load predictions with observed daily loads. Significant positive bias of annual and long-term nitrate loads was detected. Long-term rating curve nitrate load predictions exceeded observed loads by 25% or more at 33% of the 49 measurement sites. No bias was found for TP load prediction, although 15% of the 44 cases either underestimated or overestimated observed long-term loads by more than 25%. The rating curve was found to poorly characterize nitrate and phosphorus variation in some rivers.
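The rating-curve form referenced above is typically a log-log regression of load on streamflow. A minimal sketch on noiseless synthetic data (a real application would also need a retransformation bias correction such as a smearing estimator, which is one source of the biases the study examines):

```python
import math

def fit_rating_curve(flows, loads):
    """Least-squares fit of ln(load) = b0 + b1 * ln(flow),
    the rating-curve form commonly used for nutrient loads."""
    x = [math.log(q) for q in flows]
    y = [math.log(l) for l in loads]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

# Synthetic data following load = 0.5 * flow**1.3 exactly, so the fit
# should recover the coefficient and exponent.
flows = [1.0, 2.0, 5.0, 10.0, 20.0]
loads = [0.5 * q ** 1.3 for q in flows]
b0, b1 = fit_rating_curve(flows, loads)
print(round(math.exp(b0), 3), round(b1, 3))  # → 0.5 1.3
```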
Estimation of Stormwater Interception Rate for various LID Facilities
Kim, S.; Lee, O.; Choi, J.
2017-12-01
In this study, a stormwater interception rate is proposed for use in the design of LID facilities. For this purpose, EPA-SWMM is applied to part of the Noksan National Industrial Complex, where long-term observed stormwater data were monitored, and stormwater interception rates are estimated for various design capacities of various LID facilities. While the sensitivity of the stormwater interception rate to the design specifications of bio-retention and infiltration trench facilities is not large, its sensitivity to local rainfall characteristics is relatively large. Comparing the rainfall interception rate estimation method currently in official use in Korea with the one proposed in this study shows that the present method is highly likely to overestimate the performance of bio-retention and infiltration trench facilities. Finally, new stormwater interception rate formulas for bio-retention and infiltration trench LID facilities are proposed. Acknowledgement: This research was supported by a grant (2016000200002) from the Public Welfare Technology Development Program funded by the Ministry of Environment of the Korean government.
LENUS (Irish Health Repository)
Francis, Dawn L
2011-03-01
The adenoma detection rate (ADR) is a quality benchmark for colonoscopy. Many practices find it difficult to determine the ADR because it requires a combination of endoscopic and histologic findings. It may be possible to apply a conversion factor to estimate the ADR from the polyp detection rate (PDR).
Pooling overdispersed binomial data to estimate event rate.
Young-Xu, Yinong; Chan, K Arnold
2008-08-19
The beta-binomial model is one of the methods that can be used to validly combine event rates from overdispersed binomial data. Our objective is to provide a full description of this method and to update and broaden its applications in clinical and public health research. We describe the statistical theories behind the beta-binomial model and the associated estimation methods. We supply information about statistical software that can provide beta-binomial estimations. Using a published example, we illustrate the application of the beta-binomial model when pooling overdispersed binomial data. In an example regarding the safety of oral antifungal treatments, we had 41 treatment arms with event rates varying from 0% to 13.89%. Using the beta-binomial model, we obtained a summary event rate of 3.44% with a standard error of 0.59%. The parameters of the beta-binomial model took the values of 1.24 for alpha and 34.73 for beta. The beta-binomial model can provide a robust estimate for the summary event rate by pooling overdispersed binomial data from different studies. The explanation of the method and the demonstration of its applications should help researchers incorporate the beta-binomial method as they aggregate probabilities of events from heterogeneous studies.
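The summary event rate quoted above can be recovered from the reported beta parameters, since under the beta-binomial model it is the mean of the underlying beta distribution (the small difference from the quoted 3.44% reflects rounding in the published parameters):

```python
alpha, beta = 1.24, 34.73        # parameter values reported in the abstract

# Summary event rate = mean of the beta distribution, alpha / (alpha + beta).
summary_rate = alpha / (alpha + beta)
print(f"{summary_rate:.2%}")     # → 3.45%
```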
Pooling overdispersed binomial data to estimate event rate
Directory of Open Access Journals (Sweden)
Chan K Arnold
2008-08-01
Full Text Available Abstract Background The beta-binomial model is one of the methods that can be used to validly combine event rates from overdispersed binomial data. Our objective is to provide a full description of this method and to update and broaden its applications in clinical and public health research. Methods We describe the statistical theories behind the beta-binomial model and the associated estimation methods. We supply information about statistical software that can provide beta-binomial estimations. Using a published example, we illustrate the application of the beta-binomial model when pooling overdispersed binomial data. Results In an example regarding the safety of oral antifungal treatments, we had 41 treatment arms with event rates varying from 0% to 13.89%. Using the beta-binomial model, we obtained a summary event rate of 3.44% with a standard error of 0.59%. The parameters of the beta-binomial model took the values of 1.24 for alpha and 34.73 for beta. Conclusion The beta-binomial model can provide a robust estimate for the summary event rate by pooling overdispersed binomial data from different studies. The explanation of the method and the demonstration of its applications should help researchers incorporate the beta-binomial method as they aggregate probabilities of events from heterogeneous studies.
Estimating error rates for firearm evidence identifications in forensic science
Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan
2018-01-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
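The binomial flavour of such error-rate models can be sketched in a much-simplified independent-cells version; the cell count, per-cell false-match probability, and CMC threshold below are illustrative assumptions, not values from the study:

```python
from scipy.stats import binom

# Assume each of N correlation cells of a known non-matching pair passes the
# similarity/congruency tests independently with probability p (illustrative).
N, p, C = 30, 0.05, 6   # cells, per-cell false-match probability, CMC threshold

# False-positive rate: probability a non-match still yields >= C CMCs.
false_positive_rate = binom.sf(C - 1, N, p)
```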
GARDEC, Estimation of dose-rates reduction by garden decontamination
International Nuclear Information System (INIS)
Togawa, Orihiko
2006-01-01
1 - Description of program or function: GARDEC estimates the reduction of dose rates by garden decontamination. It provides the effect of different decontamination methods, the depth of soil to be considered, the dose rate before and after decontamination, and the reduction factor. 2 - Methods: The code takes into account three methods of decontamination: (i) digging the garden in a special way, (ii) removal of the upper layer of soil, and (iii) covering with a shielding layer of soil. The dose-rate conversion factor is defined as the external dose rate in the air, at a given height above the ground, from a unit concentration of a specific radionuclide in each soil layer.
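For the shielding-layer method, a rough sense of the reduction factor can be had from simple exponential attenuation; the coefficient below is an assumed illustrative value for gamma rays in soil, not a value taken from GARDEC:

```python
import math

def cover_reduction_factor(cover_thickness_cm, mu_per_cm=0.12):
    """Dose-rate reduction factor for a clean soil cover of given thickness.

    mu_per_cm: assumed effective linear attenuation coefficient of soil
    (~0.12 /cm is a rough figure for ~662 keV photons in soil of density
    ~1.6 g/cm^3); broad-beam build-up is ignored in this sketch.
    """
    return math.exp(-mu_per_cm * cover_thickness_cm)
```

Under these assumptions, a 10 cm cover reduces the dose rate to roughly exp(-1.2), i.e. about 30% of its original value.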
A nonparametric mixture model for cure rate estimation.
Peng, Y; Dear, K B
2000-03-01
Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
Automatic estimation of pressure-dependent rate coefficients.
Allen, Joshua W; Goldsmith, C Franklin; Green, William H
2012-01-21
A general framework is presented for accurately and efficiently estimating the phenomenological pressure-dependent rate coefficients for reaction networks of arbitrary size and complexity using only high-pressure-limit information. Two aspects of this framework are discussed in detail. First, two methods of estimating the density of states of the species in the network are presented, including a new method based on characteristic functional group frequencies. Second, three methods of simplifying the full master equation model of the network to a single set of phenomenological rates are discussed, including a new method based on the reservoir state and pseudo-steady state approximations. Both sets of methods are evaluated in the context of the chemically-activated reaction of acetyl with oxygen. All three simplifications of the master equation are usually accurate, but each fails in certain situations, which are discussed. The new methods usually provide good accuracy at a computational cost appropriate for automated reaction mechanism generation.
Fast Rate Estimation for RDO Mode Decision in HEVC
Directory of Open Access Journals (Sweden)
Maxim P. Sharabayko
2014-12-01
The recent H.265/HEVC video compression standard is able to provide twice the compression efficiency of the current industry standard, H.264/AVC. However, coding complexity has also increased. The main bottleneck of the compression process is the rate-distortion optimization (RDO) stage, as it involves numerous sequential syntax-based binary arithmetic coding (SBAC) loops. In this paper, we present an entropy-based RDO estimation technique for H.265/HEVC compression, instead of the common approach based on SBAC. Our RDO implementation reduces RDO complexity at the cost of an average bit-rate overhead of 1.54%. At the same time, eliminating SBAC from the RDO estimation reduces block interdependencies, opening an opportunity to develop a compression system that processes multiple blocks of a video frame in parallel.
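The substitution of an entropy estimate for full SBAC passes can be sketched generically: instead of running the arithmetic coder, sum the ideal code lengths -log2 p(s) over the symbols. This is a zeroth-order sketch over empirical symbol probabilities; the real estimator works on HEVC syntax-element statistics:

```python
import math
from collections import Counter

def entropy_rate_bits(symbols):
    """Estimated coded length in bits: sum of -log2 p(s) over the symbols,
    using empirical symbol probabilities instead of the arithmetic coder."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum(c * -math.log2(c / n) for c in counts.values())
```

In an RDO loop, this estimate would feed the usual cost J = D + lambda * R in place of the SBAC-produced rate R.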
Simple estimate of fission rate during JCO criticality accident
Energy Technology Data Exchange (ETDEWEB)
Oyamatsu, Kazuhiro [Faculty of Studies on Contemporary Society, Aichi Shukutoku Univ., Nagakute, Aichi (Japan)
2000-03-01
The fission rate during the JCO criticality accident is estimated from fission-product (FP) radioactivities in a uranium solution sample taken from the preparation basin 20 days after the accident. The FP radioactivity data are taken from a JAERI report released in the Accident Investigation Committee. The total fission number is found to be quite sensitive to the FP radioactivities and is estimated to be about 4×10^16 per liter, or 2×10^18 per 16 kg U (assuming a uranium concentration of 278.9 g/liter). In contrast, the time dependence of the fission rate is rather insensitive to the FP radioactivities. Hence, it is difficult to determine the fission number in the initial burst from the radioactivity data. (author)
Simple estimate of fission rate during JCO criticality accident
International Nuclear Information System (INIS)
Oyamatsu, Kazuhiro
2000-01-01
The fission rate during the JCO criticality accident is estimated from fission-product (FP) radioactivities in a uranium solution sample taken from the preparation basin 20 days after the accident. The FP radioactivity data are taken from a JAERI report released in the Accident Investigation Committee. The total fission number is found to be quite sensitive to the FP radioactivities and is estimated to be about 4×10^16 per liter, or 2×10^18 per 16 kg U (assuming a uranium concentration of 278.9 g/liter). In contrast, the time dependence of the fission rate is rather insensitive to the FP radioactivities. Hence, it is difficult to determine the fission number in the initial burst from the radioactivity data. (author)
Counting and confusion: Bayesian rate estimation with multiple populations
Farr, Will M.; Gair, Jonathan R.; Mandel, Ilya; Cutler, Curt
2015-01-01
We show how to obtain a Bayesian estimate of the rates or numbers of signal and background events from a set of events when the shapes of the signal and background distributions are known, can be estimated, or approximated; our method works well even if the foreground and background event distributions overlap significantly and the nature of any individual event cannot be determined with any certainty. We give examples of determining the rates of gravitational-wave events in the presence of background triggers from a template bank when noise parameters are known and/or can be fit from the trigger data. We also give an example of determining globular-cluster shape, location, and density from an observation of a stellar field that contains a nonuniform background density of stars superimposed on the cluster stars.
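A minimal version of this idea can be written down with assumed one-dimensional signal and background densities and a grid posterior over the two Poisson rates; everything below is illustrative, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic event statistics: background ~ Exp(1), signal ~ N(5, 1), overlapping.
x = np.concatenate([rng.exponential(1.0, 80), rng.normal(5.0, 1.0, 20)])

def f_sig(v):                       # assumed-known signal density
    return np.exp(-0.5 * (v - 5.0) ** 2) / np.sqrt(2 * np.pi)

def f_bg(v):                        # assumed-known background density
    return np.where(v >= 0, np.exp(-np.clip(v, 0, None)), 0.0)

# Inhomogeneous-Poisson likelihood: L(Rs, Rb) = e^-(Rs+Rb) * prod_i (Rs f_s(x_i) + Rb f_b(x_i)).
Rs = np.linspace(0.1, 60, 150)
Rb = np.linspace(0.1, 160, 150)
mix = Rs[:, None, None] * f_sig(x) + Rb[None, :, None] * f_bg(x)
log_post = -(Rs[:, None] + Rb[None, :]) + np.log(mix).sum(axis=2)  # flat prior

post = np.exp(log_post - log_post.max())
post /= post.sum()
signal_rate_mean = (post.sum(axis=1) * Rs).sum()   # posterior mean signal count
```

No event is ever classified individually; the posterior on the signal rate comes entirely from how the mixture of the two known shapes explains the full set of events.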
Precise estimates of mutation rate and spectrum in yeast
Zhu, Yuan O.; Siegal, Mark L.; Hall, David W.; Petrov, Dmitri A.
2014-01-01
Mutation is the ultimate source of genetic variation. The most direct and unbiased method of studying spontaneous mutations is via mutation accumulation (MA) lines. Until recently, MA experiments were limited by the cost of sequencing and thus provided us with small numbers of mutational events and therefore imprecise estimates of rates and patterns of mutation. We used whole-genome sequencing to identify nearly 1,000 spontaneous mutation events accumulated over ∼311,000 generations in 145 diploid MA lines of the budding yeast Saccharomyces cerevisiae. MA experiments are usually assumed to have negligible levels of selection, but even mild selection will remove strongly deleterious events. We take advantage of such patterns of selection and show that mutation classes such as indels and aneuploidies (especially monosomies) are proportionately much more likely to contribute mutations of large effect. We also provide conservative estimates of indel, aneuploidy, environment-dependent dominant lethal, and recessive lethal mutation rates. To our knowledge, for the first time in yeast MA data, we identified a sufficiently large number of single-nucleotide mutations to measure context-dependent mutation rates and were able to (i) confirm strong AT bias of mutation in yeast driven by high rate of mutations from C/G to T/A and (ii) detect a higher rate of mutation at C/G nucleotides in two specific contexts consistent with cytosine methylation in S. cerevisiae. PMID:24847077
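A back-of-the-envelope per-base rate can be formed from the abstract's headline numbers; the counts below are rounded, and the genome size is the standard ~12 Mb haploid figure (doubled for the diploid lines), so this is only an order-of-magnitude illustration:

```python
n_mutations = 1000            # ~ observed mutation events (rounded from "nearly 1,000")
total_generations = 311_000   # generations summed over the 145 MA lines
haploid_genome_bp = 12_000_000

rate_per_genome = n_mutations / total_generations          # per diploid genome per generation
rate_per_base = rate_per_genome / (2 * haploid_genome_bp)  # per base per generation
```

This lands around 1.3 × 10⁻¹⁰ per base per generation, the same ballpark as published yeast estimates.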
Robust efficient estimation of heart rate pulse from video
Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde
2014-01-01
We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse heart rate. Various experiments with different cameras, different illumination condition, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices both in accuracy and sensitivity. PMID:24761294
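The signal-processing core can be sketched on synthetic data. The frame rate, pulse frequency, and amplitudes below are invented, and the paper's pixel-quotient model is reduced here to a log-difference of mean skin intensity followed by a spectral peak search:

```python
import numpy as np

fs = 30.0                                  # assumed camera frame rate, Hz
t = np.arange(0, 30, 1 / fs)               # 30 s of video
# Mean skin-pixel intensity: slow illumination drift plus a weak 1.2 Hz (72 bpm) pulse.
intensity = 100.0 * (1
                     + 0.05 * np.sin(2 * np.pi * 0.05 * t)   # illumination drift
                     + 0.01 * np.sin(2 * np.pi * 1.2 * t))   # arterial pulse

q = np.diff(np.log(intensity))             # log-space quotient of consecutive frames
spectrum = np.abs(np.fft.rfft(q - q.mean()))
freqs = np.fft.rfftfreq(q.size, 1 / fs)

band = (freqs > 0.7) & (freqs < 3.0)       # plausible heart-rate band, 42-180 bpm
bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
```

The log-difference acts as a high-pass filter, so the slow illumination drift is suppressed and the weak pulse component dominates the in-band spectrum.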
Estimating Effective Subsidy Rates of Student Aid Programs
Stacey H. CHEN
2008-01-01
Every year millions of high school students and their parents in the US are asked to fill out complicated financial aid application forms. However, few studies have estimated the responsiveness of government financial aid schemes to changes in financial needs of the students. This paper identifies the effective subsidy rate (ESR) of student aid, as defined by the coefficient of financial needs in the regression of financial aid. The ESR measures the proportion of subsidy of student aid under ...
Cluster-collision frequency. II. Estimation of the collision rate
International Nuclear Information System (INIS)
Amadon, A.S.; Marlow, W.H.
1991-01-01
Gas-phase cluster-collision rates, including effects of cluster morphology and long-range intermolecular forces, are calculated. Identical pairs of icosahedral or dodecahedral carbon tetrachloride clusters of 13, 33, and 55 molecules in two different relative orientations were discussed in the preceding paper [Phys. Rev. A 43, 5483 (1991)]: long-range interaction energies were derived based upon (i) exact calculations of the iterated, or many-body, induced-dipole interaction energies for the clusters in two fixed relative orientations; and (ii) bulk, or continuum, descriptions (Lifshitz-van der Waals theory) of spheres of corresponding masses and diameters. In this paper, collision rates are calculated according to an exact description of the rates for small spheres interacting via realistic potentials. Utilizing the interaction energies of the preceding paper, several estimates of the collision rates are given by treating the discrete clusters in fixed relative orientations, by computing rotationally averaged potentials for the discrete clusters, and by approximating the clusters as continuum spheres. For the discrete, highly symmetric clusters treated here, the rates using the rotationally averaged potentials closely approximate the fixed-orientation rates, and the values of the intercluster potentials for cluster surface separations under 2 Å have negligible effect on the overall collision rates. While the 13-molecule cluster-collision rate differs by 50% from the rate calculated as if the cluster were bulk matter, the two larger cluster-collision rates differ by less than 15% from the macroscopic rates, thereby indicating the transition from microscopic to macroscopic behavior.
Simple method for the estimation of glomerular filtration rate
Energy Technology Data Exchange (ETDEWEB)
Groth, T [Group for Biomedical Informatics, Uppsala Univ. Data Center, Uppsala (Sweden); Tengstroem, B [District General Hospital, Skoevde (Sweden)
1977-02-01
A simple method is presented for indirect estimation of the glomerular filtration rate from two venous blood samples, drawn after a single injection of a small dose of (125I)sodium iothalamate (10 μCi). The method does not require exact dosage, as the first sample, taken a few minutes (t = 5 min) after injection, is used to normalize the value of the second sample, which should be taken between 2 and 4 h after injection. The glomerular filtration rate, as measured by standard inulin clearance, may then be predicted from the logarithm of the normalized value and linear regression formulas, with a standard error of estimate of the order of 1 to 2 ml/min/1.73 m². The slope-intercept method for direct estimation of the glomerular filtration rate is also evaluated and found to significantly underestimate standard inulin clearance. The normalized 'single-point' method is concluded to be superior to the slope-intercept method and to more sophisticated methods using curve-fitting techniques, with regard to predictive force and clinical applicability.
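The normalized single-point idea reduces to a linear regression of measured clearance on the logarithm of the normalized count; the calibration numbers below are fabricated for illustration, not the paper's regression coefficients:

```python
import numpy as np

# Hypothetical calibration data: normalized second-sample value (counts at ~3 h
# divided by counts at 5 min) against GFR measured by inulin clearance.
normalized = np.array([0.90, 0.70, 0.50, 0.35, 0.25, 0.18])
gfr = np.array([10.0, 30.0, 55.0, 80.0, 100.0, 120.0])  # ml/min/1.73 m^2

slope, intercept = np.polyfit(np.log(normalized), gfr, 1)

def predict_gfr(normalized_value):
    """Predict GFR from the dose-independent normalized sample value."""
    return slope * np.log(normalized_value) + intercept
```

Because the second sample is divided by the first, the exact injected dose cancels out of the predictor, which is the point of the normalization.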
Phylogenetic estimates of diversification rate are affected by molecular rate variation.
Duchêne, D A; Hua, X; Bromham, L
2017-10-01
Molecular phylogenies are increasingly being used to investigate the patterns and mechanisms of macroevolution. In particular, node heights in a phylogeny can be used to detect changes in rates of diversification over time. Such analyses rest on the assumption that node heights in a phylogeny represent the timing of diversification events, which in turn rests on the assumption that evolutionary time can be accurately predicted from DNA sequence divergence. But there are many influences on the rate of molecular evolution, which might also influence node heights in molecular phylogenies, and thus affect estimates of diversification rate. In particular, a growing number of studies have revealed an association between the net diversification rate estimated from phylogenies and the rate of molecular evolution. Such an association might, by influencing the relative position of node heights, systematically bias estimates of diversification time. We simulated the evolution of DNA sequences under several scenarios where rates of diversification and molecular evolution vary through time, including models where diversification and molecular evolutionary rates are linked. We show that commonly used methods, including metric-based, likelihood and Bayesian approaches, can have a low power to identify changes in diversification rate when molecular substitution rates vary. Furthermore, the association between the rates of speciation and molecular evolution rate can cause the signature of a slowdown or speedup in speciation rates to be lost or misidentified. These results suggest that the multiple sources of variation in molecular evolutionary rates need to be considered when inferring macroevolutionary processes from phylogenies. © 2017 European Society For Evolutionary Biology.
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window-opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window-opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure. Published in the Journal of
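The LBL infiltration model underlying such algorithms can be sketched as follows; the coefficient values are typical single-story figures from the ASHRAE form of the model and are assumptions here, as are the example inputs:

```python
import math

def lbl_infiltration_ach(ela_cm2, volume_m3, delta_t_K, wind_ms,
                         stack_coef=0.000145, wind_coef=0.000104):
    """Air changes per hour from the LBL model, Q = ELA * sqrt(Cs*|dT| + Cw*U^2).

    ela_cm2: effective leakage area in cm^2. With the assumed coefficient units
    ((L/s)^2 per cm^4 per K for Cs, (L/s)^2 per cm^4 per (m/s)^2 for Cw),
    Q comes out in L/s.
    """
    q_ls = ela_cm2 * math.sqrt(stack_coef * abs(delta_t_K) + wind_coef * wind_ms ** 2)
    return 3.6 * q_ls / volume_m3   # L/s -> m^3/h, divided by house volume
```

For a 300 m³ house with 500 cm² of leakage area, a 20 K indoor-outdoor temperature difference, and 4 m/s wind, this gives roughly 0.4 air changes per hour; the window-opening adjustment in the abstract would sit on top of this infiltration term.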
Estimating marginal CO2 emissions rates for national electricity systems
International Nuclear Information System (INIS)
Hawkes, A.D.
2010-01-01
The carbon dioxide (CO2) emissions reduction afforded by a demand-side intervention in the electricity system is typically assessed by means of an assumed grid emissions rate, which measures the CO2 intensity of electricity not used as a result of the intervention. This emissions rate is called the 'marginal emissions factor' (MEF). Accurate estimation of MEFs is crucial for performance assessment because their application leads to decisions regarding the relative merits of CO2 reduction strategies. This article contributes to formulating the principles by which MEFs are estimated, highlighting the strengths and weaknesses in existing approaches, and presenting an alternative based on the observed behaviour of power stations. The case of Great Britain is considered, demonstrating an MEF of 0.69 kgCO2/kWh for 2002-2009, with error bars at ±10%. This value could reduce to 0.6 kgCO2/kWh over the next decade under planned changes to the underlying generation mix, and could further reduce to approximately 0.51 kgCO2/kWh before 2025 if all power stations commissioned pre-1970 are replaced by their modern counterparts. Given that these rates are higher than commonly applied system-average or assumed 'long term marginal' emissions rates, it is concluded that maintenance of an improved understanding of MEFs is valuable to better inform policy decisions.
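The behaviour-based idea can be sketched as a regression of changes in system CO2 output on changes in demand between consecutive settlement periods; all numbers below are synthetic, and the paper's method is more elaborate than this single-slope fit:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic half-hourly system data: generation follows demand, and each
# marginal MWh emits ~0.69 t CO2 (a value assumed here for illustration).
energy_mwh = 15_000 + 5_000 * np.sin(np.linspace(0, 20 * np.pi, 2000))
co2_t = 0.69 * energy_mwh + 1_000 + rng.normal(0, 50, energy_mwh.size)

# MEF = slope of delta-emissions against delta-demand between consecutive periods.
mef = np.polyfit(np.diff(energy_mwh), np.diff(co2_t), 1)[0]   # t/MWh == kg/kWh
```

Differencing removes the constant baseline emissions, so the slope isolates the CO2 intensity of the plant actually responding at the margin.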
Can we estimate bacterial growth rates from ribosomal RNA content?
Energy Technology Data Exchange (ETDEWEB)
Kemp, P.F.
1995-12-31
Several studies have demonstrated a strong relationship between the quantity of RNA in bacterial cells and their growth rate under laboratory conditions. It may be possible to use this relationship to provide information on the activity of natural bacterial communities, and in particular on growth rate. However, if this approach is to provide reliably interpretable information, the relationship between RNA content and growth rate must be well-understood. In particular, a requisite of such applications is that the relationship must be universal among bacteria, or alternately that the relationship can be determined and measured for specific bacterial taxa. The RNA-growth rate relationship has not been used to evaluate bacterial growth in field studies, although RNA content has been measured in single cells and in bulk extracts of field samples taken from coastal environments. These measurements have been treated as probable indicators of bacterial activity, but have not yet been interpreted as estimators of growth rate. The primary obstacle to such interpretations is a lack of information on biological and environmental factors that affect the RNA-growth rate relationship. In this paper, the available data on the RNA-growth rate relationship in bacteria will be reviewed, including hypotheses regarding the regulation of RNA synthesis and degradation as a function of growth rate and environmental factors; i.e. the basic mechanisms for maintaining RNA content in proportion to growth rate. An assessment of the published laboratory and field data, the current status of this research area, and some of the remaining questions will be presented.
Redefinition and global estimation of basal ecosystem respiration rate
Yuan, W.; Luo, Y.; Li, X.; Liu, S.; Yu, G.; Zhou, T.; Bahn, M.; Black, A.; Desai, A.R.; Cescatti, A.; Marcolla, B.; Jacobs, C.; Chen, J.; Aurela, M.; Bernhofer, C.; Gielen, B.; Bohrer, G.; Cook, D.R.; Dragoni, D.; Dunn, A.L.; Gianelle, D.; Grünwald, T.; Ibrom, A.; Leclerc, M.Y.; Lindroth, A.; Liu, H.; Marchesini, L.B.; Montagnani, L.; Pita, G.; Rodeghiero, M.; Rodrigues, A.; Starr, G.; Stoy, Paul C.
2011-01-01
Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR, largely due to the lack of a functional description for BR. In this study, we redefined BR to be the ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data, from 79 research sites located at latitudes ranging from ∼3°S to ∼70°N. Results showed that the mean annual ER rate closely matches the ER rate at the mean annual temperature. Incorporation of site-specific BR into a global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature can be considered as BR in empirical models. A strong correlation was found between mean annual ER and mean annual gross primary production (GPP). Consequently, GPP, which is typically more accurately modeled, can be used to estimate BR. A light use efficiency GPP model (i.e., EC-LUE) was applied to estimate global GPP, BR and ER with input data from MERRA (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate Resolution Imaging Spectroradiometer). The global ER was 103 Pg C yr⁻¹, with the highest respiration rate over tropical forests and the lowest values in dry and high-latitude areas.
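Under the redefinition above, BR falls out directly as the intercept of a temperature-response fit centred on the mean annual temperature. A sketch with a synthetic Q10 model (all parameter values invented):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.uniform(-5, 25, 365)              # daily temperature, deg C (synthetic)
T_mean = T.mean()                          # mean annual temperature
BR_true, Q10 = 2.0, 2.2                    # illustrative values
ER = BR_true * Q10 ** ((T - T_mean) / 10) * rng.lognormal(0, 0.1, T.size)

# Linear fit in log space: ln ER = ln BR + (ln Q10 / 10) * (T - T_mean).
slope, intercept = np.polyfit(T - T_mean, np.log(ER), 1)
BR_est = np.exp(intercept)                 # respiration at mean annual temperature
Q10_est = np.exp(10 * slope)
```

Centring the regressor on the mean annual temperature is what makes the intercept equal BR as redefined in the abstract.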
Aniseikonia quantification: error rate of rule of thumb estimation.
Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P
1999-01-01
To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: for Group 1 (24 cases), 5 (21%) were equal (i.e., 1% or less difference), 16 (67%) were greater (more than 1% different), and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases), 45 (73%) were equal (1% or less), 10 (16%) were greater, and 7 (11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: in Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.
Estimation of evapotranspiration rate in irrigated lands using stable isotopes
Umirzakov, Gulomjon; Windhorst, David; Forkutsa, Irina; Brauer, Lutz; Frede, Hans-Georg
2013-04-01
Agriculture in the Aral Sea basin is the main consumer of water resources, and under current agricultural management practices inefficient water use causes huge losses of freshwater resources. There is great potential to save water and to reach a more efficient water use in irrigated areas. Therefore, research is required to reveal the mechanisms of hydrological fluxes in irrigated areas. This paper focuses on the estimation of evapotranspiration, which is one of the crucial components in the water balance of irrigated lands. Our main objective is to estimate the rate of evapotranspiration on irrigated lands and to partition it into evaporation and transpiration using stable isotope measurements. Experiments were conducted in irrigated areas with two different soil types (sandy and sandy loam) in the Ferghana Valley (Uzbekistan). Soil samples were collected during the vegetation period. The soil water from these samples was extracted via a cryogenic extraction method and analyzed for the isotopic ratios of the water isotopes (2H and 18O) using a laser spectroscopy method (DLT-100, Los Gatos Research, USA). Evapotranspiration rates were estimated with the isotope mass balance method. The evapotranspiration results obtained using the isotope mass balance method are compared with the results of a one-dimensional Catchment Modelling Framework model applied in the same area over the same period.
Self expandable polytetrafluoroethylene stent for carotid blowout syndrome.
Tatar, E C; Yildirim, U M; Dündar, Y; Ozdek, A; Işik, E; Korkmaz, H
2012-01-01
Carotid blowout syndrome (CBS) is an emergency complication in patients undergoing treatment for head and neck cancers. The classical management of CBS is ligation of the common carotid artery, because suturing is not possible due to infection and necrosis of the field. In this case report, we present a patient with CBS in whom we applied a self-expandable polytetrafluoroethylene (PTFE) stent and observed no morbidity. An endovascular stent is a life-saving technique with minimal morbidity that preserves blood flow to the brain. We believe that this method is preferable to ligation of the artery in CBS.
[Medpor plus titanium mesh implant in the repair of orbital blowout fractures].
Han, Xiao-hui; Zhang, Jia-yu; Cai, Jian-qiu; Shi, Ming-guang
2011-05-10
To study the efficacy of porous polyethylene (Medpor) plus titanium mesh sheets in the repair of orbital blowout fractures. A total of 20 patients underwent open surgical reduction with the combined use of Medpor and titanium mesh, and they were followed up for an average period of 14.5 months (range: 9-18). There was no infection or extrusion of the Medpor and titanium mesh during the follow-up period. There was no instance of decreased visual acuity post-operation, and all cases of enophthalmos were corrected. The post-operative protrusion degree of both eyes was almost identical, differing by less than 2 mm. The movement of the eyeballs was satisfactory in all directions. Diplopia disappeared in 18 cases, for a cure rate of 90%; 1 case improved and 1 case persisted. A Medpor plus titanium mesh implant is a safe and effective treatment in the repair of orbital blowout fractures.
Effects of systematic sampling on satellite estimates of deforestation rates
International Nuclear Information System (INIS)
Steininger, M K; Godoy, F; Harper, G
2009-01-01
Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
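The core of such an assessment can be sketched by drawing a systematic grid sample from a full-coverage map and comparing the confidence interval with the actual error; the map here is random noise, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
# Full-coverage "map": 1 = deforested cell, 0 = not; true rate ~5%.
full_map = (rng.random((1000, 1000)) < 0.05).astype(float)
true_rate = full_map.mean()

def systematic_estimate(grid, step, offset=0):
    """Estimate the deforested fraction from every step-th cell in each direction."""
    sample = grid[offset::step, offset::step]
    p = sample.mean()
    se = np.sqrt(p * (1 - p) / sample.size)   # SRS approximation to the standard error
    return p, 1.96 * se                        # estimate and 95% CI half-width

p_hat, ci95 = systematic_estimate(full_map, step=20)
actual_error = abs(p_hat - true_rate)
```

On a spatially clustered deforestation pattern (unlike this random map), the simple-random-sampling formula can misstate the true sampling error, which is exactly the kind of discrepancy the study checks against full-coverage maps.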
Retrospect of the Ixtoc I blowout
Energy Technology Data Exchange (ETDEWEB)
Waldichuk, M
1980-07-01
The short-term consequences of the Ixtoc I oil spill are largely economic, and environmental effects will be minimized by rapid biodegradation in the mild climate of the Gulf of Mexico. Aside from the cost of the cleanup (the U.S. Coast Guard estimated its cost alone was $75,000/day), over 3 million bbl of oil was lost. One diver was killed. Mexico spent $132 million on capping the well and containing the environmental damage, and lost $87 million in oil revenues. Losses to the tourist industry and to commercial fishing, especially to the $100 million/yr Mexican shrimp fishery, are unknown but could be high. The shoreline effects of the oil slick drift, cleanup methods, and future Mexican plans to exploit its oil reserves are discussed.
Surgical timing of the orbital "blowout" fracture
DEFF Research Database (Denmark)
Damgaard, Olaf Ehlers; Larsen, Christian Grønhøj; Felding, Ulrik Ascanius
2016-01-01
). Patients were evaluated for diplopia and enophthalmos. Review Methods: We followed the statements of PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses). Pooled odds ratios were estimated with the fixed effects method of Mantel-Haenszel. Results: We identified 5 studies...... with available outcome data (N = 442). Patients in the late group showed an odds ratio of 3.3 (P =.027) for persistent postoperative diplopia as compared with the early group. We found no significant difference between the groups when assessing postoperative enophthalmos as an isolated symptom. Conclusion: We...... found a significantly increased risk of persistent diplopia in patients who were operated >14 days after the trauma....
Empirical estimation of astrophysical photodisintegration rates of 106Cd
Belyshev, S. S.; Kuznetsov, A. A.; Stopani, K. A.
2017-09-01
It has been noted in previous experiments that the ratio between the photoneutron and photoproton disintegration channels of 106Cd might be considerably different from predictions of statistical models. The thresholds of these reactions differ by several MeV, and the total astrophysical rate of photodisintegration of 106Cd, which is mostly produced in photonuclear reactions during p-process nucleosynthesis, might be noticeably different from the calculated value. In this work, the bremsstrahlung beam of a 55.6 MeV microtron and the photon activation technique are used to measure yields of photonuclear reaction products on isotopically-enriched cadmium targets. The obtained results are compared with predictions of statistical models. The experimental yields are used to estimate photodisintegration reaction rates on 106Cd, which are then used in nuclear network calculations to examine the effects of uncertainties on the produced abundances of p-nuclei.
Development of dose rate estimation system for FBR maintenance
Energy Technology Data Exchange (ETDEWEB)
Iizawa, Katsuyuki [Japan Nuclear Cycle Development Inst., Tsuruga Head Office, International Cooperation and Technology Development Center, Tsuruga, Fukui (Japan); Takeuchi, Jun; Yoshikawa, Satoru [Hitachi Engineering Company, Ltd., Hitachi, Ibaraki (Japan); Urushihara, Hiroshi [Ibaraki Hitachi Information Service Co., Ltd., Omika, Ibaraki (Japan)
2001-09-01
During maintenance activities on the primary sodium cooling system of an FBR, personnel radiation exposure arises mainly from the presence of radioactive corrosion products (CP). A CP behavior analysis code, PSYCHE, and a radiation shielding calculation code, QAD-CG, have been developed and applied to investigate the possible reduction of radiation exposure of workers. In order to make these evaluation methods more accessible to plant engineers, the user interface of the codes has been improved and an integrated system, including visualization of the calculated gamma-ray radiation dose-rate map, has been developed. The system has been verified by evaluating the distribution of the radiation dose-rate within the Monju primary heat transport system cells from the estimated saturated CP deposition and distribution which would be present following about 20 cycles of full power operation. (author)
Development of dose rate estimation system for FBR maintenance
International Nuclear Information System (INIS)
Iizawa, Katsuyuki; Takeuchi, Jun; Yoshikawa, Satoru; Urushihara, Hiroshi
2001-01-01
During maintenance activities on the primary sodium cooling system of an FBR, personnel radiation exposure arises mainly from the presence of radioactive corrosion products (CP). A CP behavior analysis code, PSYCHE, and a radiation shielding calculation code, QAD-CG, have been developed and applied to investigate the possible reduction of radiation exposure of workers. In order to make these evaluation methods more accessible to plant engineers, the user interface of the codes has been improved and an integrated system, including visualization of the calculated gamma-ray radiation dose-rate map, has been developed. The system has been verified by evaluating the distribution of the radiation dose-rate within the Monju primary heat transport system cells from the estimated saturated CP deposition and distribution which would be present following about 20 cycles of full power operation. (author)
A quasi-independence model to estimate failure rates
International Nuclear Information System (INIS)
Colombo, A.G.
1988-01-01
The use of a quasi-independence model to estimate failure rates is investigated. Gate valves of nuclear plants are considered, and two qualitative covariates are taken into account: plant location and reactor system. Independence between the two covariates and an exponential failure model are assumed. The failure rate of the components of a given system and plant is assumed to be a constant, but it may vary from one system to another and from one plant to another. This leads to the analysis of a contingency table. A particular feature of the model is the different operating time of the components in the various cells which can also be equal to zero. The concept of independence of the covariates is then replaced by that of quasi-independence. The latter definition, however, is used in a broader sense than usual. Suitable statistical tests are discussed and a numerical example illustrates the use of the method. (author)
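A minimal sketch of fitting a multiplicative rate model to such a contingency table: each cell (plant, system) has a failure count and an operating time, and under quasi-independence the rate factors as a row effect times a column effect. The counts, operating times, and the alternating closed-form Poisson maximum-likelihood updates below are illustrative assumptions, not the paper's data or exact procedure:

```python
import numpy as np

# Hypothetical failure counts n[i, j] and operating times t[i, j] (hours)
# for valves grouped by plant (rows) and reactor system (columns).
# A zero operating time means no components installed in that cell.
n = np.array([[3, 1, 0],
              [2, 0, 1]], dtype=float)
t = np.array([[1.0e4, 5.0e3, 0.0],
              [8.0e3, 4.0e3, 2.0e3]])

# Quasi-independence model: rate[i, j] = a[i] * b[j].
# Poisson maximum likelihood with exposures, fitted by alternating
# closed-form updates for the row and column effects.
a = np.ones(n.shape[0])
b = np.ones(n.shape[1])
for _ in range(200):
    a = n.sum(axis=1) / (t * b).sum(axis=1)
    b = n.sum(axis=0) / (t.T * a).sum(axis=1)

rates = np.outer(a, b)     # fitted failure rate per cell (failures/hour)
expected = rates * t       # fitted expected counts (zero where t is zero)
```

Cells with zero operating time contribute nothing to the fit but still receive a fitted rate, which is the practical point of the quasi-independence formulation.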
Optimized support vector regression for drilling rate of penetration estimation
Bodaghi, Asadollah; Ansari, Hamid Reza; Gholami, Mahsa
2015-12-01
In the petroleum industry, drilling optimization involves the selection of operating conditions for achieving the desired depth with the minimum expenditure while requirements of personnel safety, environment protection, adequate information on penetrated formations and productivity are fulfilled. Since drilling optimization is highly dependent on the rate of penetration (ROP), estimation of this parameter is of great importance during well planning. In this research, a novel approach called 'optimized support vector regression' is employed for making a formulation between input variables and ROP. Algorithms used for optimizing the support vector regression are the genetic algorithm (GA) and the cuckoo search algorithm (CS). Optimization improved the support vector regression performance by selecting proper values for its parameters. In order to evaluate the ability of the optimization algorithms to enhance SVR performance, their results were compared to the hybrid of pattern search and grid search (HPG), which is conventionally employed for optimizing SVR. The results demonstrated that the CS algorithm improved the prediction accuracy of SVR further than both the GA and HPG. Moreover, the predictive model derived from a back-propagation neural network (BPNN), which is the traditional approach for estimating ROP, was selected for comparison with CSSVR. The comparative results revealed the superiority of CSSVR. This study inferred that CSSVR is a viable option for precise estimation of ROP.
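A hedged sketch of the tuned-SVR idea: synthetic data stands in for real drilling logs, and an exhaustive grid search stands in for the GA/cuckoo search metaheuristics, since only the search strategy over SVR hyperparameters differs between the methods compared in the paper:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic stand-in for drilling features (e.g. weight on bit, rotary
# speed, mud weight); the target plays the role of ROP. Made-up relation.
X = rng.normal(size=(200, 3))
y = (5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1]
     + 0.5 * X[:, 0] * X[:, 2] + rng.normal(0.0, 0.2, 200))

# Grid search over SVR hyperparameters; a metaheuristic (GA, CS) would
# explore the same space adaptively instead of exhaustively.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(
    model,
    param_grid={"svr__C": [1.0, 10.0, 100.0], "svr__epsilon": [0.01, 0.1]},
    cv=5,
)
grid.fit(X, y)
r2 = grid.best_score_   # mean cross-validated R^2 of the best setting
```

The design choice the paper examines is exactly this inner search: a smarter optimizer finds better `C`/`epsilon` values at lower cost than dense grids.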
Estimation of human core temperature from sequential heart rate observations
International Nuclear Information System (INIS)
Buller, Mark J; Tharion, William J; Cheuvront, Samuel N; Montain, Scott J; Kenefick, Robert W; Castellani, John; Latzka, William A; Hoyt, Reed W; Roberts, Warren S; Richter, Mark; Jenkins, Odest Chadwicke
2013-01-01
Core temperature (CT) in combination with heart rate (HR) can be a good indicator of impending heat exhaustion for occupations involving exposure to heat, heavy workloads, and wearing protective clothing. However, continuously measuring CT in an ambulatory environment is difficult. To address this problem we developed a model to estimate the time course of CT using a series of HR measurements as a leading indicator using a Kalman filter. The model was trained using data from 17 volunteers engaged in a 24 h military field exercise (air temperatures 24–36 °C, and 42%–97% relative humidity and CTs ranging from 36.0–40.0 °C). Validation data from laboratory and field studies (N = 83) encompassing various combinations of temperature, hydration, clothing, and acclimation state were examined using the Bland–Altman limits of agreement (LoA) method. We found our model had an overall bias of −0.03 ± 0.32 °C and that 95% of all CT estimates fall within ±0.63 °C (>52 000 total observations). While the model for estimating CT is not a replacement for direct measurement of CT (literature comparisons of esophageal and rectal methods average LoAs of ±0.58 °C) our results suggest it is accurate enough to provide practical indication of thermal work strain for use in the work place. (paper)
Estimation of human core temperature from sequential heart rate observations.
Buller, Mark J; Tharion, William J; Cheuvront, Samuel N; Montain, Scott J; Kenefick, Robert W; Castellani, John; Latzka, William A; Roberts, Warren S; Richter, Mark; Jenkins, Odest Chadwicke; Hoyt, Reed W
2013-07-01
Core temperature (CT) in combination with heart rate (HR) can be a good indicator of impending heat exhaustion for occupations involving exposure to heat, heavy workloads, and wearing protective clothing. However, continuously measuring CT in an ambulatory environment is difficult. To address this problem we developed a model to estimate the time course of CT using a series of HR measurements as a leading indicator using a Kalman filter. The model was trained using data from 17 volunteers engaged in a 24 h military field exercise (air temperatures 24-36 °C, and 42%-97% relative humidity and CTs ranging from 36.0-40.0 °C). Validation data from laboratory and field studies (N = 83) encompassing various combinations of temperature, hydration, clothing, and acclimation state were examined using the Bland-Altman limits of agreement (LoA) method. We found our model had an overall bias of -0.03 ± 0.32 °C and that 95% of all CT estimates fall within ±0.63 °C (>52 000 total observations). While the model for estimating CT is not a replacement for direct measurement of CT (literature comparisons of esophageal and rectal methods average LoAs of ±0.58 °C) our results suggest it is accurate enough to provide practical indication of thermal work strain for use in the work place.
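The approach above can be sketched with a scalar Kalman filter: core temperature follows a random walk, and heart rate is treated as a noisy observation that is a function of CT. The linear HR-vs-CT mapping and all noise variances below are made-up illustration values, not the published model coefficients:

```python
import numpy as np

def estimate_ct(hr_series, ct0=37.0):
    """Kalman-filter estimate of core temperature (CT) from heart rate (HR).

    Illustrative sketch only: CT is modeled as a random walk, and HR as a
    noisy linear function of CT. The mapping hr = m*ct + c and the noise
    variances q, r are hypothetical values, not the paper's fitted model.
    """
    m, c = 40.0, -1400.0   # hypothetical HR-vs-CT slope and intercept
    q, r = 0.01, 25.0      # process and observation noise variances
    ct, p = ct0, 1.0       # state estimate and its variance
    out = []
    for hr in hr_series:
        p = p + q                          # predict: random-walk CT
        k = p * m / (m * m * p + r)        # Kalman gain
        ct = ct + k * (hr - (m * ct + c))  # update with the HR observation
        p = (1.0 - k * m) * p
        out.append(ct)
    return np.array(out)

# Simulated exercise bout: CT ramps 37 -> 38.5 degC; HR observed with noise.
rng = np.random.default_rng(2)
ct_true = np.linspace(37.0, 38.5, 300)
hr_obs = 40.0 * ct_true - 1400.0 + rng.normal(0.0, 5.0, ct_true.size)
ct_est = estimate_ct(hr_obs)
```

The filter trades off the HR observation noise against the assumed slowness of CT drift, which is why a sequence of HR values can track CT even though any single HR reading is a poor thermometer.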
Increasing fMRI sampling rate improves Granger causality estimates.
Directory of Open Access Journals (Sweden)
Fa-Hsuan Lin
Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC-interpolated additional data points to the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves the Granger causality connectivity analysis of the human brain.
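At its core, a Granger causality test compares two nested autoregressions: does adding the past of x reduce the prediction error of y? A minimal sketch on synthetic series (the coupling strength, lag structure, and noise levels are made-up, not the fMRI data):

```python
import numpy as np

def granger_f(x, y, lags=2):
    """F statistic for 'x Granger-causes y': compare a restricted model
    (y regressed on its own past) against a full model that also
    includes the past of x."""
    n = len(y)
    Y = y[lags:]
    lag_y = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    lag_x = np.column_stack([x[lags - k:n - k] for k in range(1, lags + 1)])
    ones = np.ones((n - lags, 1))
    X_r = np.hstack([ones, lag_y])           # restricted design matrix
    X_f = np.hstack([ones, lag_y, lag_x])    # full design matrix
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(X_r), rss(X_f)
    df2 = (n - lags) - X_f.shape[1]
    return ((rss_r - rss_f) / lags) / (rss_f / df2)

# Synthetic series in which x drives y with a one-sample delay.
rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.normal()

f_xy = granger_f(x, y)   # large: the past of x predicts y
f_yx = granger_f(y, x)   # near 1: the past of y adds nothing about x
```

Downsampling the series blurs the one-sample causal delay into a single coarse sample, which is the intuition behind the paper's finding that a 20-fold faster sampling rate rescues the test.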
Commercial Discount Rate Estimation for Efficiency Standards Analysis
Energy Technology Data Exchange (ETDEWEB)
Fujita, K. Sydny [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2016-04-13
Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards are a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end-user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).
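The sensitivity of a life-cycle-cost result to the discount rate reduces to simple present-value arithmetic. The savings amount, added first cost, lifetime, and candidate rates below are made-up illustration values, not DOE rulemaking inputs:

```python
# Present value of a stream of annual energy-cost savings, discounted at
# a commercial discount rate. All numbers are hypothetical.
annual_savings = 120.0        # $/year operating-cost reduction
first_cost_increase = 800.0   # $ added purchase price at higher efficiency
lifetime = 12                 # years of equipment service

def npv(rate):
    """Net present value of the efficiency upgrade at a given discount rate."""
    pv_savings = sum(annual_savings / (1.0 + rate) ** t
                     for t in range(1, lifetime + 1))
    return pv_savings - first_cost_increase

npv_low, npv_high = npv(0.03), npv(0.12)   # two candidate discount rates
```

With these numbers the upgrade looks economically justified at 3% but not at 12%, which is exactly why the discount rate choice "implicitly plays a key role" in the analysis.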
Automatic estimation of pressure-dependent rate coefficients
Allen, Joshua W.; Goldsmith, C. Franklin; Green, William H.
2012-01-01
A general framework is presented for accurately and efficiently estimating the phenomenological pressure-dependent rate coefficients for reaction networks of arbitrary size and complexity using only high-pressure-limit information. Two aspects of this framework are discussed in detail. First, two methods of estimating the density of states of the species in the network are presented, including a new method based on characteristic functional group frequencies. Second, three methods of simplifying the full master equation model of the network to a single set of phenomenological rates are discussed, including a new method based on the reservoir state and pseudo-steady state approximations. Both sets of methods are evaluated in the context of the chemically-activated reaction of acetyl with oxygen. All three simplifications of the master equation are usually accurate, but each fails in certain situations, which are discussed. The new methods usually provide good accuracy at a computational cost appropriate for automated reaction mechanism generation.
A Slow Streamer Blowout at the Sun and Ulysses
Seuss, S. T.; Bemporad, A.; Poletto, G.
2004-01-01
On 10 June 2000 a streamer on the southeast limb slowly disappeared from LASCO/C2 over approximately 10 hours. A small CME was reported in C2. A substantial interplanetary CME (ICME) was later detected at Ulysses, which was at quadrature with the Sun and SOHO at the time. This detection illustrates the properties of an ICME for a known solar source and demonstrates that the identification can be done even beyond 3 AU. Slow streamer blowouts such as this have long been known but are little studied. We report on the SOHO observation of a coronal mass ejection (CME) on the solar limb and the subsequent in situ detection at Ulysses, which was near quadrature at the time, above the location of the CME. SOHO-Ulysses quadrature was 13 June, when Ulysses was 3.36 AU from the Sun and 58.2 degrees south of the equator off the east limb. The slow streamer blowout was on 10 June, when the SOHO-Sun-Ulysses angle was 87 degrees.
Analysis of blowout fractures using cine mode MRI
International Nuclear Information System (INIS)
Kawahara, Masaaki; Shiihara, Kumiko; Kimura, Hisashi; Fukai, Sakuko; Tabuchi, Akio; Kojo, Tuyoshi
1995-01-01
By observing conventional CT and MRI images, it is difficult to distinguish extension failure from adhesion, bone fracture or damage to the extraocular muscle, any one of which may be the direct cause of the eye movement disturbance accompanying blowout fracture. We therefore carried out dynamic analysis of eye movement disturbance using cine mode MRI. We put seven fixation points in the gantry of the MRI and filmed eye movement disturbances by the gradient echo method, using a surface coil and holding the vision on each fixation point. We also video recorded the CRT monitor of the MRI to obtain dynamic MRI images. The subjects comprised 5 cases (7-23 years old). In 4 cases, we started orthoptic treatment, saccadic eye movement training, convergence training and fusional amplitude training after surgery, with only orthoptic treatment in the 5th case. In all cases, fusion area improvement was recognized during training. In 2 cases examined by cine mode MRI before and after surgery, we observed improved eye movement after training, the effectiveness of which was thereby proven. Also, using cine mode MRI we were able to determine the character of incarcerated tissue and the cause of eye movement disturbance. We conclude that in blowout fracture, cine mode MRI may be useful in selecting treatment and observing its effectiveness. (author)
Estimation of adjusted rate differences using additive negative binomial regression.
Donoghoe, Mark W; Marschner, Ian C
2016-08-15
Rate differences are an important effect measure in biostatistics and provide an alternative perspective to rate ratios. When the data are event counts observed during an exposure period, adjusted rate differences may be estimated using an identity-link Poisson generalised linear model, also known as additive Poisson regression. A problem with this approach is that the assumption of equality of mean and variance rarely holds in real data, which often show overdispersion. An additive negative binomial model is the natural alternative to account for this; however, standard model-fitting methods are often unable to cope with the constrained parameter space arising from the non-negativity restrictions of the additive model. In this paper, we propose a novel solution to this problem using a variant of the expectation-conditional maximisation-either algorithm. Our method provides a reliable way to fit an additive negative binomial regression model and also permits flexible generalisations using semi-parametric regression functions. We illustrate the method using a placebo-controlled clinical trial of fenofibrate treatment in patients with type II diabetes, where the outcome is the number of laser therapy courses administered to treat diabetic retinopathy. An R package is available that implements the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
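Before any regression adjustment, a rate difference from event counts and person-time is simple arithmetic. The counts below are invented, and the Wald interval shown is the unadjusted, Poisson-variance special case of what an additive count-regression model generalises to covariates:

```python
import math

# Hypothetical two-arm trial: event counts observed over person-time at risk.
events_treat, time_treat = 120, 2400.0   # events, person-years
events_ctrl, time_ctrl = 180, 2350.0

rate_treat = events_treat / time_treat
rate_ctrl = events_ctrl / time_ctrl
rd = rate_treat - rate_ctrl              # rate difference, events/person-year

# Wald 95% CI, using the Poisson variance of each rate (count / time^2).
se = math.sqrt(events_treat / time_treat ** 2 + events_ctrl / time_ctrl ** 2)
ci = (rd - 1.96 * se, rd + 1.96 * se)
```

Overdispersion (variance exceeding the mean) would make this Poisson-based interval too narrow, which is the motivation for the negative binomial variant.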
Estimating implied rates of discount in healthcare decision-making.
West, R R; McNabb, R; Thompson, A G H; Sheldon, T A; Grimley Evans, J
2003-01-01
To consider whether implied rates of discounting from the perspectives of individual and society differ, and whether implied rates of discounting in health differ from those implied in choices involving finance or "goods". The study comprised first a review of economics, health economics and social science literature and then an empirical estimate of implied rates of discounting in four fields: personal financial, personal health, public financial and public health, in representative samples of the public and of healthcare professionals. Samples were drawn in the former county and health authority district of South Glamorgan, Wales. The public sample was a representative random sample of men and women, aged over 18 years and drawn from electoral registers. The health professional sample was drawn at random with the cooperation of professional leads to include doctors, nurses, professions allied to medicine, public health, planners and administrators. The literature review revealed few empirical studies in representative samples of the population, few direct comparisons of public with private decision-making and few direct comparisons of health with financial discounting. Implied rates of discounting varied widely and studies suggested that discount rates are higher the smaller the value of the outcome and the shorter the period considered. The relationship between implied discount rates and personal attributes was mixed, possibly reflecting the limited nature of the samples. Although there were few direct comparisons, some studies found that individuals apply different rates of discount to social compared with private comparisons and health compared with financial. The present study also found a wide range of implied discount rates, with little systematic effect of age, gender, educational level or long-term illness. There was evidence, in both samples, that people chose a lower rate of discount in comparisons made on behalf of society than in comparisons made for
Estimating glomerular filtration rate in a population-based study
Directory of Open Access Journals (Sweden)
Anoop Shankar
2010-07-01
Anoop Shankar, Kristine E Lee, Barbara EK Klein, Paul Muntner, Peter C Brazy, Karen J Cruickshanks, F Javier Nieto, Lorraine G Danforth, Carla R Schubert, Michael Y Tsai, Ronald Klein (Department of Community Medicine, West Virginia University School of Medicine, Morgantown, WV, USA; Departments of Ophthalmology and Visual Sciences, Medicine, and Population Health Sciences, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA; Department of Community Medicine, Mount Sinai School of Medicine, NY, USA; Department of Laboratory Medicine and Pathology, University of Minnesota, Minneapolis, MN, USA). Background: Glomerular filtration rate (GFR)-estimating equations are used to determine the prevalence of chronic kidney disease (CKD) in population-based studies. However, it has been suggested that since the commonly used GFR equations were originally developed from samples of patients with CKD, they underestimate GFR in healthy populations. Few studies have made side-by-side comparisons of the effect of various estimating equations on the prevalence estimates of CKD in a general population sample. Patients and methods: We examined a population-based sample comprising adults from Wisconsin (age, 43–86 years; 56% women). We compared the prevalence of CKD, defined as a GFR of <60 mL/min per 1.73 m2 estimated from serum creatinine, by applying various commonly used equations including the modification of diet in renal disease (MDRD) equation, Cockcroft–Gault (CG) equation, and the Mayo equation. We compared the performance of these equations against the CKD definition of cystatin C >1.23 mg/L. Results: We found that the prevalence of CKD varied widely among different GFR equations. Although the prevalence of CKD was 17.2% with the MDRD equation and 16.5% with the CG equation, it was only 4.8% with the Mayo equation. Only 24% of those identified to have GFR in the range of 50–59 mL/min per 1
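For reference, two of the equations compared above can be sketched directly. The coefficients are the commonly published forms (the IDMS-traceable 4-variable MDRD study equation and Cockcroft–Gault); the example patient is invented:

```python
def mdrd_egfr(scr_mg_dl, age, female=False, black=False):
    """4-variable MDRD study equation (IDMS-traceable form),
    eGFR in mL/min per 1.73 m^2."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def cockcroft_gault(scr_mg_dl, age, weight_kg, female=False):
    """Cockcroft-Gault creatinine clearance in mL/min (not BSA-indexed)."""
    crcl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical patient: 65-year-old woman, serum creatinine 1.1 mg/dL, 70 kg.
egfr = mdrd_egfr(1.1, 65, female=True)
crcl = cockcroft_gault(1.1, 65, 70, female=True)
ckd = egfr < 60.0   # CKD threshold used in the abstract
```

Different functional forms and creatinine calibrations are why the same serum creatinine can land a patient above or below the 60 mL/min per 1.73 m2 threshold depending on the equation chosen.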
SEE rate estimation based on diffusion approximation of charge collection
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on diffusion approximation of the charge collection by an IC element and geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.
Estimation of mortality rates in stage-structured population
Wood, Simon N
1991-01-01
The stated aims of the Lecture Notes in Biomathematics allow for work that is "unfinished or tentative". This volume is offered in that spirit. The problem addressed is one of the classics of statistical ecology, the estimation of mortality rates from stage-frequency data, but in tackling it we found ourselves making use of ideas and techniques very different from those we expected to use, and in which we had no previous experience. Specifically we drifted towards consideration of some rather specific curve and surface fitting and smoothing techniques. We think we have made some progress (otherwise why publish?), but are acutely aware of the conceptual and statistical clumsiness of parts of the work. Readers with sufficient expertise to be offended should regard the monograph as a challenge to do better. The central theme in this book is a somewhat complex algorithm for mortality estimation (detailed at the end of Chapter 4). Because of its complexity, the job of implementing the method is intimidating. Any r...
Gambling disorder: estimated prevalence rates and risk factors in Macao.
Wu, Anise M S; Lai, Mark H C; Tong, Kwok-Kit
2014-12-01
An excessive, problematic gambling pattern has been regarded as a mental disorder in the Diagnostic and Statistical Manual for Mental Disorders (DSM) for more than 3 decades (American Psychiatric Association [APA], 1980). In this study, its latest prevalence in Macao (one of very few cities with legalized gambling in China and the Far East) was estimated with 2 major changes in the diagnostic criteria, suggested by the 5th edition of DSM (APA, 2013): (a) removing the "Illegal Act" criterion, and (b) lowering the threshold for diagnosis. A random, representative sample of 1,018 Macao residents was surveyed with a phone poll design in January 2013. After the 2 changes were adopted, the present study showed that the estimated prevalence rate of gambling disorder was 2.1% of the Macao adult population. Moreover, the present findings also provided empirical support to the application of these 2 recommended changes when assessing symptoms of gambling disorder among Chinese community adults. Personal risk factors of gambling disorder, namely being male, having low education, a preference for casino gambling, as well as high materialism, were identified.
Estimation of respiratory rate from thermal videos of preterm infants.
Pereira, Carina Barbosa; Heimann, Konrad; Venema, Boudewijn; Blazek, Vladimir; Czaplik, Michael; Leonhardt, Steffen
2017-07-01
Studies have demonstrated that respiratory rate (RR) is a good predictor of the patient's condition as well as an early marker of patient deterioration and physiological distress. However, it is also referred to as "the neglected vital parameter", mainly due to shortcomings of current monitoring techniques. Moreover, in preterm infants, the removal of adhesive electrodes causes epidermal stripping, skin disruption and, with it, pain. This paper proposes a new algorithm for estimation of RR in thermal videos of moderate preterm infants. It uses the temperature modulation around the nostrils over the respiratory cycle to extract this vital parameter. To compensate for movement artifacts, the approach incorporates a tracking algorithm. In addition, a new reliable and accurate algorithm for robust estimation of local (breath-to-breath) intervals was included. To evaluate the performance of this approach, thermal recordings of four moderate preterm infants were acquired. Results were compared with RR derived from body surface electrocardiography. The results showed an excellent agreement between thermal imaging and the gold standard. On average, the relative error between the two monitoring techniques was 3.42%. In summary, infrared thermography may be a clinically relevant alternative to conventional sensors, due to its high thermal resolution and outstanding characteristics.
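The underlying signal-processing step, recovering a breathing rate from the periodic temperature modulation at the nostrils, can be sketched as a spectral-peak search. The signal below is synthetic (a made-up preterm RR of about 50 breaths/min), not the paper's thermal data, and the paper additionally uses a breath-to-breath interval algorithm rather than a single FFT:

```python
import numpy as np

# Synthetic nostril-temperature trace: breathing modulates temperature
# periodically; amplitude, noise and frame rate are illustrative values.
fs = 30.0                            # camera frame rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)   # 60 s recording
rr_true = 50.0 / 60.0                # breaths per second (~50 breaths/min)
rng = np.random.default_rng(4)
signal = 0.2 * np.sin(2 * np.pi * rr_true * t) + 0.05 * rng.normal(size=t.size)

# Respiratory rate = dominant spectral peak within a plausible RR band.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
band = (freqs >= 0.3) & (freqs <= 2.0)   # 18-120 breaths/min
rr_est = 60.0 * freqs[band][np.argmax(power[band])]   # breaths per minute
```

Restricting the search to a physiological band keeps motion and drift components from being mistaken for breathing.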
Bioavailability of contaminants estimated from uptake rates into soil invertebrates
International Nuclear Information System (INIS)
Straalen, N.M. van; Donker, M.H.; Vijver, M.G.; Gestel, C.A.M. van
2005-01-01
It is often argued that the concentration of a pollutant inside an organism is a good indicator of its bioavailability; however, we show that the rate of uptake, not the concentration itself, is the superior predictor. In a study on zinc accumulation and toxicity to isopods (Porcellio scaber), the dietary EC50 for the effect on body growth was rather constant and reproducible, while the internal EC50 varied depending on the accumulation history of the animals. From the data a critical value for zinc accumulation in P. scaber was estimated as 53 μg/g/wk. We review toxicokinetic models applicable to time-series measurements of concentrations in invertebrates. The initial slope of the uptake curve is proposed as an indicator of bioavailability. To apply the dynamic concept of bioavailability in risk assessment, a set of representative organisms should be chosen and standardized protocols developed for exposure assays by which suspect soils can be evaluated. - Sublethal toxicity of zinc to isopods suggests that bioavailability of soil contaminants is best measured by uptake rates, not by body burdens
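The "initial slope of the uptake curve" can be made concrete with a one-compartment toxicokinetic model. The rate constants and soil concentration below are made-up illustration values, not the study's fitted parameters:

```python
import numpy as np

# One-compartment toxicokinetic model: dC/dt = k_u * C_env - k_e * C.
# The initial slope of the uptake curve, k_u * C_env, is the proposed
# bioavailability indicator. All parameter values are hypothetical.
k_u = 0.08      # uptake rate constant (g soil / g tissue / week)
k_e = 0.25      # elimination rate constant (1/week)
c_env = 500.0   # zinc concentration in soil (ug/g)

t = np.linspace(0.0, 8.0, 200)   # exposure time (weeks)
c_body = (k_u / k_e) * c_env * (1.0 - np.exp(-k_e * t))   # body burden

initial_slope = k_u * c_env      # uptake rate at t = 0 (ug/g/week)
steady_state = (k_u / k_e) * c_env   # plateau body burden (ug/g)
```

The point of the abstract is visible here: the plateau (`steady_state`) depends on elimination and exposure history, while the initial slope isolates the uptake flux from the soil, which is what bioavailability is meant to capture.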
Clinical features and MRI findings of blow-out fracture
International Nuclear Information System (INIS)
Yamanouchi, Yasuo; Yasuda, Takasumi; Kawamoto, Keiji; Inagaki, Takayuki; Someda, Kuniyuki.
1996-01-01
Precise anatomical understanding of orbital blow-out fracture lesions is necessary for the treatment of patients. Retrospectively, MRI findings were compared with the clinical features of pure-type blow-out fractures, and the efficacy of MRI in influencing a decision for surgical intervention was evaluated. Eighteen pediatric cases (15 boys, 3 girls) were evaluated and compared with adult cases. The patients were classified into three categories (Fig. 1) and two types (Fig. 2) in accordance with the degree of protrusion of fat tissue. The degree of muscle protrusion also was divided into three categories (Fig. 3). Both muscle and fat tissue were protruding from the fracture site in 14 cases. Fat tissue protrusion alone was found in 3 cases. In contrast, no protrusion was seen in one case. The incarcerated type of fat prolapse was found in 40% of cases, while muscle tissue prolapse was found in 75% of patients. Marginal irregularity or swelling of muscle was observed in 11 patients. There was good correlation between ocular motor disturbance and MRI findings. Disturbance of eyeball movement was observed in all patients with either incarcerated fat tissue or marginal irregularity or swelling of muscle. In contrast, restriction of eyeball movement was rare in cases of no incarceration, even if the fracture was wide. Deformity or marginal irregularity of the ocular muscle demonstrated on MRI may suggest damage and adhesion to the muscle wall. When MRI reveals incarceration or severe prolapse of fat tissue, or deformity and marginal irregularity of the ocular muscle, surgical intervention should be considered. (author)
Clinical features and MRI findings of blow-out fracture
Energy Technology Data Exchange (ETDEWEB)
Yamanouchi, Yasuo; Yasuda, Takasumi; Kawamoto, Keiji [Kansai Medical Univ., Moriguchi, Osaka (Japan); Inagaki, Takayuki; Someda, Kuniyuki
1996-06-01
Precise anatomical understanding of orbital blow-out fracture lesions is necessary for the treatment of patients. MRI findings were retrospectively compared with the clinical features of pure-type blow-out fractures, and the efficacy of MRI in informing the decision for surgical intervention was evaluated. Eighteen pediatric cases (15 boys, 3 girls) were evaluated and compared with adult cases. The patients were classified into three categories (Fig. 1) and two types (Fig. 2) according to the degree of protrusion of fat tissue. The degree of muscle protrusion was also divided into three categories (Fig. 3). Both muscle and fat tissue protruded from the fracture site in 14 cases; fat tissue protrusion alone was found in 3 cases, and no protrusion was seen in one case. The incarcerated type of fat prolapse was found in 40% of cases, while muscle tissue prolapse was found in 75% of patients. Marginal irregularity or swelling of muscle was observed in 11 patients. There was good correlation between ocular motor disturbance and MRI findings. Disturbance of eyeball movement was observed in all patients with either incarcerated fat tissue or marginal irregularity or swelling of muscle. In contrast, restriction of eyeball movement was rare in cases without incarceration, even if the fracture was wide. Deformity or marginal irregularity of the ocular muscle demonstrated on MRI may suggest damage and adhesion to the muscle wall. When MRI reveals incarceration or severe prolapse of fat tissue, or deformity and marginal irregularity of the ocular muscle, surgical intervention should be considered. (author)
Diagnosis of magnetic resonance imaging (MRI) for blowout fracture. Three advantages of MRI
International Nuclear Information System (INIS)
Nishida, Yasuhiro; Aoki, Yoshiko; Hayashi, Osamu; Kimura, Makiko; Murata, Toyotaka; Ishida, Youichi; Iwami, Tatsuya; Kani, Kazutaka
1999-01-01
Magnetic resonance imaging (MRI) gives a much more detailed picture of soft tissue than computerized tomography (CT). In blowout fracture cases, it makes the incarcerated orbital tissue very easy to observe. We performed MRI in 19 blowout fracture cases. After evaluating the images, we found three advantages of MRI. First, even a small herniation of the orbital contents can easily be detected, because the orbital fatty tissue contrasts well with the surrounding tissues on MRI. Second, the incarcerated tissues can be clearly differentiated, because MRI shows a clear contrast between the orbital fatty tissue and the extraocular muscle. Third, the course of the incarcerated muscle belly can be followed, because slices can be taken in any necessary direction with MRI. These advantages are very important in the diagnosis of blowout fractures. MRI should be employed in blowout fracture cases in addition to CT. (author)
Estimating progression rates for human papillomavirus infection from epidemiological data.
Jit, Mark; Gay, Nigel; Soldan, Kate; Hong Choi, Yoon; Edmunds, William John
2010-01-01
A Markov model was constructed in order to estimate type-specific rates of cervical lesion progression and regression in women with high-risk human papillomavirus (HPV). The model was fitted to age- and type-specific data regarding the HPV DNA and cytological status of women undergoing cervical screening in a recent screening trial, as well as cervical cancer incidence. It incorporates different assumptions about the way lesions regress, the accuracy of cytological screening, the specificity of HPV DNA testing, and the age-specific prevalence of HPV infection. Combinations of assumptions generate 162 scenarios for squamous cell carcinomas and 54 scenarios for adenocarcinomas. Simulating an unscreened cohort of women infected with high-risk HPV indicates that the probability of an infection continuing to persist and to develop into invasive cancer depends on the length of time it has already persisted. The scenarios and parameter sets that produce the best fit to available epidemiological data provide a basis for modeling the natural history of HPV infection and disease.
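A sketch of how such a Markov model of lesion progression and regression can be run; the states and annual transition probabilities below are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np

# States: 0 = HPV infection (no lesion), 1 = low-grade lesion,
#         2 = high-grade lesion, 3 = invasive cancer (absorbing),
#         4 = cleared (absorbing). Annual transition probabilities
#         are hypothetical placeholders for illustration.
P = np.array([
    [0.70, 0.10, 0.02, 0.00, 0.18],   # infected: persist/progress/clear
    [0.15, 0.60, 0.10, 0.00, 0.15],   # low-grade lesion: may regress
    [0.00, 0.10, 0.83, 0.05, 0.02],   # high-grade lesion: may progress to cancer
    [0.00, 0.00, 0.00, 1.00, 0.00],   # cancer is absorbing
    [0.00, 0.00, 0.00, 0.00, 1.00],   # cleared is absorbing
])

def state_distribution(p0, P, years):
    """Distribution over states after `years` annual transitions."""
    return p0 @ np.linalg.matrix_power(P, years)

p0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # cohort starts newly infected
dist10 = state_distribution(p0, P, 10)
cancer_prob_10y = dist10[3]                # cumulative cancer probability at 10 years
```

Because cancer can only be reached through the lesion states, the cumulative cancer probability in such a chain grows with the time an infection has already persisted, which is the dependence the abstract describes.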
Estimated Glomerular Filtration Rate; Laboratory Implementation and Current Global Status.
Miller, W Greg; Jones, Graham R D
2018-01-01
In 2002, the Kidney Disease Outcomes Quality Initiative guidelines for identifying and treating CKD recommended that clinical laboratories report estimated glomerular filtration rate (eGFR) with every creatinine result to assist clinical practitioners in identifying people with early-stage CKD. At that time, the original Modification of Diet in Renal Disease (MDRD) Study equation based on serum creatinine measurements was recommended for calculating eGFR. Because the MDRD Study equation was developed using a nonstandardized creatinine method, a Laboratory Working Group of the National Kidney Disease Education Program was formed and implemented standardized calibration traceability for all creatinine methods from global manufacturers by approximately 2010. A modified MDRD Study equation for use with standardized creatinine was developed. The Chronic Kidney Disease Epidemiology Collaboration developed a new equation in 2009 that was more accurate than the MDRD Study equation at values above 60 mL/min/1.73 m². As of 2017, reporting eGFR with creatinine is almost universal in many countries. A reference system for cystatin C became available in 2010, and manufacturers are in the process of standardizing cystatin C assays. Equations for eGFR based on standardized cystatin C alone and with creatinine are now available from the Chronic Kidney Disease Epidemiology Collaboration and other groups. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
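The 2009 CKD-EPI creatinine equation mentioned above is usually quoted in a piecewise form; a minimal sketch of that published form (verify the coefficients against the primary reference before any clinical use):

```python
def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine equation; returns eGFR in mL/min/1.73 m^2.
    scr_mg_dl: standardized serum creatinine in mg/dL."""
    kappa = 0.7 if female else 0.9        # sex-specific creatinine breakpoint
    alpha = -0.329 if female else -0.411  # exponent below the breakpoint
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# A 50-year-old man with standardized creatinine 0.9 mg/dL
egfr = ckd_epi_2009(0.9, 50, female=False)   # roughly 99 mL/min/1.73 m^2
```

The two-slope form around the breakpoint kappa is what makes this equation more accurate than the MDRD Study equation at higher GFR values.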
Estimating the effects of Exchange and Interest Rates on Stock ...
African Journals Online (AJOL)
The monthly closing returns of All-share index, exchange rates and interest rates ... The interest rate also showed a negative relationship but insignificant at the ... is a prerequisite for attracting investments especially foreign direct investment.
Standardizing estimates of the Plasmodium falciparum parasite rate
Directory of Open Access Journals (Sweden)
Smith David L
2007-09-01
Background: The Plasmodium falciparum parasite rate (PfPR) is a commonly reported index of malaria transmission intensity. PfPR rises after birth to a plateau before declining in older children and adults. Studies of populations with different age ranges generally report average PfPR, so age is an important source of heterogeneity in reported PfPR data. This confounds simple comparisons of PfPR surveys conducted at different times or places. Methods: Several algorithms for standardizing PfPR were developed using 21 studies that stratify PfPR by age in detail. An additional 121 studies were found that recorded PfPR from the same population over at least two different age ranges; these paired estimates were used to evaluate the algorithms. The best algorithm was judged to be the one that explained most of the variance when converting the PfPR pairs from one age range to another. Results: The analysis suggests that the relationship between PfPR and age is predictable across the observed range of malaria endemicity. PfPR reaches a peak after about two years and remains fairly constant in older children until age ten before declining throughout adolescence and adulthood. The PfPR pairs were poorly correlated; using one to predict the other would explain only 5% of the total variance. By contrast, the PfPR predicted by the best algorithm explained 72% of the variance. Conclusion: The PfPR in older children is useful for standardization because it has good biological, epidemiological and statistical properties. It is also historically consistent with the classical categories of hypoendemic, mesoendemic and hyperendemic malaria. This algorithm provides a reliable method for standardizing PfPR for the purposes of comparing studies and mapping malaria endemicity. The scripts for doing so are freely available to all.
Li, Qi; Shi, Hui; Yang, Duoxing; Wei, Xiaochen
2017-02-01
Carbon dioxide (CO2) blowout from a wellbore is regarded as a potential environmental risk of a CO2 capture and storage (CCS) project. In this paper, an assumed blowout of a wellbore was examined for China's Shenhua CCS demonstration project. The significant factors that influenced the diffusion of CO2 were identified by using a response surface method with the Box-Behnken experiment design. The numerical simulations showed that the mass emission rate of CO2 from the source and the ambient wind speed have significant influence on the area of interest (the area of high CO2 concentration above 30,000 ppm). There is a strong positive correlation between the mass emission rate and the area of interest, but a strong negative correlation between the ambient wind speed and the area of interest. Several other variables have very little influence on the area of interest, e.g., the temperature of CO2, ambient temperature, relative humidity, and stability class values. Under the weather conditions at the Shenhua CCS demonstration site at the time of the modeled CO2 blowout, the largest diffusion distance of CO2 in the downwind direction did not exceed 200 m along the centerline. When the ambient wind speed is in the range of 0.1-2.0 m/s and the mass emission rate is in the range of 60-120 kg/s, the diffusion of CO2 is at the most dangerous level (i.e., almost all Grade Four marks in the risk matrix). Therefore, if the injection of CO2 takes place in a region that has relatively low perennial wind speed, special attention should be paid to the formulation of pre-planned emergency measures in case there is a leakage accident. The proposed risk matrix that classifies and grades blowout risks can be used as a reference for the development of appropriate regulations. This work may offer some indicators for developing risk profiles and emergency responses for CO2 blowouts.
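A toy Gaussian plume sketch (not the dispersion model used in the study; the sigma parameterization and unit conversion below are rough placeholders) reproduces the two reported trends: the hazard distance grows with the emission rate and shrinks with wind speed.

```python
import math

def centerline_ppm(q_kg_s, u_m_s, x_m):
    """Ground-level centerline CO2 concentration from a continuous point
    source, using a neutral-stability Gaussian plume sketch.
    sigma_y and sigma_z use rough power-law fits (placeholder coefficients)."""
    sigma_y = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)   # m, illustrative
    sigma_z = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)   # m, illustrative
    c_kg_m3 = q_kg_s / (math.pi * u_m_s * sigma_y * sigma_z)
    # mass concentration -> ppm (CO2 molar mass 44 g/mol, molar volume
    # ~0.0224 m^3/mol; a coarse approximation)
    return c_kg_m3 * 1000.0 / 44.0 * 0.0224 * 1e6

def hazard_distance(q_kg_s, u_m_s, threshold_ppm=30000.0):
    """Largest downwind distance (m) where the centerline exceeds the threshold."""
    x = 10.0
    while centerline_ppm(q_kg_s, u_m_s, x) > threshold_ppm and x < 5000.0:
        x += 10.0
    return x
```

The inverse dependence on wind speed in the plume formula is what drives the reported negative correlation: at very low wind speeds the high-concentration area inflates sharply.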
Validation of estimated glomerular filtration rate equations for Japanese children.
Gotoh, Yoshimitsu; Uemura, Osamu; Ishikura, Kenji; Sakai, Tomoyuki; Hamasaki, Yuko; Araki, Yoshinori; Hamda, Riku; Honda, Masataka
2018-01-25
The gold standard for evaluation of kidney function is renal inulin clearance (Cin). However, the methodology for Cin is complicated and difficult, especially for younger children and/or patients with bladder dysfunction. Therefore, we developed simpler methods for obtaining the estimated glomerular filtration rate (eGFR) using equations based on several biomarkers, i.e., serum creatinine (Cr), serum cystatin C (cystC), serum beta-2 microglobulin (β2MG), and creatinine clearance (Ccr). The purpose of the present study was to validate these equations with a new data set. To validate each equation, we used data from 140 patients with CKD and a clinical need for Cin, using the measured GFR (mGFR). We compared the results of each eGFR equation with the mGFR using mean error (ME), root mean square error (RMSE), P30, and Bland-Altman analysis. The ME of the Cr-, cystC-, β2MG-, and Ccr-based eGFR was 15.8 ± 13.0, 17.2 ± 16.5, 15.4 ± 14.3, and 10.6 ± 13.0 ml/min/1.73 m², respectively. The RMSE was 29.5, 23.8, 20.9, and 16.7, respectively. The P30 was 79.4, 71.1, 69.5, and 92.9%, respectively. The Bland-Altman bias analysis showed values of 4.0 ± 18.6, 5.3 ± 16.8, 12.7 ± 17.0, and 2.5 ± 17.2 ml/min/1.73 m², respectively, for these parameters. The bias of each eGFR equation was not large. Therefore, each eGFR equation can be used.
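The agreement metrics used above have standard definitions (P30 is the percentage of estimates within ±30% of measured GFR); a sketch with hypothetical toy data:

```python
import numpy as np

def validation_metrics(egfr, mgfr):
    """Agreement metrics for validating eGFR equations against measured GFR."""
    egfr, mgfr = np.asarray(egfr, float), np.asarray(mgfr, float)
    err = egfr - mgfr
    me = err.mean()                                    # mean error (bias)
    rmse = np.sqrt((err ** 2).mean())                  # root mean square error
    p30 = 100.0 * (np.abs(err) <= 0.3 * mgfr).mean()   # % within +/-30% of mGFR
    # Bland-Altman 95% limits of agreement: bias +/- 1.96 * SD of differences
    loa = (err.mean() - 1.96 * err.std(ddof=1),
           err.mean() + 1.96 * err.std(ddof=1))
    return {"ME": me, "RMSE": rmse, "P30": p30, "LoA": loa}

# Hypothetical paired values, for illustration only (ml/min/1.73 m^2)
mgfr = [40.0, 55.0, 70.0, 90.0, 110.0]
egfr = [45.0, 50.0, 75.0, 85.0, 120.0]
m = validation_metrics(egfr, mgfr)
```

A P30 above 90%, as reported for the Ccr-based equation, is a common benchmark for clinically acceptable eGFR performance.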
Tissue strain rate estimator using ultrafast IQ complex data
Ternifi, Redouane; Elkateb Hachemi, Melouka; Remenieras, Jean-Pierre
2012-01-01
Pulsatile motion of brain parenchyma results from the cardiac and breathing cycles. In this study, transient motion of brain tissue was estimated using an Aixplorer® imaging system allowing an ultrafast 2D acquisition mode. The strain was computed directly from the ultrafast IQ complex data using the extended autocorrelation strain estimator (EASE), which provides a high SNR regardless of depth. The EASE first evaluates the autocorrelation function at each depth over a set...
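The abstract does not give the EASE equations; a classical starting point for phase-based motion estimation on IQ ensembles is the one-lag (Kasai) autocorrelation estimator, sketched here with synthetic data.

```python
import numpy as np

def kasai_phase(iq):
    """One-lag autocorrelation phase estimate along the slow-time (ensemble)
    axis of complex IQ data with shape (n_firings, n_depths). Returns the
    mean inter-firing phase shift per depth, in radians (Kasai estimator)."""
    r1 = np.sum(iq[1:] * np.conj(iq[:-1]), axis=0)
    return np.angle(r1)

# Synthetic IQ ensemble: a speckle pattern whose motion adds a constant
# 0.3 rad phase shift between consecutive firings (illustrative data)
rng = np.random.default_rng(0)
n_firings, n_depths, true_shift = 32, 64, 0.3
base = rng.standard_normal(n_depths) + 1j * rng.standard_normal(n_depths)
iq = np.array([base * np.exp(1j * true_shift * k) for k in range(n_firings)])
est = kasai_phase(iq)   # close to 0.3 rad at every depth
```

From the phase shift, displacement per firing follows as c·φ/(4π·f0) and strain as the spatial gradient of displacement; the extended estimator presumably refines this basic scheme, as its details are not given in the abstract.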
Redefinition and global estimation of basal ecosystem respiration rate
DEFF Research Database (Denmark)
Yuan, Wenping; Luo, Yiqi; Li, Xianglan
2011-01-01
Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models sti...
Simplification of an MCNP model designed for dose rate estimation
Laptev, Alexander; Perry, Robert
2017-09-01
A study was made to investigate methods of building a simplified MCNP model for radiological dose estimation. The research used the example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations in which glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that, to obtain a fast and reasonable estimate of dose, the model should be realistic in the details that are close to the tally; other details may be omitted.
Simplification of an MCNP model designed for dose rate estimation
Directory of Open Access Journals (Sweden)
Laptev Alexander
2017-01-01
A study was made to investigate methods of building a simplified MCNP model for radiological dose estimation. The research used the example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations in which glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that, to obtain a fast and reasonable estimate of dose, the model should be realistic in the details that are close to the tally; other details may be omitted.
Redefinition and global estimation of basal ecosystem respiration rate
Energy Technology Data Exchange (ETDEWEB)
Yuan, Wenping [College of Global Change and Earth System Science, Beijing Normal University, Beijing, China; Luo, Yiqi [Department of Botany and Microbiology, University of Oklahoma, Norman, Oklahoma, USA; Li, Xianglan [College of Global Change and Earth System Science, Beijing Normal University, Beijing, China; Liu, Shuguang; Yu, Guirui [Key Laboratory of Ecosystem Network Observation and Modeling, Synthesis Research Center of Chinese Ecosystem Research Network, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, China; Zhou, Tao [State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Beijing, China; Bahn, Michael [Institute of Ecology, University of Innsbruck, Innsbruck, Austria; Black, Andy [Faculty of Land and Food Systems, University of British Columbia, Vancouver, B. C., Canada; Desai, Ankur R. [Atmospheric and Oceanic Sciences Department, Center for Climatic Research, Nelson Institute for Environmental Studies, University of Wisconsin-Madison, Madison, Wisconsin, USA; Cescatti, Alessandro [Institute for Environment and Sustainability, Joint Research Centre, European Commission, Ispra, Italy; Marcolla, Barbara [Sustainable Agro-ecosystems and Bioresources Department, Fondazione Edmund Mach-IASMA Research and Innovation Centre, San Michele all' Adige, Italy; Jacobs, Cor [Alterra, Earth System Science-Climate Change, Wageningen University, Wageningen, Netherlands; Chen, Jiquan [Department of Earth, Ecological, and Environmental Sciences, University of Toledo, Toledo, Ohio, USA; Aurela, Mika [Climate and Global Change Research, Finnish Meteorological Institute, Helsinki, Finland; Bernhofer, Christian [Chair of Meteorology, Institute of Hydrology and Meteorology, Technische Universität Dresden, Dresden, Germany; Gielen, Bert [Department of Biology, University of Antwerp, Wilrijk, Belgium; Bohrer, Gil [Department of Civil, Environmental, and Geodetic Engineering, Ohio 
State University, Columbus, Ohio, USA; Cook, David R. [Climate Research Section, Environmental Science Division, Argonne National Laboratory, Argonne, Illinois, USA; Dragoni, Danilo [Department of Geography, Indiana University, Bloomington, Indiana, USA; Dunn, Allison L. [Department of Physical and Earth Sciences, Worcester State College, Worcester, Massachusetts, USA; Gianelle, Damiano [Sustainable Agro-ecosystems and Bioresources Department, Fondazione Edmund Mach-IASMA Research and Innovation Centre, San Michele all' Adige, Italy; Grünwald, Thomas [Chair of Meteorology, Institute of Hydrology and Meteorology, Technische Universität Dresden, Dresden, Germany; Ibrom, Andreas [Risø DTU National Laboratory for Sustainable Energy, Biosystems Division, Technical University of Denmark, Roskilde, Denmark; Leclerc, Monique Y. [Department of Crop and Soil Sciences, College of Agricultural and Environmental Sciences, University of Georgia, Griffin, Georgia, USA; Lindroth, Anders [Geobiosphere Science Centre, Physical Geography and Ecosystems Analysis, Lund University, Lund, Sweden; Liu, Heping [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman, Washington, USA; Marchesini, Luca Belelli [Department for Innovation in Biological, Agro-Food and Forest Systems, University of Tuscia, Viterbo, Italy; Montagnani, Leonardo; Pita, Gabriel [Department of Mechanical Engineering, Instituto Superior Técnico, Lisbon, Portugal; Rodeghiero, Mirco [Sustainable Agro-ecosystems and Bioresources Department, Fondazione Edmund Mach-IASMA Research and Innovation Centre, San Michele all' Adige, Italy; Rodrigues, Abel [Unidade de Silvicultura e Produtos Florestais, Instituto Nacional dos Recursos Biológicos, Oeiras, Portugal; Starr, Gregory [Department of Biological Sciences, University of Alabama, Tuscaloosa, Alabama, USA; Stoy, Paul C. 
[Department of Land Resources and Environmental Sciences, Montana State University, Bozeman, Montana, USA
2011-10-13
Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR, largely due to the lack of a functional description for BR. In this study, we redefined BR to be the ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data from 79 research sites located at latitudes ranging from ~3°S to ~70°N. Results showed that the mean annual ER rate closely matches the ER rate at the mean annual temperature. Incorporation of site-specific BR into a global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual...
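The redefinition can be made concrete with a Q10-type ER model in which BR is referenced to the site's mean annual temperature (MAT); a sketch with hypothetical values:

```python
import numpy as np

def ecosystem_respiration(temp_c, br, mat_c, q10=2.0):
    """Empirical ER model with the basal rate BR referenced to the site's
    mean annual temperature (MAT): ER(T) = BR * Q10^((T - MAT) / 10)."""
    return br * q10 ** ((np.asarray(temp_c) - mat_c) / 10.0)

# Sinusoidal annual temperature course around MAT = 10 degC (illustrative)
days = np.arange(365)
temp = 10.0 + 12.0 * np.sin(2 * np.pi * days / 365.0)
mat = temp.mean()
er = ecosystem_respiration(temp, br=2.0, mat_c=mat)   # e.g. g C m-2 d-1
er_at_mat = ecosystem_respiration(mat, 2.0, mat)      # equals BR = 2.0 by definition
```

Because the Q10 response is convex, the annual mean ER sits somewhat above ER at MAT in this toy example; the study's empirical finding is that the two are nevertheless close across sites.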
Contemporary management of carotid blowout syndrome utilizing endovascular techniques.
Manzoor, Nauman F; Rezaee, Rod P; Ray, Abhishek; Wick, Cameron C; Blackham, Kristine; Stepnick, David; Lavertu, Pierre; Zender, Chad A
2017-02-01
To illustrate complex interdisciplinary decision making and the utility of modern endovascular techniques in the management of patients with carotid blowout syndrome (CBS). Retrospective chart review. Patients treated with endovascular strategies and/or surgical modalities were included. Hemorrhage control, neurological outcomes, and survival were studied. Between 2004 and 2014, 33 patients had 38 hemorrhagic events related to head and neck cancer that were managed by endovascular means. Of these, 23 were localized to the external carotid artery (ECA) branches and five to the ECA main trunk; nine were related to the common carotid artery (CCA) or internal carotid artery (ICA), and one event was related to the innominate artery. Seven events related to the CCA/ICA or innominate artery were managed with endovascular sacrifice, whereas three cases were managed with a flow-preserving approach (covered stent). Only one patient developed permanent hemiparesis. In two of the three cases where the flow-preserving approach was used, the covered stent eventually became exposed through the overlying soft tissue defect, and definitive management using carotid revascularization or resection was employed to prevent further hemorrhage. In cases of soft tissue necrosis, vascularized tissues were used to cover the great vessels as applicable. The use of modern endovascular approaches for management of acute CBS yields optimal results and should be employed in a coordinated manner by the head and neck surgeon and the neurointerventionalist. Level of Evidence: 4. Laryngoscope, 127:383-390, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Estimation of Cessation Rates among Danish Users of Benzodiazepines
DEFF Research Database (Denmark)
Støvring, Henrik; Gasse, Christiane
Background: Widespread and long-term use of benzodiazepines constitutes a public health problem. Health care authorities hence advise that use should not exceed three months, in particular for the elderly and patients with a past diagnosis of drug addiction. Objectives: Estimate the shape...
Bayesian nonparametric estimation of hazard rate in monotone Aalen model
Czech Academy of Sciences Publication Activity Database
Timková, Jana
2014-01-01
Vol. 50, No. 6 (2014), pp. 849-868, ISSN 0023-5954. Institutional support: RVO:67985556. Keywords: Aalen model; Bayesian estimation; MCMC. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.541, year: 2014. http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf
Using genetic data to estimate diffusion rates in heterogeneous landscapes.
Roques, L; Walker, E; Franck, P; Soubeyrand, S; Klein, E K
2016-08-01
Having precise knowledge of the dispersal ability of a population in a heterogeneous environment is of critical importance in agroecology and conservation biology, as it can provide management tools to limit the effects of pests or to increase the survival of endangered species. In this paper, we propose a mechanistic-statistical method to estimate space-dependent diffusion parameters of spatially explicit models based on stochastic differential equations, using genetic data. Dividing the total population into subpopulations corresponding to different habitat patches with known allele frequencies, the expected proportions of individuals from each subpopulation at each position are computed by solving a system of reaction-diffusion equations. Modelling the capture and genotyping of the individuals with a statistical approach, we derive a numerically tractable formula for the likelihood function associated with the diffusion parameters. In a simulated environment made of three types of regions, each associated with a different diffusion coefficient, we successfully estimate the diffusion parameters with a maximum-likelihood approach. Although higher genetic differentiation among subpopulations leads to more accurate estimates, once a certain level of differentiation has been reached, the finite size of the genotyped population becomes the limiting factor for accurate estimation.
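The full mechanistic-statistical machinery is beyond an abstract, but the core maximum-likelihood idea can be sketched in the simplest homogeneous 1-D case: Brownian displacements observed at interval dt are N(0, 2·D·dt), so the MLE of D has a closed form (simulated data, for illustration only).

```python
import numpy as np

def mle_diffusion(displacements, dt):
    """Maximum-likelihood estimate of the diffusion coefficient D from 1-D
    Brownian displacements observed at interval dt: dx ~ N(0, 2*D*dt),
    so D_hat = mean(dx^2) / (2*dt)."""
    dx = np.asarray(displacements, float)
    return np.mean(dx ** 2) / (2.0 * dt)

# Simulate displacements for a known diffusion coefficient and recover it
rng = np.random.default_rng(42)
d_true, dt, n = 1.5, 0.1, 20000
dx = rng.normal(0.0, np.sqrt(2 * d_true * dt), size=n)
d_hat = mle_diffusion(dx, dt)   # close to 1.5
```

In the heterogeneous setting of the paper, this single closed form is replaced by a likelihood built from the reaction-diffusion solution, but the estimation principle is the same.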
Combining heart rate and accelerometer data to estimate physical fitness
Tönis, Thijs; Vollenbroek-Hutten, Miriam Marie Rosé; Hermens, Hermanus J.
2012-01-01
Monitoring changes in physical fitness is relevant in many conditions and groups of patients, but its estimation demands substantial effort from the person, personnel and equipment. Besides that, present (sub) maximal exercise tests give a momentary fitness score, which depends on many (external)
International Nuclear Information System (INIS)
Chatterjee, Bishu; Sharp, Peter A.
2006-01-01
Electric transmission and other rate cases use a form of the discounted cash flow model with a single long-term growth rate to estimate rates of return on equity. Such a model cannot incorporate information about the appropriate time horizon over which analysts' estimates of earnings growth have predictive power. Only a non-constant growth model can explicitly recognize the importance of the time horizon in an ROE calculation. (author)
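A non-constant (two-stage) growth DCF sketch with hypothetical numbers: dividends grow at the analysts' near-term rate over a finite horizon, then at a long-run rate, and the implied return on equity is the rate that equates the modeled price to the market price.

```python
def dcf_price(d1, r, g1, g2, horizon):
    """Present value of dividends growing at g1 for `horizon` years, then at
    g2 forever (two-stage, non-constant growth DCF). d1 = next year's dividend."""
    price, div = 0.0, d1
    for t in range(1, horizon + 1):
        price += div / (1 + r) ** t
        div *= (1 + g1)
    # terminal value at the end of the horizon via Gordon growth, discounted back
    div_next = div / (1 + g1) * (1 + g2)   # first dividend of the stable stage
    price += div_next / ((r - g2) * (1 + r) ** horizon)
    return price

def implied_roe(price, d1, g1, g2, horizon, lo=0.02, hi=0.50):
    """Solve dcf_price(r) = price for r by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if dcf_price(d1, mid, g1, g2, horizon) > price:
            lo = mid   # modeled price too high -> discount rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical utility stock: $100 price, $4 next dividend,
# 6% near-term analyst growth for 5 years, 3% long-run growth
r = implied_roe(100.0, 4.0, 0.06, 0.03, 5)
```

The `horizon` parameter is exactly the quantity the abstract argues a single-growth-rate model cannot represent: it bounds the years over which analyst growth forecasts are given predictive weight.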
Estimation of Corporate Credit Rating Quality in Emerging Markets
African Journals Online (AJOL)
information that is not already readily available to investors. In order to ... is institutionalisation: that is, the increased use of credit ratings in financial ..... three financial ratios are included in the model to account for within-firm default risks.
Estimating the equilibrium real exchange rate in Venezuela
Hilde Bjørnland
2004-01-01
To determine whether the real exchange rate is misaligned with respect to its long-run equilibrium is an important issue for policy makers. This paper clarifies and calculates the concept of the equilibrium real exchange rate, using a structural vector autoregression (VAR) model. By imposing long-run restrictions on a VAR model for Venezuela, four structural shocks are identified: Nominal demand, real demand, supply and oil price shocks. The identified shocks and their impulse responses are c...
Estimating the market premium in short term interest rates
Hansen, Hans Fredrik
2006-01-01
Looking at the term structure in the interest rate market, one cannot help noticing the evident market premium above the central bank's target rate. What factors might determine this premium? Using different variations of simple regression models, we see that the model constantly lags the real time series. Acknowledging that market clearing is often governed by several equations, we are better able to develop a sensible model using a simultaneous equilibrium model. The multiple equat...
Capture-recapture analysis for estimating manatee reproductive rates
Kendall, W.L.; Langtimm, C.A.; Beck, C.A.; Runge, M.C.
2004-01-01
Modeling the life history of the endangered Florida manatee (Trichechus manatus latirostris) is an important step toward understanding its population dynamics and predicting its response to management actions. We developed a multi-state mark-resighting model for data collected under Pollock's robust design. This model estimates breeding probability conditional on a female's breeding state in the previous year; assumes sighting probability depends on breeding state; and corrects for misclassification of a cow with a first-year calf by estimating conditional sighting probability for the calf. The model is also appropriate for estimating survival and unconditional breeding probabilities when the study area is closed to temporary emigration across years. We applied this model to photo-identification data for the Northwest and Atlantic Coast populations of manatees for the years 1982-2000. With rare exceptions, manatees do not reproduce in two consecutive years. For those without a first-year calf in the previous year, the best-fitting model included constant probabilities of producing a calf for the Northwest (0.43, SE = 0.057) and Atlantic (0.38, SE = 0.045) populations. The approach we present to adjust for misclassification of breeding state could be applicable to a large number of marine mammal populations.
Estimating spontaneous mutation rates at enzyme loci in Drosophila melanogaster
International Nuclear Information System (INIS)
Mukai, Terumi; Yamazaki, Tsuneyuki; Harada, Ko; Kusakabe, Shin-ichi
1990-04-01
Spontaneous mutations were accumulated over 1,620,826 allele-generations on chromosomes that originated from six stem second chromosomes of Drosophila melanogaster. Only null-electromorph mutations were detected; band-electromorph mutations were not found. The average rate of null-electromorph mutations was 2.71 × 10⁻⁵ per locus per generation. The 95% confidence interval for this rate (μn) was 1.97 × 10⁻⁵ to … × 10⁻⁵ per locus per generation. The upper 95% confidence limit of the band-electromorph mutation rate (μB) was 2.28 × 10⁻⁶ per locus per generation. It appeared that null mutations were induced by mobile genetic elements and that the mutation rates differed from chromosome to chromosome. (author)
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to estimating a portfolio's loss and economic capital. Neglecting the randomness of the recovery-rate distribution may underestimate risk. This study introduces two distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common daily use, e.g., CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, they have a serious shortcoming: they cannot fit bimodal or multimodal distributions, which Moody's new data show recovery rates of corporate loans and bonds to be. To overcome this flaw, kernel density estimation is introduced, and simulation results from histograms, Beta distribution estimation, and kernel density estimation are compared, leading to the conclusion that a Gaussian kernel density estimate better reproduces the bimodal or multimodal samples of corporate loan and bond recovery rates. Finally, a chi-square test of the Gaussian kernel density estimate confirms that it fits the observed recovery rates of loans and bonds. Using the kernel density estimate to delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management.
Directory of Open Access Journals (Sweden)
Rongda Chen
Recovery rate is essential to estimating a portfolio's loss and economic capital. Neglecting the randomness of the recovery-rate distribution may underestimate risk. This study introduces two distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common daily use, e.g., CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, they have a serious shortcoming: they cannot fit bimodal or multimodal distributions, which Moody's new data show recovery rates of corporate loans and bonds to be. To overcome this flaw, kernel density estimation is introduced, and simulation results from histograms, Beta distribution estimation, and kernel density estimation are compared, leading to the conclusion that a Gaussian kernel density estimate better reproduces the bimodal or multimodal samples of corporate loan and bond recovery rates. Finally, a chi-square test of the Gaussian kernel density estimate confirms that it fits the observed recovery rates of loans and bonds. Using the kernel density estimate to delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management.
Chen, Rongda; Wang, Ze
2013-01-01
Recovery rate is essential to estimating a portfolio's loss and economic capital. Neglecting the randomness of the recovery-rate distribution may underestimate risk. This study introduces two distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common daily use, e.g., CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, they have a serious shortcoming: they cannot fit bimodal or multimodal distributions, which Moody's new data show recovery rates of corporate loans and bonds to be. To overcome this flaw, kernel density estimation is introduced, and simulation results from histograms, Beta distribution estimation, and kernel density estimation are compared, leading to the conclusion that a Gaussian kernel density estimate better reproduces the bimodal or multimodal samples of corporate loan and bond recovery rates. Finally, a chi-square test of the Gaussian kernel density estimate confirms that it fits the observed recovery rates of loans and bonds. Using the kernel density estimate to delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management. PMID:23874558
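The contrast can be sketched on synthetic bimodal recovery-rate data (illustrative, not Moody's data): a method-of-moments Beta fit versus a plain Gaussian kernel density estimate.

```python
import numpy as np
from math import gamma

def beta_mom(x):
    """Method-of-moments fit of a Beta(a, b) distribution on (0, 1)."""
    m, v = x.mean(), x.var()
    common = m * (1 - m) / v - 1.0
    return m * common, (1 - m) * common

def beta_pdf(t, a, b):
    """Beta density evaluated on an array t in (0, 1)."""
    c = gamma(a + b) / (gamma(a) * gamma(b))
    return c * t ** (a - 1) * (1 - t) ** (b - 1)

def gaussian_kde(x, t, h=None):
    """Plain Gaussian kernel density estimate with Silverman's rule bandwidth."""
    h = h or 1.06 * x.std() * len(x) ** -0.2
    u = (t[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

# Synthetic bimodal recovery rates: many near-total losses and many
# near-full recoveries (hypothetical sample)
rng = np.random.default_rng(1)
x = np.concatenate([rng.beta(2, 8, 500), rng.beta(8, 2, 500)])
t = np.linspace(0.01, 0.99, 99)
kde = gaussian_kde(x, t)
a, b = beta_mom(x)
beta_fit = beta_pdf(t, a, b)
```

On this sample the moment-matched Beta comes out with a < 1 and b < 1, a U-shape that piles density at the endpoints and cannot place modes in the interior, while the KDE shows clear local maxima near both clusters, which is the behavior the study exploits.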
Estimated migration rates under scenarios of global climate change.
Jay R. Malcolm; Adam Markham; Ronald P. Neilson; Michael Garaci
2002-01-01
Greenhouse-induced warming and resulting shifts in climatic zones may exceed the migration capabilities of some species. We used fourteen combinations of General Circulation Models (GCMs) and Global Vegetation Models (GVMs) to investigate the migration rates that might be required under doubled-CO2 climatic forcing.
Drivers and annual estimates of marine wildlife entanglement rates
McIntosh, R.R.; Kirkwood, Roger; Sutherland, D.R.; Dann, Peter
2015-01-01
Methods of calculating wildlife entanglement rates are not standardised between studies and often ignore the influence of observer effort, confounding comparisons. From 1997-2013 we identified 359 entangled Australian fur seals at Seal Rocks, south-eastern Australia. Most entanglement materials
Estimating Maternal Mortality Rate Using Sisterhood Methods in ...
African Journals Online (AJOL)
... maternal and child morbidity and mortality, which could serve as a surveillance strategy to identify the magnitude of the problem and to mobilize resources to areas where the problems are most prominent for adequate control. KEY WORDS: Maternal Mortality Rate, Sisterhood Method. Highland Medical Research Journal ...
Reconciling Estimates of Earnings Processes in Growth Rates and Levels
DEFF Research Database (Denmark)
Daly, Moira; Hryshko, Dmytro; Manovskii, Iourii
The stochastic process for earnings is the key element of incomplete markets models in modern quantitative macroeconomics. It determines both the equilibrium distributions of endogenous outcomes and the design of optimal policies. Yet, there is no consensus in the literature on the relative magnitudes of the permanent and transitory innovations in earnings. When estimation is based on the earnings moments in levels, the variance of transitory shocks is found to be relatively high. When the moments in differences are used, the variance of the permanent component is relatively high instead. We...
Jungerius, P.D.; Witter, J.V.; van Boxel, J.H.
1991-01-01
Blowouts are the main features of aeolian activity in many dune areas. To assess the impact of future climatic change on the geomorphological processes prevailing in a dune landscape it is essential to understand blowout formation and identify the meteorological parameters which are important. The
Accuracy Rates of Ancestry Estimation by Forensic Anthropologists Using Identified Forensic Cases.
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2017-07-01
A common task in forensic anthropology involves the estimation of the ancestry of a decedent by comparing their skeletal morphology and measurements to skeletons of individuals from known geographic groups. However, the accuracy rates of ancestry estimation methods in actual forensic casework have rarely been studied. This article uses 99 forensic cases with identified skeletal remains to develop accuracy rates for ancestry estimations conducted by forensic anthropologists. The overall rate of correct ancestry estimation from these cases is 90.9%, which is comparable to most research-derived rates and those reported by individual practitioners. Statistical tests showed no significant difference in accuracy rates depending on examiner education level or on the estimated or identified ancestry. More recent cases showed a significantly higher accuracy rate. The incorporation of metric analyses into the ancestry estimate in these cases led to a higher accuracy rate. © 2017 American Academy of Forensic Sciences.
Development of an automatic subsea blowout preventer stack control system using PLC based SCADA.
Cai, Baoping; Liu, Yonghong; Liu, Zengkai; Wang, Fei; Tian, Xiaojie; Zhang, Yanzhen
2012-01-01
An extremely reliable remote control system for subsea blowout preventer stack is developed based on the off-the-shelf triple modular redundancy system. To meet a high reliability requirement, various redundancy techniques such as controller redundancy, bus redundancy and network redundancy are used to design the system hardware architecture. The control logic, human-machine interface graphical design and redundant databases are developed by using the off-the-shelf software. A series of experiments were performed in laboratory to test the subsea blowout preventer stack control system. The results showed that the tested subsea blowout preventer functions could be executed successfully. For the faults of programmable logic controllers, discrete input groups and analog input groups, the control system could give correct alarms in the human-machine interface. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Ekpenyong, Christopher E; Daniel, Nyebuk E; Antai, Atim B
2015-01-01
The existing research findings regarding the effects of lemongrass (Cymbopogon citratus) tea on renal function indices are conflicting and inconclusive. In the present study, we investigated the effects of infusions prepared from C citratus leaves on creatinine clearance rate (CCr) and estimated glomerular filtration rate (eGFR) in humans. One hundred five subjects (55 men and 50 women) aged 18 to 35 years were randomly assigned to groups set to orally receive infusions prepared from 2, 4, or 8 g of C citratus leaf powder once daily, for 30 days. Serum and urinary levels of urea, creatinine, pH, specific gravity, uric acid, electrolytes, diuretic indices, and eGFR were assessed at days 0, 10, and 30 after the initiation of treatment. Results obtained on days 10 and 30 were compared with baseline values. CCr and eGFR decreased significantly at day 30 in both male and female subjects in all the groups and in females treated with infusion prepared from 8 g of C citratus leaf powder for 10 days. At day 10, CCr and eGFR were unchanged in those treated with infusions prepared from 2 or 4 g of the leaf powder, whereas diuretic indices (urine volume, urination frequency, diuretic action, and saliuretic indices) increased above the baseline levels. Serum and urinary creatinine levels significantly increased (P < .05) in both male and female subjects in all the groups. Serum urea significantly increased in the groups treated with infusions prepared from 4 or 8 g of the leaf powder (P < .05) for 30 days. Serum electrolytes remained unchanged, but their urinary levels increased. We observed dose- and time-dependent adverse effects of C citratus on CCr and eGFR. At a high dose or with prolonged treatment with a low dose, eGFR decrease may be followed by a decline in the other renal function indices. Copyright © 2015 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
Improving SysSim's Planetary Occurrence Rate Estimates
Ashby, Keir; Ragozzine, Darin; Hsu, Danley; Ford, Eric B.
2017-10-01
Kepler's catalog of thousands of transiting planet candidates enables statistical characterization of the underlying planet occurrence rates as a function of period and radius. Due to geometric factors and general noise in measurements, we know that many planets, especially those with small radii and/or long periods, were not observed by Kepler. To account for Kepler's detection criteria, Hsu et al. 2017 expanded on work in Lissauer et al. 2011 to develop the Planetary System Simulator, or "SysSim". SysSim uses a forward model to generate simulated catalogs of exoplanet systems, determines which of those simulated planets would have been seen by Kepler in the presence of uncertainties, and then compares those "observed planets" to those actually seen by Kepler. It then uses Approximate Bayesian Computation to infer the posterior probability distributions of the input parameters used to generate the forward model. In Hsu et al. 2017, we focused on matching the observed frequency of planets by solving for the underlying occurrence rate for each bin in a 2-dimensional grid of radius and period. After summarizing the results of Hsu et al. 2017, we show new results that investigate the effect on occurrence rates of including more accurate completeness products (from the Kepler DR25 analysis) in SysSim.
ESTIMATING RETURN RATE OF HIGHER EDUCATION FUND IN RUSSIA
Directory of Open Access Journals (Sweden)
Semenikhina V. A.
2014-06-01
Full Text Available Currently, the Russian government pays great attention to the field of higher and postgraduate education. However, the Russian scientific literature contains gaps concerning the overall evaluation of the effectiveness of the higher education sector. The article dwells upon the problem of interregional income spread among the Russian population. An empirical estimate is made of how differences in the human capital accumulated in Russian regions affect wage levels, and of the maximum increase in total wage levels and population income, for 2001-2011. Higher education has a stronger influence on income spread across Russian regions than the accumulated volume of fixed assets. Moreover, growth of the higher education fund in Russian regions contributes to wage increases and income growth among the population, but at the same time it decreases legal wages. The results of the study extend knowledge of the economics of education in the Russian Federation.
Rates of convergence and asymptotic normality of curve estimators for ergodic diffusion processes
J.H. van Zanten (Harry)
2000-01-01
textabstractFor ergodic diffusion processes, we study kernel-type estimators for the invariant density, its derivatives and the drift function. We determine rates of convergence and find the joint asymptotic distribution of the estimators at different points.
Data on empirically estimated corporate survival rate in Russia.
Kuzmin, Evgeny A
2018-02-01
The article presents data on the corporate survival rate in Russia in 1991-2014. The empirical survey was based on a random sample with the average number of non-repeated observations (number of companies) for each survey year equal to 75,958 (24,236 minimum and 126,953 maximum). The actual limiting mean error Δp was 2.24% with 99% integrity. The survey methodology was based on a cross joining of various formal periods in the corporate life cycle (legal and business), which makes it possible, with a number of assumptions, to speak of a conventionally active lifetime of companies. The empirical survey values were grouped by Russian regions and industries according to the classifier and consolidated into a single database for analysing the corporate life cycle and survival rate and for searching for dependencies among deviations in the calculated parameters. Preliminary and incomplete figures were available in the paper entitled "Survival Rate and Lifecycle in Terms of Uncertainty: Review of Companies from Russia and Eastern Europe" (Kuzmin and Guseva, 2016) [3]. The further survey led to filtered processed data with clerical errors excluded. These particular values are available in the article. The survey was intended to fill a fact-based gap in fundamental surveys that address the corporate life cycle in Russia, given the insufficient statistical framework. The data are of interest for an analysis of Russian entrepreneurship and for assessment of market development and incorporation risks in the current business environment. Further heuristic potential is achievable through forecasting changes in business demography and building models based on the representative data set.
A rapid method to estimate Westergren sedimentation rates.
Alexy, Tamas; Pais, Eszter; Meiselman, Herbert J
2009-09-01
The erythrocyte sedimentation rate (ESR) is a nonspecific but simple and inexpensive test that was introduced into medical practice in 1897. Although it is commonly utilized in the diagnosis and follow-up of various clinical conditions, ESR has several limitations including the required 60 min settling time for the test. Herein we introduce a novel use for a commercially available computerized tube viscometer that allows the accurate prediction of human Westergren ESR rates in as little as 4 min. Owing to an initial pressure gradient, blood moves between two vertical tubes through a horizontal small-bore tube and the top of the red blood cell (RBC) column in each vertical tube is monitored continuously with an accuracy of 0.083 mm. Using data from the final minute of a blood viscosity measurement, a sedimentation index (SI) was calculated and correlated with results from the conventional Westergren ESR test. To date, samples from 119 human subjects have been studied and our results indicate a strong correlation between SI and ESR values (R^2 = 0.92). In addition, we found a close association between SI and RBC aggregation indices as determined by an automated RBC aggregometer (R^2 = 0.71). Determining SI on human blood is rapid, requires no special training and has minimal biohazard risk, thus allowing physicians to rapidly screen for individuals with elevated ESR and to monitor therapeutic responses.
Global properties of symmetric competition models with riddling and blowout phenomena
Directory of Open Access Journals (Sweden)
Gian-Italo Bischi
2000-01-01
Full Text Available In this paper the problem of chaos synchronization, and the related phenomena of riddling, blowout and on–off intermittency, are considered for discrete time competition models with identical competitors. The global properties which determine the different effects of riddling and blowout bifurcations are studied by the method of critical curves, a tool for the study of the global dynamical properties of two-dimensional noninvertible maps. These techniques are applied to the study of a dynamic market-share competition model.
Healthy addiction: blowouts inspire lifelong dedication to roles in dangerous dramas
Energy Technology Data Exchange (ETDEWEB)
Jaremko, G.
2001-05-01
The development of Safety Boss and Key Safety Blowout Control Ltd, based in Calgary and Red Deer, Alberta, respectively, is chronicled, serving as a background to a discussion of the growth of oilwell blowout and emergency response services in the Canadian oilpatch. The rise to prominence began in 1982 with the blowout of the Lodgepole run-away well near Drayton Valley, Alberta, where very high geological pressures drove 300 million cubic feet of natural gas, containing 20 per cent lethal hydrogen sulphide, into the air. The expertise was further honed in Kuwait, where a team of Canadian blowout fighters extinguished hundreds of wells set ablaze during the Gulf War in 1991. Today, this Canadian expertise is routinely exported to various foreign lands, including the United States, which until recently dominated the field. For example, Key Safety Blowout Control is a designated response organization for Alaska, rubbing shoulders with international giants in the Canadian Arctic as part of the Mackenzie Delta Integrated Oil Field Services Group. About 60 per cent of Key Safety's bread and butter work is generated by sour gas. The need for blowout services is expected to grow over the next decade as the hunt for gas reaches farther into deep geological formations and the Rocky Mountain Foothills region. The high proportion of lethal hydrogen sulphide in these deep formations and the Foothills region invites escalating criticism and protests from environmentalists, landowners and communities, which in turn create growing demand for blowout services and preventive gear. On the positive side, the growing danger of blowouts and noxious discharges into the air stimulates the development of new technology, such as the new generation of lower explosive limit monitors that identify the presence of any gases liable to blow up. There is also a growing market for 'downwind monitoring units' that detect parts-per-billion hydrogen sulphide concentrations. These and other
Estimating permissible I-129 emission rates
Energy Technology Data Exchange (ETDEWEB)
Huebschmann, W G
1976-06-01
A mathematical method of iodine release limitation is presented which, in assessing the radiological effectiveness of I-129, takes advantage of the fact that the chemical behaviour of I-129 resembles that of I-131 and relies on the already extensive knowledge of the chemical and biological behaviour of I-131. If this method is used for calculating permissible I-129 emission rates, it is stated that no unnecessary restrictions need be imposed on a fuel reprocessing plant and that the grazing season for the pasture-cow-milk pathway can be taken into account. The concept is currently in use at the Karlsruhe Nuclear Research Center and seems to be appropriate for licensing of nuclear fuel reprocessing plants.
Decision Tree Rating Scales for Workload Estimation: Theme and Variations
Wierwille, W. W.; Skipper, J. H.; Rieger, C. A.
1984-01-01
The modified Cooper-Harper (MCH) scale has been shown to be a sensitive indicator of workload in several different types of aircrew tasks. The MCH scale was examined to determine if certain variations of the scale might provide even greater sensitivity and to determine the reasons for the sensitivity of the scale. The MCH scale and five newly devised scales were studied in two different aircraft simulator experiments in which pilot loading was treated as an independent variable. Results indicate that while one of the new scales may be more sensitive in a given experiment, task dependency is a problem. The MCH scale exhibits consistent sensitivity and remains the scale recommended for general use. The results of the rating scale experiments are presented and the questionnaire results which were directed at obtaining a better understanding of the reasons for the relative sensitivity of the MCH scale and its variations are described.
Radioisotopic composition of yellowcake: an estimation of stack release rates
International Nuclear Information System (INIS)
Momeni, M.H.; Kisieleski, W.E.; Rayno, D.R.; Sabau, C.S.
1979-12-01
Uranium concentrate (yellowcake) composites from four mills (Anaconda, Kerr-McGee, Highland, and Uravan) were analyzed for U-238, U-235, U-234, Th-230, Ra-226, and Pb-210. The ratio of specific activities of U-238 to U-234 in the composites suggested that secular radioactive equilibrium exists in the ore. The average activity ratios in the yellowcake were determined to be 2.7 x 10^-3 (Th-230/U-238), 5 x 10^-4 (Ra-226/U-238), and 2 x 10^-4 (Pb-210/U-238). Based on earlier EPA measurements of the release rates from the stacks, the amount of yellowcake released was determined to be 0.1% of the amount processed.
Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling
Ballal, Tarig
2014-09-01
In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
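The co-prime condition in the abstract can be checked in a few lines. This sketch is our own illustration (not the authors' code): with P pulses and an inter-pulse interval of N fast-rate sampling periods, the slow ADC grid of the p-th pulse lands on phase (p*N) mod P of the desired fast grid, so all P phases are covered exactly when N and P are co-prime:

```python
from math import gcd

def covered_phases(P, N):
    # Phase offset (in fast-rate sampling periods, modulo P) of the
    # slow ADC's sample grid for each of the P transmitted pulses.
    return {(p * N) % P for p in range(P)}

P = 5
print(covered_phases(P, 7))   # gcd(7, 5) = 1: all P phases are hit
print(covered_phases(P, 10))  # gcd(10, 5) = 5: grids collide, fidelity is lost
```

Clock drift shifts the pulse locations and can break this alignment, which is what motivates the BDU-based estimator described in the abstract.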
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys conducted over time under a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate quarterly unemployment rates at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was better able than the Rao-Yu model to capture heterogeneity across areas, although this advantage diminished over time.
Plikus, Iryna
2017-01-01
The subject of this research is the current practice of determining the fair value of assets and liabilities at present (discounted) cost. One of the most problematic issues is the determination of the discount rate, which falls within the scope of professional accountant judgment. The methods of formalization, hypothetical assumption, the systems approach and scientific abstraction are used in substantiating the formation of an accounting policy with respect to the choice of the discount rate...
Common cause failure rate estimates for diesel generators in nuclear power plants
International Nuclear Information System (INIS)
Steverson, J.A.; Atwood, C.L.
1982-01-01
Common cause fault rates for diesel generators in nuclear power plants are estimated, using Licensee Event Reports for the years 1976 through 1978. The binomial failure rate method, used for obtaining the estimates, is briefly explained. Issues discussed include correct classification of common cause events, grouping of the events into homogeneous data subsets, and dealing with plant-to-plant variation
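A minimal moment-style sketch of the binomial failure rate model the abstract mentions, under the usual parameterization (common-cause shocks arrive at rate mu, and each of m diesels fails independently with probability p per shock). This is illustrative only; the report's actual estimator deals with event classification, data subsets and unobserved low-multiplicity shocks, which are ignored here:

```python
def bfr_estimates(multiplicities, m, exposure_hours):
    """Naive binomial-failure-rate estimates from observed common cause shocks.

    multiplicities: number of diesels failing in each observed shock
    m: diesels exposed per shock; exposure_hours: total observation time
    """
    shocks = len(multiplicities)
    mu_hat = shocks / exposure_hours            # shock occurrence rate per hour
    p_hat = sum(multiplicities) / (m * shocks)  # per-diesel failure prob per shock
    return mu_hat, p_hat

# Hypothetical data: three shocks among m = 3 diesels over 10,000 hours.
mu_hat, p_hat = bfr_estimates([2, 3, 2], m=3, exposure_hours=10_000)
print(mu_hat, p_hat)
```

The rate of a common-cause event of any given multiplicity then follows from the binomial distribution with parameters m and p_hat.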
Bijlsma, S.; Boelens, H. F. M.; Hoefsloot, H. C. J.; Smilde, A. K.
2000-01-01
A traditional curve fitting (TCF) algorithm is compared with a classical curve resolution (CCR) approach for estimating reaction rate constants from spectral data obtained over time during a chemical reaction. In the TCF algorithm, reaction rate constants are estimated from the absorbance versus time data
Genetic analysis of rare disorders: Bayesian estimation of twin concordance rates
van den Berg, Stéphanie Martine; Hjelmborg, J.
2012-01-01
Twin concordance rates provide insight into the possibility of a genetic background for a disease. These concordance rates are usually estimated within a frequentistic framework. Here we take a Bayesian approach. For rare diseases, estimation methods based on asymptotic theory cannot be applied due
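The Bayesian route the abstract takes can be sketched with a simple Beta-binomial posterior for a pairwise concordance rate; this is a deliberate simplification of the authors' model, and the counts are hypothetical:

```python
from scipy import stats

# Hypothetical twin pairs for a rare disorder, among pairs with
# at least one affected twin:
concordant, discordant = 7, 43

# Beta(1, 1) prior on the pairwise concordance rate gives a Beta posterior.
posterior = stats.beta(1 + concordant, 1 + discordant)
lo, hi = posterior.ppf([0.025, 0.975])
print(posterior.mean(), lo, hi)  # exact small-sample interval, no asymptotics
```

The point of the Bayesian approach for rare disorders is visible here: the credible interval is exact for any sample size, whereas frequentist intervals based on asymptotic theory can fail with so few concordant pairs.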
The effect of gas and oil well blowout emissions on livestock in Alberta
International Nuclear Information System (INIS)
Beck, B.E.
1992-01-01
Poisoning caused by emissions from sour gas well or oil well blowouts is not acute because the gases are diluted by the atmosphere before they reach livestock. Exposure may last a month or more and may produce a syndrome indistinguishable from common disorders of flu, malaise, mood change, and in the case of animals, lack of production or decreased production. Little information is available on the composition of releases from well blowouts, which may change due to concurrent reactions with oxygen and photodecomposition. Effects on livestock observed to result from sour gas plant emissions (mostly sulfur dioxide) include runny eyes in cattle, loss of production, diarrhea and abortion. Blowout emissions may contain oxidant gases as well as hydrogen sulfides. These products irritate mucous membranes, and can lead to pink eye. Respiratory problems may include upper respiratory tract infections, and may produce susceptibility to secondary pneumonia. Abortion, infertility and congenital effects are areas of concern. It is considered unlikely that hydrogen sulfide can cause such effects; however, carbon disulfide and carbonyl sulfide, both present in sour gas blowouts, are known to have effects on the fetus. Effects on production and performance are unknown, and it is postulated that amounts of sulfur deposition are insufficient to cause nutrient deficiencies. Psychological reactions are suggested to explain some of the adverse effects of exposure to sour gas. 1 ref
Blow-out of nonpremixed turbulent jet flames at sub-atmospheric pressures
Wang, Qiang; Hu, Longhua; Chung, Suk-Ho
2016-01-01
Blow-out limits of nonpremixed turbulent jet flames in quiescent air at sub-atmospheric pressures (50–100 kPa) were studied experimentally using propane fuel with nozzle diameters ranging 0.8–4 mm. Results showed that the fuel jet velocity at blow-out limit increased with increasing ambient pressure and nozzle diameter. A Damköhler (Da) number based model was adopted, defined as the ratio of characteristic mixing time and characteristic reaction time, to include the effect of pressure considering the variations in laminar burning velocity and thermal diffusivity with pressure. The critical lift-off height at blow-out, representing a characteristic length scale for mixing, had a linear relationship with the theoretically predicted stoichiometric location along the jet axis, which had a weak dependence on ambient pressure. The characteristic mixing time (critical lift-off height divided by jet velocity) adjusted to the characteristic reaction time such that the critical Damköhler at blow-out conditions maintained a constant value when varying the ambient pressure.
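The Damköhler number defined in the abstract can be sketched as below. The specific time-scale forms (lift-off height over jet velocity for mixing; thermal diffusivity over the square of laminar burning velocity for reaction) follow common combustion usage, and the numerical values are illustrative, not taken from the article:

```python
def damkohler(lift_off_height_m, jet_velocity_ms, thermal_diffusivity_m2s,
              laminar_burning_velocity_ms):
    # Da = characteristic mixing time / characteristic reaction time.
    t_mix = lift_off_height_m / jet_velocity_ms
    t_chem = thermal_diffusivity_m2s / laminar_burning_velocity_ms ** 2
    return t_mix / t_chem

# Illustrative propane-like values (hypothetical, order-of-magnitude only).
da = damkohler(0.5, 60.0, 2.2e-5, 0.4)
print(da)
```

Lowering the ambient pressure changes both the laminar burning velocity and the thermal diffusivity, which is how the pressure effect enters the critical Da in the study.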
Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W
2016-11-15
In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. sduchene@unimelb.edu.au or garzonsebastian@hotmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Estimation of rates-across-sites distributions in phylogenetic substitution models.
Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J
2003-10-01
Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
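The equal-probability discrete gamma the article examines can be sketched as below (the mean-of-each-interval construction is an assumption on our part, matching the common implementation); it also illustrates the article's point that a small number of categories caps, and hence underestimates, the largest rates:

```python
import numpy as np
from scipy import stats

def discrete_gamma_rates(alpha, k):
    # k equal-probability categories; each category's rate is the mean of
    # the gamma(alpha, scale=1/alpha) density over its quantile interval,
    # so the category rates average to 1.
    edges = stats.gamma(alpha, scale=1 / alpha).ppf(np.linspace(0, 1, k + 1))
    # E[X * 1{lo < X <= hi}] for a gamma equals the CDF increment of the
    # gamma with shape alpha + 1 (same scale), since the mean rate is 1.
    cdf_shifted = stats.gamma(alpha + 1, scale=1 / alpha).cdf(edges)
    return k * np.diff(cdf_shifted)

r4 = discrete_gamma_rates(0.5, 4)
r64 = discrete_gamma_rates(0.5, 64)
print(r4.max(), r64.max())  # few categories truncate the fastest rates
```

With many equally spaced or equally probable categories the discrete estimate becomes flexible enough to approximate almost any rates-across-sites distribution, which is the basis of the tests the article proposes.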
Microcephaly Case Fatality Rate Associated with Zika Virus Infection in Brazil: Current Estimates.
Cunha, Antonio José Ledo Alves da; de Magalhães-Barbosa, Maria Clara; Lima-Setta, Fernanda; Medronho, Roberto de Andrade; Prata-Barbosa, Arnaldo
2017-05-01
Considering the currently confirmed cases of microcephaly and related deaths associated with Zika virus in Brazil, the estimated case fatality rate is 8.3% (95% confidence interval: 7.2-9.6). However, a third of the reported cases remain under investigation. If the confirmation rates of cases and deaths are the same in the future, the estimated case fatality rate will be as high as 10.5% (95% confidence interval: 9.5-11.7).
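The reported point estimate and interval can be reproduced in form with a standard Wilson score interval for a binomial proportion. The death and case counts below are hypothetical, chosen only to be roughly consistent with the reported 8.3% rate; the abstract does not state the exact counts:

```python
from math import sqrt

def rate_with_wilson_ci(events, n, z=1.96):
    # Wilson score interval for a binomial proportion (95% when z = 1.96).
    p = events / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, centre - half, centre + half

# Hypothetical counts (not from the Brazilian surveillance data).
p, lo, hi = rate_with_wilson_ci(170, 2048)
print(round(100 * p, 1), round(100 * lo, 1), round(100 * hi, 1))
```

The abstract's scenario analysis works the same way: holding the confirmation rates of pending cases and deaths fixed and recomputing the proportion yields the projected 10.5% figure.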
Directory of Open Access Journals (Sweden)
Romesh Silva
Full Text Available Given the lack of complete vital registration data in most developing countries, for many countries it is not possible to accurately estimate under-five mortality rates from vital registration systems. Heavy reliance is often placed on direct and indirect methods for analyzing data collected from birth histories to estimate under-five mortality rates. Yet few systematic comparisons of these methods have been undertaken. This paper investigates whether analysts should use both direct and indirect estimates from full birth histories, and under what circumstances indirect estimates derived from summary birth histories should be used. Using Demographic and Health Surveys data from West Africa, East Africa, Latin America, and South/Southeast Asia, I quantify the differences between direct and indirect estimates of under-five mortality rates, analyze data quality issues, note the relative effects of these issues, and test whether these issues explain the observed differences. I find that indirect estimates are generally consistent with direct estimates, after adjustment for fertility change and birth transference, but don't add substantial additional insight beyond direct estimates. However, choice of direct or indirect method was found to be important in terms of both the adjustment for data errors and the assumptions made about fertility. Although adjusted indirect estimates are generally consistent with adjusted direct estimates, some notable inconsistencies were observed for countries that had experienced either a political or economic crisis or stalled health transition in their recent past. This result suggests that when a population has experienced a smooth mortality decline or only short periods of excess mortality, both adjusted methods perform equally well. However, the observed inconsistencies identified suggest that the indirect method is particularly prone to bias resulting from violations of its strong assumptions about recent mortality
Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach
Ballal, Tarig
2014-01-01
This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times lower. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is violated when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that a high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases, while in the high SNR regime it also outperforms the LMMSE estimator. © 2014 IEEE.
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (i.e., a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which observes only a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms: one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we determine the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator is consistent as well.
Directory of Open Access Journals (Sweden)
J. R. Dolan,
2005-01-01
Full Text Available According to a recent global analysis, microzooplankton grazing is surprisingly invariant, ranging only between 59 and 74% of phytoplankton primary production across systems differing in seasonality, trophic status, latitude, or salinity. Thus an important biological process in the world ocean, the daily consumption of recently fixed carbon, appears nearly constant. We believe this conclusion is an artefact because dilution experiments are (1) prone to providing over-estimates of grazing rates and (2) unlikely to furnish evidence of low grazing rates. In our view the overall average rate of microzooplankton grazing probably does not exceed 50% of primary production and may be even lower in oligotrophic systems.
Estimating Infiltration Rates for a Loessal Silt Loam Using Soil Properties
M. Dean Knighton
1978-01-01
Soil properties were related to infiltration rates as measured by single-ring, steady-head infiltrometers. The properties showing strong simple correlations were identified. Regression models were developed to estimate infiltration rate from several soil properties. The best model gave fair agreement with measured rates at another location.
Estimation of the rate of energy production of rat mast cells in vitro
DEFF Research Database (Denmark)
Johansen, Torben
1983-01-01
Rat mast cells were treated with glycolytic and respiratory inhibitors. The rate of adenosine triphosphate depletion of cells incubated with both types of inhibitors and the rate of lactate produced in presence of antimycin A and glucose were used to estimate the rate of oxidative and glycolytic...
Clinical use of estimated glomerular filtration rate for evaluation of kidney function
DEFF Research Database (Denmark)
Broberg, Bo; Lindhardt, Morten; Rossing, Peter
2013-01-01
Estimating glomerular filtration rate by the Modification of Diet in Renal Disease or Chronic Kidney Disease Epidemiology Collaboration formulas gives a reasonable estimate of kidney function, e.g. for classification of chronic kidney disease. Additionally, the estimated glomerular filtration rate is a significant predictor for cardiovascular disease and may, along with classical cardiovascular risk factors, add useful information to risk estimation. Several cautions need to be taken into account, e.g. rapid changes in kidney function, dialysis, high age, obesity, underweight and diverging and unanticipated...
International Nuclear Information System (INIS)
Chen, W.-L.; Yang, Y.-C.; Chang, W.-J.; Lee, H.-L.
2008-01-01
In this study, a conjugate gradient method based inverse algorithm is applied to estimate the unknown space and time dependent heat transfer rate on the external wall of a pipe system using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown heat transfer rate; hence, the procedure is classified as function estimation in the inverse calculation. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the space and time dependent heat transfer rate can be obtained for the test case considered in this study.
Estimating time-based instantaneous total mortality rate based on the age-structured abundance index
Wang, Yingbin; Jiao, Yan
2015-05-01
The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasting, and fisheries management. A catch-curve-based method for estimating time-based Z and its trend of change from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not require the assumption of a constant Z over all time; instead, Z is assumed constant within each window of n consecutive years, and the Z values for different windows of n consecutive years are estimated from the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations of both Z and recruitment can affect the estimates of Z and the trend of Z. The most appropriate value of n can differ depending on the effects of different factors; therefore, the appropriate value of n for a given fishery should be determined through a simulation analysis as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if either is in error, although the estimated change rates of Z remain close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
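The common thread of all catch-curve approaches, whether windowed over n consecutive years or not, is that Z appears as the negative slope of ln(CPUE-at-age) against age on the descending limb of the catch curve. A minimal illustrative sketch of that regression step (not the authors' time-based windowed estimator) could look like this:

```python
import numpy as np

def catch_curve_z(ages, cpue):
    """Estimate instantaneous total mortality Z as the negative slope
    of ln(CPUE-at-age) against age (classical catch-curve regression)."""
    slope, _intercept = np.polyfit(np.asarray(ages, dtype=float),
                                   np.log(np.asarray(cpue, dtype=float)), 1)
    return -slope

# Synthetic cohort with true Z = 0.5 per year
ages = np.arange(3, 11)
cpue = 1000.0 * np.exp(-0.5 * ages)
print(catch_curve_z(ages, cpue))  # ~0.5 per year
```

In practice only the fully selected ages are included in the fit; the paper's method additionally repeats such fits over sliding windows of n years to track changes in Z over time.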
Estimation of uncertainty in tracer gas measurement of air change rates.
Iizuka, Atsushi; Okuizumi, Yumiko; Yanagisawa, Yukio
2010-12-01
Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, the measurement is complicated by the fact that many buildings are not a single fully mixed zone, so many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with a quantified uncertainty, so that misinterpretation of the air change rate can be avoided. The proposed estimation method will be useful in practical ventilation measurements.
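For a single fully mixed zone, the textbook tracer-decay relation that the multizone analysis above generalizes is straightforward. The sketch below is an illustrative reconstruction of that single-zone case only, not the paper's passive doser/sampler procedure:

```python
import math

def air_change_rate(c_start, c_end, hours):
    """Air change rate N (1/h) of a single fully mixed zone from tracer
    decay: C(t) = C0 * exp(-N * t), so N = ln(C0 / C(t)) / t."""
    return math.log(c_start / c_end) / hours

# Tracer concentration falls from 50 ppm to 10 ppm over 4 h
print(air_change_rate(50.0, 10.0, 4.0))  # ~0.40 air changes per hour
```

The uncertainty analysis in the paper concerns exactly how far this idealized single-zone picture can be trusted when the building actually behaves as several coupled zones.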
Directory of Open Access Journals (Sweden)
Woon Il Baek
2014-07-01
Full Text Available Background A blow-out fracture is one of the most common facial injuries in midface trauma. Orbital wall reconstruction is extremely important because such fractures can cause various functional and aesthetic sequelae. Although many materials are available, there are no uniformly accepted guidelines regarding material selection for orbital wall reconstruction. Methods From January 2007 to August 2012, a total of 78 patients with blow-out fractures were analyzed. Of these, 36 patients received absorbable mesh plates, and 42 patients received titanium dynamic mesh plates. Both groups were retrospectively evaluated for therapeutic efficacy and safety according to the incidence of three different complications: enophthalmos, extraocular movement impairment, and diplopia. Results For all groups (inferior wall fracture group, medial wall fracture group, and combined inferomedial wall fracture group), there were improvements in the incidence of each complication regardless of implant type. Moreover, a significant improvement in enophthalmos occurred for both types of implants in group 1 (inferior wall fracture group). However, we found no statistically significant differences in efficacy or complication rate in any group between the two implant types. Conclusions Both types of implants showed good results without significant differences in long-term follow-up, even though we had expected a higher rate of recurrent enophthalmos in patients with absorbable plates. In conclusion, both types seem to be equally effective and safe for orbital wall reconstruction. In particular, both implant types significantly reduce the incidence of enophthalmos in cases of inferior orbital wall fractures.
Wang, Qiang; Hu, Longhua; Yoon, Sung Hwan; Lu, Shouxiang; Delichatsios, Michael; Chung, Suk-Ho
2015-01-01
The blow-out limits of nonpremixed turbulent jet flames in cross flows were studied, especially concerning the effect of ambient pressure, by conducting experiments at atmospheric and sub-atmospheric pressures. The combined effects of air flow
Laleg-Kirati, Taous-Meriem
2017-08-31
Method and system for providing estimates of the glucose rate of appearance from the intestine (GRA) using continuous glucose sensor (CGS) measurements taken from the subcutaneous tissue of a diabetes patient and the amount of insulin administered to the patient.
Estimates of Radiation Dose Rates Near Large Diameter Sludge Containers in T Plant
Himes, D A
2002-01-01
Dose rates in T Plant canyon during the handling and storage of large diameter storage containers of K Basin sludge were estimated. A number of different geometries were considered from which most operational situations of interest can be constructed.
Estimation of evaporation rates over the Arabian Sea from Satellite data
Digital Repository Service at National Institute of Oceanography (India)
Rao, M.V.; RameshBabu, V.; Rao, L.V.G.; Sastry, J.S.
Utilizing both the SAMIR brightness temperatures of Bhaskara 2 and GOSSTCOMP charts of NOAA satellite series, the evaporation rates over the Arabian Sea for June 1982 are estimated through the bulk aerodynamic method. The spatial distribution...
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
Laleg-Kirati, Taous-Meriem; Al-Matouq, Ali Ahmed
2017-01-01
Method and system for providing estimates of the glucose rate of appearance from the intestine (GRA) using continuous glucose sensor (CGS) measurements taken from the subcutaneous tissue of a diabetes patient and the amount of insulin administered
On the estimate of the rate constant in the homogeneous dissolution model
Czech Academy of Sciences Publication Activity Database
Čupera, Jakub; Lánský, Petr
2013-01-01
Vol. 39, No. 10 (2013), p. 1555-1561. ISSN 0363-9045. Institutional support: RVO:67985823. Keywords: dissolution; estimation; rate constant. Subject RIV: FR - Pharmacology; Medicinal Chemistry. Impact factor: 2.006, year: 2013
International Nuclear Information System (INIS)
Cao Jinde
2004-01-01
In this Letter, the domain of attraction of memory patterns and exponential convergence rate of the network trajectories to memory patterns for Hopfield continuous associative memory are estimated by means of matrix measure and comparison principle. A new estimation is given for the domain of attraction of memory patterns and exponential convergence rate. These results can be used for the evaluation of fault-tolerance capability and the synthesis procedures for Hopfield continuous feedback associative memory neural networks
On researching erosion-corrosion wear in pipelines: the rate and residual lifetime estimation
International Nuclear Information System (INIS)
Baranenko, V.I.; Yanchenko, Yu.A.; Gulina, O.M.; Dokukin, D.A.
2010-01-01
To support a normative document on the calculation of erosion-corrosion wear (ECW) rates and residual lifetime of pipelines, ECW regularities were investigated for pearlitic-steel NPP pipelines. Estimates of the efficiency of statistical procedures for treating inspection data are presented. The influence of the piping inspection scheme on the estimated ECW rate and residual lifetime is demonstrated. The simplified scheme is valid only in the case of complete information; its use under data uncertainty leads to substantial overstatement of the residual lifetime [ru]
Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey
Chae, Hyeseon; Min, Kyungdoo; Youn, kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk
2014-01-01
Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers’ Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. ...
Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging
DEFF Research Database (Denmark)
Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer
2016-01-01
This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using...
Zeyl, C.; Visser, de J.A.G.M.
2001-01-01
The per-genome, per-generation rate of spontaneous mutation affecting fitness (U) and the mean fitness cost per mutation (s) are important parameters in evolutionary genetics, but have been estimated for few species. We estimated U and sh (the heterozygous effect of mutations) for two diploid yeast
An estimator for the relative entropy rate of path measures for stochastic differential equations
Energy Technology Data Exchange (ETDEWEB)
Opper, Manfred, E-mail: manfred.opper@tu-berlin.de
2017-02-01
We address the problem of estimating the relative entropy rate (RER) for two stochastic processes described by stochastic differential equations. For the case where the drift of one process is known analytically, but one has only observations from the second process, we use a variational bound on the RER to construct an estimator.
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
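For intuition, the method-of-moments step described above can be sketched for the plain, uncorrelated gamma-Poisson case. This is an illustrative simplification only: the function names and the shrinkage example are assumptions, and it does not reproduce the authors' Bayes linear Bayes model, homogenization factors, or the multivariate gamma prior:

```python
import numpy as np

def gamma_mom_prior(counts, exposures):
    """Fit a gamma(alpha, beta) prior to crude Poisson rates k_i / t_i
    by matching the sample mean and variance (method of moments)."""
    rates = np.asarray(counts, dtype=float) / np.asarray(exposures, dtype=float)
    mean, var = rates.mean(), rates.var(ddof=1)
    beta = mean / var    # gamma rate parameter
    alpha = mean * beta  # gamma shape parameter
    return alpha, beta

def posterior_mean_rate(k, t, alpha, beta):
    """Conjugate gamma-Poisson update: posterior mean of the event rate."""
    return (alpha + k) / (beta + t)

counts = [3, 5, 0, 2, 7, 1]
exposures = [10.0, 12.0, 8.0, 9.0, 15.0, 11.0]
a, b = gamma_mom_prior(counts, exposures)
# The crude rate 4/10 = 0.40 is shrunk toward the pooled prior mean
print(posterior_mean_rate(4, 10.0, a, b))
```

The point of the empirical step is visible here: the data themselves supply the prior, so no subjective elicitation is needed, at the cost of ignoring the correlation structure that the full model captures.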
Shizgal, Bernie D.; Chikhaoui, Aziz
2006-06-01
The present paper considers a detailed analysis of the nonequilibrium effects for a model reactive system with the Chapman-Enskog (CE) solution of the Boltzmann equation as well as an explicit time dependent solution. The elastic cross sections employed are a hard sphere cross section and the Maxwell molecule cross section. Reactive cross sections which model reactions with and without activation energy are used. A detailed comparison is carried out with these solutions of the Boltzmann equation and the approximation introduced by Cukrowski and coworkers [J. Chem. Phys. 97 (1992) 9086; Chem. Phys. 89 (1992) 159; Physica A 188 (1992) 344; Chem. Phys. Lett. A 297 (1998) 402; Physica A 275 (2000) 134; Chem. Phys. Lett. 341 (2001) 585; Acta Phys. Polonica B 334 (2003) 3607.] based on the temperature of the reactive particles. We show that the Cukrowski approximation has limited applicability for the large class of reactive systems studied in this paper. The explicit time dependent solutions of the Boltzmann equation demonstrate that the CE approach is valid only for very slow reactions for which the corrections to the equilibrium rate coefficient are very small.
Park, Tae-Ryong; Brooks, John M; Chrischilles, Elizabeth A; Bergus, George
2008-01-01
Contrast methods to assess the health effects of a treatment rate change when treatment benefits are heterogeneous across patients. Antibiotic prescribing for children with otitis media (OM) in Iowa Medicaid is the empirical example. Instrumental variable (IV) and linear probability model (LPM) are used to estimate the effect of antibiotic treatments on cure probabilities for children with OM in Iowa Medicaid. Local area physician supply per capita is the instrument in the IV models. Estimates are contrasted in terms of their ability to make inferences for patients whose treatment choices may be affected by a change in population treatment rates. The instrument was positively related to the probability of being prescribed an antibiotic. LPM estimates showed a positive effect of antibiotics on OM patient cure probability while IV estimates showed no relationship between antibiotics and patient cure probability. Linear probability model estimation yields the average effects of the treatment on patients that were treated. IV estimation yields the average effects for patients whose treatment choices were affected by the instrument. As antibiotic treatment effects are heterogeneous across OM patients, our estimates from these approaches are aligned with clinical evidence and theory. The average estimate for treated patients (higher severity) from the LPM model is greater than estimates for patients whose treatment choices are affected by the instrument (lower severity) from the IV models. Based on our IV estimates it appears that lowering antibiotic use in OM patients in Iowa Medicaid did not result in lost cures.
International Nuclear Information System (INIS)
Overcamp, T.J.; Fjeld, R.A.
1987-01-01
A simple approximation for estimating the centerline gamma absorbed dose rates due to a continuous Gaussian plume was developed. To simplify the integration of the dose integral, this approach makes use of the Gaussian cloud concentration distribution. The solution is expressed in terms of the I1 and I2 integrals which were developed for estimating long-term dose due to a sector-averaged Gaussian plume. Estimates of tissue absorbed dose rates for the new approach and for the uniform cloud model were compared to numerical integration of the dose integral over a Gaussian plume distribution
Using field feedback to estimate failure rates of safety-related systems
International Nuclear Information System (INIS)
Brissaud, Florent
2017-01-01
The IEC 61508 and IEC 61511 functional safety standards encourage the use of field feedback to estimate the failure rates of safety-related systems, which is preferred over generic data. In some cases (if "Route 2_H" is adopted for the "hardware safety integrity constraints"), this is even a requirement. This paper presents how to estimate failure rates from field feedback with confidence intervals, depending on whether the failures are detected on-line (called "detected failures", e.g. by automatic diagnostic tests) or only revealed by proof tests (called "undetected failures"). Examples show that, for the same duration and number of failures observed, the estimated failure rates are generally higher for "undetected failures" because, in this case, the observed duration includes intervals of time in which it is unknown that the elements have failed. This points out the need for a proper approach to failure rate estimation, especially for failures that are not detected on-line. The paper then proposes an approach to use the estimated failure rates, with their uncertainties, for PFDavg and PFH assessment with upper confidence bounds, in accordance with IEC 61508 and IEC 61511 requirements. Examples finally show that the highest SIL that can be claimed for a safety function can be limited by the 90% upper confidence bound of PFDavg or PFH. The requirements of IEC 61508 and IEC 61511 relating to data collection and analysis should therefore be properly considered in the study of all safety-related systems. - Highlights: • This paper deals with requirements of the IEC 61508 and IEC 61511 for using field feedback to estimate failure rates of safety-related systems. • This paper presents how to estimate the failure rates from field feedback with confidence intervals for failures that are detected on-line. • This paper presents how to estimate the failure rates from field feedback with confidence intervals for failures that are only revealed by
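The classical chi-square construction for a constant failure rate with a one-sided upper confidence bound is consistent with, though not necessarily identical to, the procedure the abstract describes for on-line detected failures. A stdlib-only sketch (the closed-form chi-square CDF for even degrees of freedom is an Erlang sum, so no external statistics library is needed):

```python
import math

def chi2_ppf_even(p, df):
    """Quantile of the chi-square distribution for even df, via
    bisection on the closed-form (Erlang) CDF."""
    m = df // 2
    def cdf(x):
        s = sum((x / 2.0) ** i / math.factorial(i) for i in range(m))
        return 1.0 - math.exp(-x / 2.0) * s
    lo, hi = 0.0, 1000.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def failure_rate_bounds(k, T, conf=0.90):
    """Point estimate and one-sided upper confidence bound for a
    constant failure rate from k failures over cumulative time T
    (time-terminated data, chi-square method)."""
    return k / T, chi2_ppf_even(conf, 2 * k + 2) / (2.0 * T)

# e.g. 2 failures observed over 5e6 component-hours
lam, lam_u = failure_rate_bounds(2, 5e6)
print(lam, lam_u)  # 4e-07 and roughly 1.06e-06
```

The gap between the point estimate and the 90% upper bound is what can cap the claimable SIL in the examples the paper discusses: the bound, not the point estimate, enters the conservative PFDavg/PFH assessment.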
Dequanter, D; Shahla, M; Paulus, P; Aubert, C; Lothaire, P
2013-12-01
Carotid blowout syndrome is a rare but devastating complication in patients with head and neck malignancy, and is associated with high morbidity and mortality. Bleeding from the carotid artery or its branches is a well-recognized complication following treatment or recurrence of head and neck cancer. It is an emergency situation, and the classical approach to save the patient's life is to ligate the carotid artery, but surgical treatment is often technically difficult. Endovascular therapies were recently reported as good alternatives to surgical ligation. We present a retrospective review of three cases of acute or threatened carotid hemorrhage managed by endovascular therapies. Two patients presented with acute carotid blowout, and one patient with a sentinel bleed. Two patients had previously been treated with surgery and chemoradiation; one patient was treated by chemoradiation alone. Two had developed pharyngocutaneous fistulas, and one had an open, necrosis-filled wound that surrounded the carotid artery. In two patients, stent placement resolved the acute hemorrhage; in one patient, superselective embolization was performed. Mean follow-up duration was 10.2 months. No patient had residual sequelae of stenting or embolization. Management of carotid blowout syndrome is critical and difficult, and a multidisciplinary approach is very important. Correct and suitable management can be life-saving. An endovascular technique is a good and effective alternative with much lower morbidity rates than surgical repair or ligation. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Directory of Open Access Journals (Sweden)
Debangshu Dey
2015-03-01
Full Text Available Lean or ultralean combustion is one of the popular strategies for achieving very low emission levels; however, it is extremely susceptible to lean blow-out (LBO). The present work explores a cross-wavelet transform (XWT) aided, rule-based scheme for early prediction of lean blowout. XWT can be considered an extension of wavelet analysis that gives the correlation between two waveforms in time-frequency space. In the present scheme a swirl-stabilized dump combustor is used as a laboratory-scale model of a generic gas turbine combustor, with LPG as fuel. Time series of the CH chemiluminescence signal are recorded for different flame conditions by varying equivalence ratio, flow rate, and level of air-fuel premixing. Features extracted from the cross-wavelet spectrum of the recorded waveforms and a reference wave are observed to classify the flame condition into three major classes: near LBO, moderate, and healthy. Moreover, a rough-set-based technique is applied to the extracted features to generate a rule base that can be fed to a real-time controller or expert system to take the control action necessary to prevent LBO. Results show that the proposed methodology performs with an acceptable degree of accuracy.
International Nuclear Information System (INIS)
Carlsen, Ove
2004-01-01
The purpose of this study was to design an alternative and robust method for estimation of glomerular filtration rate (GFR) in [99mTc]-diethylenetriaminepentaacetic acid ([99mTc]-DTPA) renography with a reliability not significantly lower than that of the conventional Gates' method. Methods: The method is based on renographies lasting 40 min in which regions of interest (ROIs) are manually created over selected parts of certain blood pools (e.g. heart, lungs, spleen, and liver). For each ROI the corresponding time-activity curve (TAC) was generated, decay corrected and exposed to a monoexponential fit in the time interval 10 to 40 min postinjection. The rate constant, in min⁻¹, of the monoexponential fit was denoted BETA. Following an iterative procedure comprising usually 5-10 manually created ROIs, the monoexponential fit with the maximum rate constant (BETAmax) was used for estimation of GFR. Results: In a patient material of 54 adult subjects in whom GFR was determined with multiple- or one-sample techniques with [51Cr]-ethylenediaminetetraacetic acid ([51Cr]-EDTA), the regression curve of standard GFR (GFRstd, i.e. GFR adjusted to 1.73 m² body surface area) showed a close, non-linear relationship with BETAmax, with a correlation coefficient of 95%. The standard errors of estimate (SEE) were 6.6, 10.6 and 16.8 for GFRstd equal to 30, 60, and 120 ml/(min·1.73 m²), respectively. The corresponding SEE values for almost the same patient material using Gates' method were 8.4, 11.9, and 16.8 ml/(min·1.73 m²). Conclusions: The alternative rate constant method yields estimates of GFRstd with SEE values equal to or slightly smaller than those of Gates' method. The two methods provide statistically uncorrelated estimates of GFRstd. Therefore, pooled estimates of GFRstd can be calculated with SEE values approximately 1.41 times smaller than those mentioned above. The reliabilities of the pooled estimate of GFRstd separately and of the multiple samples method
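The core numerical step of the rate constant method, fitting a monoexponential to each decay-corrected TAC over 10-40 min and keeping the largest rate constant, can be sketched as below. This is an illustrative reconstruction only; the empirical regression that converts BETAmax into GFRstd is specific to the authors' patient material and is not reproduced here:

```python
import numpy as np

def mono_exp_rate(times, counts):
    """Rate constant (1/min) of a monoexponential fit A*exp(-BETA*t),
    obtained by linear regression of ln(counts) on time."""
    slope, _intercept = np.polyfit(np.asarray(times, dtype=float),
                                   np.log(np.asarray(counts, dtype=float)), 1)
    return -slope

def beta_max(time_activity_curves, times):
    """Largest rate constant over a set of ROI time-activity curves."""
    return max(mono_exp_rate(times, tac) for tac in time_activity_curves)

# Synthetic decay-corrected TACs sampled 10-40 min post-injection
t = np.arange(10, 41, 2.0)
tacs = [1000.0 * np.exp(-b * t) for b in (0.008, 0.012, 0.010)]
print(beta_max(tacs, t))  # ~0.012 min^-1
```

Selecting the maximum over several candidate blood-pool ROIs is what makes the procedure robust: the ROI whose curve is least contaminated by slowly clearing compartments yields the steepest, most tracer-clearance-dominated slope.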
Application on forced traction test in surgeries for orbital blowout fracture
Directory of Open Access Journals (Sweden)
Bao-Hong Han
2014-05-01
Full Text Available AIM: To discuss the application of the forced traction test in surgeries for orbital blowout fracture. METHODS: The clinical data of 28 patients who underwent reconstructive surgery for orbital fracture were retrospectively analyzed. All patients were treated with the forced traction test before, during, and after operation. Eyeball movement and diplopia were examined and recorded preoperatively and at 3 and 6mo after operation. RESULTS: Diplopia was improved in all 28 cases with the forced traction test. There was a significant difference between preoperative and postoperative diplopia at 3 and 6mo after operation. CONCLUSION: The forced traction test not only has clinical significance in the diagnosis of orbital blowout fracture, it is also an effective method for improving diplopia before, during, and after operation.
Toni Antikainen; Anti Rohumaa; Christopher G. Hunt; Mari Levirinne; Mark Hughes
2015-01-01
In plywood production, human operators find it difficult to precisely monitor the spread rate of adhesive in real-time. In this study, macroscopic fluorescence was used to estimate spread rate (SR) of urea formaldehyde adhesive on birch (Betula pendula Roth) veneer. This method could be an option when developing automated real-time SR measurement for...
How to efficiently obtain accurate estimates of flower visitation rates by pollinators
Fijen, Thijs P.M.; Kleijn, David
2017-01-01
Regional declines in insect pollinators have raised concerns about crop pollination. Many pollinator studies use visitation rate (pollinators/time) as a proxy for the quality of crop pollination. Visitation rate estimates are based on observation durations that vary significantly between studies.
Biermans, M.C.J.; Verheij, R.A.; Bakker, D.H. de; Zielhuis, G.A.; Vries Robbé, P.F. de
2008-01-01
Objectives: In this study, we evaluated the internal validity of EPICON, an application for grouping ICPC-coded diagnoses from electronic medical records into episodes of care. These episodes are used to estimate morbidity rates in general practice. Methods: Morbidity rates based on EPICON were
Constrained least squares methods for estimating reaction rate constants from spectroscopic data
Bijlsma, S.; Boelens, H.F.M.; Hoefsloot, H.C.J.; Smilde, A.K.
2002-01-01
Model errors, experimental errors and instrumental noise influence the accuracy of reaction rate constant estimates obtained from spectral data recorded in time during a chemical reaction. In order to improve the accuracy, which can be divided into the precision and bias of reaction rate constant
Use of Pyranometers to Estimate PV Module Degradation Rates in the Field: Preprint
Energy Technology Data Exchange (ETDEWEB)
Vignola, Frank; Peterson, Josh; Kessler, Rich; Mavromatakis, Fotis; Dooraghi, Mike; Sengupta, Manajit
2016-08-01
This paper describes a methodology that uses relative measurements to estimate the degradation rates of PV modules in the field. The importance of calibration and cleaning is illustrated. The number of years of field measurements needed to measure degradation rates with data from the field is cut in half using relative comparisons.
Environmental isotope balance of Lake Kinneret as a tool in evaporation rate estimation
International Nuclear Information System (INIS)
Lewis, S.
1979-01-01
The balance of environmental isotopes in Lake Kinneret has been used to obtain an independent estimate of the mean monthly evaporation rate. Direct calculation was precluded by the inadequacy of the isotope data in uniquely representing the system behaviour throughout the annual cycle. The approach adopted uses an automatic algorithm to seek an objective best fit of the isotope balance model to measured oxygen-18 data by optimizing the evaporation rate as a parameter. To this end, evaporation is described as a periodic function with two parameters. The sensitivity of the evaporation rate estimates to parameter uncertainty and data errors is stressed. Error analysis puts confidence limits on the estimates obtained. Projected improvements in data collection and analysis show that a significant reduction in uncertainty can be realized. Relative to energy balance estimates, currently obtainable data result in about 30% uncertainty. The most optimistic scenario would yield about 15% relative uncertainty. (author)
Fill rate estimation in periodic review policies with lost sales using simple methods
Energy Technology Data Exchange (ETDEWEB)
Cardós, M.; Guijarro Tarradellas, E.; Babiloni Griñón, E.
2016-07-01
Purpose: The exact estimation of the fill rate in the lost sales case is complex and time consuming. However, simple and suitable methods are needed for its estimation so that inventory managers can use them. Design/methodology/approach: Instead of trying to compute the fill rate in one step, this paper focuses first on estimating the probabilities of different on-hand stock levels so that the fill rate can be computed afterwards. Findings: The proposed novel method outperforms the existing ones and remains relatively simple to compute. Originality/value: Existing methods for estimating stock levels are examined, new procedures are proposed and their performance is assessed.
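The two-step idea in this abstract (approximate the distribution of on-hand stock first, then derive the fill rate from it) can be illustrated with a small Monte Carlo sketch. This is not the authors' analytical method: the periodic-review order-up-to policy with lost sales, the Poisson demand, and all parameter values below are illustrative assumptions.

```python
import random

def poisson(rng, lam):
    """Sample a Poisson variate by CDF inversion (fine for small lam)."""
    u, k, p = rng.random(), 0, 2.718281828459045 ** (-lam)
    cum = p
    while u > cum:
        k += 1
        p *= lam / k
        cum += p
    return k

def simulate_fill_rate(base_stock, demand_mean, lead_time=1,
                       periods=20000, seed=1):
    """Periodic-review order-up-to policy with lost sales.

    Each period: receive the order placed lead_time periods ago,
    observe Poisson demand, serve what on-hand stock allows (the rest
    is lost), then order up to base_stock. The fill rate is the
    fraction of total demand served directly from stock.
    """
    rng = random.Random(seed)
    on_hand = base_stock
    pipeline = [0] * lead_time           # outstanding orders in transit
    served = total = 0
    for _ in range(periods):
        on_hand += pipeline.pop(0)       # delivery of the oldest order
        d = poisson(rng, demand_mean)
        s = min(d, on_hand)
        served += s
        total += d
        on_hand -= s                     # unmet demand is lost, not backordered
        pipeline.append(base_stock - on_hand - sum(pipeline))
    return served / total
```

With illustrative numbers a higher order-up-to level yields a higher fill rate; the analytical methods compared in the paper aim to predict this quantity without resorting to simulation.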
On the estimation of failure rates for living PSAs in the presence of model uncertainty
International Nuclear Information System (INIS)
Arsenis, S.P.
1994-01-01
The estimation of failure rates of heterogeneous Poisson components from data on times operated to failures is reviewed. Particular emphasis is given to the lack of knowledge on the form of the mixing distribution or population variability curve. A new nonparametric empirical Bayes estimator is proposed which generalizes the estimator of Robbins for different times of observations for the components. The behavior of the estimator is discussed by reference to two samples typically drawn from the CEDB, a component event database designed and operated by the Ispra JRC.
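For the special case of equal observation times, the estimator of Robbins that the paper generalizes has a compact form: the posterior mean failure rate of a component with x failures is approximated from the empirical frequencies N(x) of the observed failure counts. A minimal sketch of that classic case (the paper's extension to unequal observation times is not reproduced here):

```python
from collections import Counter

def robbins_estimates(failure_counts):
    """Classic Robbins nonparametric empirical Bayes estimator for
    Poisson counts observed over equal exposure times:

        E[lambda | x] ~= (x + 1) * N(x + 1) / N(x)

    where N(x) is the number of components showing exactly x failures.
    Returns a dict mapping each observed count x to its estimate."""
    freq = Counter(failure_counts)
    return {x: (x + 1) * freq.get(x + 1, 0) / freq[x] for x in freq}
```

Note the characteristic pooling across components: a component with zero failures still receives a positive rate estimate whenever some other component failed exactly once.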
Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V
2007-10-01
The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two applications which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
Zollanvari, Amin
2013-05-24
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
Zollanvari, Amin; Genton, Marc G.
2013-01-01
Estimates of particle- and thorium-cycling rates in the northwest Atlantic Ocean
International Nuclear Information System (INIS)
Murnane, R.J.; Sarmiento, J.L.; Cochran, J.K.
1994-01-01
The authors provide least squares estimates of particle-cycling rate constants and their errors at 13 depths in the Northwest Atlantic Ocean using a compilation of published results and conservation equations for thorium and particle cycling. The predicted rates of particle aggregation and disaggregation vary through the water column. The means and standard deviations, based on lognormal probability distributions, for the lowest and highest rates of aggregation (β₂) and disaggregation (β₋₂) in the water column are 8±27 y⁻¹ and 580±2000 y⁻¹, up to 10³±10⁴ y⁻¹. Median values for these rates are 2.1 y⁻¹ (β₂) and 149 y⁻¹ (β₋₂). Predicted rate constants for thorium adsorption (k₁ = 5.0±1.0×10⁴ m³ kg⁻¹ y⁻¹) and desorption (k₋₁ = 3.1±1.5 y⁻¹) are consistent with previous estimates. Least squares estimates of the sum of the time dependence and transport terms from the particle and thorium conservation equations are on the same order as other terms in the conservation equations. Forcing this sum to equal zero would change the predicted rates. Better estimates of the time dependence of thorium activities and particle concentrations and of the concentration and flux of particulate organic matter would help to constrain estimates of β₂ and β₋₂. 46 refs., 8 figs., 5 tabs.
Estimating the effect of a rare time-dependent treatment on the recurrent event rate.
Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E
2018-05-30
In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (e.g., adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on the rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.
Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar
Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area, both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of fractional area above a threshold. The second method is an extension of this idea where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by
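The inversion sketched in the abstract above (recovering the mean, and in fact the whole distribution, from the fractional area above a threshold once a one-parameter family and the mean-to-standard-deviation ratio are fixed) can be illustrated for a lognormal rain-rate distribution. The lognormal choice and the numbers used in testing are illustrative assumptions, not TRMM PR values.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def mean_rate_from_fraction(frac_above, threshold, cv):
    """Recover the mean rain rate from the fraction of area with rate
    above `threshold`, assuming a lognormal distribution with known
    coefficient of variation cv = std/mean.

    For a lognormal, cv alone fixes the log-scale spread sigma, so a
    single measured fraction determines the remaining (location)
    parameter and hence the full distribution."""
    sigma = sqrt(log(1.0 + cv * cv))
    z = NormalDist().inv_cdf(1.0 - frac_above)   # standardized log-threshold
    mu = log(threshold) - sigma * z              # log-scale location
    return exp(mu + 0.5 * sigma * sigma)         # lognormal mean
```

A round trip (compute the exceedance fraction for a known mean, then invert it) recovers the assumed mean exactly, which is the self-consistency the ATI approach relies on.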
An Innovative Oil Pollution Containment Method for Ship Wrecks Proposed for Offshore Well Blow-outs
ANDRITSOS Fivos; COJINS Hans
2011-01-01
In the aftermath of the PRESTIGE disaster, an innovative system for the prompt intervention on oil pollution sources (primarily ship wrecks) at great depths was conceived at the Joint Research Center of the European Commission. This system, with some re-engineering, could also serve for collecting oil and gas leaking after an offshore well blow-out and could constitute a reference method for prompt intervention on deep water oil pollution sources like ship wrecks and blown-out offshore wells....
Kinetic magnetic resonance imaging of orbital blowout fracture with restricted ocular movement
International Nuclear Information System (INIS)
Totsuka, Nobuyoshi; Koide, Ryouhei; Inatomi, Makoto; Fukado, Yoshinao; Hisamatsu, Katsuji.
1992-01-01
We analyzed the mechanism of gaze limitation in blowout fracture in 19 patients by means of kinetic magnetic resonance imaging (MRI). We could identify herniation of fat tissue and rectus muscles with connective tissue septa in 11 eyes. Depressed rectus muscles were surrounded by fat tissue. In no instance was the rectus muscle actually incarcerated. Entrapped connective tissue septa seemed to prevent movement of affected rectus muscle. We occasionally observed incarcerated connective tissue septa to restrict motility of the optic nerve. (author)
Kassie L. Tilini; Susan E. Meyer; Phil S. Allen
2016-01-01
This study established that chilling removes primary seed dormancy in 2 rare penstemons of the western US, Gibbens' beardtongue (Penstemon gibbensii Dorn [Scrophulariaceae]) and blowout penstemon (Penstemon haydenii S. Watson). Wild-harvested seeds were subjected either to moist chilling at 2 to 4 °C (36-39 °F) for 0, 4, 8, 12, and 16 wk or to approximately 2 y of dry...
Oil Well Blowout 3D computational modeling: review of methodology and environmental requirements
Pedro Mello Paiva; Alexandre Nunes Barreto; Jader Lugon Junior; Leticia Ferraço de Campos
2016-01-01
This literature review aims to present the different methodologies used in the three-dimensional modeling of the hydrocarbons dispersion originated from an oil well blowout. It presents the concepts of coastal environmental sensitivity and vulnerability, their importance for prioritizing the most vulnerable areas in case of contingency, and the relevant legislation. We also discuss some limitations about the methodology currently used in environmental studies of oil drift, which considers sim...
Estimating rate of occurrence of rare events with empirical bayes: A railway application
International Nuclear Information System (INIS)
Quigley, John; Bedford, Tim; Walls, Lesley
2007-01-01
Classical approaches to estimating the rate of occurrence of events perform poorly when data are few. Maximum likelihood estimators result in overly optimistic point estimates of zero for situations where there have been no events. Alternative empirical-based approaches have been proposed based on median estimators or non-informative prior distributions. While these alternatives offer an improvement over point estimates of zero, they can be overly conservative. Empirical Bayes procedures offer an unbiased approach through pooling data across different hazards to support stronger statistical inference. This paper considers the application of empirical Bayes to high consequence low-frequency events, where estimates are required for risk mitigation decision support, such as demonstrating that risks are as low as reasonably possible. A summary of empirical Bayes methods is given and the choices of estimation procedures to obtain interval estimates are discussed. The approaches illustrated within the case study are based on the estimation of the rate of occurrence of train derailments within the UK. The usefulness of empirical Bayes within this context is discussed.
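The pooling mechanism described above can be sketched in its simplest parametric form: a gamma prior is fitted across hazards by moment matching, and each hazard's rate is then the posterior mean, so hazards with zero recorded events receive a positive, non-degenerate estimate. This is an illustration of the general mechanism only, not the interval-estimation procedure of the paper, and the event counts and exposures below are invented.

```python
def empirical_bayes_rates(events, exposures):
    """Gamma-Poisson empirical Bayes point estimates of occurrence rates.

    events[i] ~ Poisson(rate_i * exposures[i]); a Gamma(alpha, beta)
    prior on rate_i is fitted by moment matching across all units, and
    each estimate is the posterior mean (x_i + alpha) / (t_i + beta),
    a convex combination of the naive rate x_i/t_i and the pooled mean.
    """
    n = len(events)
    naive = [x / t for x, t in zip(events, exposures)]
    m = sum(naive) / n                                # pooled mean rate
    var = sum((r - m) ** 2 for r in naive) / (n - 1)  # total variance
    within = sum(m / t for t in exposures) / n        # Poisson noise part
    between = max(var - within, 1e-12)                # prior variance
    beta = m / between
    alpha = m * beta
    return [(x + alpha) / (t + beta) for x, t in zip(events, exposures)]
```

Units with few events are shrunk toward the pooled mean, avoiding both the over-optimistic zero estimate and the over-conservatism the abstract attributes to the alternatives.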
Do central banks respond to exchange rate movements? Some new evidence from structural estimation
Wei Dong
2013-01-01
This paper investigates the impact of exchange rate movements on the conduct of monetary policy in Australia, Canada, New Zealand and the United Kingdom. We develop and estimate a structural general equilibrium two-sector model with sticky prices and wages and limited exchange rate pass-through. Different specifications for the monetary policy rule and the real exchange rate process are examined. The results indicate that the Reserve Bank of Australia, the Bank of Canada and the Bank of Engla...
A CONSISTENT ESTIMATE FOR THE IMPACT OF SINGAPORE'S EXCHANGE RATE ON COMPETITIVENESS
JANG PING THIA
2010-01-01
Services form a larger part of the Singapore economy. However, it is difficult to analyze the exchange rate impact on services due to the lack of price data. Regression of output or export on exchange rate, while highly intuitive, is likely to suffer from the endogeneity problem since Singapore's exchange rate is used as a counter-cyclical policy tool. This results in inconsistent estimates. I propose a novel approach to overcome these limitations by using Hong Kong as a control for Singapore...
Oil Well Blowout 3D computational modeling: review of methodology and environmental requirements
Directory of Open Access Journals (Sweden)
Pedro Mello Paiva
2016-12-01
Full Text Available This literature review aims to present the different methodologies used in the three-dimensional modeling of the hydrocarbons dispersion originated from an oil well blowout. It presents the concepts of coastal environmental sensitivity and vulnerability, their importance for prioritizing the most vulnerable areas in case of contingency, and the relevant legislation. We also discuss some limitations of the methodology currently used in environmental studies of oil drift, which considers simplification of the spill on the surface, even in the well blowout scenario. Efforts to better understand the oil and gas behavior in the water column and three-dimensional modeling of the trajectory gained strength after the Deepwater Horizon spill in 2010 in the Gulf of Mexico. The data collected and the observations made during the accident were widely used for adjustment of the models, incorporating various factors related to hydrodynamic forcing and weathering processes to which the hydrocarbons are subjected during subsurface leaks. The difficulties are even greater in the case of blowouts in deep waters, where the uncertainties are larger still. The studies addressed different variables to make adjustments of oil and gas dispersion models along the upward trajectory. Factors that exert strong influences include: speed of the subsurface currents; gas separation from the main plume; hydrate formation; dissolution of oil and gas droplets; variations in droplet diameter; intrusion of the droplets at intermediate depths; biodegradation; and appropriate parametrization of the density, salinity and temperature profiles of water through the column.
Tracking the Hercules 265 marine gas well blowout in the Gulf of Mexico
Romero, Isabel C.; Özgökmen, Tamay; Snyder, Susan; Schwing, Patrick; O'Malley, Bryan J.; Beron-Vera, Francisco J.; Olascoaga, Maria J.; Zhu, Ping; Ryan, Edward; Chen, Shuyi S.; Wetzel, Dana L.; Hollander, David; Murawski, Steven A.
2016-01-01
On 23 July 2013, a marine gas rig (Hercules 265) ignited in the northern Gulf of Mexico. The rig burned out of control for 2 days before being extinguished. We conducted a rapid-response sampling campaign near Hercules 265 after the fire to ascertain if sediments and fishes were polluted above earlier baseline levels. A surface drifter study confirmed that surface ocean water flowed to the southeast of the Hercules site, while the atmospheric plume generated by the blowout was directed eastward. Sediment cores were collected to the SE of the rig at a distance of ˜0.2, 8, and 18 km using a multicorer, and demersal fishes were collected from ˜0.2 to 8 km SE of the rig using a longline (508 hooks). Recently deposited sediments document that only high molecular weight (HMW) polycyclic aromatic hydrocarbon (PAH) concentrations decreased with increasing distance from the rig, suggesting higher pyrogenic inputs associated with the blowout. A similar trend was observed in the foraminifera Haynesina germanica, an indicator species of pollution. In red snapper bile, only HMW PAH metabolites increased in 2013, to nearly double the 2012 levels. Both surface sediment and fish bile analyses suggest that, in the aftermath of the blowout, increased concentrations of pyrogenically derived hydrocarbons were transported and deposited in the environment. This study further emphasizes the need for an ocean observing system and coordinated rapid-response efforts from an array of scientific disciplines to effectively assess environmental impacts resulting from accidental releases of oil contaminants.
Estimation of Leak Rate Through Cracks in Bimaterial Pipes in Nuclear Power Plants
Directory of Open Access Journals (Sweden)
Jai Hak Park
2016-10-01
Full Text Available The accurate estimation of leak rate through cracks is crucial in applying the leak before break (LBB) concept to pipeline design in nuclear power plants. Because of its importance, several programs were developed based on several proposed flow models and used in nuclear power industries. As the flow models were developed for a homogeneous pipe material, however, some difficulties were encountered in estimating leak rates for bimaterial pipes. In this paper, a flow model is proposed to estimate the leak rate in bimaterial pipes based on the modified Henry–Fauske flow model. In the new flow model, different crack morphology parameters can be considered in the two parts of a flow path. In addition, based on the proposed flow model, a program was developed to estimate the leak rate for a crack with linearly varying cross-sectional area. Using the program, leak rates were calculated for through-thickness cracks with constant or linearly varying cross-sectional areas in a bimaterial pipe. The leak rate results were then compared with the results for a homogeneous pipe and discussed. The effects of the crack morphology parameters and the variation in cross-sectional area on the leak rate were examined and discussed.
High mitochondrial mutation rates estimated from deep-rooting Costa Rican pedigrees
Madrigal, Lorena; Melendez-Obando, Mauricio; Villegas-Palma, Ramon; Barrantes, Ramiro; Raventos, Henrieta; Pereira, Reynaldo; Luiselli, Donata; Pettener, Davide; Barbujani, Guido
2012-01-01
Estimates of mutation rates for the noncoding hypervariable Region I (HVR-I) of mitochondrial DNA (mtDNA) vary widely, depending on whether they are inferred from phylogenies (assuming that molecular evolution is clock-like) or directly from pedigrees. All pedigree-based studies so far were conducted on populations of European origin. In this paper we analyzed 19 deep-rooting pedigrees in a population of mixed origin in Costa Rica. We calculated two estimates of the HVR-I mutation rate, one considering all apparent mutations, and one disregarding changes at sites known to be mutational hot spots and eliminating genealogy branches which might be suspected to include errors, or unrecognized adoptions along the female lines. At the end of this procedure, we still observed a mutation rate equal to 1.24 × 10⁻⁶, per site per year, i.e., at least three-fold as high as estimates derived from phylogenies. Our results confirm that mutation rates observed in pedigrees are much higher than estimated assuming a neutral model of long-term HVR-I evolution. We argue that, until the cause of these discrepancies will be fully understood, both lower estimates (i.e., those derived from phylogenetic comparisons) and higher, direct estimates such as those obtained in this study, should be considered when modeling evolutionary and demographic processes. PMID:22460349
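The direct pedigree estimate behind a per-site-per-year figure such as 1.24 × 10⁻⁶ is simple bookkeeping: observed mutations divided by sites scored, transmissions examined, and mean generation time. The numbers in the example below are hypothetical, chosen only to make the units concrete; they are not the study's counts.

```python
def mutation_rate_per_site_year(mutations, sites, transmissions,
                                generation_years):
    """Direct (pedigree-based) mutation rate per site per year:
    observed mutations / (sites scored * mother-child transmissions
    * mean generation time in years)."""
    return mutations / (sites * transmissions * generation_years)
```

For instance, 3 mutations over 360 scored sites, 300 transmissions, and a 25-year generation time give about 1.1 × 10⁻⁶ per site per year, the same order of magnitude as the pedigree estimates discussed above.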
Estimation of Circadian Body Temperature Rhythm Based on Heart Rate in Healthy, Ambulatory Subjects.
Sim, Soo Young; Joo, Kwang Min; Kim, Han Byul; Jang, Seungjin; Kim, Beomoh; Hong, Seungbum; Kim, Sungwan; Park, Kwang Suk
2017-03-01
Core body temperature is a reliable marker for circadian rhythm. As characteristics of the circadian body temperature rhythm change during diverse health problems, such as sleep disorder and depression, body temperature monitoring is often used in clinical diagnosis and treatment. However, the use of current thermometers in circadian rhythm monitoring is impractical in daily life. As heart rate is a physiological signal relevant to thermoregulation, we investigated the feasibility of heart rate monitoring in estimating circadian body temperature rhythm. Various heart rate parameters and core body temperature were simultaneously acquired in 21 healthy, ambulatory subjects during their routine life. The performance of regression analysis and the extended Kalman filter on daily body temperature and circadian indicator (mesor, amplitude, and acrophase) estimation were evaluated. For daily body temperature estimation, mean R-R interval (RRI), mean heart rate (MHR), or normalized MHR provided a mean root mean square error of approximately 0.40 °C in both techniques. The mesor estimation regression analysis showed better performance than the extended Kalman filter. However, the extended Kalman filter, combined with RRI or MHR, provided better accuracy in terms of amplitude and acrophase estimation. We suggest that this noninvasive and convenient method for estimating the circadian body temperature rhythm could reduce discomfort during body temperature monitoring in daily life. This, in turn, could facilitate more clinical studies based on circadian body temperature rhythm.
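The circadian indicators named in the abstract (mesor, amplitude, acrophase) are conventionally obtained by cosinor analysis, i.e., ordinary least squares on cosine and sine regressors at the 24-hour period. A self-contained sketch of that standard model follows (this is the textbook cosinor fit, not the paper's regression or extended Kalman filter pipeline, and the temperature values in the test are synthetic):

```python
from math import atan2, cos, pi, sin, sqrt

def cosinor_fit(times_h, values, period_h=24.0):
    """Least-squares fit of y = M + a*cos(wt) + b*sin(wt).

    Returns (mesor, amplitude, acrophase_h), where acrophase_h is the
    clock time in hours at which the fitted rhythm peaks."""
    w = 2.0 * pi / period_h
    X = [(1.0, cos(w * t), sin(w * t)) for t in times_h]
    # Normal equations A p = c, solved by Gaussian elimination.
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    c = [sum(r[i] * y for r, y in zip(X, values)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            for k in range(col, 3):
                A[row][k] -= f * A[col][k]
            c[row] -= f * c[col]
    p = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):
        p[row] = (c[row] - sum(A[row][k] * p[k]
                               for k in range(row + 1, 3))) / A[row][row]
    M, a, b = p
    amplitude = sqrt(a * a + b * b)
    acrophase_h = (atan2(b, a) / w) % period_h  # peak where wt = atan2(b, a)
    return M, amplitude, acrophase_h
```

Applied to hourly estimated body temperatures, the three returned values are exactly the mesor, amplitude, and acrophase indicators the paper evaluates.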
Respiratory rate estimation from the built-in cameras of smartphones and tablets.
Nam, Yunyoung; Lee, Jinseok; Chon, Ki H
2014-04-01
This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half VGA pixel region was found to be the best choice only for iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide better or equally good signal quality than the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min but their accuracy degraded concomitantly with increased respiratory rates
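A far simpler frequency-domain alternative to the AR, VFCDM, and CWT methods evaluated in the paper is to take the dominant periodogram peak of an already-extracted respiratory-band signal; it illustrates why the respiratory modulation of the camera pulse signal carries rate information. The sketch below runs on a synthetic signal, not on real camera data, and the band limits are illustrative assumptions.

```python
from math import cos, pi, sin

def respiratory_rate_bpm(signal, fs, lo=0.1, hi=0.5):
    """Estimate breathing rate as the frequency (within the lo-hi Hz
    band) holding the largest discrete-Fourier magnitude.

    Uses a direct O(n^2) DFT, which is fine for short records."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]           # remove the DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum(x[i] * cos(2 * pi * k * i / n) for i in range(n))
            im = sum(x[i] * sin(2 * pi * k * i / n) for i in range(n))
            p = re * re + im * im            # periodogram power at bin k
            if p > best_p:
                best_f, best_p = f, p
    return best_f * 60.0                     # Hz -> breaths per minute
```

A 0.25 Hz modulation sampled at 4 Hz for 60 s is recovered as 15 breaths/min, in the middle of the normal range where the paper reports its methods work best.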
Energy Technology Data Exchange (ETDEWEB)
Kim, Young Jin; Chang, Yoon Suk; Lee, Dock Jin; Lee, Tae Rin; Choi, Shin Beom; Jeong, Jae Uk; Yeum, Seung Won [Sungkyunkwan University, Seoul (Korea, Republic of)
2009-02-15
In this research project, a leak rate estimation model was developed for steam generator tubes with through-wall cracks. The modelling was based on the leak data from 23 tube specimens. Also, a finite element analysis procedure was developed for residual stress calculation of a dissimilar metal weld in a bottom-mounted instrumentation. The effect of geometric variables related to the residual stress in the penetration weld part was investigated by using the developed analysis procedure. The key subjects dealt with in this research are: (1) development of a leak rate estimation model for steam generator tubes with through-wall cracks; (2) development of a program that can perform structural and leakage integrity evaluation for steam generator tubes; (3) development of an analysis procedure for bottom-mounted instrumentation weld residual stress; and (4) analysis of the effects of geometric variables on weld residual stress. It is anticipated that the technologies developed in this study are applicable to integrity estimation of steam generator tubes and weld parts in NPPs.
A Bayesian framework to estimate diversification rates and their variation through time and space
Directory of Open Access Journals (Sweden)
Silvestro Daniele
2011-10-01
Full Text Available Abstract. Background: Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results: We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions: Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling.
Jannati, Ali; McDonald, John J; Di Lollo, Vincent
2015-06-01
The capacity of visual short-term memory (VSTM) is commonly estimated by K scores obtained with a change-detection task. Contrary to common belief, K may be influenced not only by capacity but also by the rate at which stimuli are encoded into VSTM. Experiment 1 showed that, contrary to earlier conclusions, estimates of VSTM capacity obtained with a change-detection task are constrained by temporal limitations. In Experiment 2, we used change-detection and backward-masking tasks to obtain separate within-subject estimates of K and of rate of encoding, respectively. A median split based on rate of encoding revealed significantly higher K estimates for fast encoders. Moreover, a significant correlation was found between K and the estimated rate of encoding. The present findings raise the prospect that the reported relationships between K and such cognitive concepts as fluid intelligence may be mediated not only by VSTM capacity but also by rate of encoding. (c) 2015 APA, all rights reserved.
International Nuclear Information System (INIS)
Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu
2017-01-01
Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurements. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. Negative estimates in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. An accurate a priori ratio accelerates the analysis process and yields satisfactory results with only a limited number of measurements; otherwise, more measurements are needed to generate reasonable estimates. The suppression of negative estimates effectively improves the performance, especially in situations with poor a priori information, which are more prone to the generation of negative values.
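The sequential update described above can be sketched as a Kalman-style assimilation step: the a priori nuclide ratio enters as correlation in the background error covariance, and any negative component is pushed back with an artificial zero-observation. All matrices, dose coefficients, and uncertainties below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def kalman_update(x, P, y, H, R):
    """Standard linear update: x <- x + K (y - H x)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (y - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Two-nuclide release rates (hypothetical units). The a priori nuclide
# ratio 1:3 is encoded in the background error covariance via correlation.
x = np.array([1.0, 3.0])                 # first-guess source vector
sigma = np.array([2.0, 6.0])             # background standard deviations
rho = 0.9                                # coupling from the a priori ratio
P = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
              [rho * sigma[0] * sigma[1], sigma[1]**2]])

# One gamma-dose-rate observation: dose = sum of per-nuclide dose factors.
H = np.array([[0.4, 0.1]])               # hypothetical dose coefficients
y = np.array([0.2])                      # measured dose rate
R = np.array([[0.01]])                   # observation error variance
x, P = kalman_update(x, P, y, H, R)

# Suppress any negative component with an artificial zero-observation.
for i in np.where(x < 0)[0]:
    Hi = np.zeros((1, len(x))); Hi[0, i] = 1.0
    x, P = kalman_update(x, P, np.array([0.0]), Hi, np.array([[1e-4]]))

print("updated source estimate:", np.round(x, 3))
```

In the paper the state and covariance are also augmented over time; the single update above only illustrates the assimilation and nonnegativity mechanism.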
Estimation of Uncertainty in Tracer Gas Measurement of Air Change Rates
Directory of Open Access Journals (Sweden)
Atsushi Iizuka
2010-12-01
Simple and economical measurement of air change rates can be achieved with a passive-type tracer gas doser and sampler. However, this is made more complex by the fact that many buildings are not a single fully mixed zone, which means many measurements are required to obtain information on ventilation conditions. In this study, we evaluated the uncertainty of tracer gas measurement of the air change rate in n completely mixed zones. A single measurement with one tracer gas could be used to simply estimate the air change rate when n = 2. Accurate air change rates could not be obtained for n ≥ 2 due to a lack of information. However, the proposed method can be used to estimate an air change rate with an accuracy of
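For the single fully mixed zone that passive tracer methods assume, the steady-state mass balance yields the air change rate in one line: tracer emitted at rate F is diluted by ventilation Q = N·V, so C = F/(N·V). The numbers below are hypothetical.

```python
# Single fully mixed zone at steady state: tracer dosed at rate F is
# diluted by ventilation Q = N * V, giving concentration C = F / (N * V),
# hence N = F / (V * C). All values below are hypothetical.
F = 50.0          # tracer dose rate, ug/h
V = 125.0         # zone volume, m^3
C = 2.0           # measured steady-state tracer concentration, ug/m^3

N = F / (V * C)   # air change rate, 1/h
print(f"air change rate: {N:.2f} per hour")
```

The multi-zone (n ≥ 2) case the abstract discusses couples several such balances, which is why a single measurement no longer determines all rates.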
Estimation of age-specific rates of reactivation and immune boosting of the varicella zoster virus
Directory of Open Access Journals (Sweden)
Isabella Marinelli
2017-06-01
Studies into the impact of vaccination against the varicella zoster virus (VZV) have increasingly focused on herpes zoster (HZ), which is believed to be increasing in vaccinated populations with decreasing infection pressure. This idea can be traced back to Hope-Simpson's hypothesis, in which a person's immune status determines the likelihood that he/she will develop HZ. Immunity decreases over time, and can be boosted by contact with a person experiencing varicella (exogenous boosting) or by a reactivation attempt of the virus (endogenous boosting). Here we use transmission models to estimate age-specific rates of reactivation and immune boosting, exogenous as well as endogenous, using zoster incidence data from the Netherlands (2002–2011, n = 7026). The boosting and reactivation rates are estimated with splines, enabling these quantities to be optimally informed by the data. The analyses show that models with high levels of exogenous boosting and estimated or zero endogenous boosting, constant rate of loss of immunity, and reactivation rate increasing with age (to more than 5% per year in the elderly) give the best fit to the data. Estimates of the rates of immune boosting and reactivation are strongly correlated. This has important implications as these parameters determine the fraction of the population with waned immunity. We conclude that independent evidence on rates of immune boosting and reactivation in persons with waned immunity is needed to robustly predict the impact of varicella vaccination on the incidence of HZ.
Rizvi, Farheen
2013-01-01
A report describes a model that estimates the orientation of the backup reaction wheel using the reaction wheel spin rates telemetry from a spacecraft. Attitude control via the reaction wheel assembly (RWA) onboard a spacecraft uses three reaction wheels (one wheel per axis) and a backup to accommodate any wheel degradation throughout the course of the mission. The spacecraft dynamics prediction depends upon the correct knowledge of the reaction wheel orientations. Thus, it is vital to determine the actual orientation of the reaction wheels such that the correct spacecraft dynamics can be predicted. The conservation of angular momentum is used to estimate the orientation of the backup reaction wheel from the prime and backup reaction wheel spin rates data. The method is applied in estimating the orientation of the backup wheel onboard the Cassini spacecraft. The flight telemetry from the March 2011 prime and backup RWA swap activity on Cassini is used to obtain the best estimate for the backup reaction wheel orientation.
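The conservation-of-momentum idea above can be sketched directly: in a torque-free interval, momentum shed by the prime wheels must be absorbed by the backup wheel along its unknown axis, so the axis can be recovered by least squares over many telemetry samples. The wheel geometry, synthetic spin-rate telemetry, and noise level below are illustrative, not Cassini values.

```python
import numpy as np

# Known spin axes of the three prime wheels (columns); orthogonal here.
# Equal wheel inertias are assumed, so they cancel from the balance.
A = np.eye(3)

# True (unknown) backup wheel axis, used only to synthesize telemetry.
u_true = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)

rng = np.random.default_rng(0)
# Conservation of angular momentum during the swap:
#   A @ d_omega_prime + u * d_omega_backup = 0 at each telemetry sample.
d_backup = rng.uniform(5.0, 20.0, size=50)    # backup-wheel rate changes
d_prime = -np.outer(u_true, d_backup) + 0.01 * rng.standard_normal((3, 50))

# Least-squares estimate of u from  -A @ d_prime ≈ u * d_omega_backup.
b = (-A @ d_prime).ravel(order="F")           # stack samples column-wise
M = np.kron(d_backup[:, None], np.eye(3))     # one 3x3 block per sample
u_est, *_ = np.linalg.lstsq(M, b, rcond=None)
u_est /= np.linalg.norm(u_est)
print("estimated backup-wheel axis:", np.round(u_est, 3))
```

Stacking all samples into one least-squares problem averages out telemetry noise, which is the practical benefit of using a whole swap activity rather than a single pair of readings.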
ESR dating of elephant teeth and radiation dose rate estimation in soil
International Nuclear Information System (INIS)
Taisoo Chong; Ohta, Hiroyuki; Nakashima, Yoshiyuki; Iida, Takao; Saisho, Hideo
1989-01-01
Chemical analysis of 238U, 232Th and 40K in the dentine as well as the enamel of elephant tooth fossil has been carried out in order to estimate the internal absorbed dose rate of the specimens, which was estimated to be (39±4) mrad/y on the assumption of an early-uptake model of radionuclides. The external radiation dose rate in the soil, including the contribution from cosmic rays, was also estimated to be (175±18) mrad/y with the help of γ-ray spectroscopic techniques applied to the soil samples in which the specimens were buried. The 60Co γ-ray equivalent accumulated dose of (2±0.2) × 10^4 rad for the tooth enamel gave an 'ESR age' of (9±2) × 10^4 y, which falls in the geologically estimated range between 3 × 10^4 and 30 × 10^4 y before the present. (author)
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.
Cai, T Tony; Zhang, Anru
2016-09-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
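A minimal sketch of the estimator's key idea under the missing-completely-at-random assumption: average products only over the jointly observed coordinate pairs, then apply entrywise soft-thresholding for a sparse target. The dimensions, missingness probability, and threshold below are illustrative, and the mean is assumed known (zero) to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 10
# Sparse true covariance: identity plus one off-diagonal pair at 0.5.
Sigma = np.eye(p)
Sigma[0, 1] = Sigma[1, 0] = 0.5
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Missing completely at random: each entry observed with probability 0.8.
obs = rng.random((n, p)) < 0.8
Xo = np.where(obs, X, 0.0)

# Generalized sample covariance: average products over the samples in
# which both coordinates are observed (n_jk varies per entry).
counts = obs.astype(float).T @ obs.astype(float)
S = (Xo.T @ Xo) / counts

# Entrywise soft-thresholding for a sparse target (tuning is hypothetical;
# the paper derives rate-optimal choices).
tau = 0.15
S_thr = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
np.fill_diagonal(S_thr, np.diag(S))
print("estimated off-diagonal [0,1]:", round(S_thr[0, 1], 3))
```

The division by per-entry counts, rather than by n, is what keeps the estimator unbiased entrywise under missingness.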
Busquets-Vass, Geraldine; Newsome, Seth D.; Calambokidis, John; Serra-Valente, Gabriela; Jacobsen, Jeff K.; Aguíñiga-García, Sergio; Gendron, Diane
2017-01-01
Stable isotope analysis in mysticete skin and baleen plates has been repeatedly used to assess diet and movement patterns. Accurate interpretation of isotope data depends on understanding isotopic incorporation rates for metabolically active tissues and growth rates for metabolically inert tissues. The aim of this research was to estimate isotopic incorporation rates in blue whale skin and baleen growth rates by using natural gradients in baseline isotope values between oceanic regions. Nitrogen (δ15N) and carbon (δ13C) isotope values of blue whale skin and potential prey were analyzed from three foraging zones (Gulf of California, California Current System, and Costa Rica Dome) in the northeast Pacific from 1996–2015. We also measured δ15N and δ13C values along the lengths of baleen plates collected from six blue whales stranded in the 1980s and 2000s. Skin was separated into three strata: basale, externum, and sloughed skin. A mean (±SD) skin isotopic incorporation rate of 163±91 days was estimated by fitting a generalized additive model of the seasonal trend in δ15N values of skin strata collected in the Gulf of California and the California Current System. A mean (±SD) baleen growth rate of 15.5±2.2 cm y-1 was estimated by using seasonal oscillations in δ15N values from three whales. These oscillations also showed that individual whales have a high fidelity to distinct foraging zones in the northeast Pacific across years. The absence of oscillations in δ15N values of baleen sub-samples from three male whales suggests these individuals remained within a specific zone for several years prior to death. δ13C values of both whale tissues (skin and baleen) and potential prey were not distinct among foraging zones. Our results highlight the importance of considering tissue isotopic incorporation and growth rates when studying migratory mysticetes and provide new insights into the individual movement strategies of blue whales. PMID:28562625
Oxygen transfer rate estimation in oxidation ditches from clean water measurements.
Abusam, A; Keesman, K J; Meinema, K; Van Straten, G
2001-06-01
Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method to be used in the estimation of oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLa·VA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
Lichen forage ingestion rates of free-roaming caribou estimated with fallout cesium-137
International Nuclear Information System (INIS)
Hanson, W.C.; Whicker, F.W.; Lipscomb, J.F.
1975-01-01
Lichen forage ingestion rates of free-roaming caribou herds in northern Alaska during 1963 to 1970 were estimated by applying a two-compartment, eight-parameter cesium-137 kinetics model to measured fallout 137Cs concentrations in lichen and caribou. Estimates for winter equilibrium periods (January to April) for each year ranged from 3.7 to 6.9 kg dry weight of lichens per day for adult female caribou. Further refinement of these estimates was obtained by calculating probabilistic distributions of intake rates by stochastic processes based upon the mean and standard error intervals of the eight parameters during 1965 and 1968. A computer program generated 1,000 randomly sampled values within each of the eight parameter distributions. The results substantiate the contention that lichen forage ingestion rates by free-roaming caribou are significantly greater than previously held.
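The stochastic-propagation step can be sketched as a simple Monte Carlo: draw each parameter from its mean/standard-error distribution and push the draws through a steady-state intake relation. The equilibrium formula and all parameter values below are illustrative stand-ins for the paper's eight-parameter, two-compartment model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000   # random draws, mirroring the 1,000 sampled values in the study

# Hypothetical steady-state intake model: at equilibrium,
#   intake = body burden * effective elimination rate
#            / (assimilation fraction * lichen 137Cs concentration).
# Means and standard errors are illustrative, not the study's values.
burden   = rng.normal(25.0, 2.0, n)    # kBq 137Cs body burden
k_eff    = rng.normal(0.018, 0.002, n) # effective elimination, 1/day
f_assim  = rng.normal(0.6, 0.05, n)    # assimilation fraction
c_lichen = rng.normal(0.15, 0.02, n)   # kBq per kg dry lichen

intake = burden * k_eff / (f_assim * c_lichen)  # kg dry lichen per day
lo, hi = np.percentile(intake, [2.5, 97.5])
print(f"median intake: {np.median(intake):.1f} kg/day "
      f"(95% range {lo:.1f}-{hi:.1f})")
```

Sampling parameters jointly like this is what turns point estimates plus standard errors into a full distribution of intake rates.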
Omer, Muhammad
2012-07-01
This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
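As a point of comparison for the sparse-reconstruction approach above, the classical estimator finds the same inter-microphone delay from the cross-correlation peak when sampling is adequate. The synthetic impulsive source, delay, and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 8000                          # sample rate, Hz (hypothetical)
sig = rng.standard_normal(256)     # impulsive source: short white burst
true_delay = 12                    # delay between the two microphones, samples

x1 = np.concatenate([sig, np.zeros(64)])
x2 = np.roll(x1, true_delay) + 0.05 * rng.standard_normal(x1.size)

# The cross-correlation peak location gives the time delay estimate.
corr = np.correlate(x2, x1, mode="full")
lag = int(np.argmax(corr)) - (x1.size - 1)
print(f"estimated delay: {lag} samples ({1e3 * lag / fs:.2f} ms)")
```

The paper's contribution is recovering the room impulse response from far fewer samples than this baseline needs; the correlation sketch only shows what quantity is being estimated.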
Estimation of water erosion rates using RUSLE3D in Alicante province (Spain)
Garcia Rodríguez, Jose Luis; Giménez Suárez, Martín Cruz; Arraiza Bermudez-Cañete, Maria Paz
2015-01-01
The purpose of this study was the estimation of current and potential water erosion rates in Alicante Province using RUSLE3D (Revised Universal Soil Loss Equation-3D) model with Geographical Information System (GIS) support by request from the Valencia Waste Energy Use. RUSLE3D uses a new methodology for topographic factor estimation (LS factor) based on the impact of flow convergence allowing better assessment of sediment distribution detached by water erosion. In RUSLE3D equation, the effec...
Optimization of hierarchical 3DRS motion estimators for picture rate conversion
Heinrich, A.; Bartels, C.L.L.; van der Vleuten, R.J.; Cordes, C.N.; de Haan, G.
2010-01-01
There is a continuous pressure to lower the implementation complexity and improve the quality of motion-compensated picture rate conversion methods. Since the concept of hierarchy can be advantageously applied to many motion estimation methods, we have extended and improved the current state-of-the-art motion estimation method in this field, 3-Dimensional Recursive Search (3DRS), with this concept. We have explored the extensive parameter space and present an analysis of the importance and in...
Carvalho, Marcelo Henrique de; Von Zuben, Claudio José
2006-01-01
The objective of this work was to evaluate some aspects of the populational ecology of Chrysomya megacephala, analyzing demographic aspects of adults kept under experimental conditions. Cages of C. megacephala adults were prepared with four different larval densities (100, 200, 400 and 800). For each cage, two tables were made: one with demographic parameters for the life expectancy estimate at the initial age (e0), and another with the reproductive rate and average reproduction age estimates...
A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets
Chen, Jie-Hao; Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong
2017-01-01
In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear model or re...
Estimating the Effective Lower Bound for the Czech National Bank's Policy Rate
Kolcunova, Dominika; Havranek, Tomas
2018-01-01
The paper focuses on the estimation of the effective lower bound for the Czech National Bank's policy rate. The effective lower bound is determined by the value below which holding and using cash would be more convenient than deposits with negative yields. This bound is approximated based on the storage, insurance and transportation costs of cash and the costs associated with the loss of the convenience of cashless payments, and is complemented with the estimate based on interest charges, which p...
Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.
2000-01-01
Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (±SE). Survival rates of banded and radio-marked individuals were not different (alpha hat[S_radioed, S_banded] = log[S hat_radioed / S hat_banded] = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (psi hat = 0.911 ± 0.020; alpha hat[psi11, psi22] = 0.0161 ± 0.047), and recapture rates (p hat = 0.097 ± 0.016) of banded and radio-marked individuals were not different (alpha hat[p_radioed, p_banded] = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] inference from this model, study designs should seek a minimum of 25 animals of each marking type observed (marked or observed via telemetry) in each time period and geographic stratum.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Directory of Open Access Journals (Sweden)
Ian J Fiske
BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
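The bias mechanism both records describe is easy to reproduce: estimate vital rates from n binomial samples, build the projection matrix, and compare the mean dominant eigenvalue against the true lambda. The two-stage matrix and vital rates below are illustrative, using the low survival (0.5) the authors flag as the problematic regime.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-stage (juvenile/adult) projection model with illustrative rates.
s_juv, s_adult, fert = 0.5, 0.5, 1.2   # low survival, as in the study

def lam_from_sample(n):
    """Estimate vital rates from n sampled individuals per stage, then
    return the dominant eigenvalue of the resulting projection matrix."""
    sj = rng.binomial(n, s_juv) / n
    sa = rng.binomial(n, s_adult) / n
    A = np.array([[0.0, fert],
                  [sj,  sa ]])
    return np.max(np.abs(np.linalg.eigvals(A)))

true_lam = np.max(np.abs(np.linalg.eigvals(
    np.array([[0.0, fert], [s_juv, s_adult]]))))

for n in (10, 50, 500):
    lams = [lam_from_sample(n) for _ in range(2000)]
    print(f"n={n:4d}: mean lambda {np.mean(lams):.3f} (true {true_lam:.3f})")
```

Because lambda is a nonlinear function of the vital rates, Jensen's Inequality makes the small-n means drift from the true value even though each vital-rate estimate is unbiased; the drift shrinks as n grows.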
Using 210Pb measurements to estimate sedimentation rates on river floodplains
International Nuclear Information System (INIS)
Du, P.; Walling, D.E.
2012-01-01
Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides 137Cs and excess 210Pb to estimate medium-term (10–10^2 years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of 137Cs. However, the use of excess 210Pb potentially offers a number of advantages over 137Cs measurements. Most existing investigations that have used excess 210Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains, and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess 210Pb and 137Cs were made on these cores. The 210Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The 137Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the 210Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total 210Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by the 137Cs and excess 210Pb measurements
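One of the standard models alluded to above, the Constant Initial Concentration (CIC) model, turns an excess-210Pb depth profile into a sedimentation rate with a single formula. The core values below are hypothetical.

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3    # 210Pb decay constant (half-life 22.3 y)

def cic_rate(depth_cm, excess_activity, surface_activity):
    """Constant Initial Concentration (CIC) model: unsupported 210Pb decays
    with burial, A(z) = A(0) * exp(-lambda * z / r), so the sedimentation
    rate is r = lambda * z / ln(A0 / Az)."""
    return LAMBDA_PB210 * depth_cm / np.log(surface_activity / excess_activity)

# Hypothetical core: excess 210Pb falls from 120 Bq/kg at the surface
# to 30 Bq/kg at 20 cm depth.
r = cic_rate(20.0, 30.0, 120.0)
print(f"mean sedimentation rate: {r:.2f} cm/yr")
```

Alternative models (e.g. constant rate of supply) relax the constant-initial-concentration assumption, which is part of the model comparison the abstract describes.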
DEFF Research Database (Denmark)
Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin
2017-01-01
the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine the transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most tested scenarios these new methods perform similarly to or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models.
Modeling and estimating the jump risk of exchange rates: Applications to RMB
Wang, Yiming; Tong, Hanfei
2008-11-01
In this paper we propose a new type of continuous-time stochastic volatility model, SVDJ, for the spot exchange rate of the RMB and other foreign currencies. In the model, we assume that the change of the exchange rate can be decomposed into two components. One is the normally small-scale innovation driven by the diffusion motion; the other is a large drop or rise engendered by the Poisson counting process. Furthermore, we develop an MCMC method to estimate our model. Empirical results indicate the significant existence of jumps in the exchange rate. Jump components explain a large proportion of the exchange rate change.
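The decomposition described above can be sketched by simulating a jump-diffusion for the log exchange rate: a Gaussian diffusion term plus a compound-Poisson jump term, which produces the heavy tails (excess kurtosis) that motivate such models. All parameter values are illustrative, not RMB estimates, and stochastic volatility is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(11)

# Daily log-return:  r = mu + sigma*dW + (jump count)*(jump size),
# with jump counts Poisson(lam) per day. For small lam, at most one
# jump per day is the overwhelmingly likely case.
mu, sigma = 0.0, 0.004           # drift and diffusion volatility (daily)
lam, mj, sj = 0.02, 0.0, 0.02    # jump intensity and jump-size distribution
T = 2000                         # trading days

dW = rng.normal(0.0, sigma, T)
jumps = rng.poisson(lam, T) * rng.normal(mj, sj, T)
r = mu + dW + jumps

var = r.var()
kurt = ((r - r.mean())**4).mean() / var**2 - 3.0
print(f"excess kurtosis of simulated daily returns: {kurt:.1f}")
```

A pure diffusion would give excess kurtosis near zero; the rare large jumps are what push it well above zero, which is the empirical signature the paper's MCMC estimation exploits.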
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement in computational speed and is applicable to an arbitrarily large number of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
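For context, the conventional estimator the paper compares against can be sketched directly: the Luria-Delbrück mutant-count pmf follows from the Ma-Sandri-Sarkar recursion, and the expected number of mutations m is found by maximizing the likelihood over a grid. The recursion's cost grows with the largest observed mutant count, which is the bottleneck MLE-BD is designed to avoid. The fluctuation-test counts below are hypothetical.

```python
import numpy as np

def ld_pmf(m, n_max):
    """Luria-Delbruck mutant-count pmf via the Ma-Sandri-Sarkar recursion:
    p_0 = exp(-m);  p_n = (m/n) * sum_{j<n} p_j / (n - j + 1)."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        j = np.arange(n)
        p[n] = (m / n) * np.sum(p[j] / (n - j + 1))
    return p

def mle_m(counts, grid):
    """Grid-search MLE of the expected number of mutations per culture."""
    n_max = int(max(counts))
    lls = [np.sum(np.log(ld_pmf(m, n_max)[counts])) for m in grid]
    return grid[int(np.argmax(lls))]

# Hypothetical parallel-culture mutant counts from a fluctuation test.
counts = np.array([0, 1, 0, 2, 0, 0, 5, 1, 0, 0,
                   3, 0, 1, 0, 21, 0, 0, 1, 0, 2])
m_hat = mle_m(counts, np.linspace(0.1, 3.0, 291))
print(f"MLE of expected mutations per culture: m = {m_hat:.2f}")
```

Dividing m by the final cell count (not shown) converts it to a per-cell, per-generation mutation rate.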
International Nuclear Information System (INIS)
Liang Jinling; Lam, James; Wang Zidong
2009-01-01
This Letter is concerned with the robust state estimation problem for uncertain time-delay Markovian jumping genetic regulatory networks (GRNs) with SUM logic, where the uncertainties enter into both the network parameters and the mode transition rate. The nonlinear functions describing the feedback regulation are assumed to satisfy the sector-like conditions. The main purpose of the problem addressed is to design a linear estimator to approximate the true concentrations of the mRNA and protein through available measurement outputs. By resorting to the Lyapunov functional method and some stochastic analysis tools, it is shown that if a set of linear matrix inequalities (LMIs) is feasible, the desired state estimator, that can ensure the estimation error dynamics to be globally robustly asymptotically stable in the mean square, exists. The obtained LMI conditions are dependent on both the lower and the upper bounds of the delays. An illustrative example is presented to demonstrate the feasibility of the proposed estimation schemes.
International Nuclear Information System (INIS)
Banks, H T; Davis, Jimena L; Ernstberger, Stacey L; Hu, Shuhua; Artimovich, Elena; Dhar, Arun K
2009-01-01
We discuss inverse problem results for problems involving the estimation of probability distributions using aggregate data for growth in populations. We begin with a mathematical model describing variability in the early growth process of size-structured shrimp populations and discuss a computational methodology for the design of experiments to validate the model and estimate the growth-rate distributions in shrimp populations. We then present parameter-estimation findings based on data from experiments so designed, for shrimp populations cultivated at Advanced BioNutrition Corporation, illustrating the usefulness of mathematical and statistical modeling in understanding the uncertainty in the growth dynamics of such populations.
Estimation of exposure to 222Rn from the excretion rates of 210Pb
International Nuclear Information System (INIS)
Holtzman, R.B.; Rundo, J.
1981-01-01
A model is proposed with which estimates of exposure to 222Rn and its daughter products may be made from urinary excretion rates of 210Pb. It is assumed that 20% of all the 210Pb inhaled reaches the blood and that 50% of the endogenous excretion is through the urine. The estimates from the model are compared with the results of measurements on a subject residing in a house with high levels of radon. Whole-body radioactivity and excretion data were consistent with the model, but the estimates of exposure (WL) were higher than those measured with an Environmental Working Level Monitor.
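The two transfer fractions stated in the abstract imply a simple back-calculation if one adds a steady-state assumption of our own (constant intake and excretion in equilibrium). The sketch below is illustrative arithmetic only, not the paper's full compartment model, and the function name is hypothetical:

```python
def inhaled_pb210_rate(urinary_excretion_rate):
    """Back-calculate the 210Pb inhalation rate (same activity units as the
    input, e.g. pCi/day) from the urinary excretion rate, assuming steady
    state: 20% of inhaled 210Pb reaches the blood and 50% of endogenous
    excretion is via urine, per the model in the abstract."""
    f_blood = 0.20   # fraction of inhaled 210Pb reaching the blood
    f_urine = 0.50   # urinary fraction of endogenous excretion
    return urinary_excretion_rate / (f_blood * f_urine)
```

Under these assumptions each unit of urinary 210Pb excretion corresponds to ten units of inhaled 210Pb, from which the 222Rn daughter exposure in working levels would then be inferred.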
Two-component mixture cure rate model with spline estimated nonparametric components.
Wang, Lu; Du, Pang; Liang, Hua
2012-09-01
In survival analyses of medical studies, there are often long-term survivors who can be considered permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present, as often happens in practice, understanding covariate effects on the noncured probability and hazard rate is of equal importance. Existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of the hazard rate. Estimation is carried out by an expectation-maximization algorithm maximizing a penalized likelihood. For inference, we apply the Louis formula to obtain pointwise confidence intervals for the noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We evaluate the proposed method by extensive simulations, and analyze survival data from a melanoma study, finding interesting patterns. © 2011, The International Biometric Society.
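The defining structure of a two-component mixture cure model is that population survival never drops below the cured fraction: S_pop(t) = π + (1 − π)·S_u(t). A minimal sketch of that identity, using an exponential susceptible-survival function purely for illustration (the paper estimates both components nonparametrically; names here are ours):

```python
import math

def population_survival(t, cure_prob, hazard_rate):
    """Population survival under a two-component mixture cure model:
    S_pop(t) = pi + (1 - pi) * S_u(t). The susceptible component S_u is
    taken as exponential, S_u(t) = exp(-hazard_rate * t), only to make
    the sketch concrete."""
    s_u = math.exp(-hazard_rate * t)
    return cure_prob + (1.0 - cure_prob) * s_u
```

As t grows, S_pop(t) flattens at the cure probability π, which is the long-term-survivor plateau that motivates cure rate modelling in the first place.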
Fronczak, David L.; Andersen, David E.; Hanna, Everett E.; Cooper, Thomas R.
2015-01-01
Several surveys have documented the increasing population size and geographic distribution of Eastern Population greater sandhill cranes Grus canadensis tabida since the 1960s. Sport hunting of this population of sandhill cranes started in 2012 following the provisions of the Eastern Population Sandhill Crane Management Plan. However, there are currently no published estimates of Eastern Population sandhill crane survival rate that can be used to inform harvest management. As part of two studies of Eastern Population sandhill crane migration, we deployed solar-powered global positioning system platform transmitting terminals on Eastern Population sandhill cranes (n = 42) at key concentration areas from 2009 to 2012. We estimated an annual survival rate for Eastern Population sandhill cranes from these monitoring data using the known-fates model in Program MARK. Estimated annual survival rate for adult Eastern Population sandhill cranes was 0.950 (95% confidence interval = 0.885–0.979) during December 2009–August 2014. All fatalities (n = 5) occurred after spring migration in late spring and early summer. We were unable to determine cause of death for crane fatalities in our study. Our survival rate estimate will be useful when combined with other population parameters such as the population index derived from the U.S. Fish and Wildlife Service fall survey, harvest, and recruitment rates to assess the effects of harvest on population size and trend and evaluate the effectiveness of management strategies.
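The logic of a known-fate survival estimate can be shown with a back-of-envelope analogue: with every animal's fate known, a constant daily survival probability can be estimated from deaths per exposure-day and then raised to a year. This is only a crude stand-in for the known-fates model in Program MARK that the study actually used, and the numbers in the usage example are hypothetical:

```python
def known_fate_annual_survival(deaths, exposure_days):
    """Crude known-fate survival estimate: constant daily survival
    (1 - deaths / exposure-days) raised to 365 days. A back-of-envelope
    analogue of likelihood-based known-fate estimation, not the MARK
    model itself."""
    daily_mortality = deaths / exposure_days
    return (1.0 - daily_mortality) ** 365
```

For instance, 5 deaths over a hypothetical 35,000 tracked bird-days would give an annual survival near 0.95, the same order as the estimate reported above.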
Analytical methods of leakage rate estimation from a containment under a LOCA
International Nuclear Information System (INIS)
Chun, M.H.
1981-01-01
Three of the most prominent maximum flow rate formulas are identified from among many existing models. Outlines of these three limiting mass flow rate models are given, along with computational procedures for estimating the approximate amount of fission products released from a containment to the environment for a given characteristic hole size (for containment-isolation failure) and given containment pressure and temperature under a loss of coolant accident (LOCA). Sample calculations are performed using the critical ideal gas flow rate model and Moody's graphs for the maximum two-phase flow rates, and the results are compared with the values obtained from the mass leakage rate formula of the CONTEMPT-LT code for a converging nozzle and sonic flow. It is shown that the critical ideal gas flow rate formula gives results nearly comparable to those obtained from Moody's model. It is also found that a more conservative approach to estimating the leakage rate from a containment under a LOCA is to use the maximum ideal gas flow rate equation rather than the mass leakage rate formula of CONTEMPT-LT. (author)
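The critical (choked) ideal-gas flow rate referred to above has a standard closed form: once the pressure ratio across the opening exceeds the critical value, the mass flow depends only on the hole area and the stagnation conditions. A sketch of that textbook formula (with the discharge coefficient taken as 1; this is the generic compressible-flow result, not the specific correlations of the report):

```python
import math

def choked_ideal_gas_flow(area_m2, p0_pa, t0_k, gamma=1.4, r_specific=287.0):
    """Maximum (choked) ideal-gas mass flow rate in kg/s through an opening
    of the given area, from stagnation pressure p0 and temperature t0:
    mdot = A * p0 * sqrt(gamma / (R_s * T0)) * (2/(gamma+1))^((gamma+1)/(2(gamma-1))).
    Defaults gamma and r_specific correspond to dry air."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return area_m2 * p0_pa * math.sqrt(gamma / (r_specific * t0_k)) * term
```

Because the expression is linear in both hole area and stagnation pressure, it lends itself to quick bounding estimates of the kind described in the abstract.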
Directory of Open Access Journals (Sweden)
Jeong Wook Lee
2016-12-01
In this paper, we estimate the exchange rate exposure, indicating the effect of exchange rate movements on firm values, for a sample of 1,400 firms in seven East Asian countries. Exposure estimates based on various exchange rate variables, return horizons and a control variable are compared. A key result of our analysis is that the long-term effect of exchange rate movements on firm values is greater than the short-term effect. We find very similar results using other exchange rate variables such as the U.S. dollar exchange rate. Second, when we add exchange rate volatility as a control variable, the extent of exposure is not much changed. Third, we examine how exposure to exchange rate volatility changes with the return horizon; the proportion of firms with significant exposure increases with the return horizon. Interestingly, exposure increases faster with the return horizon for volatility than for the exchange rate itself. Taken as a whole, our findings suggest that the so-called "exposure puzzle" may be a matter of the methodology used to measure exposure.
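Exchange rate exposure in this literature is conventionally the slope coefficient on the exchange-rate change in a regression of firm returns that controls for the market return (the augmented market model). A minimal OLS sketch of that specification, which may differ in detail from the authors' exact setup; names are ours:

```python
import numpy as np

def exposure_beta(stock_returns, fx_returns, market_returns):
    """OLS estimate of exchange-rate exposure: regress firm returns on a
    constant, the exchange-rate change, and the market return, and return
    the coefficient on the exchange-rate variable."""
    x = np.column_stack([np.ones_like(fx_returns), fx_returns, market_returns])
    coef, *_ = np.linalg.lstsq(x, stock_returns, rcond=None)
    return coef[1]  # exposure coefficient
```

Re-running such a regression with returns cumulated over longer horizons is how horizon effects like those reported above are typically measured.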
Directory of Open Access Journals (Sweden)
Joydeep Roy
2008-06-01
In recent years there has been renewed interest in understanding the levels and trends in high school graduation in the U.S. A large and influential literature has argued that the "true" high school graduation rate remains at an unsatisfactory level, and that graduation rates for minorities (Blacks and Hispanics) are alarmingly low. In this paper we take a closer look at the different measures of high school graduation that have recently been proposed and that yield such low estimates of graduation rates. We argue that the nature of the variables in the Common Core of Data, the dataset maintained by the U.S. Department of Education that is the main source for all of the new measures, requires caution in calculating graduation rates, and that the adjustments that have been proposed often impart a significant downward bias to the estimates.
International Nuclear Information System (INIS)
Sasimonton Moungsrijun; Kanitha Srisuksawad; Kosit Lorsirirat; Tuangrak Nantawisarakul
2009-01-01
The Lam Phra Phloeng dam is located in Nakhon Ratchasima province, northeastern Thailand. Since its construction in 1963, the dam has undergone a severe reduction of its water storage capacity, caused by deforestation for agricultural land in the upper catchment. Sediment cores were collected using a gravity corer, and sedimentation rates were estimated from the vertical distribution of unsupported Pb-210 in the cores. Total Pb-210 was determined by measuring Po-210 activities; the Po-210 and Ra-226 activities were measured by alpha and gamma spectrometry. Using the Constant Initial Concentration (CIC) model, the estimated sedimentation rate was 0.265 g cm-2 y-1 near the dam crest and 0.213 g cm-2 y-1 upstream. (Author)
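Under the CIC model, unsupported Pb-210 decays down-core as A(z) = A₀·exp(−λz/S), so a line fitted to ln A against cumulative mass depth z has slope −λ/S, from which the mass sedimentation rate S follows directly. A minimal sketch of that calculation (generic CIC arithmetic, not the study's own code; function name is ours):

```python
import numpy as np

PB210_DECAY = 0.03114  # 210Pb decay constant, 1/year (half-life ~22.3 y)

def cic_sedimentation_rate(mass_depth, unsupported_pb210):
    """Constant Initial Concentration (CIC) model: fit ln(activity) versus
    cumulative mass depth (g/cm2); the slope is -lambda/S, so the mass
    sedimentation rate is S = -lambda / slope, in g cm-2 y-1."""
    slope, _ = np.polyfit(mass_depth, np.log(unsupported_pb210), 1)
    return -PB210_DECAY / slope
```

Applied to a measured profile, this one-line regression is what yields rates of the order of the 0.2–0.3 g cm-2 y-1 reported above.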
International Nuclear Information System (INIS)
Tateda, Y.; Wattayakorn, G.; Nhan, D.D.; Kasuya, Y.
2004-01-01
Estimating the organic carbon balance of mangrove coastal ecosystems is important for calculating the Asian coastal carbon budget/flux in global carbon cycle modelling, which is a powerful tool for predicting future greenhouse gas effects and evaluating countermeasures. In particular, the organic carbon accumulation rate in mangrove ecosystems has been reported to be an important carbon sink, comparable to boreal peat accumulation. For estimating organic carbon accumulation rates in mangrove coastal ecosystems on a 10^3-year scale, 14C is used as a long-term chronological tracer, which is useful in pristine mangrove forest reserves. For mangrove plantations in coastal areas, 210Pb is suitable, by virtue of its half-life, for decadal-scale estimation. However, 210Pb chronology is subject to bio-/physical-turbation effects, which are offset in 10^3-year scale estimation; this is a particular concern in Asian mangrove ecosystems, where anthropogenic physical turbation by coastal fishery is vigorous. In this paper, we studied the organic carbon and 210Pb accumulation rates in subtropical mangrove coastal ecosystems in Japan, Vietnam and Thailand, with 7Be analyses to confirm that the above turbation effects on organic carbon accumulation are negligible. We conclude that 210Pb is applicable for estimating organic carbon accumulation rates in these ecosystems even where physical-/bio-turbation is expected. The measured organic carbon accumulation rates using 210Pb in the mangrove coastal ecosystems of Japan, Vietnam and Thailand were 0.067-4.0 t C ha-1 y-1. (author)
Estimation of the Wanda Glacier (South Shetlands) sediment erosion rate using numerical modelling
Kátia Kellem Rosa; Rosemary Vieira; Jefferson Cardia Simões
2013-01-01
Glacial sediment yield results from glacial erosion and is influenced by several factors, including glacial retreat rate, ice flow velocity and thermal regime. This paper estimates the contemporary subglacial erosion rate and sediment yield of Wanda Glacier (King George Island, South Shetlands). This work also examines basal sediment evacuation mechanisms by runoff and glacial erosion processes during subglacial transport. This is a small temperate glacier that has been retreating for the l...
Estimating the Exchange Rate Pass-Through to Prices in Mexico
Josué Fernando Cortés Espada
2013-01-01
This paper estimates the magnitude of the exchange rate pass-through to consumer prices in Mexico. Moreover, it analyzes whether the pass-through dynamics have changed in recent years. In particular, it uses a methodology that generates results consistent with the hierarchy implicit in the CPI. The results suggest that the exchange rate pass-through to the general price level is low and not statistically significant. However, the pass-through is positive and significant for goods prices. Furthermo...
State-Space Dynamic Model for Estimation of Radon Entry Rate, based on Kalman Filtering
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Jílek, K.
2007-01-01
Vol. 98 (2007), pp. 285-297. ISSN 0265-931X. Grant: GA SÚJB JC_11/2006. Institutional research plan: CEZ:AV0Z10300504. Keywords: air ventilation rate; radon entry rate; state-space modeling; extended Kalman filter; maximum likelihood estimation; prediction error decomposition. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.963 (2007).
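The state-space idea behind this record can be sketched with a linear Kalman filter on the standard indoor-radon balance dC/dt = E − λᵥC, augmenting the state with the entry rate E so the filter estimates it from concentration measurements. This is only an illustration of the approach; the paper itself uses an extended Kalman filter with the ventilation rate also unknown, and all names and noise settings below are our assumptions:

```python
import numpy as np

def estimate_radon_entry_rate(measurements, dt, lam_v, meas_var=0.01):
    """Linear Kalman filter on dC/dt = E - lam_v * C with state [C, E],
    where E is modelled as a slowly varying random walk. Only C is
    observed. Returns the final estimate of the entry rate E."""
    F = np.array([[1.0 - lam_v * dt, dt],
                  [0.0, 1.0]])                 # discretised dynamics
    H = np.array([[1.0, 0.0]])                 # we observe C only
    Q = np.diag([1e-6, 1e-2])                  # process noise (random-walk E)
    R = np.array([[meas_var]])
    x = np.array([measurements[0], 0.0])       # initial state guess
    P = np.eye(2)
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + (K @ y).ravel()                # update
        P = (np.eye(2) - K @ H) @ P
    return x[1]
```

With a steady indoor concentration, the innovation is nonzero whenever the entry-rate estimate is wrong, so the filter acts like an integrator and converges to the E consistent with the observed balance.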
A new approach to estimate ice dynamic rates using satellite observations in East Antarctica
Directory of Open Access Journals (Sweden)
B. Kallenberg
2017-05-01
Mass balance changes of the Antarctic ice sheet are of significant interest due to its sensitivity to climatic changes and its contribution to changes in global sea level. While regional climate models successfully estimate mass input due to snowfall, it remains difficult to estimate the amount of mass loss due to ice dynamic processes. It has often been assumed that changes in ice dynamic rates only need to be considered when assessing long-term ice sheet mass balance; however, two decades of satellite altimetry observations reveal that the Antarctic ice sheet changes much more dynamically than previously expected. Despite available estimates of ice dynamic rates obtained from radar altimetry, information about ice sheet changes due to changes in ice dynamics is still limited, especially in East Antarctica. Without understanding ice dynamic rates, it is not possible to properly assess changes in ice sheet mass balance and surface elevation, or to develop ice sheet models. In this study we investigate the possibility of estimating ice sheet changes due to ice dynamic rates by removing modelled rates of surface mass balance, firn compaction, and bedrock uplift from satellite altimetry and gravity observations. With similar rates of ice discharge acquired from two different satellite missions, we show that it is possible to obtain an approximation of the rate of change due to ice dynamics by combining altimetry and gravity observations. Thus, surface elevation changes due to surface mass balance, firn compaction, and ice dynamic rates can be modelled and correlated with observed elevation changes from satellite altimetry.
Peña, Carlos; Espeland, Marianne
2015-01-01
The species rich butterfly family Nymphalidae has been used to study evolutionary interactions between plants and insects. Theories of insect-hostplant dynamics predict accelerated diversification due to key innovations. In evolutionary biology, analysis of maximum credibility trees in the software MEDUSA (modelling evolutionary diversity using stepwise AIC) is a popular method for estimation of shifts in diversification rates. We investigated whether phylogenetic uncertainty can produce different results by extending the method across a random sample of trees from the posterior distribution of a Bayesian run. Using the MultiMEDUSA approach, we found that phylogenetic uncertainty greatly affects diversification rate estimates. Different trees produced diversification rates ranging from high values to almost zero for the same clade, and both significant rate increase and decrease in some clades. Only four out of 18 significant shifts found on the maximum clade credibility tree were consistent across most of the sampled trees. Among these, we found accelerated diversification for Ithomiini butterflies. We used the binary speciation and extinction model (BiSSE) and found that a hostplant shift to Solanaceae is correlated with increased net diversification rates in Ithomiini, congruent with the diffuse cospeciation hypothesis. Our results show that taking phylogenetic uncertainty into account when estimating net diversification rate shifts is of great importance, as very different results can be obtained when using the maximum clade credibility tree and other trees from the posterior distribution. PMID:25830910
Directory of Open Access Journals (Sweden)
R. Cai
2017-10-01
A new balance formula to estimate the new particle formation rate is proposed. It is derived from the aerosol general dynamic equation in discrete form and then converted into an approximately continuous form for analyzing data from new particle formation (NPF) field campaigns. The new formula corrects the underestimation of the coagulation scavenging effect in previously used formulae, and clarifies the criteria for determining the upper size bound in measured aerosol size distributions when estimating the new particle formation rate. An NPF field campaign was carried out from 7 March to 7 April 2016 in urban Beijing, and a diethylene glycol scanning mobility particle spectrometer equipped with a miniature cylindrical differential mobility analyzer was used to measure aerosol size distributions down to ∼ 1 nm. Eleven typical NPF events were observed during this period. Measured aerosol size distributions from 1 nm to 10 µm were used to test the new formula and the formulae widely used in the literature. The previously used formulae, which perform well in a relatively clean atmosphere where nucleation intensity is not strong, were found to underestimate the comparatively high new particle formation rate in urban Beijing because they underestimate or neglect the coagulation scavenging effect. The coagulation sink term is the governing component of the estimated formation rate in the observed NPF events in Beijing, and coagulation among newly formed particles contributes a large fraction of the coagulation sink term. Previously reported formation rates in Beijing and in other locations with intense NPF events might be underestimated because the coagulation scavenging effect was not fully considered; e.g., estimated formation rates of 1.5 nm particles in this campaign using the new formula are 1.3-4.3 times those estimated using the formula neglecting coagulation among particles in the nucleation mode.
A small-scale eruption leading to a blowout macrospicule jet in an on-disk coronal hole
International Nuclear Information System (INIS)
Adams, Mitzi; Sterling, Alphonse C.; Moore, Ronald L.; Gary, G. Allen
2014-01-01
We examine the three-dimensional magnetic structure and dynamics of a solar EUV-macrospicule jet that occurred on 2011 February 27 in an on-disk coronal hole. The observations are from the Solar Dynamics Observatory (SDO) Atmospheric Imaging Assembly (AIA) and the SDO Helioseismic and Magnetic Imager (HMI). The observations reveal that in this event, closed-field-carrying cool absorbing plasma, as in an erupting mini-filament, erupted and opened, forming a blowout jet. Contrary to some jet models, there was no substantial recently emerged, closed, bipolar-magnetic field in the base of the jet. Instead, over several hours, flux convergence and cancellation at the polarity inversion line inside an evolved arcade in the base apparently destabilized the entire arcade, including its cool-plasma-carrying core field, to undergo a blowout eruption in the manner of many standard-sized, arcade-blowout eruptions that produce a flare and coronal mass ejection. Internal reconnection made bright 'flare' loops over the polarity inversion line inside the blowing-out arcade field, and external reconnection of the blowing-out arcade field with an ambient open field made longer and dimmer EUV loops on the outside of the blowing-out arcade. That the loops made by the external reconnection were much larger than the loops made by the internal reconnection makes this event a new variety of blowout jet, a variety not recognized in previous observations and models of blowout jets.
Estimation of leak rate through circumferential cracks in pipes in nuclear power plants
Directory of Open Access Journals (Sweden)
Jai Hak Park
2015-04-01
The leak-before-break (LBB) concept is widely used in designing pipe lines in nuclear power plants. According to the concept, the amount of liquid leaking from a pipe should exceed the minimum detectable leak rate of a leak detection system before catastrophic failure occurs. Therefore, accurate estimation of the leak rate is important to evaluate the validity of the LBB concept in pipe line design. In this paper, a program was developed to estimate the leak rate through circumferential cracks in pipes in nuclear power plants using the Henry-Fauske flow model and a modified Henry-Fauske flow model. Using the developed program, the leak rate was calculated for a circumferential crack in a sample pipe, and the effect of the flow model on the leak rate was examined. Treating the crack morphology parameters as random variables, the statistical behavior of the leak rate was also examined. It was found that the crack morphology parameters have a strong effect on the leak rate, and that the statistical behavior of the leak rate can be simulated using normally distributed crack morphology parameters.
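The statistical part of the study, propagating normally distributed crack morphology parameters through a leak-rate model, can be sketched as a generic Monte Carlo loop. The stand-in leak model below is entirely hypothetical (the real calculation would be the Henry-Fauske model); only the propagation pattern is the point:

```python
import numpy as np

def leak_rate_mc(model, param_means, param_sds, n=10000, seed=0):
    """Monte Carlo propagation of normally distributed crack-morphology
    parameters through a leak-rate model. `model` is any callable taking
    one parameter vector; returns the mean and standard deviation of the
    simulated leak rates."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(param_means, param_sds, size=(n, len(param_means)))
    rates = np.apply_along_axis(model, 1, samples)
    return rates.mean(), rates.std()

# Hypothetical stand-in: leak rate proportional to crack-opening area,
# reduced by surface roughness and number of turns along the flow path.
def toy_leak_model(p):
    area, roughness, turns = p
    return max(area, 0.0) / (1.0 + 0.1 * roughness + 0.05 * turns)
```

Replacing `toy_leak_model` with a thermal-hydraulic flow model reproduces the kind of leak-rate distribution study described in the abstract.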
Estimating rates of local species extinction, colonization and turnover in animal communities
Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.
1998-01-01
Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have recently been used to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time: rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts, as well as on basic questions about community dynamics.
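The key point above, that not all species are detected, is what detection-corrected estimators address. As a concrete illustration of the idea (a classic first-order jackknife richness estimator, not the COMDYN vital-rate estimators themselves), species seen on only one sampling occasion are used to correct the raw count upward:

```python
def jackknife_richness(detection_matrix):
    """First-order jackknife estimate of species richness from a
    species-by-occasion 0/1 detection matrix:
    S_hat = S_obs + f1 * (k - 1) / k,
    where f1 is the number of species detected on exactly one of the
    k occasions."""
    k = len(detection_matrix[0])                     # number of occasions
    s_obs = sum(1 for row in detection_matrix if any(row))
    f1 = sum(1 for row in detection_matrix if sum(row) == 1)
    return s_obs + f1 * (k - 1) / k
```

Comparing such detection-corrected richness estimates across years is the starting point for the extinction, colonization and turnover rates the abstract describes.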
Estimation of construction and demolition waste using waste generation rates in Chennai, India.
Ram, V G; Kalidindi, Satyanarayana N
2017-06-01
A large amount of construction and demolition waste is being generated owing to rapid urbanisation in Indian cities. A reliable estimate of construction and demolition waste generation is essential to create awareness about this stream of solid waste among the government bodies in India. However, the required data to estimate construction and demolition waste generation in India are unavailable or not explicitly documented. This study proposed an approach to estimate construction and demolition waste generation using waste generation rates and demonstrated it by estimating construction and demolition waste generation in Chennai city. The demolition waste generation rates of primary materials were determined through regression analysis using waste generation data from 45 case studies. Materials, such as wood, electrical wires, doors, windows and reinforcement steel, were found to be salvaged and sold on the secondary market. Concrete and masonry debris were dumped in either landfills or unauthorised places. The total quantity of construction and demolition debris generated in Chennai city in 2013 was estimated to be 1.14 million tonnes. The proportion of masonry debris was found to be 76% of the total quantity of demolition debris. Construction and demolition debris forms about 36% of the total solid waste generated in Chennai city. A gross underestimation of construction and demolition waste generation in some earlier studies in India has also been shown. The methodology proposed could be utilised by government bodies, policymakers and researchers to generate reliable estimates of construction and demolition waste in other developing countries facing similar challenges of limited data availability.
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...
A novel multitemporal InSAR model for joint estimation of deformation rates and orbital errors
Zhang, Lei; Ding, Xiaoli; Lu, Zhong; Jung, Hyungsup; Hu, Jun; Feng, Guangcai
2014-01-01
be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatiotemporal characteristics of the two types of signals. The proposed model is able to isolate a long
Optimization of hierarchical 3DRS motion estimators for picture rate conversion
Heinrich, A.; Bartels, C.L.L.; Vleuten, van der R.J.; Cordes, C.N.; Haan, de G.
2010-01-01
There is a continuous pressure to lower the implementation complexity and improve the quality of motion-compensated picture rate conversion methods. Since the concept of hierarchy can be advantageously applied to many motion estimation methods, we have extended and improved the current
An expert system for estimating production rates and costs for hardwood group-selection harvests
Chris B. LeDoux; B. Gopalakrishnan; R. S. Pabba
2003-01-01
As forest managers shift their focus from stands to entire ecosystems alternative harvesting methods such as group selection are being used increasingly. Results of several field time and motion studies and simulation runs were incorporated into an expert system for estimating production rates and costs associated with harvests of group-selection units of various size...
Estimated glomerular filtration rate and risk of survival in acute stroke
African Journals Online (AJOL)
2014-03-03
E. I. Okaka, MBBS, FWACP; F. A. Imarhiagbe, MBChB, FMCP; F. E. Odiase, MBBS, FMCP; O. C. A. Okoye, MBBS, FWACP. Department of Medicine, University of Benin Teaching Hospital, Benin City, Nigeria.
Brownscombe, J W; Lennox, R J; Danylchuk, A J; Cooke, S J
2018-06-21
Accelerometry is growing in popularity for remotely measuring fish swimming metrics, but appropriate sampling frequencies for accurately measuring these metrics are not well studied. This research examined the influence of sampling frequency (1-25 Hz) with tri-axial accelerometer biologgers on estimates of overall dynamic body acceleration (ODBA), tail-beat frequency, swimming speed and metabolic rate of bonefish Albula vulpes in a swim-tunnel respirometer and free-swimming in a wetland mesocosm. In the swim tunnel, sampling frequencies of ≥ 5 Hz were sufficient to establish strong relationships between ODBA, swimming speed and metabolic rate. However, in free-swimming bonefish, estimates of metabolic rate were more variable below 10 Hz. Sampling frequencies should be at least twice the maximum tail-beat frequency to estimate this metric effectively, which is generally higher than the frequency required to estimate ODBA, swimming speed and metabolic rate. While the optimal sampling frequency probably varies among species with tail-beat frequency and swimming style, this study provides a reference point for a medium-sized sub-carangiform teleost fish, enabling researchers to measure these metrics effectively and maximize study duration. This article is protected by copyright. All rights reserved.
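ODBA, the central metric above, is conventionally computed by removing a running-mean estimate of the static (gravitational) component from each accelerometer axis and summing the absolute dynamic remainders. A minimal sketch of that standard computation (the smoothing window of 25 samples is a placeholder, not a value from the study):

```python
import numpy as np

def odba(ax, ay, az, window=25):
    """Overall dynamic body acceleration: subtract a running-mean estimate
    of the static (gravitational) component from each axis, then sum the
    absolute dynamic accelerations. The window should span at least one
    tail-beat cycle; 25 samples is a placeholder value."""
    def dynamic(a):
        kernel = np.ones(window) / window
        static = np.convolve(a, kernel, mode="same")   # running mean
        return np.abs(a - static)
    return dynamic(np.asarray(ax)) + dynamic(np.asarray(ay)) + dynamic(np.asarray(az))
```

A constant (purely gravitational) signal yields ODBA near zero away from the window edges, while oscillatory swimming motion raises it, which is why ODBA tracks activity and, via calibration, metabolic rate.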
Estimation of shutdown heat generation rates in GHARR-1 due to ...
African Journals Online (AJOL)
Fission-product decay power and residual fission power generated after shutdown of the Ghana Research Reactor-1 (GHARR-1) by a reactivity insertion accident were estimated by solving the decay and residual heat equations. A Matlab program code was developed to simulate the heat generation rates by fission product ...
Estimates of Annual Soil Loss Rates in the State of São Paulo, Brazil
Directory of Open Access Journals (Sweden)
Grasiela de Oliveira Rodrigues Medeiros
Full Text Available ABSTRACT: Soil is a natural resource that has been affected by human pressures beyond its renewal capacity. For this reason, large agricultural areas that were productive have been abandoned due to soil degradation, mainly caused by the erosion process. The objective of this study was to apply the Universal Soil Loss Equation to generate more recent estimates of soil loss rates for the state of São Paulo using a database with information from medium resolution (30 m). The results showed that many areas of the state have high (critical) levels of soil degradation due to the predominance of consolidated human activities, especially sugarcane cultivation and pasture use. The average estimated rate of soil loss is 30 Mg ha-1 yr-1 and 59 % of the area of the state (except for water bodies and urban areas) had estimated rates above 12 Mg ha-1 yr-1, considered as the average tolerance limit in the literature. The average rates of soil loss in areas with annual agricultural crops, semi-perennial agricultural crops (sugarcane), and permanent agricultural crops were 118, 78, and 38 Mg ha-1 yr-1, respectively. The state of São Paulo requires attention to conservation of soil resources, since the estimated rates for most soils exceeded the tolerance limit.
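The USLE referred to above is a simple multiplicative model, so the computation is easy to sketch. The factor values below are purely illustrative and are not taken from the study:

```python
# Universal Soil Loss Equation (USLE): A = R * K * LS * C * P
# A  - annual soil loss (Mg ha^-1 yr^-1), R - rainfall erosivity,
# K  - soil erodibility, LS - topographic (slope length/steepness) factor,
# C  - cover management factor, P - support practice factor
def usle_soil_loss(R, K, LS, C, P):
    return R * K * LS * C * P

# Purely illustrative factor values for a hypothetical sugarcane plot
A = usle_soil_loss(R=6500.0, K=0.025, LS=1.2, C=0.30, P=1.0)
print(round(A, 1))  # annual soil loss in Mg ha^-1 yr^-1
```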
ESTIMATING SPECIATION AND EXTINCTION RATES FROM DIVERSITY DATA AND THE FOSSIL RECORD
Etienne, Rampal S.; Apol, M. Emile F.
Understanding the processes that underlie biodiversity requires insight into the evolutionary history of the taxa involved. Accurate estimation of speciation, extinction, and diversification rates is a prerequisite for gaining this insight. Here, we develop a stochastic birth-death model of
Neural estimation of kinetic rate constants from dynamic PET-scans
DEFF Research Database (Denmark)
Fog, Torben L.; Nielsen, Lars Hupfeldt; Hansen, Lars Kai
1994-01-01
A feedforward neural net is trained to invert a simple three compartment model describing the tracer kinetics involved in the metabolism of [18F]fluorodeoxyglucose in the human brain. The network can estimate rate constants from positron emission tomography sequences and is about 50 times faster ...
Estimation of Oil Production Rates in Reservoirs Exposed to Focused Vibrational Energy
Jeong, Chanseok; Kallivokas, Loukas F.; Huh, Chun; Lake, Larry W.
2014-01-01
the production rate of remaining oil from existing oil fields. To date, there are few theoretical studies on estimating how much bypassed oil within an oil reservoir could be mobilized by such vibrational stimulation. To fill this gap, this paper presents a
Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model
International Nuclear Information System (INIS)
Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.
2012-01-01
The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications including high speed milling using Terfenol-D actuators. There exist a variety of frameworks for characterizing rate-dependent hysteresis including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.
International Nuclear Information System (INIS)
Telegadas, K.
1979-01-01
A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)
Estimating the annotation error rate of curated GO database sequence annotations
Directory of Open Access Journals (Sweden)
Brown Alfred L
2007-05-01
Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.
Comparing avian and bat fatality rate estimates among North American wind energy projects
Energy Technology Data Exchange (ETDEWEB)
Smallwood, Shawn
2011-07-01
Full text: Wind energy development has expanded rapidly, and so have concerns over bird and bat impacts caused by wind turbines. To assess and compare impacts due to collisions, investigators use a common metric, fatalities/MW/year, but estimates of fatality rates have come from various wind turbine models, tower heights, environments, fatality search methods, and analytical methods. To improve comparability and assess large-scale impacts, I applied a common set of assumptions and methods to data in fatality monitoring reports to estimate fatality rates of birds and bats at 71 wind projects across North America (52 outside the Altamont Pass Wind Resource Area, APWRA). The data were from wind turbines of 27 sizes (range 0.04-3.00 MW) and 28 tower heights (range 18.5-90 m), and searched at 40 periodic intervals (range 1-90 days) and out to 20 distances from turbines (range 30-126 m). Estimates spanned the years 1982 to 2010, and involved 1-1,345 turbines per unique combination of project, turbine size, tower height, and search methodology. I adjusted fatality rates for search detection rates averaged from 425 detection trials, and for scavenger removal rates based on 413 removal trials. I also adjusted fatality rates for turbine tower height and maximum search radius, based on logistic functions fit to cumulative counts of carcasses that were detected at 1-m distance intervals from the turbine. For each tower height, I estimated the distance at which cumulative carcass counts reached an asymptote, and for each project I calculated the proportion of fatalities likely not found due to the maximum search radius being short of the model-predicted distance asymptote. I used the same estimator in all cases. I estimated mean fatalities/MW/year among North American wind projects at 12.6 bats (80% CI: 8.1-17.1) and 11.1 birds (80% CI: 9.5-12.7), including 1.6 raptors (80% CI: 1.3-2.0), and excluding the Altamont Pass I estimated fatality rates at 17.2 bats (80% CI: 9
Harman, Richard R.
2006-01-01
The advantages of inducing a constant spin rate on a spacecraft are well known. A variety of science missions have used this technique as a relatively low cost method for conducting science. Starting in the late 1970s, NASA focused on building spacecraft using 3-axis control as opposed to the single-axis control mentioned above. Considerable effort was expended toward sensor and control system development, as well as the development of ground systems to independently process the data. As a result, spinning spacecraft development and their resulting ground system development stagnated. In the 1990s, shrinking budgets made spinning spacecraft an attractive option for science. The attitude requirements for recent spinning spacecraft are more stringent and the ground systems must be enhanced in order to provide the necessary attitude estimation accuracy. Since spinning spacecraft (SC) typically have no gyroscopes for measuring attitude rate, any new estimator would need to rely on the spacecraft dynamics equations. One estimation technique that utilized the SC dynamics and has been used successfully in 3-axis gyro-less spacecraft ground systems is the pseudo-linear Kalman filter algorithm. Consequently, a pseudo-linear Kalman filter has been developed which directly estimates the spacecraft attitude quaternion and rate for a spinning SC. Recently, a filter using Markley variables was developed specifically for spinning spacecraft. The pseudo-linear Kalman filter has the advantage of being easier to implement but estimates the quaternion which, due to the relatively high spinning rate, changes rapidly for a spinning spacecraft. The Markley variable filter is more complicated to implement but, being based on the SC angular momentum, estimates parameters which vary slowly. This paper presents a comparison of the performance of these two filters. Monte-Carlo simulation runs will be presented which demonstrate the advantages and disadvantages of both filters.
Direct estimates of unemployment rate and capacity utilization in macroeconometric models
Energy Technology Data Exchange (ETDEWEB)
Klein, L R [Univ. of Pennsylvania, Philadelphia; Su, V
1979-10-01
The problem of measuring resource-capacity utilization as a factor in overall economic efficiency is examined, and a tentative solution is offered. A macro-econometric model is applied to the aggregate production function by linking unemployment rate and capacity utilization rate. Partial- and full-model simulations use Wharton indices as a filter and produce direct estimates of unemployment rates. The simulation paths of durable-goods industries, which are more capital-intensive, are found to be more sensitive to business cycles than the nondurable-goods industries. 11 references.
Theoretical estimation of Photons flow rate Production in quark gluon interaction at high energies
Al-Agealy, Hadi J. M.; Hamza Hussein, Hyder; Mustafa Hussein, Saba
2018-05-01
Photons emitted from high-energy collisions in a quark-gluon system have been studied theoretically on the basis of colour quantum theory. A simple model for photon emission from a quark-gluon system is investigated. In this model, we use a quantum treatment suited to describing the quark system. Photon production rates are estimated for two systems at different fugacity coefficients. We discuss the behaviour of the photon rate and the properties of the quark-gluon system at different photon energies with a Boltzmann model. The dependence of the photon rate on the anisotropy coefficient, the strong coupling constant, the photon energy, the colour number, the fugacity parameter, and the thermal and critical energies of the system is also discussed.
Using ²¹⁰Pb measurements to estimate sedimentation rates on river floodplains.
Du, P; Walling, D E
2012-01-01
Growing interest in the dynamics of floodplain evolution and the important role of overbank sedimentation on river floodplains as a sediment sink has focused attention on the need to document contemporary and recent rates of overbank sedimentation. The potential for using the fallout radionuclides ¹³⁷Cs and excess ²¹⁰Pb to estimate medium-term (10-10² years) sedimentation rates on river floodplains has attracted increasing attention. Most studies that have successfully used fallout radionuclides for this purpose have focused on the use of ¹³⁷Cs. However, the use of excess ²¹⁰Pb potentially offers a number of advantages over ¹³⁷Cs measurements. Most existing investigations that have used excess ²¹⁰Pb measurements to document sedimentation rates have, however, focused on lakes rather than floodplains and the transfer of the approach, and particularly the models used to estimate the sedimentation rate, to river floodplains involves a number of uncertainties, which require further attention. This contribution reports the results of an investigation of overbank sedimentation rates on the floodplains of several UK rivers. Sediment cores were collected from seven floodplain sites representative of different environmental conditions and located in different areas of England and Wales. Measurements of excess ²¹⁰Pb and ¹³⁷Cs were made on these cores. The ²¹⁰Pb measurements have been used to estimate sedimentation rates and the results obtained by using different models have been compared. The ¹³⁷Cs measurements have also been used to provide an essentially independent time marker for validation purposes. In using the ²¹⁰Pb measurements, particular attention was directed to the problem of obtaining reliable estimates of the supported and excess or unsupported components of the total ²¹⁰Pb activity of sediment samples. Although there was a reasonable degree of consistency between the estimates of sedimentation rate provided by
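One of the simplest models compared in studies like the one above is the constant flux–constant sedimentation (CF:CS) model, in which the sedimentation rate follows from the slope of ln(excess ²¹⁰Pb) against depth. A minimal sketch with a synthetic profile (the data are illustrative, not from the cores in the study):

```python
import numpy as np

LAMBDA_PB210 = 0.03114  # 210Pb decay constant, yr^-1 (half-life ~22.26 yr)

def cfcs_sedimentation_rate(depth_cm, excess_pb210):
    """CF:CS model: ln(C) = ln(C0) - (lambda/S) * z, so S = -lambda / slope."""
    slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
    return -LAMBDA_PB210 / slope  # sedimentation rate in cm/yr

# Synthetic profile generated with a true rate of S = 0.5 cm/yr
z = np.arange(0, 30, 2.0)
c = 120.0 * np.exp(-LAMBDA_PB210 / 0.5 * z)
print(round(cfcs_sedimentation_rate(z, c), 3))  # -> 0.5
```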
Estimation of the prevalence and rate of acute transfusion reactions occurring in Windhoek, Namibia
Meza, Benjamin P.L.; Lohrke, Britta; Wilkinson, Robert; Pitman, John P.; Shiraishi, Ray W.; Bock, Naomi; Lowrance, David W.; Kuehnert, Matthew J.; Mataranyika, Mary; Basavaraju, Sridhar V.
2014-01-01
Background Acute transfusion reactions are probably common in sub-Saharan Africa, but transfusion reaction surveillance systems have not been widely established. In 2008, the Blood Transfusion Service of Namibia implemented a national acute transfusion reaction surveillance system, but substantial under-reporting was suspected. We estimated the actual prevalence and rate of acute transfusion reactions occurring in Windhoek, Namibia. Methods The percentage of transfusion events resulting in a reported acute transfusion reaction was calculated. Actual percentages and rates of acute transfusion reactions per 1,000 transfused units were estimated by reviewing patients’ records from six hospitals, which transfuse >99% of all blood in Windhoek. Patients’ records for 1,162 transfusion events occurring between 1st January – 31st December 2011 were randomly selected. Clinical and demographic information were abstracted and Centers for Disease Control and Prevention National Healthcare Safety Network criteria were applied to categorize acute transfusion reactions. Results From January 1 – December 31, 2011, there were 3,697 transfusion events (involving 10,338 blood units) in the selected hospitals. Eight (0.2%) acute transfusion reactions were reported to the surveillance system. Of the 1,162 transfusion events selected, medical records for 785 transfusion events were analysed, and 28 acute transfusion reactions were detected, of which only one had also been reported to the surveillance system. An estimated 3.4% (95% confidence interval [CI]: 2.3–4.4) of transfusion events in Windhoek resulted in an acute transfusion reaction, with an estimated rate of 11.5 (95% CI: 7.6–14.5) acute transfusion reactions per 1,000 transfused units. Conclusion The estimated actual rate of acute transfusion reactions is higher than the rate reported to the national haemovigilance system. Improved surveillance and interventions to reduce transfusion-related morbidity and mortality
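The study's headline figure is a survey-weighted estimate that cannot be reproduced from the abstract alone, but the underlying crude-rate calculation can be sketched. The event and unit counts below are illustrative placeholders, not the study's weighted denominators:

```python
import math

def rate_per_1000(events, units):
    """Crude reaction rate per 1,000 transfused units with a 95%
    normal-approximation confidence interval."""
    p = events / units
    se = math.sqrt(p * (1 - p) / units)
    return 1000 * p, 1000 * (p - 1.96 * se), 1000 * (p + 1.96 * se)

# Illustrative counts only; the published estimate used survey weighting
rate, lo, hi = rate_per_1000(events=28, units=2450)
print(round(rate, 1))  # reactions per 1,000 transfused units
```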
Estimation of time-varying growth, uptake and excretion rates from dynamic metabolomics data.
Cinquemani, Eugenio; Laroute, Valérie; Cocaign-Bousquet, Muriel; de Jong, Hidde; Ropers, Delphine
2017-07-15
Technological advances in metabolomics have made it possible to monitor the concentration of extracellular metabolites over time. From these data, it is possible to compute the rates of uptake and excretion of the metabolites by a growing cell population, providing precious information on the functioning of intracellular metabolism. The computation of the rate of these exchange reactions, however, is difficult to achieve in practice for a number of reasons, notably noisy measurements, correlations between the concentration profiles of the different extracellular metabolites, and discontinuities in the profiles due to sudden changes in metabolic regime. We present a method for precisely estimating time-varying uptake and excretion rates from time-series measurements of extracellular metabolite concentrations, specifically addressing all of the above issues. The estimation problem is formulated in a regularized Bayesian framework and solved by a combination of extended Kalman filtering and smoothing. The method is shown to improve upon methods based on spline smoothing of the data. Moreover, when applied to two actual datasets, the method recovers known features of overflow metabolism in Escherichia coli and Lactococcus lactis, and provides evidence for acetate uptake by L. lactis after glucose exhaustion. The results raise interesting perspectives for further work on rate estimation from measurements of intracellular metabolites. The Matlab code for the estimation method is available for download at https://team.inria.fr/ibis/rate-estimation-software/, together with the datasets. eugenio.cinquemani@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Improved estimates of external gamma dose rates in the environs of Hinkley Point Power Station
International Nuclear Information System (INIS)
Macdonald, H.F.; Thompson, I.M.G.
1988-07-01
The dominant source of external gamma dose rates at centres of population within a few kilometres of Hinkley Point Power Station is the routine discharge of 41-Ar from the 'A' station magnox reactors. Earlier estimates of the 41-Ar radiation dose rates were based upon measured discharge rates, combined with calculations using standard plume dispersion and cloud-gamma integration models. This report presents improved dose estimates derived from environmental gamma dose rate measurements made at distances up to about 1 km from the site, thus minimising the degree of extrapolation introduced in estimating dose rates at locations up to a few kilometres from the site. In addition, results from associated chemical tracer measurements and wind tunnel simulations covering distances up to about 4 km from the station are outlined. These provide information on the spatial distribution of the 41-Ar plume during the initial stages of its dispersion, including effects due to plume buoyancy and momentum and behaviour under light wind conditions. In addition to supporting the methodology used for the 41-Ar dose calculations, this information is also of generic interest in the treatment of a range of operational and accidental releases from nuclear power station sites and will assist in the development and validation of existing environmental models. (author)
Estimation of component failure rates for PSA on nuclear power plants 1982-1997
International Nuclear Information System (INIS)
Kirimoto, Yukihiro; Matsuzaki, Akihiro; Sasaki, Atsushi
2001-01-01
Probabilistic safety assessment (PSA) on nuclear power plants has been studied for many years by the Japanese industry. The PSA methodology has been improved so that PSAs for all commercial LWRs were performed and used to examine accident management. On the other hand, most data on component failure rates in these PSAs were acquired from U.S. databases. The Nuclear Information Center (NIC) of the Central Research Institute of Electric Power Industry (CRIEPI) serves utilities by providing safety- and reliability-related information on the operation and maintenance of nuclear power plants, and by evaluating plant performance and incident trends. NIC therefore started a research study on estimating the major component failure rates at the request of the utilities in 1988. As a result, we estimated the hourly failure rates of 47 component types and the demand failure rates of 15 component types. The set of domestic component reliability data from 1982 to 1991 for 34 LWRs was evaluated by a group of PSA experts in Japan at the Nuclear Safety Research Association (NSRA) in 1995 and 1996, and the evaluation report was issued in March 1997. This document describes the revised component failure rates calculated by our re-estimation on 49 Japanese LWRs from 1982 to 1997. (author)
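A standard way to turn such pooled operating experience into hourly failure rates, as in PSA data handbooks, is a point estimate of failures over component-hours with chi-square confidence bounds. A minimal sketch; the counts are illustrative, not taken from the Japanese LWR data:

```python
from scipy.stats import chi2

def hourly_failure_rate(failures, hours, conf=0.90):
    """Point estimate and two-sided chi-square confidence bounds for a
    constant (exponential) failure rate from pooled operating experience."""
    point = failures / hours
    lower = chi2.ppf((1 - conf) / 2, 2 * failures) / (2 * hours)
    upper = chi2.ppf(1 - (1 - conf) / 2, 2 * failures + 2) / (2 * hours)
    return point, lower, upper

# Illustrative: 4 failures observed over 2.0e6 component-hours
p, lo, hi = hourly_failure_rate(4, 2.0e6)
print(f"{p:.1e} per hour")
```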
Directory of Open Access Journals (Sweden)
Timothy L. Vaughn
2017-11-01
Full Text Available Coordinated dual-tracer, aircraft-based, and direct component-level measurements were made at midstream natural gas gathering and boosting stations in the Fayetteville shale (Arkansas, USA). On-site component-level measurements were combined with engineering estimates to generate comprehensive facility-level methane emission rate estimates (“study on-site estimates”, SOE) comparable to tracer and aircraft measurements. Combustion slip (unburned fuel entrained in compressor engine exhaust), which was calculated based on 111 recent measurements of representative compressor engines, accounts for an estimated 75% of cumulative SOEs at gathering stations included in comparisons. Measured methane emissions from regenerator vents on glycol dehydrator units were substantially larger than predicted by modelling software; the contribution of dehydrator regenerator vents to the cumulative SOE would increase from 1% to 10% if based on direct measurements. Concurrent measurements at 14 normally-operating facilities show relative agreement between tracer and SOE, but indicate that tracer measurements estimate lower emissions (regression of tracer to SOE = 0.91; 95% CI = 0.83–0.99; R² = 0.89). Tracer and SOE 95% confidence intervals overlap at 11/14 facilities. Contemporaneous measurements at six facilities suggest that aircraft measurements estimate higher emissions than SOE. Aircraft and study on-site estimate 95% confidence intervals overlap at 3/6 facilities. The average facility-level emission rate (FLER) estimated by tracer measurements in this study is 17–73% higher than a prior national study by Marchese et al.
ESTIMATION OF THE WANDA GLACIER (SOUTH SHETLANDS) SEDIMENT EROSION RATE USING NUMERICAL MODELLING
Directory of Open Access Journals (Sweden)
Kátia Kellem Rosa
2013-09-01
Full Text Available Glacial sediment yield results from glacial erosion and is influenced by several factors including glacial retreat rate, ice flow velocity and thermal regime. This paper estimates the contemporary subglacial erosion rate and sediment yield of Wanda Glacier (King George Island, South Shetlands), a small temperate glacier that has been retreating for the last few decades. We also examine basal sediment evacuation mechanisms by runoff and analyze glacial erosion processes occurring during subglacial transport. The glacial erosion rate at Wanda Glacier, estimated using a numerical model that considers sediment evacuated to outlet streams, ice flow velocity, ice thickness and glacier area, is 1.1 ton m yr-1.
Directory of Open Access Journals (Sweden)
Nicholas Bardsley
Full Text Available Consumption surveys often record zero purchases of a good because of a short observation window. Measures of distribution are then precluded and only mean consumption rates can be inferred. We show that Propensity Score Matching can be applied to recover the distribution of consumption rates. We demonstrate the method using the UK National Travel Survey, in which c.40% of motorist households purchase no fuel. Estimated consumption rates are plausible judging by households' annual mileages, and highly skewed. We apply the same approach to estimate CO2 emissions and outcomes of a carbon cap or tax. Reliance on means apparently distorts analysis of such policies because of skewness of the underlying distributions. The regressiveness of a simple tax or cap is overstated, and redistributive features of a revenue-neutral policy are understated.
Soft error rate estimations of the Kintex-7 FPGA within the ATLAS Liquid Argon (LAr) Calorimeter
International Nuclear Information System (INIS)
Wirthlin, M J; Harding, A; Takai, H
2014-01-01
This paper summarizes the radiation testing performed on the Xilinx Kintex-7 FPGA in an effort to determine if the Kintex-7 can be used within the ATLAS Liquid Argon (LAr) Calorimeter. The Kintex-7 device was tested with wide-spectrum neutrons, protons, heavy-ions, and mixed high-energy hadron environments. The results of these tests were used to estimate the configuration RAM and block RAM upset rate within the ATLAS LAr. These estimations suggest that the configuration memory will upset at a rate of 1.1 × 10⁻¹⁰ upsets/bit/s and the block RAM will upset at a rate of 9.06 × 10⁻¹¹ upsets/bit/s. For the Kintex 7K325 device, this translates to 6.85 × 10⁻³ upsets/device/s for configuration memory and 1.49 × 10⁻³ upsets/device/s for block memory.
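Scaling a per-bit upset rate to a per-device rate is a single multiplication under the assumption of independent bit upsets. The configuration-memory size below is an assumed illustrative figure, not a quoted device specification:

```python
def device_upset_rate(per_bit_rate, n_bits):
    """Per-device upset rate from a per-bit rate, assuming independent bit upsets."""
    return per_bit_rate * n_bits

# Assumed illustrative configuration-memory size (not a quoted device figure)
cfg_bits = 6.2e7
print(f"{device_upset_rate(1.1e-10, cfg_bits):.2e} upsets/device/s")
```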
Akkermans, Simen; Logist, Filip; Van Impe, Jan F
2018-04-01
When building models to describe the effect of environmental conditions on the microbial growth rate, parameter estimations can be performed either with a one-step method, i.e., directly on the cell density measurements, or in a two-step method, i.e., via the estimated growth rates. The two-step method is often preferred due to its simplicity. The current research demonstrates that the two-step method is, however, only valid if the correct data transformation is applied and a strict experimental protocol is followed for all experiments. Based on a simulation study and a mathematical derivation, it was demonstrated that the logarithm of the growth rate should be used as a variance stabilizing transformation. Moreover, the one-step method leads to a more accurate estimation of the model parameters and a better approximation of the confidence intervals on the estimated parameters. Therefore, the one-step method is preferred and the two-step method should be avoided. Copyright © 2017. Published by Elsevier Ltd.
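The variance-stabilizing effect of the log transform is easy to demonstrate: fitted growth rates typically carry multiplicative error, so their raw variance grows with the mean while the variance of the log stays roughly constant. A minimal simulation sketch with synthetic rates, not the paper's data:

```python
import math
import random
import statistics

random.seed(1)

def noisy_rates(true_mu, n=2000, sigma=0.15):
    """Simulated growth-rate estimates with multiplicative (lognormal) error."""
    return [true_mu * random.lognormvariate(0, sigma) for _ in range(n)]

low = noisy_rates(0.1)   # slow-growth condition
high = noisy_rates(1.0)  # fast-growth condition

# Raw variances differ by orders of magnitude between the two conditions...
print(statistics.variance(high) / statistics.variance(low) > 10)
# ...while after the log transform the variances are roughly equal.
r = statistics.variance([math.log(x) for x in high]) / \
    statistics.variance([math.log(x) for x in low])
print(0.8 < r < 1.25)
```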
Karakas, Filiz; Imamoglu, Ipek
2017-04-01
This study aims to estimate anaerobic debromination rate constants (km) of PBDE pathways using previously reported laboratory soil data. km values of pathways are estimated by modifying a previously developed model known as the Anaerobic Dehalogenation Model. Debromination activities published in the literature in terms of bromine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The range of estimated km values is between 0.0003 and 0.0241 d⁻¹. The median and maximum km values are found to be comparable to the few available biologically confirmed rate constants published in the literature. The estimated km values can be used as input to numerical fate and transport models for a better and more detailed investigation of the fate of individual PBDEs in contaminated sediments. Various remediation scenarios such as monitored natural attenuation or bioremediation with bioaugmentation can be handled in a more quantitative manner with the help of the km values estimated in this study.
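Rate constants such as km assume first-order decay, so one can sketch how a single constant is recovered from a congener concentration time series by ordinary least squares on log concentrations. The data below are synthetic, not from the study:

```python
import math

def debromination_rate_constant(t_days, conc):
    """First-order rate constant k_m from a concentration time series:
    ln C(t) = ln C0 - k_m * t, estimated by ordinary least squares."""
    n = len(t_days)
    y = [math.log(c) for c in conc]
    tbar = sum(t_days) / n
    ybar = sum(y) / n
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(t_days, y)) / \
            sum((t - tbar) ** 2 for t in t_days)
    return -slope  # d^-1

# Synthetic decay generated with a true k_m = 0.01 d^-1
t = [0, 30, 60, 90, 120, 180]
c = [100 * math.exp(-0.01 * ti) for ti in t]
print(round(debromination_rate_constant(t, c), 4))  # -> 0.01
```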
A constrained polynomial regression procedure for estimating the local False Discovery Rate
Directory of Open Access Journals (Sweden)
Broët Philippe
2007-06-01
Full Text Available Abstract Background In the context of genomic association studies, for which a large number of statistical tests are performed simultaneously, the local False Discovery Rate (lFDR), which quantifies the evidence of a specific gene association with a clinical or biological variable of interest, is a relevant criterion for taking into account the multiple testing problem. The lFDR not only allows an inference to be made for each gene through its specific value, but also an estimate of Benjamini-Hochberg's False Discovery Rate (FDR) for subsets of genes. Results In the framework of estimating procedures without any distributional assumption under the alternative hypothesis, a new and efficient procedure for estimating the lFDR is described. The results of a simulation study indicated good performances for the proposed estimator in comparison to four published ones. The five different procedures were applied to real datasets. Conclusion A novel and efficient procedure for estimating lFDR was developed and evaluated.
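The lFDR is the ratio pi0·f0(z)/f(z) of the null density to the observed mixture density at a test statistic. The paper's constrained polynomial-regression estimator is not reproduced here; instead a crude kernel-density sketch illustrates the quantity, using simulated z-scores and an assumed standard normal null:

```python
import math
import random

random.seed(0)

def lfdr(z_scores, z, pi0=1.0, bw=0.3):
    """Crude local FDR: pi0 * f0(z) / fhat(z), with a standard normal null f0
    and a Gaussian kernel density estimate fhat of the observed mixture."""
    f0 = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    fhat = sum(math.exp(-((z - zi) / bw) ** 2 / 2) for zi in z_scores) / \
           (len(z_scores) * bw * math.sqrt(2 * math.pi))
    return min(1.0, pi0 * f0 / fhat)

# Simulated study: 90% null genes, N(0,1); 10% associated genes, N(3,1)
zs = [random.gauss(0, 1) for _ in range(900)] + \
     [random.gauss(3, 1) for _ in range(100)]
print(lfdr(zs, 0.0) > lfdr(zs, 4.0))  # evidence of association is weaker near z = 0
```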
Fetal QRS detection and heart rate estimation: a wavelet-based approach
International Nuclear Information System (INIS)
Almeida, Rute; Rocha, Ana Paula; Gonçalves, Hernâni; Bernardes, João
2014-01-01
Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR. (paper)
Supporting information for the estimation of plutonium oxide leak rates through very small apertures
International Nuclear Information System (INIS)
Schwendiman, L.C.
1977-01-01
Information is presented from which an estimate can be made of the release of plutonium oxide from shipping containers. The leak diameter is estimated from gas leak tests of the container and an estimate is made of gas leak rate as a function of pressure over the time of interest in the accident. These calculations are limited in accuracy because of assumptions regarding leak geometry and the basic formulations of hydrodynamic flow for the assumed conditions. Sonic flow is assumed to be the limiting gas flow rate. Particles leaking from the air space above the powder will be limited by the low availability of particles due to rapid settling, the very limited driving force (pressure buildup) during the first minute, and the deposition in the leak channel. Equations are given to estimate deposition losses. Leaks of particles occurring below the level of the bulk powder will be limited by mechanical interference when leaks are of dimension smaller than particle sizes present. Some limiting cases can be calculated. When the leak dimension is large compared to the particle sizes present, maximum particle releases can be estimated, but will be very conservative
Distortion-Rate Bounds for Distributed Estimation Using Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Nihar Jindal
2008-03-01
Full Text Available We deal with centralized and distributed rate-constrained estimation of random signal vectors performed using a network of wireless sensors (encoders) communicating with a fusion center (decoder). For this context, we determine lower and upper bounds on the corresponding distortion-rate (D-R) function. The nonachievable lower bound is obtained by considering centralized estimation with a single sensor which has all observation data available, and by determining the associated D-R function in closed form. Interestingly, this D-R function can be achieved using an estimate-first, compress-afterwards (EC) approach, where the sensor (i) forms the minimum mean-square error (MMSE) estimate for the signal of interest; and (ii) optimally (in the MSE sense) compresses and transmits it to the FC that reconstructs it. We further derive a novel alternating scheme to numerically determine an achievable upper bound of the D-R function for general distributed estimation using multiple sensors. The proposed algorithm tackles an analytically intractable minimization problem, while it accounts for sensor data correlations. The obtained upper bound is tighter than the one determined by having each sensor perform MSE-optimal encoding independently of the others. Numerical examples indicate that the algorithm performs well and yields D-R upper bounds which are relatively tight with respect to analytical alternatives obtained without taking into account the cross-correlations among sensor data.
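For orientation, the centralized single-source case has a textbook closed form when the source is a memoryless Gaussian scalar under squared-error distortion: D(R) = sigma^2 * 2^(-2R). A sketch of this standard benchmark (the paper's vector D-R bounds are more involved and are not reproduced here):

```python
def gaussian_dr(sigma2, rate_bits):
    """Distortion-rate function of a memoryless Gaussian source under
    squared-error distortion: D(R) = sigma^2 * 2**(-2R)."""
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

# Each extra bit of rate cuts the MSE distortion by a factor of 4.
print(gaussian_dr(1.0, 0))   # 1.0
print(gaussian_dr(1.0, 1))   # 0.25
print(gaussian_dr(1.0, 2))   # 0.0625
```

At zero rate the best estimate is the source mean, so the distortion equals the source variance, consistent with the formula.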
Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability
Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko
In this paper, we propose an algorithm to estimate sleep quality from heart rate variability using chaos analysis. Polysomnography (PSG) is the conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect, by estimating sleep quality based on multiple channels. However, the recording process requires a lot of time and a controlled measurement environment, and analyzing PSG data is hard work because the huge amount of sensed data must be evaluated manually. At the same time, attention has recently focused on people who make mistakes or cause accidents owing to loss of regular sleep and of homeostasis. A simple home system for checking one's own sleep is therefore required, and an estimation algorithm for such a system should be developed. We thus propose an algorithm that estimates sleep quality based only on heart rate variability, which can be measured by a simple sensor such as a pressure sensor or an infrared sensor in an uncontrolled environment, by experimentally finding the relationship between chaos indices and sleep quality. A system including the estimation algorithm can inform a user of the patterns and quality of daily sleep, so that the user can arrange his or her schedule in advance, pay more attention to the sleep results, and consult a doctor.
Estimating an exchange rate between the EQ-5D-3L and ASCOT.
Stevens, Katherine; Brazier, John; Rowen, Donna
2018-06-01
The aim was to estimate an exchange rate between EQ-5D-3L and the Adult Social Care Outcome Tool (ASCOT) using preference-based mapping via common time trade-off (TTO) valuations. EQ-5D and ASCOT are useful for examining cost-effectiveness within the health and social care sectors, respectively, but there is a policy need to understand overall benefits and compare across sectors to assess relative value for money. Standard statistical mapping is unsuitable since it relies on conceptual overlap of the measures but EQ-5D and ASCOT have different conceptualisations of quality of life. We use a preference-based mapping approach to estimate the exchange rate using common TTO valuations for both measures. A sample of health states from each measure was valued using TTO by 200 members of the UK adult general population. Regression analyses are used to generate separate equations between EQ-5D-3L and ASCOT values using their original value set and TTO values elicited here. These are solved as simultaneous equations to estimate the relationship between EQ-5D-3L and ASCOT. The relationship for moving from ASCOT to EQ-5D-3L is a linear transformation with an intercept of -0.0488 and gradient of 0.978. This enables QALY gains generated by ASCOT and EQ-5D to be compared across different interventions. This paper estimated an exchange rate between ASCOT and EQ-5D-3L using a preference-based mapping approach that does not compromise the descriptive systems of the two measures. This contributes to the development of preference-based mapping through the use of TTO as the common metric used to estimate the exchange rate between measures.
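The reported relationship can be applied directly: the abstract gives the move from ASCOT to EQ-5D-3L as a linear transformation with intercept -0.0488 and gradient 0.978. A minimal sketch using those published coefficients (illustrative only; not a substitute for the authors' full mapping procedure):

```python
def ascot_to_eq5d(ascot_value):
    """Apply the linear exchange rate reported in the abstract:
    EQ-5D-3L = -0.0488 + 0.978 * ASCOT."""
    return -0.0488 + 0.978 * ascot_value

print(round(ascot_to_eq5d(1.0), 4))   # 0.9292 (full ASCOT score)
print(round(ascot_to_eq5d(0.5), 4))   # 0.4402
```

A QALY gain measured on the ASCOT scale can thus be re-expressed on the EQ-5D-3L scale, enabling the cross-sector comparisons the abstract describes.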
A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.
An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene
2016-04-30
Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the time, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.
Estimation of build up of dose rate on U3O8 product drum
International Nuclear Information System (INIS)
Pandey, J.P.N.; Shinde, A.M.; Deshpande, M.D.
2008-01-01
In a fuel reprocessing plant, plutonium oxide and uranium oxide (U 3 O 8 ) are products. Approximately 180 kg of U 3 O 8 is filled into an SS drum and sealed firmly before storage. In PHWRs, natural uranium (UO 2 ) is used as fuel. In natural uranium, thorium-232 is present as an impurity at a level of a few tens of ppm. During irradiation in power reactors, 232 U is formed from 232 Th by nuclear reaction. Natural decay of 232 U leads to the formation of 208 Tl. As time passes, 208 Tl builds up and hence the dose rate on the drum containing U 3 O 8 increases. It is essential to estimate this buildup of dose rate considering the external radiological hazards involved during U 3 O 8 drum handling, transportation and fuel fabrication. This paper describes the calculation of the dose rate on the drum in future years using the MCNP code. For the dose rate calculation, the decay of fission-product activity remaining as contamination in the product and the buildup of 208 Tl from 232 U are considered. Some measured values of the dose rate on a U 3 O 8 drum are given for comparison with the dose rate estimated with the MCNP code. (author)
Rate of formation of neutron stars in the galaxy estimated from stellar statistics
International Nuclear Information System (INIS)
Endal, A.S.
1979-01-01
Stellar statistics and stellar evolution models can be used to estimate the rate of formation of neutron stars in the Galaxy. A recent analysis by Hills suggests that the mean interval between neutron-star births is greater than 27 years. This is incompatible with estimates based on pulsar statistics. However, a closer examination of the stellar data shows that Hills' result is incorrect. A mean interval between neutron-star births as short as 4 years is consistent with (though certainly not required by) stellar evolution theory.
Glomerular Filtration Rate Estimation in Renal and Non-Renal Solid Organ Transplantation
DEFF Research Database (Denmark)
Hornum, Mads; Feldt-Rasmussen, Bo
2017-01-01
Following transplantation (TX) of both renal and non-renal organs, a large proportion of patients have renal dysfunction. There are multiple causes for this. Chronic nephrotoxicity and high doses of calcineurin inhibitors are important factors. Preoperative and perioperative factors like...... or estimates of renal function in these patients, in order to accurately and safely dose immunosuppressive medication and perform and adjust the treatment and prophylaxis of renal dysfunction. This is a short overview and discussion of relevant studies and possible caveats of estimated glomerular filtration...... rate methods for use in renal and non-renal TX....
Estimation of permafrost thawing rates in a sub-arctic catchment using recession flow analysis
Directory of Open Access Journals (Sweden)
S. W. Lyon
2009-05-01
Full Text Available Permafrost thawing is likely to change the flow pathways taken by water as it moves through arctic and sub-arctic landscapes. The location and distribution of these pathways directly influence the carbon and other biogeochemical cycling in northern latitude catchments. While permafrost thawing due to climate change has been observed in the arctic and sub-arctic, direct observations of permafrost depth are difficult to perform at scales larger than a local scale. Using recession flow analysis, it may be possible to detect and estimate the rate of permafrost thawing based on a long-term streamflow record. We demonstrate the application of this approach to the sub-arctic Abiskojokken catchment in northern Sweden. Based on recession flow analysis, we estimate that permafrost in this catchment may be thawing at an average rate of about 0.9 cm/yr during the past 90 years. This estimated thawing rate is consistent with direct observations of permafrost thawing rates, ranging from 0.7 to 1.3 cm/yr over the past 30 years in the region.
Servais, P
1995-03-01
In aquatic ecosystems, [(3)H]thymidine incorporation into bacterial DNA and [(3)H]leucine incorporation into proteins are usually used to estimate bacterial production. The incorporation rates of four amino acids (leucine, tyrosine, lysine, alanine) into proteins of bacteria were measured in parallel on natural freshwater samples from the basin of the river Meuse (Belgium). Comparison of the incorporation into proteins and into the total macromolecular fraction showed that these different amino acids were incorporated at more than 90% into proteins. From incorporation measurements at four subsaturated concentrations (range, 2-77 nm), the maximum incorporation rates were determined. Strong correlations (r > 0.91 for all the calculated correlations) were found between the maximum incorporation rates of the different tested amino acids over a range of two orders of magnitude of bacterial activity. Bacterial production estimates were calculated using theoretical and experimental conversion factors. The productions calculated from the incorporation rates of the four amino acids were in good concordance, especially when the experimental conversion factors were used (slope range, 0.91-1.11, and r > 0.91). This study suggests that the incorporation of various amino acids into proteins can be used to estimate bacterial production.
Localised photoplethysmography imaging for heart rate estimation of pre-term infants in the clinic
Chaichulee, Sitthichok; Villarroel, Mauricio; Jorge, João.; Arteta, Carlos; Green, Gabrielle; McCormick, Kenny; Zisserman, Andrew; Tarassenko, Lionel
2018-02-01
Non-contact vital-sign estimation allows the monitoring of physiological parameters (such as heart rate, respiratory rate, and peripheral oxygen saturation) without contact electrodes or sensors. Our recent work has demonstrated that a convolutional neural network (CNN) can be used to detect the presence of a patient and segment the patient's skin area for vital-sign estimation, thus enabling the automatic continuous monitoring of vital signs in a hospital environment. In a study approved by the local Research Ethical Committee, we made video recordings of pre-term infants nursed in a Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford, UK. We extended the CNN model to detect the head, torso and diaper of the infants. We extracted multiple photoplethysmographic imaging (PPGi) signals from each body part, analysed their signal quality, and compared them with the PPGi signal derived from the entire skin area. Our results demonstrated the benefits of estimating heart rate combined from multiple regions of interest using data fusion. In the test dataset, we achieved a mean absolute error of 2.4 beats per minute for 80% (31.1 hours) from a total recording time of 38.5 hours for which both reference heart rate and video data were valid.
Blowout jets and impulsive eruptive flares in a bald-patch topology
Chandra, R.; Mandrini, C. H.; Schmieder, B.; Joshi, B.; Cristiani, G. D.; Cremades, H.; Pariat, E.; Nuevo, F. A.; Srivastava, A. K.; Uddin, W.
2017-02-01
Context. A subclass of broad extreme ultraviolet (EUV) and X-ray jets, called blowout jets, has become a topic of research since they could be the link between standard collimated jets and coronal mass ejections (CMEs). Aims: Our aim is to understand the origin of a series of broad jets, some of which are accompanied by flares and associated with narrow and jet-like CMEs. Methods: We analyze observations of a series of recurrent broad jets observed in AR 10484 on 21-24 October 2003. In particular, one of them occurred simultaneously with an M2.4 flare on 23 October at 02:41 UT (SOLA2003-10-23). Both events were observed by the ARIES Hα Solar Tower-Telescope, TRACE, SOHO, and RHESSI instruments. The flare was very impulsive and followed by a narrow CME. A local force-free model of AR 10484 is the basis for computing its topology. We find bald patches (BPs) at the flare site. This BP topology is present for at least two days before the events. Large-scale field lines, associated with the BPs, represent open loops. This is confirmed by a global potential field source surface (PFSS) model. Following the brightest leading edge of the Hα and EUV jet emission, we can temporally associate these emissions with a narrow CME. Results: Considering their characteristics, the observed broad jets appear to be of the blowout class. As the most plausible scenario, we propose that magnetic reconnection could occur at the BP separatrices, forced by the destabilization of a continuously reformed flux rope underlying them. The reconnection process could bring the cool flux-rope material into the reconnected open field lines, driving the series of recurrent blowout jets and accompanying CMEs. Conclusions: Based on a model of the coronal field, we compute the AR 10484 topology at the location where flaring and blowout jets occurred from 21 to 24 October 2003. This topology can consistently explain the origin of these events. The movie associated with Fig. 1 is available at http://www.aanda.org
Orbital Blowout Fracture with Complete Dislocation of the Globe into the Maxillary Sinus.
Wang, Joy Mh; Fries, Fabian N; Hendrix, Philipp; Brinker, Titus; Loukas, Marios; Tubbs, R Shane
2017-09-29
This rare case report describes the diagnosis and treatment of an isolated left-sided orbital floor fracture with a complete dislocation of the globe into the maxillary sinus and briefly discusses the indications of surgery and recovery for orbital floor fractures in general. Complete herniation of the globe through an orbital blow-out fracture is uncommon. However, the current case illustrates that such an occurrence should be in the differential diagnosis and should be considered, especially following high speed/impact injuries involving a foreign object. In these rare cases, surgical intervention is required.
A blowout numerical model for the supernova remnant G352.7-0.1
Toledo Roy, J. C.; Velazquez, P. F.; Esquivel, A.; Giacani, Elsa Beatriz
2017-01-01
We present 3D hydrodynamical simulations of the Galactic supernova remnant G352.7−0.1. This remnant is peculiar for having a shell-like inner ring structure and an outer arc in radio observations. In our model, the supernova explosion producing the remnant occurs inside and near the border of a spherical cloud with a density higher than that of the surrounding interstellar medium. A blowout is produced when the remnant reaches the border of the cloud. We have then used the results of our hydr...
Longitudinal phase space characterization of the blow-out regime of rf photoinjector operation
J. T. Moody; P. Musumeci; M. S. Gutierrez; J. B. Rosenzweig; C. M. Scoby
2009-01-01
Using an experimental scheme based on a vertically deflecting rf deflector and a horizontally dispersing dipole, we characterize the longitudinal phase space of the beam in the blow-out regime at the UCLA Pegasus rf photoinjector. Because of the achievement of unprecedented resolution both in time (50 fs) and energy (1.0 keV), we are able to demonstrate some important properties of the beams created in this regime such as extremely low longitudinal emittance, large temporal energy chirp, and ...
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
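The Lomb-Scargle transform used as the comparison baseline is available in SciPy. A sketch of PSD estimation on nonuniformly sampled data (synthetic signal, for illustration; note `scipy.signal.lombscargle` expects angular frequencies):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic example: a 0.1 Hz oscillation sampled at irregular times,
# mimicking the nonuniform spacing of successive heart beats.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 120.0, 300))        # irregular sample times (s)
y = np.sin(2 * np.pi * 0.1 * t)                  # signal at those times

freqs_hz = np.linspace(0.01, 0.5, 200)           # frequency grid (Hz)
pgram = lombscargle(t, y, 2 * np.pi * freqs_hz)  # angular frequencies

peak_hz = freqs_hz[np.argmax(pgram)]
print(f"spectral peak near {peak_hz:.3f} Hz")    # expect ~0.10 Hz
```

Each full evaluation costs on the order of N per frequency bin, which is the overhead the RFT's iterative updates are designed to avoid.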
Evaluation of Estimating Missed Answers in Conners Adult ADHD Rating Scale (Screening Version)
Ghassemi, Farnaz; Moradi, Mohammad Hassan; Tehrani-Doost, Mehdi
2010-01-01
Objective Conners Adult ADHD Rating Scale (CAARS) is among the valid questionnaires for evaluating Attention-Deficit/Hyperactivity Disorder in adults. The aim of this paper is to evaluate the validity of the estimation of missed answers in scoring the screening version of the Conners questionnaire, and to extract its principal components. Method This study was performed on 400 participants. An answer estimate was calculated for each question (assuming the answer was missed), and a Kruskal-Wallis test was then performed to evaluate the difference between the original answer and its estimate. In the next step, the principal components of the questionnaire were extracted by means of Principal Component Analysis (PCA). Finally, differences across the whole group were evaluated using the Multiple Comparison Procedure (MCP). Results Findings indicated that a significant difference existed between the original and estimated answers for some particular questions. However, the results of the MCP showed that this estimation, when evaluated over the whole group, did not differ significantly from the original value in either of the questionnaire subscales. The results of PCA revealed that there are eight principal components in the CAARS questionnaire. Conclusion These results emphasize that this questionnaire is mainly designed for screening purposes, and that the estimation does not change group results when a question is missed at random. Notwithstanding this finding, more consideration should be paid when the missed question is a critical one. PMID:22952502
Evaluation of Estimating Missed Answers in Conners Adult ADHD Rating Scale (Screening Version
Directory of Open Access Journals (Sweden)
Vahid Abootalebi
2010-08-01
Full Text Available Objective: Conners Adult ADHD Rating Scale (CAARS) is among the valid questionnaires for evaluating Attention-Deficit/Hyperactivity Disorder in adults. The aim of this paper is to evaluate the validity of the estimation of missed answers in scoring the screening version of the Conners questionnaire, and to extract its principal components. Method: This study was performed on 400 participants. An answer estimate was calculated for each question (assuming the answer was missed), and a Kruskal-Wallis test was then performed to evaluate the difference between the original answer and its estimate. In the next step, the principal components of the questionnaire were extracted by means of Principal Component Analysis (PCA). Finally, differences across the whole group were evaluated using the Multiple Comparison Procedure (MCP). Results: Findings indicated that a significant difference existed between the original and estimated answers for some particular questions. However, the results of the MCP showed that this estimation, when evaluated over the whole group, did not differ significantly from the original value in either of the questionnaire subscales. The results of PCA revealed that there are eight principal components in the CAARS questionnaire. Conclusion: The obtained results emphasize that this questionnaire is mainly designed for screening purposes, and that the estimation does not change group results when a question is missed at random. Notwithstanding this finding, more consideration should be paid when the missed question is a critical one.
New methods for estimating follow-up rates in cohort studies
Directory of Open Access Journals (Sweden)
Xiaonan Xue
2017-12-01
Full Text Available Abstract Background The follow-up rate, a standard index of the completeness of follow-up, is important for assessing the validity of a cohort study. A common method for estimating the follow-up rate, the "Percentage Method", defined as the fraction of all enrollees who developed the event of interest or had complete follow-up, can severely underestimate the degree of follow-up. Alternatively, the median follow-up time does not indicate the completeness of follow-up, and the reverse Kaplan-Meier based method and Clark's Completeness Index (CCI) also have limitations. Methods We propose a new definition for the follow-up rate, the Person-Time Follow-up Rate (PTFR), which is the observed person-time divided by total person-time assuming no dropouts. The PTFR cannot be calculated directly since the event times for dropouts are not observed. Therefore, two estimation methods are proposed: a formal person-time method (FPT), in which the expected total follow-up time is calculated using the event rate estimated from the observed data, and a simplified person-time method (SPT), which avoids estimation of the event rate by assigning full follow-up time to all events. Simulations were conducted to measure the accuracy of each method, and each method was applied to a prostate cancer recurrence study dataset. Results Simulation results showed that the FPT has the highest accuracy overall. In most situations, the computationally simpler SPT and CCI methods are only slightly biased. When applied to a retrospective cohort study of cancer recurrence, the FPT, CCI and SPT showed substantially greater 5-year follow-up than the Percentage Method (92%, 92% and 93% vs 68%). Conclusions The person-time methods correct a systematic error in the standard Percentage Method for calculating follow-up rates. The easy-to-use SPT and CCI methods can be used in tandem to obtain an accurate and tight interval for PTFR. However, the FPT is recommended when event rates and
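The gap between the Percentage Method and the person-time idea can be shown with a toy cohort. A sketch of the simplified person-time (SPT) calculation as described in the abstract, using hypothetical numbers:

```python
# Hypothetical 5-year cohort: each subject is (follow_up_years, had_event).
# Dropouts are censored before year 5 without an event.
cohort = [
    (5.0, False), (5.0, False),   # complete follow-up
    (2.0, True),                  # event: counts as complete
    (3.0, False), (4.0, False),   # two dropouts
]
horizon = 5.0

# Percentage Method: fraction with an event or full follow-up.
pct = sum(1 for t, ev in cohort if ev or t >= horizon) / len(cohort)

# Simplified person-time (SPT) method: events are credited full
# follow-up time; dropouts contribute only their observed time.
observed = sum(horizon if ev else t for t, ev in cohort)
total = horizon * len(cohort)
ptfr = observed / total

print(f"Percentage Method:  {pct:.2f}")   # 0.60
print(f"SPT follow-up rate: {ptfr:.2f}")  # 0.88
```

Even in this tiny example the Percentage Method reports 60% while nearly 90% of the potential person-time was actually observed, mirroring the 68% vs 92-93% contrast reported in the abstract.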
International Nuclear Information System (INIS)
Zal U'yun Wan Mahmood; Zaharudin Ahmad; Abdul Kadir Ishak; Che Abd Rahim Mohamed
2011-01-01
A total of eight sediment cores of 50 cm length were taken in the Sabah and Sarawak coastal waters using a gravity corer in 2004 to estimate sedimentation rates using four mathematical models: CIC, Shukla-CIC, CRS and ADE. The average sedimentation rate, calculated from the vertical profile of excess 210 Pb in the sediment cores, ranged from 0.24 to 0.48 cm year -1 . The sedimentation rates derived from the four models were generally in good agreement, with similar or comparable values at some stations. However, statistical analysis using a paired-sample t-test indicated that the CIC model was the most accurate, reliable and suitable technique for determining the sedimentation rate in the coastal area. (author)
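The CIC (constant initial concentration) model behind such rates assumes excess 210 Pb decays exponentially with depth, A(z) = A0 exp(-lambda z / s), so a constant sedimentation rate s follows from two activities. A sketch under that assumption, with hypothetical activity values (not data from this study):

```python
import math

PB210_HALFLIFE_YR = 22.3                    # 210Pb half-life in years
LAMBDA = math.log(2) / PB210_HALFLIFE_YR    # decay constant (1/yr)

def cic_rate(depth_cm, activity_surface, activity_at_depth):
    """CIC model: excess 210Pb follows A(z) = A0*exp(-lambda*z/s),
    so the sedimentation rate is s = lambda*z / ln(A0 / A(z))."""
    return LAMBDA * depth_cm / math.log(activity_surface / activity_at_depth)

# Hypothetical profile: excess activity falls from 100 to 25 Bq/kg
# over 40 cm of core.
print(round(cic_rate(40.0, 100.0, 25.0), 2))   # ≈ 0.9 cm/yr
```

In practice a rate is fitted to the whole log-linear profile rather than to a single pair of depths, but the slope interpretation is the same.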
Estimates of wave decay rates in the presence of turbulent currents
Energy Technology Data Exchange (ETDEWEB)
Thais, L. [Universite des Sciences et Technologies de Lille, URA-CNRS 1441, Villeneuve d'Ascq (France). Lab. de Mecanique; Chapalain, G. [Universite des Sciences et Technologies de Lille, URA-CNRS 8577, Villeneuve d'Ascq (France). Sedimentologie et Geodynamique; Klopman, G. [Albatros Flow Research, Vollenhove (Netherlands); Simons, R.R. [University College, London (United Kingdom). Civil and Environmental Engineering; Thomas, G.P. [University College, Cork (Ireland). Dept. of Mathematical Physics
2001-06-01
A full-depth numerical model solving the free surface flow induced by linear water waves propagating with collinear vertically sheared turbulent currents is presented. The model is used to estimate the wave amplitude decay rate in combined wave current flows. The decay rates are compared with data collected in wave flumes by Kemp and Simons [J Fluid Mech, 116 (1982) 227; 130 (1983) 73] and Mathisen and Madsen [J Geophys Res, 101 (C7) (1996) 16,533]. We confirm the main experimental finding of Kemp and Simons that waves propagating downstream are less damped, and waves propagating upstream significantly more damped than waves on fluid at rest. A satisfactory quantitative agreement is found for the decay rates of waves propagating upstream, whereas not more than a qualitative agreement has been observed for waves propagating downstream. Finally, some wave decay rates in the presence of favourable and adverse currents are provided in typical field conditions. (Author)
International Nuclear Information System (INIS)
Phillips, William M.
2000-01-01
A numerical model relating spatially averaged rates of cumulative soil accumulation and hillslope erosion to cosmogenic nuclide distribution in depth profiles is presented. Model predictions are compared with cosmogenic 21 Ne and AMS radiocarbon data from soils of the Pajarito Plateau, New Mexico. Rates of soil accumulation and hillslope erosion estimated by cosmogenic 21 Ne are significantly lower than rates indicated by radiocarbon and regional soil-geomorphic studies. The low apparent cosmogenic erosion rates are artifacts of high nuclide inheritance in cumulative soil parent material produced from erosion of old soils on hillslopes. In addition, 21 Ne profiles produced under conditions of rapid accumulation (>0.1 cm/a) are difficult to distinguish from bioturbated soil profiles. Modeling indicates that while 10 Be profiles will share this problem, both bioturbation and anomalous inheritance can be identified with measurement of in situ-produced 14 C
Can the cerebral metabolic rate of oxygen be estimated with near-infrared spectroscopy?
International Nuclear Information System (INIS)
Boas, D A; Strangman, G; Culver, J P; Hoge, R D; Jasdzewski, G; Poldrack, R A; Rosen, B R; Mandeville, J B
2003-01-01
We have measured the changes in oxy-haemoglobin and deoxy-haemoglobin in the adult human brain during a brief finger tapping exercise using near-infrared spectroscopy (NIRS). The cerebral metabolic rate of oxygen (CMRO 2 ) can be estimated from these NIRS data provided certain model assumptions. The change in CMRO 2 is related to changes in the total haemoglobin concentration, deoxy-haemoglobin concentration and blood flow. As NIRS does not provide a measure of dynamic changes in blood flow during brain activation, we relied on a Windkessel model that relates dynamic blood volume and flow changes, which has been used previously for estimating CMRO 2 from functional magnetic resonance imaging (fMRI) data. Because of the partial volume effect we are unable to quantify the absolute changes in the local brain haemoglobin concentrations with NIRS and thus are unable to obtain an estimate of the absolute CMRO 2 change. An absolute estimate is also confounded by uncertainty in the flow-volume relationship. However, the ratio of the flow change to the CMRO 2 change is relatively insensitive to these uncertainties. For the finger tapping task, we estimate a most probable flow-consumption ratio ranging from 1.5 to 3 in agreement with previous findings presented in the literature, although we cannot exclude the possibility that there is no CMRO 2 change. The large range in the ratio arises from the large number of model parameters that must be estimated from the data. A more precise estimate of the flow-consumption ratio will require better estimates of the model parameters or flow information, as can be provided by combining NIRS with fMRI
Evidence-based estimate of appropriate radiotherapy utilization rate for prostate cancer
International Nuclear Information System (INIS)
Foroudi, Farshad; Tyldesley, Scott; Barbera, Lisa; Huang, Jenny; Mackillop, William J.
2003-01-01
Purpose: Current estimates of the proportion of cancer patients who will require radiotherapy (RT) are based almost entirely on expert opinion. The objective of this study was to use an evidence-based approach to estimate the proportion of incident cases of prostate cancer that should receive RT at any point in the evolution of the illness. Methods and Materials: A systematic review of the literature was undertaken to identify indications for RT for prostate cancer and to ascertain the level of evidence that supported each indication. An epidemiologic approach was then used to estimate the incidence of each indication for RT in a typical North American population of prostate cancer patients. The effect of sampling error on the estimated appropriate rate of RT was calculated mathematically, and the effect of systematic error using alternative sources of information was estimated by sensitivity analysis. Results: It was estimated that 61.2% ±5.6% of prostate cancer cases develop one or more indications for RT at some point in the course of the illness. The plausible range for this rate was 57.3%-69.8% on sensitivity analysis. Of all prostate cancer patients, 32.2%±3.8% should receive RT in their initial treatment and 29.0% ± 4.1% later for recurrence or progression. The proportion of cases that ever require RT is risk grouping dependent; 43.9%±2.2% in low-risk disease, 68.7%± .5% in intermediate-risk disease; and 79.0% ± 3.8% in high-risk locoregional disease. For metastatic disease, the predicted rate was 66.4%±0.3%. Conclusion: This method provides a rational starting point for the long-term planning of radiation services and for the audit of access to RT at the population level. By completing such evaluations in major cancer sites, it will be possible to estimate the appropriate RT rate for the cancer population as a whole
[Estimating glomerular filtration rate in 2012: what added value for the CKD-EPI equation?].
Delanaye, Pierre; Mariat, Christophe; Moranne, Olivier; Cavalier, Etienne; Flamant, Martin
2012-07-01
Measuring or estimating the glomerular filtration rate (GFR) is still considered the best way to assess global renal function. In 2009, the new Chronic Kidney Disease Epidemiology (CKD-EPI) equation was proposed as a better estimator of GFR than the Modification of Diet in Renal Disease (MDRD) study equation. This new equation is expected to underestimate GFR to a lesser degree at higher GFR levels. In this review, we present and discuss in depth the performance of this equation. Based on articles published between 2009 and 2012, this review underlines its advantages, notably a better knowledge of chronic kidney disease prevalence, but also the limitations of this new equation, especially in some specific populations. Finally, we stress that all these equations are estimations, and nephrologists should remain cautious in their interpretation. Copyright © 2012 Association Société de néphrologie. Published by Elsevier SAS. All rights reserved.
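The 2009 CKD-EPI creatinine equation discussed in this record has a simple closed form. The sketch below uses the published 2009 coefficients as commonly cited; treat it as an illustrative sketch of the equation's structure, not as clinical software:

```python
def ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """Estimated GFR (mL/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9          # sex-specific creatinine "knee"
    alpha = -0.329 if female else -0.411    # exponent below the knee
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

The two-slope form (different exponents below and above the knee kappa) is what reduces the underestimation at higher GFR levels relative to MDRD that the abstract mentions.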
Evaluation and comparison of estimation methods for failure rates and probabilities
Energy Technology Data Exchange (ETDEWEB)
Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)
2006-02-01
An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma priors are used for failure rate estimation, and binomial data with beta priors are used for estimating failure probability per demand. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and to show that PREB works as well as, if not better than, the alternative more complex methods, especially in demanding problems involving small samples, identical data and zero failures. False claims and misconceptions are corrected, and practical applications in risk studies are presented.
Joint sensor location/power rating optimization for temporally-correlated source estimation
Bushnaq, Osama M.
2017-12-22
The optimal sensor selection for scalar state parameter estimation in wireless sensor networks is studied in this paper. A subset of the N candidate sensing locations is selected to measure a state parameter and send the observations to a fusion center over a wireless AWGN channel. In addition to selecting the optimal sensing locations, the sensor type to be placed at each location is chosen from a pool of T sensor types, where different sensor types have different power ratings and costs. The sensor transmission power is limited by the amount of energy harvested at the sensing location and by the type of the sensor. A Kalman filter is used to efficiently obtain the MMSE estimator at the fusion center. Sensors are selected such that the MMSE estimation error is minimized subject to a prescribed system budget. This goal is achieved using convex relaxation and greedy algorithm approaches.
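The MMSE estimation step described in this abstract can be illustrated with a minimal scalar Kalman filter; the paper's sensor-selection optimization is omitted, and the model parameters below (a, q, r) are hypothetical, not taken from the paper:

```python
def kalman_scalar(z, a=0.9, q=0.1, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_{k+1} = a*x_k + w_k, z_k = x_k + v_k.

    w_k ~ N(0, q) is process noise and v_k ~ N(0, r) is measurement noise.
    Returns the sequence of MMSE state estimates given the measurements z.
    """
    x, p, estimates = x0, p0, []
    for zk in z:
        # Predict.
        x, p = a * x, a * a * p + q
        # Update with the Kalman gain.
        k = p / (p + r)
        x = x + k * (zk - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

For a random-walk state (a = 1) and constant measurements, the estimate converges to the measured value at a rate set by the steady-state Kalman gain.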
An estimation of finger-tapping rates and load capacities and the effects of various factors.
Ekşioğlu, Mahmut; İşeri, Ali
2015-06-01
The aim of this study was to estimate the finger-tapping rates and finger load capacities of eight fingers (excluding thumbs) for a healthy adult population and to investigate the effects of various factors on tapping rate. Finger-tapping rate, the total number of finger taps per unit of time, can be used as a design parameter for various products and also as a psychomotor test for evaluating patients with neurologic problems. A 1-min tapping task was performed by 148 participants at maximum volitional tempo for each of the eight fingers. For each tapping task, the participant tapped the associated key with the corresponding finger in the standard position on the home row of a conventional keyboard for touch typing. The index and middle fingers were the fastest for both hands, and the little fingers the slowest. All dominant-hand fingers, except the little finger, had higher tapping rates than the fastest finger of the nondominant hand. Tapping rate decreased with age, and smokers tapped faster than nonsmokers. Tapping duration and exercise also had significant effects on tapping rate. Normative data on the tapping rates and load capacities of the eight fingers were estimated for the adult population. In designs of psychomotor tests that require the use of tapping rate or finger load capacity data, the effects of finger, age, smoking, and tapping duration need to be taken into account. The findings can be used for ergonomic designs requiring finger-tapping capacity and also as a reference in psychomotor tests. © 2015, Human Factors and Ergonomics Society.
The importance of ingestion rates for estimating food quality and energy intake.
Schülke, Oliver; Chalise, Mukesh K; Koenig, Andreas
2006-10-01
Testing ecological or socioecological models in primatology often requires estimates of individual energy intake. It is a well-established fact that the nutrient content (and hence the energy content) of primate food items is highly variable. The second variable in determining primate energy intake, i.e., the ingestion rate, has often been ignored, and few studies have attempted to estimate the relative importance of the two predictors. In the present study individual ingestion rates were measured in two ecologically very different populations of Hanuman langurs (Semnopithecus entellus) at Jodhpur, India, and Ramnagar, Nepal. Protein and soluble sugar concentrations in 50 and 100 food items, respectively, were measured using standardized methods. Variation in ingestion rates (grams of dry matter per minute) was markedly greater among food items than among langur individuals in both populations, but did not differ systematically among food item categories defined according to plant part and age. General linear models (GLMs) with ingestion rate, protein, and soluble sugar content explained 40-80% of the variation in energy intake rates (kJ/min). The relative importance of ingestion rates was either similar to (Ramnagar) or much greater than (Jodhpur) that of sugar and/or protein content in determining the energy intake rates of different items. These results may impact socioecological studies of variation in individual energy budgets, investigations of food choice in relation to chemical composition or sensory characteristics, and research into habitat preferences that measures habitat quality in terms of abundance of important food sources. We suggest a definition of food quality that includes not only the amount of valuable food contents (energy, vitamins, and minerals) and the digestibility of different foods, but also the rate at which the food can be harvested and processed. Such an extended definition seems necessary because time may constrain primates when
A comparative review of estimates of the proportion unchanged genes and the false discovery rate
Directory of Open Access Journals (Sweden)
Broberg Per
2005-08-01
Background: In the analysis of microarray data one generally produces a vector of p-values that for each gene gives the likelihood of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion of unchanged genes and the false discovery rate (FDR), and how to make inferences based on these concepts. Six published methods for estimating the proportion of unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value are illustrated with examples. Five published estimates of the FDR and one new one are presented and tested. Implementations in R code are available. Results: A simulation model based on the distribution of real microarray data plus two real data sets were used to assess the methods. The proposed alternative methods for estimating the proportion of unchanged genes fared very well, and gave evidence of low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating FDR showed varying performance, and were sometimes misleading. The new method had a very low error. Conclusion: The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options to choose a suitable method for any particular experiment. The article advocates the use of the conjoint information
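As a hedged illustration of the quantities reviewed in this record, here is one common non-parametric estimator of the proportion of unchanged (null) genes, Storey-style with tuning parameter lambda, together with Benjamini-Hochberg-style q-values. The article compares several such estimators; this sketch is a generic textbook version, not any of the reviewed methods specifically:

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Estimate the proportion of unchanged genes: pi0 ~ #{p > lambda} / ((1 - lambda) * m)."""
    pvals = np.asarray(pvals, dtype=float)
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

def qvalues(pvals, pi0=1.0):
    """Benjamini-Hochberg step-up q-values, optionally scaled by an estimate of pi0."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    q = np.empty(m)
    running_min = 1.0
    for rank, idx in enumerate(order[::-1]):  # from largest p-value down
        r = m - rank                          # rank of p[idx] among sorted p-values
        running_min = min(running_min, pi0 * p[idx] * m / r)
        q[idx] = running_min
    return q
```

Plugging `storey_pi0(p)` into `qvalues(p, pi0=...)` gives less conservative q-values when many genes are truly regulated (pi0 well below 1).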
Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation
DEFF Research Database (Denmark)
Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed
2017-01-01
Current clinical ultrasound (US) systems are limited to showing blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented. The method is validated in two phantom studies, where flow rates are measured in a flow rig, providing a constant parabolic flow, and in a straight-vessel phantom (∅ = 8 mm) connected to a flow pump capable of generating time-varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow rig compared...
Estimation of groundwater flow rate using the decay of 222Rn in a well
International Nuclear Information System (INIS)
Hamada, Hiromasa
1999-01-01
A method of estimating groundwater flow rate using the decay of 222Rn in a well was investigated. Field application revealed that infiltrated water (i.e., precipitation, pond water and irrigation water) accelerated groundwater flow. In addition, the depth at which groundwater was influenced by surface water was determined. The velocity of groundwater in a test well was estimated to be of the order of 10⁻⁶ cm s⁻¹, based on the ratio of 222Rn concentrations in groundwater before and after it flowed into the well. This method is applicable for monitoring groundwater flow rate where the velocity in a well is between 10⁻⁵ and 10⁻⁶ cm s⁻¹
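The decay-ratio idea in this abstract can be sketched as follows, assuming (as a simplification) that radioactive decay is the only radon loss between aquifer and well water, so that C_well = C_aquifer · exp(-λt). The function name and the use of the well diameter as the flow distance are assumptions for illustration, not the paper's exact formulation:

```python
import math

RN222_HALF_LIFE_DAYS = 3.82
DECAY_CONST = math.log(2) / RN222_HALF_LIFE_DAYS  # lambda, per day

def groundwater_velocity(c_aquifer, c_well, well_diameter_cm):
    """Velocity (cm/s) inferred from 222Rn decay during residence in the well.

    Assumes C_well = C_aquifer * exp(-lambda * t), i.e. decay is the only
    radon loss, and that water traverses one well diameter during residence.
    """
    residence_days = math.log(c_aquifer / c_well) / DECAY_CONST
    return well_diameter_cm / (residence_days * 86400.0)
```

For example, a 50% drop in radon concentration corresponds to one half-life (3.82 days) of residence, which over a 10 cm well gives a velocity of roughly 3 × 10⁻⁵ cm s⁻¹, near the upper end of the range the abstract quotes.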
Directory of Open Access Journals (Sweden)
Jianxin Feng
2014-01-01
The recursive estimation problem is studied for a class of uncertain dynamical systems with a sensor network with different delay rates and autocorrelated process noises. The process noises are assumed to be autocorrelated across time, and the autocorrelation property is described by the covariances between different time instants. The system model under consideration is subject to multiplicative noises or stochastic uncertainties. The sensor delay phenomenon occurs in a random way, and each sensor in the network has an individual delay rate characterized by a binary switching sequence obeying a conditional probability distribution. By using the orthogonal projection theorem and an innovation analysis approach, the desired recursive robust estimators, including a recursive robust filter, predictor, and smoother, are obtained. Simulation results are provided to demonstrate the effectiveness of the proposed approaches.
Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora
Directory of Open Access Journals (Sweden)
Ryosuke Takahira
2016-10-01
One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubt regarding a correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. Thus, we conclude that the entropy rates of human languages are positive but approximately 20% smaller than without extrapolation. Although the entropy rate estimates depend on the script kind, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg's hypothesis.
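To illustrate the extrapolation idea, the sketch below fits a generic stretched-exponential ansatz h(n) = h_inf + A·exp(-(n/n0)^beta) to synthetic "entropy-rate estimate versus corpus size" data and reads off the asymptote h_inf. The exact functional form used by the authors may differ, and all numbers here are made up, so treat this purely as an illustration of the fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def ansatz(n, h_inf, a, n0, beta):
    """Stretched-exponential decay toward the asymptotic entropy rate h_inf."""
    return h_inf + a * np.exp(-((n / n0) ** beta))

# Synthetic entropy-rate estimates over corpus sizes from 1e3 to 1e9 symbols.
n = np.logspace(3, 9, 20)
h_obs = ansatz(n, 1.2, 2.0, 1e5, 0.4)  # "true" asymptote 1.2 bits/symbol (made up)

popt, _ = curve_fit(ansatz, n, h_obs, p0=[1.0, 1.5, 5e4, 0.5], maxfev=20000)
h_inf_est = popt[0]  # extrapolated entropy rate for infinite data
```

The point of the extrapolation is exactly what the abstract states: finite-corpus estimates overestimate the entropy rate, and the fitted asymptote is roughly 20% smaller.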
Enhancement of leak rate estimation model for corroded cracked thin tubes
International Nuclear Information System (INIS)
Chang, Y.S.; Jeong, J.U.; Kim, Y.J.; Hwang, S.S.; Kim, H.P.
2010-01-01
Over the last couple of decades, much research on structural integrity assessment and leak rate estimation has been carried out to prevent unanticipated catastrophic failures of pressure-retaining nuclear components. However, from the standpoint of leakage integrity, there are still some arguments about predicting the leak rate of cracked components, due primarily to uncertainties attached to various parameters in flow models. The purpose of the present work is to suggest a leak rate estimation method for thin tubes with artificial cracks. In this context, 23 leak rate tests were carried out on laboratory-generated stress corrosion cracked tube specimens subjected to internal pressure. Engineering equations to calculate crack opening displacements are developed from detailed three-dimensional elastic-plastic finite element analyses, and then a simplified practical model is proposed based on these equations as well as the test data. The proposed method is verified by comparing leak rates, and it will enable more reliable design and/or operation of thin tubes.
Real-time data for estimating a forward-looking interest rate rule of the ECB
Directory of Open Access Journals (Sweden)
Tilman Bletzinger
2017-12-01
The purpose of the data presented in this article is to use it in ex post estimations of interest rate decisions by the European Central Bank (ECB), as done by Bletzinger and Wieland (2017) [1]. The data are of quarterly frequency from 1999 Q1 until 2013 Q2 and consist of the ECB's policy rate, inflation rate, real output growth and potential output growth in the euro area. To account for forward-looking decision making in the interest rate rule, the data consist of expectations about future inflation and output dynamics. While potential output is constructed based on data from the European Commission's annual macro-economic database, inflation and real output growth are taken from two different sources, both provided by the ECB: the Survey of Professional Forecasters and projections made by ECB staff. Careful attention was given to the publication date of the collected data to ensure a real-time dataset consisting only of information which was available to the decision makers at the time of the decision. Keywords: Interest rate rule estimation, Real-time data, Forward-looking data
Estimation of the production rate of bacteria in the rumen of buffalo calves
International Nuclear Information System (INIS)
Singh, U.B.; Verma, D.N.; Varma, A.; Ranjhan, S.K.
1976-01-01
The rate of bacterial cell growth in the rumen of buffalo calves was measured by applying the isotope dilution technique: labelled mixed rumen bacteria were injected into the rumen, and the specific radioactivity was expressed either per mg of dry bacterial cells or per μg of DAPA in whole rumen samples. The animals were fed about 15-20 kg of chopped green maize daily, in 12 equal amounts at 2-h intervals. There was no significant difference between the rates of bacterial production estimated by the two methods. (author)
DEFF Research Database (Denmark)
Rasmussen, Christian Lund; Skjøth-Rasmussen, Martin Skov; Jensen, Anker
2005-01-01
The most important cyclization reaction in hydrocarbon flames is probably the recombination of propargyl radicals. This reaction may, depending on reaction conditions, form benzene, phenyl or fulvene, as well as a range of linear products. A number of rate measurements have been reported for C3H3 + C3H3 at temperatures below 1000 K, while data at high temperature and low pressure can only be obtained from flames. In the present work, an estimate of the rate constant for the reaction at 1400 +/- 50 K and 20 Torr is obtained from analysis of the fuel-rich acetylene flame of Westmoreland, Howard...
Can accelerometry data improve estimates of heart rate variability from wrist pulse PPG sensors?
Kos, Maciej; Li, Xuan; Khaghani-Far, Iman; Gordon, Christine M.; Pavel, Misha; Jimison, Holly B.
2018-01-01
A key prerequisite for precision medicine is the ability to assess metrics of human behavior objectively, unobtrusively and continuously. This capability serves as a framework for the optimization of tailored, just-in-time precision health interventions. Mobile unobtrusive physiological sensors, an important prerequisite for realizing this vision, show promise in implementing this quality of physiological data collection. However, first we must trust the collected data. In this paper, we present a novel approach to improving heart rate estimates from wrist pulse photoplethysmography (PPG) sensors. We also discuss the impact of sensor movement on the veracity of collected heart rate data. PMID:29060185
Can accelerometry data improve estimates of heart rate variability from wrist pulse PPG sensors?
Kos, Maciej; Xuan Li; Khaghani-Far, Iman; Gordon, Christine M; Pavel, Misha; Jimison, Holly B
2017-07-01
A key prerequisite for precision medicine is the ability to assess metrics of human behavior objectively, unobtrusively and continuously. This capability serves as a framework for the optimization of tailored, just-in-time precision health interventions. Mobile unobtrusive physiological sensors, an important prerequisite for realizing this vision, show promise in implementing this quality of physiological data collection. However, first we must trust the collected data. In this paper, we present a novel approach to improving heart rate estimates from wrist pulse photoplethysmography (PPG) sensors. We also discuss the impact of sensor movement on the veracity of collected heart rate data.
Energy Technology Data Exchange (ETDEWEB)
Lee, L.M.; Clayton, M.; Everingham, J.; Harding, R.C.; Massa, A.
1982-06-01
A comparison of background and potential geopressured geothermal development-related subsidence rates is given. Estimated potential geopressured-related rates at six prospects are presented. The effect of subsidence on the Texas-Louisiana Gulf Coast is examined including the various associated ground movements and the possible effects of these ground movements on surficial processes. The relationships between ecosystems and subsidence, including the capability of geologic and biologic systems to adapt to subsidence, are analyzed. The actual potential for environmental impact caused by potential geopressured-related subsidence at each of four prospects is addressed. (MHR)
Estimation of Leak Rate from the Emergency Pump Well in L-Area Complex Basin
International Nuclear Information System (INIS)
Duncan, A
2005-01-01
This report provides an estimate of the leak rate from the emergency pump well in L-basin that is to be expected during an off-normal event. This estimate is based on the expected shrinkage of the engineered grout (i.e., controlled low strength material) used to fill the emergency pump well and the header pipes that provide the dominant leak path from the basin to the lower levels of the L-Area Complex. The estimate will be used as input to the operating safety basis to ensure that the water level in the basin remains above a certain minimum level. The minimum basin water level is specified to ensure adequate shielding for personnel and to maintain the 'as low as reasonably achievable' principle of radiological exposure. The leak rate estimate is needed because of a gap between the fill material and the header pipes, which penetrate the basin wall and would be the primary leak path in the event of a breach in those pipes. The gap between the pipe and fill material was estimated based on a full-scale demonstration pour that was performed and examined. Leak tests were performed on full-scale pipes as part of this examination. Leak rates were measured to be on the order of 0.01 gallons/minute for completely filled pipe (vertically positioned) and 0.25 gallons/minute for partially filled pipe (horizontally positioned). These measurements were for water at 16 feet of head pressure and with minimal corrosion or biofilm present. The effect of the grout fill on the biofilm on the inside surface of the pipes is the subject of a previous memorandum
Cooper, Steven J.; Wood, Norman B.; L'Ecuyer, Tristan S.
2017-07-01
Estimates of snowfall rate as derived from radar reflectivities alone are non-unique. Different combinations of snowflake microphysical properties and particle fall speeds can conspire to produce nearly identical snowfall rates for given radar reflectivity signatures. Such ambiguities can result in retrieval uncertainties on the order of 100-200 % for individual events. Here, we use observations of particle size distribution (PSD), fall speed, and snowflake habit from the Multi-Angle Snowflake Camera (MASC) to constrain estimates of snowfall derived from Ka-band ARM zenith radar (KAZR) measurements at the Atmospheric Radiation Measurement (ARM) North Slope Alaska (NSA) Climate Research Facility site at Barrow. MASC measurements of microphysical properties with uncertainties are introduced into a modified form of the optimal-estimation CloudSat snowfall algorithm (2C-SNOW-PROFILE) via the a priori guess and variance terms. Use of the MASC fall speed, MASC PSD, and CloudSat snow particle model as base assumptions resulted in retrieved total accumulations with a -18 % difference relative to nearby National Weather Service (NWS) observations over five snow events. The average error was 36 % for the individual events. Use of different but reasonable combinations of retrieval assumptions resulted in estimated snowfall accumulations with differences ranging from -64 to +122 % for the same storm events. Retrieved snowfall rates were particularly sensitive to assumed fall speed and habit, suggesting that in situ measurements can help to constrain key snowfall retrieval uncertainties. More accurate knowledge of these properties dependent upon location and meteorological conditions should help refine and improve ground- and space-based radar estimates of snowfall.
Yukilevich, Roman
2014-04-01
Among the most debated subjects in speciation is the question of its mode. Although allopatric (geographical) speciation is assumed to be the null model, the importance of parapatric and sympatric speciation is extremely difficult to assess and remains controversial. Here I develop a novel approach to distinguish these modes of speciation by studying the evolution of reproductive isolation (RI) among taxa. I focus on the Drosophila genus, for which measures of RI are known. First, I incorporate RI into age-range correlations. Plots show that almost all cases of weak RI are between allopatric taxa, whereas sympatric taxa have strong RI. This implies either that most RI was initiated in allopatry or that RI evolves too rapidly in sympatry to be captured at incipient stages. To distinguish between these explanations, I develop a new "rate test of speciation" that estimates the likelihood of non-allopatric speciation given the distribution of RI rates in allopatry versus sympatry. Most sympatric taxa were found to have likely initiated RI in allopatry. However, two putative candidate species pairs for non-allopatric speciation were identified (5% of known Drosophila). In total, this study shows how using RI measures can greatly inform us about the geographical mode of speciation in nature. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
International Nuclear Information System (INIS)
Tu Fenghua; Liao Xiaofeng
2005-01-01
We study the problem of estimating the exponential convergence rate and exponential stability for neural networks with time-varying delay. Some criteria for exponential stability are derived by using the linear matrix inequality (LMI) approach. They are less conservative than the existing ones. Some analytical methods are employed to investigate the bounds on the interconnection matrix and activation functions so that the systems are exponentially stable
Experimental estimation of mutation rates in a wheat population with a gene genealogy approach.
Raquin, Anne-Laure; Depaulis, Frantz; Lambert, Amaury; Galic, Nathalie; Brabant, Philippe; Goldringer, Isabelle
2008-08-01
Microsatellite markers are extensively used to evaluate genetic diversity in natural or experimental evolving populations. Their high degree of polymorphism reflects their high mutation rates. Estimates of the mutation rates are therefore necessary when characterizing diversity in populations. As a complement to the classical experimental designs, we propose to use experimental populations, where the initial state is entirely known and some intermediate states have been thoroughly surveyed, thus providing a short timescale estimation together with a large number of cumulated meioses. In this article, we derived four original gene genealogy-based methods to assess mutation rates with limited bias due to relevant model assumptions incorporating the initial state, the number of new alleles, and the genetic effective population size. We studied the evolution of genetic diversity at 21 microsatellite markers, after 15 generations in an experimental wheat population. Compared to the parents, 23 new alleles were found in generation 15 at 9 of the 21 loci studied. We provide evidence that they arose by mutation. Corresponding estimates of the mutation rates ranged from 0 to 4.97 × 10⁻³ per generation (i.e., year). Sequences of several alleles revealed that length polymorphism was only due to variation in the core of the microsatellite. Among different microsatellite characteristics, both the motif repeat number and an independent estimation of the Nei diversity were correlated with the novel diversity. Despite a reduced genetic effective size, global diversity at microsatellite markers increased in this population, suggesting that microsatellite diversity should be used with caution as an indicator in biodiversity conservation issues.
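A back-of-the-envelope version of such an estimate simply divides the number of new alleles by the number of allele transmissions surveyed. The genealogy-based methods in the article are more sophisticated precisely because this naive count ignores genealogy and drift; the effective size used below is a placeholder, not the study's value:

```python
def mutation_rate_estimate(new_alleles, n_loci, n_generations, effective_size):
    """Naive per-locus, per-generation mutation rate.

    Divides the observed number of new alleles by the number of allele
    transmissions surveyed: 2N gene copies x generations x loci.
    """
    transmissions = 2 * effective_size * n_generations * n_loci
    return new_alleles / transmissions
```

With the study's 23 new alleles, 21 loci and 15 generations, and a hypothetical effective size of 100, this gives a rate of a few times 10⁻⁴ per locus per generation, the same order as the genealogy-based estimates reported above.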
Rate estimation in partially observed Markov jump processes with measurement errors
Amrein, Michael; Kuensch, Hans R.
2010-01-01
We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising for example in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle filter type algorithms are introduced, which allow sampling from the posterior distribution of t...
Erratum: Haldane and the first estimates of the human mutation rate
Indian Academy of Sciences (India)
Published on the Web: 1 December 2008. Erratum. Haldane and the first estimates of the human mutation rate. (A commentary on J.B.S. Haldane 1935 J. Genet. 31, 317–326; reprinted in volume 83, 235–244 as a J. Genet. classic). Michael W. Nachman. J. Genet. 83, 231–233. Page 1, right column, para 1, line 6 from ...
Exponential convergence rate estimation for uncertain delayed neural networks of neutral type
International Nuclear Information System (INIS)
Lien, C.-H.; Yu, K.-W.; Lin, Y.-F.; Chung, Y.-J.; Chung, L.-Y.
2009-01-01
The global exponential stability for a class of uncertain delayed neural networks (DNNs) of neutral type is investigated in this paper. Delay-dependent and delay-independent criteria are proposed to guarantee the robust stability of DNNs via LMI and Razumikhin-like approaches. For a given delay, the maximal allowable exponential convergence rate will be estimated. Some numerical examples are given to illustrate the effectiveness of our results. The simulation results reveal significant improvement over the recent results.
Directory of Open Access Journals (Sweden)
Eleanor S Devenish Nelson
BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
Rating curve estimation using Envisat virtual stations on the main Orinoco river
Directory of Open Access Journals (Sweden)
Juan León
2011-09-01
Rating curve estimation (the height-discharge relation) at hydrometric stations representing cross-sections of a river is one of hydrometry's fundamental tasks, because it allows the river's average daily flow on that particular section to be deduced. This information is fundamental to any attempt at hydrological modelling. However, the number of hydrological control stations monitoring large basins has been reduced worldwide. Space hydrology studies during the last five years have shown that satellite radar altimetry can densify the information available from hydrological monitoring networks through the introduction of so-called virtual stations, and that such information can be used jointly with in-situ flow records to estimate rating curves at these stations. This study presents the rating curves for four Envisat virtual stations located on the main stream of the Orinoco River. Flows at the virtual stations were estimated using the Muskingum-Cunge 1D model. There was less than 1% error between measured and estimated flows. The methodology also allowed the mean depth of zero flow to be estimated; in this case, depths ranging from 11 to 20 meters were found along the 130 km of the Orinoco River represented by the virtual stations considered.
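A standard way to represent a rating curve like those estimated at these stations is the power law Q = a(h - h0)^b, where h0 is the zero-flow stage. The sketch below fits that form to synthetic stage-discharge pairs; all values are invented for illustration and the fitting approach is a generic one, not the Muskingum-Cunge procedure used in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Power-law rating curve Q = a * (h - h0)**b; h0 is the zero-flow stage."""
    return a * np.clip(h - h0, 1e-9, None) ** b

# Synthetic stage (m) / discharge (m^3/s) pairs -- illustrative values only.
stage = np.array([12.0, 14.0, 16.0, 18.0, 20.0])
discharge = rating_curve(stage, 50.0, 11.0, 1.6)

popt, _ = curve_fit(rating_curve, stage, discharge, p0=[10.0, 10.0, 1.5])
a_fit, h0_fit, b_fit = popt  # h0_fit plays the role of the zero-flow depth above
```

The fitted h0 is what connects stage measured by the altimeter (relative to an arbitrary datum) to the depth of zero flow, the quantity the abstract reports as 11-20 m along the reach.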
Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin
2017-10-01
In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
Spacecraft Angular Rates Estimation with Gyrowheel Based on Extended High Gain Observer
Directory of Open Access Journals (Sweden)
Xiaokun Liu
2016-04-01
Full Text Available A gyrowheel (GW) is an electromechanical servo system that can be applied to a spacecraft attitude control system (ACS) as both an actuator and a sensor simultaneously. In order to solve the problem of sensing two-dimensional spacecraft angular rates while the GW outputs three-dimensional control torque, this paper proposes a method based on an extended high gain observer (EHGO) and the derived GW mathematical model to implement spacecraft angular rate estimation when the GW rotor is working at large tilt angles. For this purpose, the GW dynamic equation is first derived with the second-kind Lagrange method, and the relationship between the measurable and unmeasurable variables is established. Then, the EHGO is designed to estimate and calculate spacecraft angular rates with the GW, and the stability of the designed EHGO is proven with a Lyapunov function. Moreover, considering the engineering application, the effect of measurement noise in the tilt angle sensors on the estimation accuracy of the EHGO is analyzed. Finally, a numerical simulation is performed to illustrate the validity of the proposed method.
Cutter, Asher D
2008-04-01
Accurate inference of the dates of common ancestry among species forms a central problem in understanding the evolutionary history of organisms. Molecular estimates of divergence time rely on the molecular evolutionary prediction that neutral mutations and substitutions occur at the same constant rate in genomes of related species. This underlies the notion of a molecular clock. Most implementations of this idea depend on paleontological calibration to infer dates of common ancestry, but taxa with poor fossil records must rely on external, potentially inappropriate, calibration with distantly related species. The classic biological models Caenorhabditis and Drosophila are examples of such problem taxa. Here, I illustrate internal calibration in these groups with direct estimates of the mutation rate from contemporary populations that are corrected for interfering effects of selection on the assumption of neutrality of substitutions. Divergence times are inferred among 6 species each of Caenorhabditis and Drosophila, based on thousands of orthologous groups of genes. I propose that the 2 closest known species of Caenorhabditis shared a common ancestor <24 MYA (Caenorhabditis briggsae and Caenorhabditis sp. 5) and that Caenorhabditis elegans diverged from its closest known relatives <30 MYA, assuming that these species pass through at least 6 generations per year; these estimates are much more recent than reported previously with molecular clock calibrations from non-nematode phyla. Dates inferred for the common ancestor of Drosophila melanogaster and Drosophila simulans are roughly concordant with previous studies. These revised dates have important implications for rates of genome evolution and the origin of self-fertilization in Caenorhabditis.
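The internal-calibration arithmetic can be illustrated directly: with a per-generation neutral mutation rate μ, neutral divergence K between two lineages accumulates at 2μ per generation, so the split occurred T = K/(2μ) generations ago. Using illustrative values (not the paper's exact estimates) and the assumed minimum of 6 generations per year:

```python
# Illustrative numbers only (not the paper's exact estimates)
mu = 2.0e-9       # assumed neutral mutation rate per site per generation
gen_per_year = 6  # minimum generations per year assumed in the text
K = 0.6           # hypothetical neutral divergence, substitutions per site

# Under a strict molecular clock, divergence K accumulates at 2*mu
# per generation along the two lineages combined
T_generations = K / (2 * mu)
T_years = T_generations / gen_per_year
print(T_years / 1e6)  # divergence time in Myr
```

A faster generation time pushes the inferred date closer to the present, which is why assuming at least 6 generations per year yields much more recent estimates than external calibrations.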
Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout.
Das, Anup; Pradhapan, Paruthi; Groenendaal, Willemijn; Adiraju, Prathyusha; Rajan, Raj Thilak; Catthoor, Francky; Schaafsma, Siebren; Krichmar, Jeffrey L; Dutt, Nikil; Van Hoof, Chris
2018-03-01
Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy at a significantly lower energy footprint, leading to an extended battery life of wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered, from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices. Copyright © 2018 Elsevier Ltd. All rights reserved.
Contraceptive failure rates: new estimates from the 1995 National Survey of Family Growth.
Fu, H; Darroch, J E; Haas, T; Ranjit, N
1999-01-01
Unintended pregnancy remains a major public health concern in the United States. Information on pregnancy rates among contraceptive users is needed to guide medical professionals' recommendations and individuals' choices of contraceptive methods. Data were taken from the 1995 National Survey of Family Growth (NSFG) and the 1994-1995 Abortion Patient Survey (APS). Hazards models were used to estimate method-specific contraceptive failure rates during the first six months and during the first year of contraceptive use for all U.S. women. In addition, rates were corrected to take into account the underreporting of induced abortion in the NSFG. Corrected 12-month failure rates were also estimated for subgroups of women by age, union status, poverty level, race or ethnicity, and religion. When contraceptive methods are ranked by effectiveness over the first 12 months of use (corrected for abortion underreporting), the implant and injectables have the lowest failure rates (2-3%), followed by the pill (8%), the diaphragm and the cervical cap (12%), the male condom (14%), periodic abstinence (21%), withdrawal (24%) and spermicides (26%). In general, failure rates are highest among cohabiting and other unmarried women, among those with an annual family income below 200% of the federal poverty level, among black and Hispanic women, among adolescents and among women in their 20s. For example, adolescent women who are not married but are cohabiting experience a failure rate of about 31% in the first year of contraceptive use, while the 12-month failure rate among married women aged 30 and older is only 7%. Black women have a contraceptive failure rate of about 19%, and this rate does not vary by family income; in contrast, overall 12-month rates are lower among Hispanic women (15%) and white women (10%), but vary by income, with poorer women having substantially greater failure rates than more affluent women. Levels of contraceptive failure vary widely by method, as well as by
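The hazards-model failure rates are cumulative probabilities of the life-table form F(12) = 1 − Π(1 − h_m) over monthly hazards h_m. A minimal sketch with an assumed constant monthly hazard (illustrative only, not the NSFG estimation procedure itself):

```python
# Life-table style cumulative failure: a minimal sketch with an assumed
# constant monthly hazard, not the NSFG hazards model itself.
monthly_hazard = 0.012  # hypothetical per-month probability of failure

surviving = 1.0
for month in range(12):
    surviving *= (1.0 - monthly_hazard)

failure_12mo = 1.0 - surviving  # 12-month cumulative failure probability
print(round(failure_12mo, 3))
```

In practice the monthly hazards vary over the duration of use and across subgroups, which is why the survey estimates separate 6-month and 12-month rates.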
Three-Axis Attitude Estimation With a High-Bandwidth Angular Rate Sensor
Bayard, David S.; Green, Joseph J.
2013-01-01
A continuing challenge for modern instrument pointing control systems is to meet the increasingly stringent pointing performance requirements imposed by emerging advanced scientific, defense, and civilian payloads. Instruments such as adaptive optics telescopes, space interferometers, and optical communications make unprecedented demands on precision pointing capabilities. A cost-effective method was developed for increasing the pointing performance for this class of NASA applications. The solution was to develop an attitude estimator that fuses star tracker and gyro measurements with a high-bandwidth angular rotation sensor (ARS). An ARS is a rate sensor whose bandwidth extends well beyond that of the gyro, typically up to 1,000 Hz or higher. The most promising ARS sensor technology is based on a magnetohydrodynamic concept, and has recently become available commercially. The key idea is that the sensor fusion of the star tracker, gyro, and ARS provides a high-bandwidth attitude estimate suitable for supporting pointing control with a fast-steering mirror or other type of tip/tilt correction for increased performance. The ARS is relatively inexpensive and can be bolted directly next to the gyro and star tracker on the spacecraft bus. The high-bandwidth attitude estimator fuses an ARS sensor with a standard three-axis suite comprised of a gyro and star tracker. The estimation architecture is based on a dual-complementary filter (DCF) structure. The DCF takes a frequency- weighted combination of the sensors such that each sensor is most heavily weighted in a frequency region where it has the lowest noise. An important property of the DCF is that it avoids the need to model disturbance torques in the filter mechanization. This is important because the disturbance torques are generally not known in applications. This property represents an advantage over the prior art because it overcomes a weakness of the Kalman filter that arises when fusing more than one rate
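The frequency-weighted fusion idea can be illustrated with a first-order, single-axis complementary filter, in which the integrated rate measurement is trusted at high frequency and the absolute attitude reference (star tracker) at low frequency. This is a much-simplified analogue of the dual-complementary filter, with invented noise levels and time constants:

```python
import numpy as np

# First-order complementary filter on one axis: the gyro/ARS integral is
# trusted at high frequency, the star tracker at low frequency. A simplified
# single-axis sketch of frequency-weighted fusion, not the DCF itself.
dt = 0.001   # 1 kHz update, in the bandwidth range quoted for an ARS
tau = 0.5    # crossover time constant (assumed)
alpha = tau / (tau + dt)

rng = np.random.default_rng(0)
true_angle = 0.1  # rad, constant attitude for this test
n = 20000
rate_meas = rng.normal(0.0, 0.05, n)             # true rate 0 + sensor noise
tracker = true_angle + rng.normal(0.0, 0.01, n)  # noisy absolute reference

est = tracker[0]
for k in range(1, n):
    # high-pass the integrated rate, low-pass the tracker
    est = alpha * (est + rate_meas[k] * dt) + (1 - alpha) * tracker[k]

print(est)  # settles near 0.1 rad
```

The same structure explains why no disturbance-torque model is needed: the filter blends measurements rather than propagating a dynamics model of the spacecraft.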
International Nuclear Information System (INIS)
Murphy, E.M.
1998-01-01
Changes in geochemistry and stable isotopes along a well-established groundwater flow path were used to estimate in situ microbial respiration rates in the Middendorf aquifer in the southeastern United States. Respiration rates were determined for individual terminal electron acceptors including O₂, MnO₂, Fe³⁺, and SO₄²⁻. The extent of biotic reactions was constrained by the fractionation of stable isotopes of carbon and sulfur. Sulfur isotopes and the presence of sulfur-oxidizing microorganisms indicated that sulfate is produced through the oxidation of reduced sulfur species in the aquifer and not by the dissolution of gypsum, as previously reported. The respiration rates varied along the flow path as the groundwater transitioned from primarily oxic to anoxic conditions. Iron-reducing microorganisms were the largest contributors to the oxidation of organic matter along the portion of the groundwater flow path investigated in this study. The transition zone between oxic and anoxic groundwater contained a wide range of terminal electron acceptors and showed the greatest diversity and numbers of culturable microorganisms and the highest respiration rates. A comparison of respiration rates measured from core samples and pumped groundwater suggests that variability in respiration rates may often reflect the measurement scales, both in the sample volume and the time-frame over which the respiration measurement is averaged. Chemical heterogeneity may create a wide range of respiration rates when the scale of the observation is below the scale of the heterogeneity
Dose rate estimates from irradiated light-water-reactor fuel assemblies in air
International Nuclear Information System (INIS)
Lloyd, W.R.; Sheaffer, M.K.; Sutcliffe, W.G.
1994-01-01
It is generally considered that irradiated spent fuel is so radioactive (self-protecting) that it can only be moved and processed with specialized equipment and facilities. However, a small, possibly subnational, group acting in secret with no concern for the environment (other than the reduction of signatures) and willing to incur substantial but not lethal radiation doses, could obtain plutonium by stealing and processing irradiated spent fuel that has cooled for several years. In this paper, we estimate the dose rate at various distances and directions from typical pressurized-water reactor (PWR) and boiling-water reactor (BWR) spent-fuel assemblies as a function of cooling time. Our results show that the dose rate is reduced rapidly for the first ten years after exposure in the reactor, and that it is reduced by a factor of ∼10 (from the one year dose rate) after 15 years. Even for fuel that has cooled for 15 years, a lethal dose (LD50) of 450 rem would be received at 1 m from the center of the fuel assembly after several minutes. However, moving from 1 to 5 m reduces the dose rate by over a factor of 10, and moving from 1 to 10 m reduces the dose rate by about a factor of 50. The dose rates 1 m from the top or bottom of the assembly are considerably less (about 10 and 22%, respectively) than 1 m from the center of the assembly, which is the direction of the maximum dose rate
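The quoted distance factors are roughly what a textbook finite-line-source approximation predicts: at the assembly midplane, unshielded dose rate scales as D(d) ∝ arctan(L/2d)/d. A sketch assuming a 4 m active fuel length (an assumed typical value, not a figure from the paper):

```python
import math

L = 4.0  # assumed active fuel length, m (typical of PWR assemblies)

def rel_dose(d):
    """Unshielded dose rate at distance d from the midpoint of a finite
    line source, up to a constant: D ∝ atan(L / (2 d)) / d."""
    return math.atan(L / (2.0 * d)) / d

f5 = rel_dose(1.0) / rel_dose(5.0)
f10 = rel_dose(1.0) / rel_dose(10.0)
# Roughly the "over a factor of 10" (1 to 5 m) and "about a factor of 50"
# (1 to 10 m) reductions quoted in the text
print(f5, f10)
```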
State-space dynamic model for estimation of radon entry rate, based on Kalman filtering
International Nuclear Information System (INIS)
Brabec, Marek; Jilek, Karel
2007-01-01
To predict the radon concentration in a house environment and to understand the role of all factors affecting its behavior, it is necessary to recognize time variation in both the air exchange rate and the radon entry rate into a house. This paper describes a new approach to separating their effects, which effectively allows continuous estimation of both radon entry rate and air exchange rate from simultaneous tracer gas (carbon monoxide) and radon gas measurement data. It is based on a state-space statistical model which permits quick and efficient calculations. Underlying computations are based on (extended) Kalman filtering, whose practical software implementation is easy. A key property is the model's flexibility, so that it can be easily adjusted to handle various artificial regimens of both radon gas and CO gas level manipulation. After introducing the statistical model formally, its performance is demonstrated on real data from measurements conducted in our experimental, naturally ventilated and unoccupied room. To verify the method, the radon entry rate calculated via the proposed statistical model was compared with its known reference value. The results from several days of measurement indicated fairly good agreement (within about 5%, on average, between the reference radon entry rate and its continuously calculated value). The measured radon concentration moved around a level of approximately 600 Bq m⁻³, whereas the air exchange rate ranged from 0.3 to 0.8 h⁻¹.
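The state-space idea can be sketched as a linear Kalman filter on the room mass balance dC/dt = S/V − λC, with the entry rate S modeled as a random walk. All numbers below are illustrative; the air exchange rate λ is treated as known here, whereas the paper estimates it jointly from the CO tracer:

```python
import numpy as np

# Minimal linear Kalman filter for the room mass balance
#   dC/dt = S/V - lam*C
# with state x = [C, S]: concentration (Bq/m3) and entry rate (Bq/h).
# lam = 0.5 /h and V = 30 m3 are assumed; S_true is chosen so the
# steady state S/(lam*V) = 600 Bq/m3, the level reported in the text.
dt, lam, V = 0.1, 0.5, 30.0
F = np.array([[1 - lam * dt, dt / V],  # discretized C dynamics
              [0.0, 1.0]])            # S modeled as a random walk
H = np.array([[1.0, 0.0]])            # only C is measured
Q = np.diag([1.0, 100.0])             # process noise (assumed)
R = np.array([[25.0]])                # measurement noise variance (assumed)

rng = np.random.default_rng(1)
S_true = 9000.0                       # Bq/h, hypothetical entry rate
C = 0.0
x, P = np.array([0.0, 0.0]), np.eye(2) * 1e6
for _ in range(2000):
    C = C + dt * (S_true / V - lam * C)           # simulate the room
    z = C + rng.normal(0.0, 5.0)                  # noisy radon reading
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    y = z - H @ x                                 # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x, P = x + (K @ y).ravel(), (np.eye(2) - K @ H) @ P

print(x[1])  # estimated entry rate, should approach 9000 Bq/h
```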
Estimated rate of agricultural injury: the Korean Farmers’ Occupational Disease and Injury Survey
2014-01-01
Objectives This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. Methods The first Korean Farmers’ Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. Results We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45–0.45) compared to males. However, the odds of injury among farmers aged 50–59 (OR: 1.53, 95% CI = 1.46–1.60), 60–69 (OR: 1.45, 95% CI = 1.39–1.51), and ≥70 (OR: 1.94, 95% CI = 1.86–2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Conclusions Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population are required. PMID:24808945
Estimated rate of agricultural injury: the Korean Farmers' Occupational Disease and Injury Survey.
Chae, Hyeseon; Min, Kyungdoo; Youn, Kanwoo; Park, Jinwoo; Kim, Kyungran; Kim, Hyocher; Lee, Kyungsuk
2014-01-01
This study estimated the rate of agricultural injury using a nationwide survey and identified factors associated with these injuries. The first Korean Farmers' Occupational Disease and Injury Survey (KFODIS) was conducted by the Rural Development Administration in 2009. Data from 9,630 adults were collected through a household survey about agricultural injuries suffered in 2008. We estimated the injury rates among those whose injury required an absence of more than 4 days. Logistic regression was performed to identify the relationship between the prevalence of agricultural injuries and the general characteristics of the study population. We estimated that 3.2% (±0.00) of Korean farmers suffered agricultural injuries that required an absence of more than 4 days. The injury rates among orchard farmers (5.4 ± 0.00) were higher than those of all non-orchard farmers. The odds ratio (OR) for agricultural injuries was significantly lower in females (OR: 0.45, 95% CI = 0.45-0.45) compared to males. However, the odds of injury among farmers aged 50-59 (OR: 1.53, 95% CI = 1.46-1.60), 60-69 (OR: 1.45, 95% CI = 1.39-1.51), and ≥70 (OR: 1.94, 95% CI = 1.86-2.02) were significantly higher compared to those younger than 50. In addition, the total number of years farmed, average number of months per year of farming, and average hours per day of farming were significantly associated with agricultural injuries. Agricultural injury rates in this study were higher than rates reported by the existing compensation insurance data. Males and older farmers were at a greater risk of agricultural injuries; therefore, the prevention and management of agricultural injuries in this population are required.
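The reported odds ratios are exponentiated logistic-regression coefficients, OR = exp(β). A small worked example using the survey's female OR of 0.45 and an assumed 10% baseline injury risk (the baseline is hypothetical, for illustration only):

```python
import math

# Odds ratios from logistic regression: OR = exp(beta), so the
# coefficient implied by the reported female OR of 0.45 is log(0.45).
beta_female = math.log(0.45)
print(math.exp(beta_female))  # recovers 0.45

# Converting an OR to an implied risk, given an assumed 10% baseline:
baseline_odds = 0.10 / 0.90          # odds = p / (1 - p)
female_odds = baseline_odds * 0.45   # apply the odds ratio
female_risk = female_odds / (1 + female_odds)
print(round(female_risk, 3))
```

Note that an OR multiplies odds, not probabilities; the two nearly coincide only when the baseline risk is small.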
Energy Technology Data Exchange (ETDEWEB)
Dunning, D.E. Jr.; Leggett, R.W.; Yalcintas, M.G.
1980-12-01
The work described in the report is basically a synthesis of two previously existing computer codes: INREM II, developed at the Oak Ridge National Laboratory (ORNL); and CAIRD, developed by the Environmental Protection Agency (EPA). The INREM II code uses contemporary dosimetric methods to estimate doses to specified reference organs due to inhalation or ingestion of a radionuclide. The CAIRD code employs actuarial life tables to account for competing risks in estimating numbers of health effects resulting from exposure of a cohort to some incremental risk. The combined computer code, referred to as RADRISK, estimates numbers of health effects in a hypothetical cohort of 100,000 persons due to continuous lifetime inhalation or ingestion of a radionuclide. Also briefly discussed in this report is a method of estimating numbers of health effects in a hypothetical cohort due to continuous lifetime exposure to external radiation. This method employs the CAIRD methodology together with dose conversion factors generated by the computer code DOSFACTER, developed at ORNL; these dose conversion factors are used to estimate dose rates to persons due to radionuclides in the air or on the ground surface. The combined life-table and dosimetric methodology supports the development of guidelines for the release of radioactive pollutants to the atmosphere, as required by the Clean Air Act Amendments of 1977.
A simplified 137Cs transport model for estimating erosion rates in undisturbed soil
International Nuclear Information System (INIS)
Zhang Xinbao; Long Yi; He Xiubin; Fu Jiexiong; Zhang Yunqi
2008-01-01
137Cs is an artificial radionuclide with a half-life of 30.12 years that was released into the environment as a result of atmospheric testing of thermonuclear weapons, primarily during the 1950s-1970s, with the maximum rate of 137Cs fallout from the atmosphere in 1963. 137Cs fallout is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil when it falls on the ground, mostly with precipitation. Its subsequent redistribution is associated with movements of the soil or sediment particles. The 137Cs nuclide tracing technique has been used for assessment of soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the 137Cs depth distribution in the profile, where the maximum 137Cs occurs in the surface horizon and the activity decreases exponentially with depth. The model assumed that the total 137Cs fallout was deposited on the earth's surface in 1963 and that the 137Cs profile shape has not changed with time. The model has been widely used for assessment of soil losses on undisturbed land. However, temporal variations of the 137Cs depth distribution in undisturbed soils after its deposition on the ground, due to downward transport processes, are not considered in the earlier simple profile-shape model; thus, soil losses are overestimated by that model. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the 137Cs transport process in the eroded soil profile and make some simplifications to the model, developing a method to estimate the soil erosion rate more expediently. To compare the soil erosion rates calculated by the simple profile-shape model and the simple transport model, the soil losses related to different 137Cs loss proportions of the reference inventory at the Kaixian site of the
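Under the profile-shape approach for undisturbed soil, the inventory remaining after erosion of mass depth d is A_ref·exp(−d/h0), so d = −h0·ln(A/A_ref). A sketch with an assumed relaxation depth and illustrative inventories (none of these values are from the paper):

```python
import math

# Profile-shape model for undisturbed soil: 137Cs areal activity below
# depth x falls off as A(x) = A_ref * exp(-x / h0). If erosion removed
# the top d (in mass depth), the remaining inventory is
# A_ref * exp(-d / h0), hence d = -h0 * ln(A / A_ref).
h0 = 4.0        # relaxation mass depth, kg/m2 (assumed)
A_ref = 2500.0  # reference inventory, Bq/m2 (illustrative)
A_meas = 2000.0 # measured inventory at an eroded point, Bq/m2

d = -h0 * math.log(A_meas / A_ref)  # cumulative mass loss, kg/m2
years = 2008 - 1963                 # years since peak fallout
erosion_rate = d / years            # mean erosion rate, kg/m2/yr
print(round(d, 3), round(erosion_rate, 4))
```

Because this form ignores downward migration of 137Cs after deposition, part of the inventory deficit it attributes to erosion is really transport, which is the overestimation the transport model corrects.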
DEFF Research Database (Denmark)
Petersen, L J; Petersen, J R; Talleruphuus, U
1999-01-01
The purpose of the study was to compare the estimation of glomerular filtration rate (GFR) from 99mTc-DTPA renography with that estimated from the renal clearance of 51Cr-EDTA, creatinine and urea.
Fatal carotid blowout syndrome after BNCT for head and neck cancers
International Nuclear Information System (INIS)
Aihara, T.; Hiratsuka, J.; Ishikawa, H.; Kumada, H.; Ohnishi, K.; Kamitani, N.; Suzuki, M.; Sakurai, H.; Harada, T.
2015-01-01
Boron neutron capture therapy (BNCT) delivers high linear energy transfer (LET), tumor-selective radiation that does not cause serious damage to the surrounding normal tissues. BNCT might be effective and safe in patients with inoperable, locally advanced head and neck cancers, even those that recur at previously irradiated sites. However, carotid blowout syndrome (CBS) is a lethal complication resulting from malignant invasion of the carotid artery (CA); thus, the risk of CBS should be carefully assessed in patients with risk factors for CBS after BNCT. Thirty-three patients in our institution who underwent BNCT were analyzed. Two patients developed CBS; both experienced widespread skin invasion and recurrence close to the carotid artery after irradiation. Careful attention should be paid to the occurrence of CBS if the tumor is located adjacent to the carotid artery. The presence of skin invasion from recurrent lesions after irradiation is an ominous sign of CBS onset and lethal consequences. - Highlights: • This study examines fatal carotid blowout syndrome after BNCT for head and neck cancers. • Thirty-three patients in our institution who underwent BNCT were analyzed. • Two patients (2/33) developed CBS. • The presence of skin invasion from recurrent lesions after irradiation is an ominous sign of CBS. • We must be aware of these signs to perform BNCT safely.
Blowout Surge due to Interaction between a Solar Filament and Coronal Loops
Energy Technology Data Exchange (ETDEWEB)
Li, Haidong; Jiang, Yunchun; Yang, Jiayan; Yang, Bo; Xu, Zhe; Bi, Yi; Hong, Junchao; Chen, Hechao [Yunnan Observatories, Chinese Academy of Sciences, 396 Yangfangwang, Guandu District, Kunming, 650216 (China); Qu, Zhining, E-mail: lhd@ynao.ac.cn [Department of Physics, School of Science, Sichuan University of Science and Engineering, Zigong 643000 (China)
2017-06-20
We present an observation of the interaction between a filament and outer spine-like loops that produced a blowout surge within one footpoint of large-scale coronal loops on 2015 February 6. Based on observations in the AIA 304 and 94 Å channels, the activated filament is initially embedded below the dome of a fan-spine configuration. Due to its ascending motion, the erupting filament reconnects with the outer spine-like field. We note that the material in the filament blows out along the outer spine-like field to form a surge with a wider spire, and a two-ribbon flare appears at the site of the filament eruption. In this process, small bright blobs appear at the interaction region and stream up along the outer spine-like field and down along the eastern fan-like field. As a result, one leg of the filament becomes radial and the material in it erupts, while the other leg forms new closed loops. Our results confirm that successive reconnection occurring between the erupting filament and the coronal loops may lead to a strong thermal/magnetic pressure imbalance, resulting in a blowout surge.
A comparison of small-area hospitalisation rates, estimated morbidity and hospital access.
Shulman, H; Birkin, M; Clarke, G P
2015-11-01
Published data on hospitalisation rates tend to reveal marked spatial variations within a city or region. Such variations may simply reflect corresponding variations in need at the small-area level. However, they might also be a consequence of poorer accessibility to medical facilities for certain communities within the region. To help answer this question it is important to compare these variable hospitalisation rates with small-area estimates of need. This paper first maps hospitalisation rates at the small-area level across the region of Yorkshire in the UK to show the spatial variations present. Then the Health Survey of England is used to explore the characteristics of persons with heart disease, using chi-square and logistic regression analysis. Using the most significant variables from this analysis the authors build a spatial microsimulation model of morbidity for heart disease for the Yorkshire region. We then compare these estimates of need with the patterns of hospitalisation rates seen across the region. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Environmental radioactivity in the UK: the airborne geophysical view of dose rate estimates
International Nuclear Information System (INIS)
Beamish, David
2014-01-01
This study considers UK airborne gamma-ray data obtained through a series of high spatial resolution, low altitude surveys over the past decade. The ground concentrations of the naturally occurring radionuclides potassium, thorium and uranium are converted to air absorbed dose rates and these are used to assess terrestrial exposure levels from both natural and technologically enhanced sources. The high resolution airborne information is also assessed alongside existing knowledge from soil sampling and ground-based measurements of exposure levels. The surveys have sampled an extensive number of the UK lithological bedrock formations and the statistical information provides examples of low dose rate lithologies (the formations that characterise much of southern England) to the highest sustained values associated with granitic terrains. The maximum dose rates (e.g. >300 nGy h⁻¹) encountered across the sampled granitic terrains are found to vary by a factor of 2. Excluding granitic terrains, the most spatially extensive dose rates (>50 nGy h⁻¹) are found in association with the Mercia Mudstone Group (Triassic argillaceous mudstones) of eastern England. Geological associations between high dose rate and high radon values are also noted. Recent studies of the datasets have revealed the extent of source rock (i.e. bedrock) flux attenuation by soil moisture in conjunction with the density and porosity of the temperate latitude soils found in the UK. The presence or absence of soil cover (and associated presence or absence of attenuation) appears to account for a range of localised variations in the exposure levels encountered. The hypothesis is supported by a study of an extensive combined data set of dose rates obtained from soil sampling and by airborne geophysical survey. With no attenuation factors applied, except those intrinsic to the airborne estimates, a bias to high values of between 10 and 15 nGy h⁻¹ is observed in the soil data. A wide range of
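The concentration-to-dose conversion is a linear combination of the three radioelement concentrations. The coefficients below are widely cited UNSCEAR-style factors; the survey's exact coefficients may differ slightly, and the example composition is invented:

```python
# Air absorbed dose rate from radioelement ground concentrations, using
# commonly quoted conversion factors (assumed here; the survey's own
# coefficients may differ slightly):
#   D (nGy/h) ~= 13.08*K(%) + 5.67*eU(ppm) + 2.49*eTh(ppm)
def dose_rate_nGy_h(K_pct, eU_ppm, eTh_ppm):
    return 13.08 * K_pct + 5.67 * eU_ppm + 2.49 * eTh_ppm

# An invented granite-like composition, exceeding the >300 nGy/h
# sustained values noted for granitic terrains:
print(dose_rate_nGy_h(5.0, 15.0, 70.0))
```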
Incorporation of radiometric tracers in peat and implications for estimating accumulation rates
Energy Technology Data Exchange (ETDEWEB)
Hansson, Sophia V., E-mail: sophia.hansson@emg.umu.se [Department of Ecology and Environmental Science, Umeå University, SE-901 87 Umeå (Sweden); Kaste, James M. [Geology Department, The College of William and Mary, Williamsburg, VA 23187 (United States); Olid, Carolina; Bindler, Richard [Department of Ecology and Environmental Science, Umeå University, SE-901 87 Umeå (Sweden)
2014-09-15
Accurate dating of peat accumulation is essential for quantitatively reconstructing past changes in atmospheric metal deposition and carbon burial. By analyzing fallout radionuclides {sup 210}Pb, {sup 137}Cs, {sup 241}Am, and {sup 7}Be, and total Pb and Hg in 5 cores from two Swedish peatlands, we addressed the consequences for estimated accumulation rates of downwashing of atmospherically supplied elements within peat. The detection of {sup 7}Be down to 18–20 cm for some cores, and the broad vertical distribution of {sup 241}Am without a well-defined peak, suggest some downward transport by percolating rainwater and smearing of atmospherically deposited elements in the uppermost peat layers. Application of the CRS age–depth model leads to unrealistic peat mass accumulation rates (400–600 g m{sup −2} yr{sup −1}), and inaccurate estimates of past Pb and Hg deposition rates and trends, based on comparisons to deposition monitoring data (forest moss biomonitoring and wet deposition). After applying a newly proposed IP-CRS model that assumes a potential downward transport of {sup 210}Pb through the uppermost peat layers, recent peat accumulation rates (200–300 g m{sup −2} yr{sup −1}) comparable to published values were obtained. Furthermore, the rates and temporal trends in Pb and Hg accumulation correspond more closely to monitoring data, although some offset is still evident. We suggest that downwashing can be successfully traced using {sup 7}Be, and if this information is incorporated into age–depth models, better calibration of peat records with monitoring data and better quantitative estimates of peat accumulation and past deposition are possible, although more work is needed to characterize how downwashing may vary between seasons or years. - Highlights: • {sup 210}Pb, {sup 137}Cs, {sup 241}Am and {sup 7}Be, and tot-Pb and tot Hg were measured in 5 peat cores. • Two age–depth models were applied resulting in different accumulation rates
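For context, the CRS (constant rate of supply) model dates a layer from the unsupported {sup 210}Pb inventory remaining below it: t(x) = (1/λ)·ln(A0/A(x)), with λ the {sup 210}Pb decay constant. Downwashed {sup 210}Pb inflates near-surface inventories, which is how the model comes to misstate recent accumulation. The inventories below are illustrative, not from the cores:

```python
import math

# CRS 210Pb age-depth relation: t(x) = (1/lambda) * ln(A0 / A(x)),
# where A0 is the total unsupported 210Pb inventory in the core and
# A(x) the inventory below depth x.
lam = math.log(2) / 22.3  # 210Pb decay constant, 1/yr (half-life 22.3 yr)

def crs_age(A0, A_below):
    """Age (yr) of the layer at the depth where inventory A_below remains."""
    return math.log(A0 / A_below) / lam

# Illustrative inventories (Bq/m2), not measured values
print(round(crs_age(5000.0, 3000.0), 1))
```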
Estimated glomerular filtration rate in patients with type 2 diabetes mellitus
Directory of Open Access Journals (Sweden)
Paula Caitano Fontela
2014-12-01
Full Text Available Objective: to estimate the glomerular filtration rate using the Cockcroft-Gault (CG), Modification of Diet in Renal Disease (MDRD) and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations, and serum creatinine, in the screening of reduced renal function in patients with type 2 diabetes mellitus (T2DM) enrolled in the Family Health Strategy (ESF), a Brazilian federal health-care program. Methods: a cross-sectional descriptive and analytical study was conducted. The protocol consisted of sociodemographics, physical examination and biochemical tests. Renal function was analyzed through serum creatinine and the glomerular filtration rate (GFR) estimated according to the CG, MDRD and CKD-EPI equations, available on the websites of the Brazilian Nephrology Society (SBN) and the National Kidney Foundation (NKF). Results: 146 patients aged 60.9±8.9 years were evaluated; 64.4% were women. The prevalence of serum creatinine >1.2 mg/dL was 18.5%, and GFR <60 mL/min/1.73m2 totaled 25.3, 36.3 and 34.2% when evaluated by the CG, MDRD and CKD-EPI equations, respectively. Diabetic patients with reduced renal function were older, had a longer-standing T2DM diagnosis, higher systolic blood pressure and higher fasting glucose levels, compared to diabetics with normal renal function. Creatinine showed a strong negative correlation with the glomerular filtration rate estimated using the CG, MDRD and CKD-EPI equations (-0.64, -0.87 and -0.89, respectively). Conclusion: the prevalence of individuals with reduced renal function based on serum creatinine alone was lower, reinforcing the need to follow the recommendations of the SBN and the National Kidney Disease Education Program (NKDEP) in estimating the glomerular filtration rate as a complement to serum creatinine to better assess the renal function of patients.
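For readers unfamiliar with the equations named above, the Cockcroft-Gault and 4-variable MDRD formulas can be coded directly from their published forms; the example patient below is hypothetical:

```python
def cockcroft_gault(age_yr, weight_kg, scr_mg_dl, female):
    """Creatinine clearance (mL/min), Cockcroft-Gault (1976)."""
    crcl = (140.0 - age_yr) * weight_kg / (72.0 * scr_mg_dl)
    return 0.85 * crcl if female else crcl

def mdrd_gfr(age_yr, scr_mg_dl, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2), 4-variable MDRD (IDMS-traceable)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age_yr ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# Hypothetical patient near the study's mean age:
crcl = cockcroft_gault(61, 70, 1.0, female=True)
egfr = mdrd_gfr(61, 1.0, female=True)
```

Note that CG returns clearance in mL/min while MDRD returns a body-surface-normalized GFR, which is one reason the two screening prevalences in the abstract differ.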
Dose rate estimation of the Tohoku hynobiid salamander, Hynobius lichenatus, in Fukushima.
Fuma, Shoichi; Ihara, Sadao; Kawaguchi, Isao; Ishikawa, Takahiro; Watanabe, Yoshito; Kubota, Yoshihisa; Sato, Youji; Takahashi, Hiroyuki; Aono, Tatsuo; Ishii, Nobuyoshi; Soeda, Haruhi; Matsui, Kumi; Une, Yumi; Minamiya, Yukio; Yoshida, Satoshi
2015-05-01
The radiological risks to the Tohoku hynobiid salamander (class Amphibia), Hynobius lichenatus, due to the Fukushima Dai-ichi Nuclear Power Plant accident were assessed in Fukushima Prefecture, including evacuation areas. Aquatic egg clutches (n = 1 for each sampling date and site; n = 4 in total), overwintering larvae (n = 1-5 for each sampling date and site; n = 17 in total), and terrestrial juveniles or adults (n = 1 or 3 for each sampling date and site; n = 12 in total) of H. lichenatus were collected from the end of April 2011 to April 2013. Environmental media such as litter (n = 1-5 for each sampling date and site; n = 30 in total), soil (n = 1-8 for each sampling date and site; n = 31 in total), water (n = 1 for each sampling date and site; n = 17 in total), and sediment (n = 1 for each sampling date and site; n = 17 in total) were also collected. Activity concentrations of (134)Cs + (137)Cs were 1.9-2800, 0.13-320, and 0.51-220 kBq (dry kg)(-1) in the litter, soil, and sediment samples, respectively, and were 0.31-220 and <0.29-40 kBq (wet kg)(-1) in the adult and larval salamanders, respectively. External and internal absorbed dose rates to H. lichenatus were calculated from these activity concentration data, using the ERICA Assessment Tool methodology. External dose rates were also measured in situ with glass dosimeters. The calculated and measured external dose rates agreed within a factor of 2. In the most severely contaminated habitat of this salamander, a northern part of the Abukuma Mountains, the highest total dose rates were estimated to be 50 and 15 μGy h(-1) for the adults and overwintering larvae, respectively. Growth and survival of H. lichenatus were not affected at dose rates of up to 490 μGy h(-1) in a previous laboratory chronic gamma-irradiation experiment, and thus the growth and survival of this salamander would not be affected, even in the most severely contaminated habitat in Fukushima Prefecture. However, further
Air kerma rate estimation by means of in-situ gamma spectrometry: A Bayesian approach
International Nuclear Information System (INIS)
Cabal, Gonzalo; Kluson, Jaroslav
2008-01-01
Full text: Bayesian inference is used to determine the air kerma rate based on a set of in situ environmental gamma spectra measurements performed with a NaI(Tl) scintillation detector. A natural advantage of such an approach is the possibility to quantify uncertainty not only in the air kerma rate estimate but also in the gamma spectrum, which is unfolded within the procedure. The measurements were performed using a 3'' x 3'' NaI(Tl) scintillation detector. The response matrices of the detection system were calculated using a Monte Carlo code. For the calculations of the spectra as well as the air kerma rate, the WinBUGS program was used. WinBUGS is dedicated software for Bayesian inference using Markov chain Monte Carlo (MCMC) methods. The results of these calculations are shown and compared with other, non-Bayesian approaches such as the Scofield-Gold iterative method and the maximum entropy method.
International Nuclear Information System (INIS)
Hsi, C.-L.; Kuo, J.-T.
2008-01-01
Estimating the gross burning rate and heating value of solid residue fired in a power plant furnace is essential for the control actions needed to optimize energy conversion and plant performance. A model based on conservation equations of mass and thermal energy is established in this work to calculate the instantaneous gross burning rate and lower heating value of solid residue fired in a combustion chamber. Comparison of the model with incineration plant control room data indicates that satisfactory predictions of fuel burning rates and heating values can be obtained by assuming a moisture-to-carbon atomic ratio (f/a) within the typical range of 1.2 to 1.8. Agreement between the mass and thermal analysis and the bed-chemistry model is acceptable. The model should be useful for programming furnace fuel and air control strategies to achieve optimum performance in energy conversion and pollutant emission reduction.
DEFF Research Database (Denmark)
Mocroft, Amanda; Kirk, Ole; Reiss, Peter
2010-01-01
with at least three serum creatinine measurements and corresponding body weight measurements from 2004 onwards. METHODS: CKD was defined as either confirmed (two measurements >/=3 months apart) estimated glomerular filtration rate (eGFR) of 60 ml/min per 1.73 m2 or below for persons with baseline eGFR of above...... cumulative exposure to tenofovir [incidence rate ratio (IRR) per year 1.16, 95% CI 1.06-1.25, P ... increased rate of CKD. Consistent results were observed in wide-ranging sensitivity analyses, although of marginal statistical significance for lopinavir/r. No other antiretroviral drugs were associated with increased incidence of CKD. CONCLUSION: In this nonrandomized large cohort, increasing exposure...
A method for estimating failure rates for low probability events arising in PSA
International Nuclear Information System (INIS)
Thorne, M.C.; Williams, M.M.R.
1995-01-01
The authors develop a method for predicting failure rates and failure probabilities per event when, over a given test period or number of demands, no failures have occurred. A Bayesian approach is adopted to calculate a posterior probability distribution for the failure rate or failure probability per event subsequent to the test period. This posterior is then used to estimate effective failure rates or probabilities over a subsequent period of time or number of demands. In special circumstances, the authors' results reduce to the well-known rules of thumb, viz. 1/N and 1/T, where N is the number of demands during the failure-free test period and T is the duration of the failure-free test period. However, the authors are able to give strict conditions on the validity of these rules of thumb and to improve on them when necessary.
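A minimal sketch of this kind of calculation, assuming a conjugate Gamma prior on a constant (Poisson-process) failure rate; the prior parameters and test duration are illustrative, not taken from the paper:

```python
import math

def zero_failure_posterior(T, prior_shape=1.0, prior_rate=0.0):
    """Posterior for a constant failure rate after 0 failures in time T.

    A Gamma(a, b) prior on the rate combined with a Poisson likelihood
    observing 0 events over T gives a Gamma(a, b + T) posterior.
    With a = 1, b = 0 the posterior mean is 1/T, recovering the
    rule of thumb discussed above as a special case.
    """
    shape = prior_shape          # unchanged: no failures observed
    rate = prior_rate + T
    return shape, rate, shape / rate

def upper_bound(T, q=0.95):
    """q-quantile of the Gamma(1, T) posterior (an exponential law)."""
    return -math.log(1.0 - q) / T

# 1000 failure-free operating hours:
shape, rate, mean = zero_failure_posterior(T=1000.0)
ub95 = upper_bound(1000.0)
```

The 95% upper bound (about 3/T) is roughly three times the posterior mean, which illustrates why the paper emphasizes conditions on the naive 1/T estimate.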
Use of virtual reality to estimate radiation dose rates in nuclear plants
International Nuclear Information System (INIS)
Augusto, Silas C.; Mol, Antonio C.A.; Jorge, Carlos A.F.; Couto, Pedro M.
2007-01-01
Operators in nuclear plants receive radiation doses during several different operation procedures. A training program capable of simulating these operation scenarios is useful in several ways: it helps in planning operational procedures so as to reduce the doses received by workers and to minimize operation times. It can provide safe virtual operation training, visualization of radiation dose rates, and estimation of the doses received by workers. To this end, a virtual reality application has been developed by adapting a free game engine. Simulation results for the Argonauta research reactor of Instituto de Engenharia Nuclear are shown in this paper. A database of dose rate measurements, previously performed by the radiological protection service, has been used to display the dose rate distribution in the region of interest. The application enables the user to walk through the virtual scenario, displaying at all times the dose accumulated by the avatar. (author)
Plutonium Discharge Rates and Spent Nuclear Fuel Inventory Estimates for Nuclear Reactors Worldwide
Energy Technology Data Exchange (ETDEWEB)
Brian K. Castle; Shauna A. Hoiland; Richard A. Rankin; James W. Sterbentz
2012-09-01
This report presents a preliminary survey and analysis of the five primary types of commercial nuclear power reactors currently in use around the world. Plutonium mass discharge rates from the reactors' spent fuel at reload are estimated based on a simple methodology that is able to use limited reactor burnup and operational characteristics collected from a variety of public domain sources. Selected commercial reactor operating and nuclear core characteristics are also given for each reactor type. In addition to the worldwide commercial reactor survey, a materials test reactor survey was conducted to identify reactors of this type with a significant core power rating. Over 100 materials test or research reactors with a core power rating >1 MW fall into this category. Fuel characteristics and spent fuel inventories for these materials test reactors are also provided herein.
Estimating the erosion and deposition rates in a small watershed by the 137Cs tracing method
International Nuclear Information System (INIS)
Li Mian; Li Zhanbin; Yao Wenyi; Liu Puling
2009-01-01
Understanding the erosion and deposition rates in a small watershed is important for designing soil and water conservation measures. The objective of this study is to estimate the net soil loss and gain at points with various land use types and landform positions in a small watershed in the Sichuan Hilly Basin of China by the 137Cs tracing technique. Among the various land use types, the order of erosion rates was bare rock > sloping cultivated land > forest land. The paddy field and Caotu (a kind of cultivated land located at the foot of hills) were depositional areas. The erosion rate under different landforms was in the order: hillside > saddle > hilltop. The footslope and the valley were depositional areas. The 137Cs technique was shown to provide an effective means of documenting the spatial distribution of soil erosion and deposition within the small watershed.
Trends in pregnancies and pregnancy rates by outcome: estimates for the United States, 1976-96.
Ventura, S J; Mosher, W D; Curtin, S C; Abma, J C; Henshaw, S
2000-01-01
This report presents national estimates of pregnancies and pregnancy rates according to women's age, race, and Hispanic origin, and by marital status, race, and Hispanic origin. Data are presented for 1976-96. Data from the National Survey of Family Growth (NSFG) are used to show information on sexual activity, contraceptive practices, and infertility, as well as women's reports of pregnancy intentions. Tables of pregnancy rates and the factors affecting pregnancy rates are presented and interpreted. Birth data are from the birth-registration system for all births registered in the United States and reported by State health departments to NCHS; abortion data are from The Alan Guttmacher Institute (AGI) and the National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention (CDC); and fetal loss data are from pregnancy history information collected in the NSFG. In 1996 an estimated 6.24 million pregnancies resulted in 3.89 million live births, 1.37 million induced abortions, and 0.98 million fetal losses. The pregnancy rate in 1996 was 104.7 pregnancies per 1,000 women aged 15-44 years, 9 percent lower than in 1990 (115.6), and the lowest recorded since 1976 (102.7). Since 1990 rates have dropped 8 percent for live births, 16 percent for induced abortions, and 4 percent for fetal losses. The teenage pregnancy rate has declined considerably in the 1990s, falling 15 percent from its 1991 high of 116.5 per 1,000 women aged 15-19 years to 98.7 in 1996. Among the factors accounting for this decline are decreased sexual activity, increases in condom use, and the adoption of the injectable and implant contraceptives.
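As a quick arithmetic check on the figures quoted above (taking the reported 1996 totals at face value):

```python
def rate_per_1000(events, population):
    """Events per 1,000 persons at risk."""
    return 1000.0 * events / population

# Figures for 1996 as reported in the abstract:
pregnancies = 6.24e6
rate_1996 = 104.7                 # pregnancies per 1,000 women aged 15-44

# Back out the implied denominator (women aged 15-44):
implied_women = 1000.0 * pregnancies / rate_1996

# Percent decline from the 1990 rate of 115.6:
decline_pct = 100.0 * (115.6 - rate_1996) / 115.6
```

The implied denominator is roughly 60 million women, and the computed decline rounds to the 9 percent the report states.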
Adrion, Jeffrey R.; Song, Michael J.; Schrider, Daniel R.; Hahn, Matthew W.
2017-01-01
Abstract Knowing the rate at which transposable elements (TEs) insert and delete is critical for understanding their role in genome evolution. We estimated spontaneous rates of insertion and deletion for all known, active TE superfamilies present in a set of Drosophila melanogaster mutation-accumulation (MA) lines using whole genome sequence data. Our results demonstrate that TE insertions far outpace TE deletions in D. melanogaster. We found a significant effect of background genotype on TE activity, with higher rates of insertions in one MA line. We also found significant rate heterogeneity between the chromosomes, with both insertion and deletion rates elevated on the X relative to the autosomes. Further, we identified significant associations between TE activity and chromatin state, and tested for associations between TE activity and other features of the local genomic environment such as TE content, exon content, GC content, and recombination rate. Our results provide the most detailed assessment of TE mobility in any organism to date, and provide a useful benchmark for both addressing theoretical predictions of TE dynamics and for exploring large-scale patterns of TE movement in D. melanogaster and other species. PMID:28338986
Wavelet denoising method; application to the flow rate estimation for water level control
International Nuclear Information System (INIS)
Park, Gee Young; Park, Jin Ho; Lee, Jung Han; Kim, Bong Soo; Seong, Poong Hyun
2003-01-01
The wavelet transform decomposes a signal into time- and frequency-domain components, and it is well known that a noise-corrupted signal can be reconstructed or estimated when a proper denoising method is incorporated into the wavelet transform. Among the wavelet denoising methods proposed to date, the wavelets of Mallat and Zhong best reconstruct a pure transient signal from a highly corrupted one. But there has been no systematic way of discriminating the original signal from the noise in a dyadic wavelet transform. In this paper, a systematic method is proposed for noise discrimination, which can be easily implemented in a digital system. To demonstrate the potential role of the wavelet denoising method in the nuclear field, the method is applied to the steam and feedwater flow rate estimation of the secondary loop, and a configuration of the S/G (steam generator) water level control system is proposed that incorporates the wavelet denoising method in estimating the flow rate at low operating powers.
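To illustrate the idea of wavelet denoising, here is a single-level Haar transform with soft thresholding in plain Python. This is a toy stand-in for the Mallat-Zhong dyadic wavelets the paper uses, and the threshold is chosen by hand rather than by the proposed systematic discrimination method:

```python
def haar_forward(x):
    """One level of the Haar wavelet transform (len(x) must be even)."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x.extend([s * (a + d), s * (a - d)])
    return x

def soft_threshold(coeffs, t):
    """Shrink detail coefficients toward zero; small ones (noise) vanish."""
    return [0.0 if abs(c) <= t else (c - t if c > 0 else c + t) for c in coeffs]

def denoise(x, t):
    """Threshold the detail band, keep the approximation band."""
    a, d = haar_forward(x)
    return haar_inverse(a, soft_threshold(d, t))

smoothed = denoise([1.0, 1.1, 1.0, 0.9], 0.1)
```

With the threshold above, the small sample-to-sample jitter lands entirely in the detail band and is removed, leaving pairwise averages.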
Karakas, Filiz; Imamoglu, Ipek
2017-02-15
This study aims to estimate the anaerobic dechlorination rate constants (k_m) of reactions of individual PCB congeners using data from four laboratory microcosms set up with sediment from Baltimore Harbor. Pathway k_m values are estimated by modifying a previously developed model, the Anaerobic Dehalogenation Model (ADM), which can be applied to any halogenated hydrophobic organic compound (HOC). Improvements such as handling multiple dechlorination activities (DAs) and co-elution of congeners, incorporating constraints, and using a new goodness-of-fit evaluation led to an increase in the accuracy, speed and flexibility of ADM. DAs published in the literature in terms of chlorine substitutions, as well as specific microorganisms and their combinations, are used for identification of pathways. The best fit explaining the congener pattern changes was found for pathways of phylotype DEH10, which has the ability to remove doubly flanked chlorines in meta and para positions and para-flanked chlorines in the meta position. The estimated k_m values range between 0.0001 and 0.133 d(-1), the median of which is comparable to the few available published biologically confirmed rate constants. Compound-specific modelling studies such as those performed with ADM can enable monitoring and prediction of concentration changes, as well as toxicity, during bioremediation. Copyright © 2016 Elsevier B.V. All rights reserved.
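The rate constants being estimated describe first-order decay; a minimal sketch of the underlying kinetics, with hypothetical congener concentrations rather than the Baltimore Harbor data:

```python
import math

def first_order_k(c0, ct, t_days):
    """Rate constant k (d^-1) from C(t) = C0 * exp(-k * t)."""
    return math.log(c0 / ct) / t_days

def remaining(c0, k, t_days):
    """Concentration left after t_days of first-order dechlorination."""
    return c0 * math.exp(-k * t_days)

# Hypothetical congener halved over 100 days:
k = first_order_k(100.0, 50.0, 100.0)
```

The resulting k of about 0.007 d(-1) sits comfortably inside the 0.0001-0.133 d(-1) range reported above; ADM itself fits many such constants simultaneously across pathways, which this two-point sketch does not attempt.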
Deep-sea benthic footprint of the Deepwater Horizon blowout.
Directory of Open Access Journals (Sweden)
Paul A Montagna
Full Text Available The Deepwater Horizon (DWH) accident in the northern Gulf of Mexico occurred on April 20, 2010 at a water depth of 1525 meters, and a deep-sea plume was detected within one month. Oil contacted and persisted in parts of the bottom of the deep-sea Gulf of Mexico. As part of the response to the accident, monitoring cruises were deployed in fall 2010 to measure potential impacts on the two main soft-bottom benthic invertebrate groups: macrofauna and meiofauna. Sediment was collected using a multicorer so that samples for chemical, physical and biological analyses could be taken simultaneously and analyzed using multivariate methods. The footprint of the oil spill was identified by creating a new variable with principal components analysis, where the first factor was indicative of the oil spill impacts, and mapping this new variable in a geographic information system to identify the area of the oil spill footprint. The most severe relative reduction of faunal abundance and diversity extended to 3 km from the wellhead in all directions, covering an area of about 24 km(2). Moderate impacts were observed up to 17 km towards the southwest and 8.5 km towards the northeast of the wellhead, covering an area of 148 km(2). Benthic effects were correlated with total petroleum hydrocarbon, polycyclic aromatic hydrocarbon and barium concentrations, and with distance to the wellhead, but not with distance to hydrocarbon seeps. Thus, the benthic effects are more likely due to the oil spill than to natural hydrocarbon seepage. Recovery rates in the deep sea are likely to be slow, on the order of decades or longer.
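The "new variable" described above is a first principal-component score. A from-scratch sketch of that step (standardized variables, power iteration for the leading eigenvector); the station data here are invented, not the monitoring-cruise measurements:

```python
import math

def standardize(col):
    """Zero-mean, unit-variance version of one variable."""
    n = len(col)
    mu = sum(col) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in col) / n)
    return [(v - mu) / sd for v in col]

def first_pc_scores(columns, iters=200):
    """Scores of each observation on the first principal component.

    columns: list of variables, each a list of per-station values.
    """
    z = [standardize(c) for c in columns]
    p, n = len(z), len(z[0])
    # correlation matrix of the standardized variables
    cov = [[sum(z[i][k] * z[j][k] for k in range(n)) / n for j in range(p)]
           for i in range(p)]
    w = [1.0] * p  # power iteration for the dominant eigenvector
    for _ in range(iters):
        w = [sum(cov[i][j] * w[j] for j in range(p)) for i in range(p)]
        norm = math.sqrt(sum(v * v for v in w))
        w = [v / norm for v in w]
    # project each station onto the first PC
    return [sum(w[i] * z[i][k] for i in range(p)) for k in range(n)]

# Two made-up, perfectly correlated impact indicators at four stations:
scores = first_pc_scores([[1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]])
```

Each station's score can then be joined to its coordinates and interpolated in a GIS, which is the mapping step the abstract describes.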
Directory of Open Access Journals (Sweden)
Trinkley KE
2014-05-01
Full Text Available Katy E Trinkley,1 S Michelle Nikels,2 Robert L Page II,1 Melanie S Joy1; 1Skaggs School of Pharmacy and Pharmaceutical Sciences, 2School of Medicine, University of Colorado, Aurora, CO, USA. Objective: The purpose of this paper is to serve as a review for primary care providers on the bedside methods for estimating glomerular filtration rate (GFR) for dosing and chronic kidney disease (CKD) staging, and to discuss how automated health information technologies (HIT) can enhance clinical documentation of staging and reduce medication errors in patients with CKD. Methods: A nonsystematic search of PubMed (through March 2013) was conducted to determine the optimal approach to estimating GFR for dosing and CKD staging and to identify examples of how automated HITs can improve health outcomes in patients with CKD. Papers known to the authors were included, as were scientific statements. Articles were chosen based on the judgment of the authors. Results: Drug-dosing decisions should be based on the method used in the published studies and package labeling that has been determined to be safe, which is most often the Cockcroft-Gault formula unadjusted for body weight. Although Modification of Diet in Renal Disease is more commonly used in practice for staging, the CKD-Epidemiology Collaboration (CKD-EPI) equation is the most accurate formula for CKD staging, especially at higher GFR values. Automated HITs offer a solution to the complexity of determining which equation to use for a given clinical scenario. HITs can educate providers on which formula to use and how to apply it in a given clinical situation, ultimately improving appropriate medication and medical management in CKD patients. Conclusion: Appropriate estimation of GFR is key to optimal health outcomes. HITs assist clinicians both in choosing the most appropriate GFR estimation formula and in applying the results of the GFR estimation in practice. Key limitations of the
International Nuclear Information System (INIS)
Martz, H.F.; Beckman, R.J.
1981-12-01
Probabilistic risk analyses are used to assess the risks inherent in the operation of existing and proposed nuclear power reactors. In performing such risk analyses the failure rates of various components which are used in a variety of reactor systems must be estimated. These failure rate estimates serve as input to fault trees and event trees used in the analyses. Component failure rate estimation is often based on relevant field failure data from different reliability data sources such as LERs, NPRDS, and the In-Plant Data Program. Various statistical data analysis and estimation methods have been proposed over the years to provide the required estimates of the component failure rates. This report discusses the basis and extent to which statistical methods can be used to obtain component failure rate estimates. The report is expository in nature and focuses on the general philosophical basis for such statistical methods. Various terms and concepts are defined and illustrated by means of numerous simple examples
Beaulieu, Jeremy M; O'Meara, Brian C; Crane, Peter; Donoghue, Michael J
2015-09-01
Dating analyses based on molecular data imply that crown angiosperms existed in the Triassic, long before their undisputed appearance in the fossil record in the Early Cretaceous. Following a re-analysis of the age of angiosperms using updated sequences and fossil calibrations, we use a series of simulations to explore the possibility that the older age estimates are a consequence of (i) major shifts in the rate of sequence evolution near the base of the angiosperms and/or (ii) the representative taxon sampling strategy employed in such studies. We show that both of these factors do tend to yield substantially older age estimates. These analyses do not prove that younger age estimates based on the fossil record are correct, but they do suggest caution in accepting the older age estimates obtained using current relaxed-clock methods. Although we have focused here on the angiosperms, we suspect that these results will shed light on dating discrepancies in other major clades. ©The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Directory of Open Access Journals (Sweden)
Sonia Aïssa
2008-05-01
Full Text Available This paper investigates the effects of channel estimation error at the receiver on the achievable rate of distributed space-time block coded transmission. We consider that multiple transmitters cooperate to send the signal to the receiver and derive lower and upper bounds on the mutual information of distributed space-time block codes (D-STBCs when the channel gains and channel estimation error variances pertaining to different transmitter-receiver links are unequal. Then, assessing the gap between these two bounds, we provide a limiting value that upper bounds the latter at any input transmit powers, and also show that the gap is minimum if the receiver can estimate the channels of different transmitters with the same accuracy. We further investigate positioning the receiving node such that the mutual information bounds of D-STBCs and their robustness to the variations of the subchannel gains are maximum, as long as the summation of these gains is constant. Furthermore, we derive the optimum power transmission strategy to achieve the outage capacity lower bound of D-STBCs under arbitrary numbers of transmit and receive antennas, and provide closed-form expressions for this capacity metric. Numerical simulations are conducted to corroborate our analysis and quantify the effects of imperfect channel estimation.
Montañés Bermúdez, R; Gràcia Garcia, S; Fraga Rodríguez, G M; Escribano Subias, J; Diez de Los Ríos Carrasco, M J; Alonso Melgar, A; García Nieto, V
2014-05-01
The appearance of the K/DOQI guidelines in 2002 on the definition, evaluation and staging of chronic kidney disease (CKD) led to a major change in how renal function is assessed in adults and children. These guidelines, recently updated, recommended that the study of renal function be based not only on measuring the serum creatinine concentration, but that this be accompanied by an estimation of the glomerular filtration rate (GFR) obtained by an equation. However, the implementation of this recommendation in clinical laboratory reports for the paediatric population has been negligible. Numerous studies have appeared in recent years on the importance of screening and monitoring of patients with CKD, the emergence of new equations for estimating GFR, and advances in clinical laboratories regarding the methods for measuring plasma creatinine and cystatin C; these developments have prompted collaboration between paediatric departments and clinical laboratories to establish recommendations based on the best scientific evidence on the use of equations to estimate GFR in this population. The purpose of this document is to provide recommendations on the evaluation of renal function and the use of equations to estimate GFR in children from birth to 18 years of age. The recipients of these recommendations are paediatricians, nephrologists, clinical biochemists, clinical analysts, and all health professionals involved in the study and evaluation of renal function in this group of patients. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
Murayama, I; Miyano, A; Sasaki, Y; Hirata, T; Ichijo, T; Satoh, H; Sato, S; Furuhama, K
2013-11-01
This study was performed to clarify whether a formula (the Holstein equation) based on a single blood sample and the isotonic, nonionic iodine contrast medium iodixanol, developed in Holstein dairy cows, can be applied to the estimation of glomerular filtration rate (GFR) in beef cattle. To verify the applicability of iodixanol in beef cattle in place of the standard tracer inulin, both agents were coadministered as a bolus intravenous injection to the same animals at doses of 10 mg of I/kg of BW and 30 mg/kg, respectively. Blood was collected 30, 60, 90, and 120 min after the injection, and the GFR was determined by conventional multisample strategies. The GFR values from iodixanol were highly consistent with those from inulin, and no effects of BW, age, or parity on GFR estimates were noted. However, the GFR in cattle weighing less than 300 kg and aged <1 yr fluctuated widely, presumably due to rapid ruminal growth and dynamic changes in renal function at young ages. Using clinically healthy cattle and those with renal failure, the GFR values estimated from the Holstein equation were in good agreement with those obtained by the multisample method using iodixanol (r=0.89, P=0.01). The results indicate that the simplified Holstein equation using iodixanol can be used for estimating the GFR of beef cattle with the same dose regimen as for Holstein dairy cows, and provides a practical and ethical alternative.
Estimating energy expenditure from heart rate in older adults: a case for calibration.
Schrack, Jennifer A; Zipunnikov, Vadim; Goldsmith, Jeff; Bandeen-Roche, Karen; Crainiceanu, Ciprian M; Ferrucci, Luigi
2014-01-01
Accurate measurement of free-living energy expenditure is vital to understanding changes in energy metabolism with aging. The efficacy of heart rate as a surrogate for energy expenditure is rooted in the assumption of a linear function between heart rate and energy expenditure, but its validity and reliability in older adults remain unclear. The aim was to assess the validity and reliability of the linear function between heart rate and energy expenditure in older adults using different levels of calibration. Heart rate and energy expenditure were assessed across five levels of exertion in 290 adults participating in the Baltimore Longitudinal Study of Aging. Correlation and random-effects regression analyses assessed the linearity of the relationship between heart rate and energy expenditure, and cross-validation models assessed predictive performance. Heart rate and energy expenditure were highly correlated (r=0.98) and linear regardless of age or sex. Intra-person variability was low but inter-person variability was high, with substantial heterogeneity of the random intercept (s.d.=0.372) despite similar slopes. Cross-validation models indicated that individual calibration data substantially improve the accuracy of energy expenditure predictions from heart rate, reducing the potential for considerable measurement bias. Although using five calibration measures provided the greatest reduction in the standard deviation of prediction errors (1.08 kcal/min), substantial improvement was also noted with two (0.75 kcal/min). These findings indicate that standard regression equations may be used to make population-level inferences when estimating energy expenditure from heart rate in older adults, but caution should be exercised when making inferences at the individual level without proper calibration.
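The individual calibration the authors recommend amounts to fitting a per-person line between heart rate and energy expenditure; a minimal ordinary-least-squares sketch with made-up calibration points:

```python
def ols(x, y):
    """Slope and intercept by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def predict_ee(hr, slope, intercept):
    """Energy expenditure (kcal/min) predicted from heart rate (bpm)."""
    return slope * hr + intercept

# Hypothetical per-person calibration across five exertion levels:
hr = [60.0, 80.0, 100.0, 120.0, 140.0]
ee = [1.2, 2.0, 2.8, 3.6, 4.4]
slope, intercept = ols(hr, ee)
```

The study's key finding is that the slope is similar across people but the intercept varies widely, so it is the per-person intercept that calibration mainly pins down.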
International Nuclear Information System (INIS)
Blaylock, B.G.; Frank, M.L.; O'Neal, B.R.
1993-08-01
The purpose of this report is to present a methodology for evaluating the potential for aquatic biota to incur effects from exposure to chronic low-level radiation in the environment. Aquatic organisms inhabiting an environment contaminated with radioactivity receive external radiation from radionuclides in water, sediment, and from other biota such as vegetation. Aquatic organisms receive internal radiation from radionuclides ingested via food and water and, in some cases, from radionuclides absorbed through the skin and respiratory organs. Dose rate equations, which have been developed previously, are presented for estimating the radiation dose rate to representative aquatic organisms from alpha, beta, and gamma irradiation from external and internal sources. Tables containing parameter values for calculating radiation doses from selected alpha, beta, and gamma emitters are presented in the appendix to facilitate dose rate calculations. The risk of detrimental effects to aquatic biota from radiation exposure is evaluated by comparing the calculated radiation dose rate to biota to the U.S. Department of Energy's (DOE's) recommended dose rate limit of 0.4 mGy h -1 (1 rad d -1 ). A dose rate no greater than 0.4 mGy h -1 to the most sensitive organisms should ensure the protection of populations of aquatic organisms. DOE's recommended dose rate is based on a number of published reviews on the effects of radiation on aquatic organisms that are summarized in the National Council on Radiation Protection and Measurements Report No. 109 (NCRP 1991). DOE recommends that if the results of radiological models or dosimetric measurements indicate that a radiation dose rate of 0.1 mGy h -1 will be exceeded, then a more detailed evaluation of the potential ecological consequences of radiation exposure to endemic populations should be conducted.
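The final comparison step reduces to checking a computed total dose rate against the two benchmarks cited above; the component dose rates in this sketch are hypothetical, not values from the report's appendix tables:

```python
DOE_LIMIT_MGY_PER_H = 0.4       # DOE recommended dose-rate limit for aquatic biota
SCREENING_MGY_PER_H = 0.1       # trigger for a more detailed evaluation

def screen(total_dose_rate_mgy_h):
    """Classify a calculated total dose rate against the DOE benchmarks."""
    if total_dose_rate_mgy_h > DOE_LIMIT_MGY_PER_H:
        return "exceeds limit"
    if total_dose_rate_mgy_h > SCREENING_MGY_PER_H:
        return "detailed evaluation recommended"
    return "below screening level"

# Hypothetical external + internal components (mGy/h):
total = 0.03 + 0.02
result = screen(total)
```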
Estimation of uranium resources by life-cycle or discovery-rate models: a critique
International Nuclear Information System (INIS)
Harris, D.P.
1976-10-01
This report was motivated primarily by M. A. Lieberman's ''United States Uranium Resources: An Analysis of Historical Data'' (Science, April 30). His conclusion that only 87,000 tons of U₃O₈ resources recoverable at a forward cost of $8/lb remain to be discovered is criticized. It is shown that there is no theoretical basis for selecting the exponential or any other function for the discovery rate. Some of the economic (productivity, inflation) and data issues involved in the analysis of undiscovered, recoverable U₃O₈ resources based on discovery rates of $8 reserves are discussed. The problem of the ratio of undiscovered $30 resources to undiscovered $8 resources is considered. It is concluded that all methods for the estimation of unknown resources must employ a model of some form of the endowment-exploration-production complex, but every model is a simplification of the real world, and every estimate is intrinsically uncertain. The life-cycle model is useless for the appraisal of undiscovered, recoverable U₃O₈, and the discovery rate model underestimates these resources.
Analysis of the wind data and estimation of the resultant air concentration rates
International Nuclear Information System (INIS)
Hu, Shze Jer; Katagiri, Hiroshi; Kobayashi, Hideo
1988-09-01
Statistical analyses and comparisons of the meteorological wind data obtained by the propeller and supersonic anemometers during 1987 at the Japan Atomic Energy Research Institute, Tokai, were performed. For wind speeds less than 1 m/s, the propeller readings are generally 0.5 m/s less than the supersonic readings. The resultant average air concentration and ground-level γ exposure rates due to radioactive releases during normal operation of a nuclear plant are overestimated when calculated using the propeller wind data. As the supersonic anemometer can accurately measure wind speeds as low as 0.01 m/s, it should be used for low wind speeds. The difference in the average air concentrations and γ exposure rates calculated using the two different sets of wind data is due to the influence of low wind speeds at calm. If the number of calms is large, actual low wind speeds and wind directions should be used in the statistical analysis of atmospheric dispersion to give a more accurate and realistic estimation of the air concentrations and γ exposure rates due to the normal operation of a nuclear plant. (author). 4 refs, 3 figs, 9 tabs
Deterministic estimation of crack growth rates in steels in LWR coolant environments
International Nuclear Information System (INIS)
Macdonald, D.D.; Lu, P.C.; Urquidi-Macdonald, M.
1995-01-01
In this paper, the authors extend the coupled environment fracture model (CEFM) for intergranular stress corrosion cracking (IGSCC) of sensitized Type 304 SS in light water reactor heat transport circuits by incorporating steel corrosion, the oxidation of hydrogen, and the reduction of hydrogen peroxide, in addition to the reduction of oxygen (as in the original CEFM), as charge transfer reactions occurring on the external surfaces. Additionally, they have incorporated a theoretical approach for estimating the crack tip strain rate and have included a void nucleation model to account for ductile failure at very negative potentials. The key concept of the CEFM is that coupling between the internal and external environments, and the need to conserve charge, are the physical and mathematical constraints that determine the rate of crack advance. The model provides rational explanations for the effects of oxygen, hydrogen peroxide, hydrogen, conductivity, stress intensity, and flow velocity on the rate of crack growth in sensitized Type 304 SS in simulated LWR in-vessel environments. They propose that the CEFM can serve as the basis of a deterministic method for estimating component lifetimes.
The Greenville Fault: preliminary estimates of its long-term creep rate and seismic potential
Lienkaemper, James J.; Barry, Robert G.; Smith, Forrest E.; Mello, Joseph D.; McFarland, Forrest S.
2013-01-01
Once assumed locked, we show that the northern third of the Greenville fault (GF) creeps at 2 mm/yr, based on 47 yr of trilateration net data. This northern GF creep rate equals its 11-ka slip rate, suggesting a low strain accumulation rate. In 1980, the GF, the easternmost strand of the San Andreas fault system east of San Francisco Bay, produced a Mw5.8 earthquake with a 6-km surface rupture and dextral slip growing to ≥2 cm on cracks over a few weeks. Trilateration shows a 10-cm post-1980 transient slip ending in 1984. Analysis of 2000-2012 crustal velocities at continuous Global Positioning System stations allows creep rates of ~2 mm/yr on the northern GF, 0-1 mm/yr on the central GF, and ~0 mm/yr on its southern third. Modeled depth ranges of creep along the GF allow 5-25% aseismic release. Greater locking in the southern two thirds of the GF is consistent with paleoseismic evidence there for large late Holocene ruptures. Because the GF lacks large (>1 km) discontinuities likely to arrest higher (~1 m) slip ruptures, we expect full-length (54-km) ruptures to occur that include the northern creeping zone. We estimate sufficient strain accumulation on the entire GF to produce Mw6.9 earthquakes with a mean recurrence of ~575 yr. While the creeping 16-km northern part has the potential to produce a Mw6.2 event every 240 yr, it may rupture in both moderate (1980) and large events. These two-dimensional-model estimates of creep rate along the southern GF need verification with small-aperture surveys.
Hesp, Patrick A.; Hilton, Michael; Konlecher, Teresa
2017-10-01
This study is the first to simultaneously compare flow and sediment transport through a blowout and over an adjacent foredune, and the first study of flow within a highly sinuous, slot and cauldron blowout. Flow across the foredune transect is similar to that observed in other studies and is primarily modulated by across-dune vegetation density differences. Flow within the blowout is highly complex and exhibits pronounced accelerations and jet flow. It is characterised by marked helicoidal coherent vortices in the mid-regions, and topographically vertically forced flow out of the cauldron portion of the blowout. Instantaneous sediment transport within the blowout is significant compared to transport onto and/or over the adjacent foredune stoss slope and ridge, with the blowout providing a conduit for suspended sediment to reach the downwind foredune upper stoss slope and crest. Medium term (4 months) aeolian sedimentation data indicate that sand is accumulating in the blowout entrance while erosion is taking place throughout the majority of the slot, and deposition is occurring downwind of the cauldron on the foredune ridge. The adjacent lower stoss slope of the foredune is accreting while the upper stoss slope is slightly erosional. Longer term (16 months) pot trap data show that the majority of foredune upper stoss slope and crest accretion occurs via suspended sediment delivery from the blowout, whereas the majority of the suspended sediment arriving at the well-vegetated foredune stoss slope is deposited on the mid-stoss slope. The results of this study indicate one mechanism by which marked alongshore foredune morphological variability evolves: blowouts topographically accelerate flow and deliver significant aeolian sediment downwind to relatively discrete sections of the foredune.
Estimation of daily flow rate of photovoltaic water pumping systems using solar radiation data
Directory of Open Access Journals (Sweden)
M. Benghanem
2018-03-01
This paper presents a simple model which allows us to contribute to studies of photovoltaic (PV) water pumping system sizing. The nonlinear relation between water flow rate and solar power was obtained experimentally in a first step and then used for performance prediction. The proposed model enables us to simulate the water flow rate using solar radiation data for different heads (50 m, 60 m, 70 m and 80 m) and for an 8S × 3P PV array configuration. The experimental data were obtained with our pumping test facility located at the Madinah site (Saudi Arabia). The performances are calculated using the measured solar radiation data of different locations in Saudi Arabia. Knowing the solar radiation data, we have estimated with good precision the water flow rate Q in five locations (Al-Jouf, Solar Village, AL-Ahsa, Madinah and Gizan) in Saudi Arabia. The flow rate Q increases with the increase of pump power for different heads following the proposed nonlinear model. Keywords: Photovoltaic water pumping system, Solar radiation data, Simulation, Flow rate
Directory of Open Access Journals (Sweden)
Cristiane Fagundes
2013-03-01
In this study, the influence of storage temperature and passive modified packaging (PMP) on the respiration rate and physicochemical properties of fresh-cut Gala apples (Malus domestica B.) was investigated. The samples were packed in flexible multilayer bags and stored at 2 °C, 5 °C, and 7 °C for eleven days. The respiration rate as a function of CO2 and O2 concentrations was determined using gas chromatography. The inhibition parameters were estimated using a mathematical model based on the Michaelis-Menten equation. The following physicochemical properties were evaluated: total soluble solids, pH, titratable acidity, and reducing sugars. At 2 °C, the maximum respiration rate was observed after 150 hours. At 5 °C and 7 °C the maximum respiration rates were observed after 100 and 50 hours of storage, respectively. The inhibition model results showed a clear effect of CO2 on O2 consumption. The soluble solids decreased, although not significantly, during storage at the three temperatures studied. Reducing sugars and titratable acidity decreased during storage and the pH increased. These results indicate that the respiration rate influenced the physicochemical properties.
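A Michaelis-Menten-type respiration model with CO2 inhibition, of the kind fitted in such modified-atmosphere packaging studies, can be sketched as follows. The uncompetitive inhibition form and all parameter values here are illustrative assumptions, not the values fitted in this study.

```python
# Minimal sketch of a Michaelis-Menten respiration model with CO2 inhibition.
# Uncompetitive inhibition is assumed: accumulating CO2 lowers the apparent
# maximum O2 consumption rate. Parameters vm, km, ki are hypothetical.

def respiration_rate(o2, co2, vm=12.0, km=5.0, ki=8.0):
    """O2 consumption rate (e.g. mL/kg/h) vs. O2 and CO2 concentrations (%)."""
    return vm * o2 / (km + (1.0 + co2 / ki) * o2)

# CO2 accumulating inside the package suppresses O2 consumption:
print(respiration_rate(o2=18.0, co2=0.0))
print(respiration_rate(o2=18.0, co2=10.0))
```

The "clear effect of CO2 on O2 consumption" reported in the abstract corresponds to the inhibition term (1 + CO2/Ki) in the denominator.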
Basic study on relationship between estimated rate constants and noise in FDG kinetic analysis
International Nuclear Information System (INIS)
Kimura, Yuichi; Toyama, Hinako; Senda, Michio.
1996-01-01
For accurate estimation of the rate constants in ¹⁸F-FDG dynamic studies, the shape of the estimation function (Φ) is crucial. In this investigation, the relationship between the noise level in the tissue time activity curve and the shape of the least-squares estimation function, which is the sum of squared errors between a function of the model parameters and the measured data, is calculated for the 3-parameter model of ¹⁸F-FDG. In the first simulation, using an actual plasma time activity curve, the true tissue curve was generated from known sets of rate constants ranging over 0.05 ≤ k₁ ≤ 0.15, 0.1 ≤ k₂ ≤ 0.2 and 0.01 ≤ k₃ ≤ 0.1 in steps of 0.01. This procedure was repeated under various noise levels in the tissue time activity curve, from 1 to 8% of the maximum value of the tissue activity. In the second simulation, plasma and tissue time activity curves from a clinical ¹⁸F-FDG dynamic study were used to calculate Φ. In the noise-free case, because the global minimum is separated from neighboring local minima, it was easy to find the optimum point. However, with increasing noise level, the optimum point was buried among many neighboring local minima, making it difficult to find. The optimum point was found within 20% of the convergence point given by a standard non-linear optimization method. The shape of Φ for the clinical data was similar to that for a noise level of 3 or 5% in the first simulation. Therefore, a direct search within the area extending 20% from the result of the usual non-linear curve fitting procedure is recommended for accurate estimation of the rate constants. (author)
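The objective Φ the study examines can be sketched for the irreversible 3-parameter FDG model (free compartment Cf, phosphorylated compartment Cm). The plasma input function below is a synthetic stand-in for a measured curve, the integration is a simple Euler scheme, and the noise level mimics the abstract's ~3% case; none of this reproduces the study's actual data.

```python
import numpy as np

t = np.linspace(0, 60, 121)                    # min
cp = 8.0 * t * np.exp(-t / 4.0) + 0.4          # hypothetical plasma input

def tissue_curve(k1, k2, k3, t, cp):
    """Integrate dCf/dt = k1*Cp - (k2+k3)*Cf and dCm/dt = k3*Cf (Euler)."""
    dt = t[1] - t[0]
    cf = cm = 0.0
    ct = np.empty_like(t)
    for i, c in enumerate(cp):
        ct[i] = cf + cm
        cf += dt * (k1 * c - (k2 + k3) * cf)
        cm += dt * k3 * cf
    return ct

true = (0.10, 0.15, 0.05)                      # within the ranges scanned above
rng = np.random.default_rng(1)
meas = tissue_curve(*true, t, cp)
meas = meas + rng.normal(0, 0.03 * meas.max(), meas.size)   # ~3% noise level

def phi(k1, k2, k3):
    """Sum-of-squared-error objective whose shape the study examines."""
    return np.sum((tissue_curve(k1, k2, k3, t, cp) - meas) ** 2)

print(f"Phi at true parameters:      {phi(*true):.3f}")
print(f"Phi at perturbed parameters: {phi(0.10, 0.15, 0.09):.3f}")
```

Evaluating Φ on a grid over (k₁, k₂, k₃), as the study does, reveals how added noise flattens the basin around the global minimum and raises neighboring local minima.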
Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Clifton, David A; Beale, Richard; Watkinson, Peter J
2016-04-01
Over 100 algorithms have been proposed to estimate respiratory rate (RR) from the electrocardiogram (ECG) and photoplethysmogram (PPG). As they have never been compared systematically, it is unclear which algorithm performs best. Our primary aim was to determine how closely algorithms agreed with a gold standard RR measure when operating under ideal conditions. Secondary aims were: (i) to compare algorithm performance with impedance pneumography (IP), the clinical standard for continuous respiratory rate measurement in spontaneously breathing patients; (ii) to compare algorithm performance when using the ECG and PPG; and (iii) to provide a toolbox of algorithms and data to allow future researchers to conduct reproducible comparisons of algorithms. Algorithms were divided into three stages: extraction of respiratory signals, estimation of RR, and fusion of estimates. Several interchangeable techniques were implemented for each stage. Algorithms were assembled using all possible combinations of techniques, many of which were novel. After verification on simulated data, algorithms were tested on data from healthy participants. RRs derived from the ECG, PPG and IP were compared to reference RRs obtained using a nasal-oral pressure sensor using the limits of agreement (LOA) technique. In total, 314 algorithms were assessed. Of these, 270 could operate on either the ECG or PPG, and 44 on only the ECG. The best algorithm had 95% LOAs of -4.7 to 4.7 bpm and a bias of 0.0 bpm when using the ECG, and -5.1 to 7.2 bpm and 1.0 bpm when using the PPG. IP had 95% LOAs of -5.6 to 5.2 bpm and a bias of -0.2 bpm. Four algorithms operating on the ECG performed better than IP. All high-performing algorithms consisted of novel combinations of time domain RR estimation and modulation fusion techniques. Algorithms performed better when using the ECG than the PPG. The toolbox of algorithms and the data used in this study are publicly available.
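The limits-of-agreement technique used above is standard Bland-Altman analysis: the bias is the mean of the estimate-minus-reference errors and the 95% LOAs are the bias ± 1.96 standard deviations. The RR values below are synthetic, not the study's data.

```python
import numpy as np

def limits_of_agreement(estimated, reference):
    """Bland-Altman bias and 95% limits of agreement."""
    err = np.asarray(estimated) - np.asarray(reference)
    bias = err.mean()
    half_width = 1.96 * err.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Synthetic example: an unbiased RR algorithm with 2.4 bpm error s.d.
rng = np.random.default_rng(0)
ref = rng.uniform(8, 25, 500)                 # reference RRs, breaths per minute
est = ref + rng.normal(0.0, 2.4, ref.size)    # algorithm-derived RRs

bias, lo, hi = limits_of_agreement(est, ref)
print(f"bias {bias:+.1f} bpm, 95% LOAs [{lo:+.1f}, {hi:+.1f}] bpm")
```

Narrow LOAs around a near-zero bias, as for the best ECG algorithm above (-4.7 to 4.7 bpm), indicate close agreement with the reference.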
Candela-Toha, Ángel; Pardo, María Carmen; Pérez, Teresa; Muriel, Alfonso; Zamora, Javier
2018-04-20
Background and objective: Acute kidney injury (AKI) diagnosis is still based on serum creatinine and diuresis. However, increases in creatinine are typically delayed 48 h or longer after injury. Our aim was to determine the utility of routine postoperative renal function blood tests to predict AKI one or two days in advance in a cohort of cardiac surgery patients. Using a prospective database, we selected a sample of patients who had undergone major cardiac surgery between January 2002 and December 2013. The ability of the parameters to predict AKI was based on Acute Kidney Injury Network serum creatinine criteria. A cohort of 3,962 cases was divided into two groups of similar size, one being an exploratory and the other a validation sample. The exploratory group was used to establish the primary objectives and the validation group to confirm the results. The ability of several kidney function parameters measured in routine postoperative blood tests to predict AKI was assessed with time-dependent ROC curves. The primary endpoint was the time from measurement to AKI diagnosis. AKI developed in 610 (30.8%) and 623 (31.4%) patients in the exploratory and validation samples, respectively. The estimated glomerular filtration rate using the MDRD-4 equation showed the best AKI prediction capacity, with values for the AUCs of the ROC curves between 0.700 and 0.946. We obtained different cut-off values for the estimated glomerular filtration rate depending on the degree of AKI severity and on the time elapsed between surgery and parameter measurement. Results were confirmed in the validation sample. The postoperative estimated glomerular filtration rate using the MDRD-4 equation showed good ability to predict AKI following cardiac surgery one or two days in advance. Copyright © 2018 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U. All rights reserved.
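The MDRD-4 equation referenced above is a standard published formula; the sketch below uses its commonly cited IDMS-traceable coefficients. How the study applied the equation postoperatively (units, creatinine calibration) may differ in detail.

```python
# The 4-variable MDRD equation (IDMS-traceable form, coefficient 175).

def egfr_mdrd4(scr_mg_dl, age_years, female=False, black=False):
    """Estimated GFR in mL/min/1.73 m^2 from serum creatinine (mg/dL)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: a 50-year-old non-black man with serum creatinine 1.0 mg/dL.
print(f"{egfr_mdrd4(1.0, 50):.1f} mL/min/1.73 m^2")
```

A postoperative fall in this estimate below a cut-off (the study derived different cut-offs per AKI severity and timing) flags patients likely to meet creatinine-based AKI criteria one or two days later.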
Dynamic temperature estimation and real time emergency rating of transmission cables
DEFF Research Database (Denmark)
Olsen, R. S.; Holboll, J.; Gudmundsdottir, Unnur Stella
2012-01-01
[…] enables real-time emergency ratings, such that the transmission system operator (TSO) can make well-founded decisions during faults. This includes the capability of producing high-resolution loadability vs. time schedules within a few minutes, such that the TSO can safely control the system. […] It is found that the calculated temperature estimations are fairly accurate, within 1.5 °C of the finite element method (FEM) simulation to which they are compared, both when looking at the temperature profile (time dependent) and the temperature distribution (geometry dependent). The methodology moreover […]
Czech Academy of Sciences Publication Activity Database
Šmíd, Martin
2009-01-01
Roč. 165, č. 1 (2009), s. 29-45 ISSN 0254-5330 R&D Projects: GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : multistage stochastic programming problems * approximation * discretization * Monte Carlo Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.961, year: 2009 http://library.utia.cas.cz/separaty/2008/E/smid-the expected loss in the discretization of multistage stochastic programming problems - estimation and convergence rate.pdf
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Directory of Open Access Journals (Sweden)
J. Szilagyi
2009-05-01
Under simplifying conditions, catchment-scale vapor pressure at the drying land surface can be calculated as a function of its watershed-representative temperature ⟨T_s⟩ by the wet-surface equation (WSE), similar to the wet-bulb equation in meteorology for calculating the dry-bulb thermometer vapor pressure, of the Complementary Relationship of evaporation. The corresponding watershed ET rate, […]
International Nuclear Information System (INIS)
Klavano, G.G.; Christian, R.G.
1992-01-01
A survey was conducted after the Lodgepole sour gas well blowout of October 1982 to determine if the incident changed the number and type of bovine abortions and abnormal bovine fetuses submitted to the diagnostic laboratory from the blowout area. The records of the total number of bovine fetuses submitted were compared among three areas to determine if there was a significant difference between the areas closer to the well site and the larger total area. No changes or trends could be ascribed to the well blowout. 2 refs., 5 tabs
Wang, Qiang
2015-07-22
The blow-out limits of nonpremixed turbulent jet flames in cross flows were studied, especially concerning the effect of ambient pressure, by conducting experiments at atmospheric and sub-atmospheric pressures. The combined effects of air flow and pressure were investigated in a series of experiments conducted in a specially built wind tunnel in Lhasa, a city on the Tibetan plateau where the altitude is 3650 m and the atmospheric pressure is naturally low (64 kPa). These results were compared with results obtained from a wind tunnel at standard atmospheric pressure (100 kPa) in Hefei city (altitude 50 m). The fuel nozzles used in the experiments ranged from 3 to 8 mm in diameter and propane was used as the fuel. It was found that the blow-out limit of the air speed of the cross flow first increased (“cross flow dominant” regime) and then decreased (“fuel jet dominant” regime) as the fuel jet velocity increased at both pressures; however, the blow-out limit of the air speed of the cross flow was much lower at sub-atmospheric pressure than at standard atmospheric pressure, and the domain of the blow-out limit curve (in a plot of the air speed of the cross flow versus the fuel jet velocity) shrank as the pressure decreased. A theoretical model was developed to characterize the blow-out limit of nonpremixed jet flames in a cross flow based on a Damköhler number, defined as the ratio between the mixing time and the characteristic reaction time. A satisfactory correlation was obtained at relatively strong cross flow conditions (“cross flow dominant” regime) that included the effects of the air speed of the cross flow, the fuel jet velocity, the nozzle diameter and the pressure.
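A Damköhler-number blow-out criterion of the kind the paper develops can be sketched as follows. The mixing-time scaling, the pressure exponent, and every constant below are assumptions chosen only to illustrate the structure (blow-out when Da falls below a critical value, with lower pressure lengthening the reaction time); they are not the paper's correlation.

```python
# Illustrative Damkohler-number blow-out criterion: blow-out occurs when the
# mixing time available to the flame falls below a characteristic reaction
# time (Da = t_mix / t_chem < Da_crit). All scalings and constants are
# hypothetical stand-ins for the paper's fitted correlation.

def damkohler(u_cross, u_jet, d_nozzle, pressure, p_ref=101.3e3,
              tau_chem_ref=5e-4, n=1.0):
    """Da = t_mix / t_chem, with t_mix ~ d/u and t_chem ~ tau_ref*(p_ref/p)^n."""
    t_mix = d_nozzle / (u_cross + 0.1 * u_jet)   # crude cross-flow-dominant mixing time
    t_chem = tau_chem_ref * (p_ref / pressure) ** n
    return t_mix / t_chem

DA_CRIT = 1.0

def blows_out(u_cross, u_jet, d_nozzle, pressure):
    return damkohler(u_cross, u_jet, d_nozzle, pressure) < DA_CRIT

# Lower ambient pressure lengthens the reaction time, so the same cross flow
# is closer to blow-out at 64 kPa (Lhasa) than at 100 kPa (Hefei):
print(blows_out(6.0, 10.0, 0.005, 100e3))
print(blows_out(6.0, 10.0, 0.005, 64e3))
```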
A new estimate of the impact of OSHA inspections on manufacturing injury rates, 1998-2005.
Haviland, Amelia M; Burns, Rachel M; Gray, Wayne B; Ruder, Teague; Mendeloff, John
2012-11-01
A prior study indicated that the effect of OSHA inspections on lost workday injuries had declined from 1979 through 1998. This study provides an updated estimate for 1998-2005. Injury data from the Pennsylvania workers' compensation program were linked with employment data from unemployment compensation records to calculate lost-time rates for single-establishment manufacturing firms with more than 10 employees. These rates were linked to OSHA inspection findings. The RAND Human Subjects Protection Committee determined that this study was exempt from review. Inspections with penalties reduced injuries by an average of 19-24% annually in the 2 years following the inspection. These effects were not found for workplaces with fewer than 20 or more than 250 employees or for inspections without penalties. These findings should be generalizable to the 29 states where federal OSHA directly enforces standards. They suggest that the impact of inspections has increased from the 1990s. Copyright © 2012 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Ohmori, Naoki; Ashida, Kenji; Fujita, Osamu
2003-01-01
Because the glandular content rate is an important factor in evaluating breast cancer detection and average glandular dose, it is important in mammography research to estimate and analyze this rate. The purpose of this study was to obtain a formula for statistical estimation of the glandular content rate, to clarify statistically the influence of age group and compressed breast thickness (CBT) on estimating the glandular content rate, and to show statistically the general relation between glandular content rate and the factors of age and CBT. The subjects were 740 Japanese women aged 20-91 years (mean±SD: 48.3±12.8 years) who had undergone mammography. In our study, the glandular content rate was statistically estimated from age group, mAs-value, and CBT when subjects underwent mammography, from a phantom simulation, and from MR images of the breast. In addition, multivariate analysis was carried out to statistically examine the influence of age group and CBT on glandular content rate. The mean glandular content rate as estimated by age group was as follows: 35.6% for those in their 20s, 33.4% in the 30s, 27.5% in the 40s, 23.8% in the 50s, and 21.8% in those 60 and over. The rate for the subjects as a whole was 27.1%. This study indicated that overestimation occurred if the estimated value of the glandular content rate was not corrected in the 3D-measurement by MRI. In addition, this study showed that the statistical influence on glandular content rate was significantly larger for CBT than for age. (author)
Estimation of daily flow rate of photovoltaic water pumping systems using solar radiation data
Benghanem, M.; Daffallah, K. O.; Almohammedi, A.
2018-03-01
This paper presents a simple model which allows us to contribute in the studies of photovoltaic (PV) water pumping systems sizing. The nonlinear relation between water flow rate and solar power has been obtained experimentally in a first step and then used for performance prediction. The model proposed enables us to simulate the water flow rate using solar radiation data for different heads (50 m, 60 m, 70 m and 80 m) and for 8S × 3P PV array configuration. The experimental data are obtained with our pumping test facility located at Madinah site (Saudi Arabia). The performances are calculated using the measured solar radiation data of different locations in Saudi Arabia. Knowing the solar radiation data, we have estimated with a good precision the water flow rate Q in five locations (Al-Jouf, Solar Village, AL-Ahsa, Madinah and Gizan) in Saudi Arabia. The flow rate Q increases with the increase of pump power for different heads following the nonlinear model proposed.
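The head-dependent nonlinear relation between pump power and flow rate described above can be sketched with a threshold power model. The functional form (flow begins above a head-dependent power threshold, then rises nonlinearly) and all parameter values are illustrative assumptions, not the paper's fitted model.

```python
# Illustrative nonlinear flow-rate model for a PV pumping system:
# Q = a * (P - P0)^b above a head-dependent threshold power P0, else 0.
# All parameters below are hypothetical, chosen only to reproduce the
# qualitative behaviour reported in the abstract.

PARAMS = {            # head (m) -> (a, threshold P0 in W, exponent b)
    50: (0.050, 150.0, 0.60),
    60: (0.045, 180.0, 0.60),
    70: (0.040, 210.0, 0.60),
    80: (0.035, 240.0, 0.60),
}

def flow_rate(pump_power_w, head_m):
    """Water flow rate Q (m^3/h) from pump power for a given head."""
    a, p0, b = PARAMS[head_m]
    if pump_power_w <= p0:
        return 0.0
    return a * (pump_power_w - p0) ** b

# Q increases with power and decreases with head, as the abstract describes:
print(f"Q(800 W, 50 m) = {flow_rate(800, 50):.2f} m^3/h")
print(f"Q(800 W, 80 m) = {flow_rate(800, 80):.2f} m^3/h")
```

Driving such a model with hourly solar radiation data (converted to PV power) yields the daily flow-rate estimates the paper computes for each site.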
Modified Exponential (MOE) Models: Statistical Models for Risk Estimation of Low Dose Rate Radiation
International Nuclear Information System (INIS)
Ogata, H.; Furukawa, C.; Kawakami, Y.; Magae, J.
2004-01-01
Simultaneous inclusion of dose and dose-rate is required to evaluate the risk of long-term irradiation at low dose-rates, since biological responses to radiation are complex processes that depend on both irradiation time and total dose. Consequently, it is necessary to consider a model including cumulative dose, dose-rate and irradiation time to estimate the quantitative dose-response relationship for biological responses to radiation. In this study, we measured micronucleus formation and [³H]thymidine uptake in U2OS, a human osteosarcoma cell line, as indicators of biological response to gamma radiation. Cells were exposed to gamma rays in an irradiation room containing a 50,000 Ci ⁶⁰Co source. After irradiation, they were cultured for 24 h in the presence of cytochalasin B to block cytokinesis, and the cytoplasm and nucleus were stained with DAPI and propidium iodide. The number of binuclear cells bearing a micronucleus was counted under a fluorescence microscope. For proliferation inhibition, cells were cultured for 48 h after the irradiation and [³H]thymidine was pulsed for 4 h before harvesting. We statistically analyzed the data for quantitative evaluation of radiation risk at low dose/dose-rate. (Author)
Real-time data for estimating a forward-looking interest rate rule of the ECB.
Bletzinger, Tilman; Wieland, Volker
2017-12-01
The purpose of the data presented in this article is to use it in ex post estimations of interest rate decisions by the European Central Bank (ECB), as is done by Bletzinger and Wieland (2017) [1]. The data is of quarterly frequency from 1999 Q1 until 2013 Q2 and consists of the ECB's policy rate, inflation rate, real output growth and potential output growth in the euro area. To account for forward-looking decision making in the interest rate rule, the data consists of expectations about future inflation and output dynamics. While potential output is constructed based on data from the European Commission's annual macro-economic database, inflation and real output growth are taken from two different sources, both provided by the ECB: the Survey of Professional Forecasters and projections made by ECB staff. Careful attention was given to the publication date of the collected data to ensure a real-time dataset only consisting of information which was available to the decision makers at the time of the decision.
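A forward-looking interest-rate rule of the kind such data is used to estimate can be sketched as a partial-adjustment rule responding to expected inflation and expected output dynamics. The coefficients below are illustrative, not the Bletzinger-Wieland estimates.

```python
# Sketch of a forward-looking policy rule with interest-rate smoothing:
# i_t = rho*i_{t-1} + (1-rho)*[r* + pi* + a_pi*(E[pi] - pi*) + a_y*E[gap]].
# rho, r_star, pi_star, a_pi and a_y are hypothetical values.

def policy_rate(prev_rate, expected_inflation, expected_growth_gap,
                rho=0.85, r_star=1.0, pi_star=1.9, a_pi=1.5, a_y=0.5):
    """Policy rate in percent, given expectations (also in percent)."""
    target = (r_star + pi_star
              + a_pi * (expected_inflation - pi_star)
              + a_y * expected_growth_gap)
    return rho * prev_rate + (1 - rho) * target

# Expected inflation above target pushes the rate up only gradually:
print(f"{policy_rate(1.0, 2.4, 0.0):.2f}%")
```

Estimating such a rule ex post means regressing the observed policy rate on the real-time expectation series described above, which is why the dataset's publication-date discipline matters.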
Sedimentation rate estimates in Sorsogon Bay, Philippines using the ²¹⁰Pb method
International Nuclear Information System (INIS)
Madrid, Jordan F.; Sta. Maria, Efren J.; Olivares, Ryan U.; Aniago, Ryan Joseph; Asa Anie Day DC; Dayaon, Jennyvi P.; Bulos, Adelina DM; Sombrito, Elvira Z.
2011-01-01
Sorsogon Bay has experienced a long history of recurring harmful algal blooms over the past few years. In an attempt to establish a chronology of events in the sediment layer, the lead-210 (²¹⁰Pb) dating method has been utilized to estimate sedimentation rates in three selected areas along the bay. Based on the unsupported ²¹⁰Pb data and by applying the Constant Initial Concentration (CIC) model, the calculated sedimentation rates were 0.8, 1.3 and 1.8 cm yr⁻¹ for sediment cores collected near the coastal areas of Castilla (SO-01), Sorsogon City (SO-07) and Cadacan River (SO-03), respectively. High sedimentation rates were measured in sediment cores believed to be affected by frequent volcanic ash releases and in areas near human settlement combined with intensive farming and agricultural activities. The collected sediments exhibited non-uniform down-core values of dry bulk density and moisture content. This variation may indicate the general quality and composition of the sediment samples, i.e., the amount of organic matter and the grain size. The calculated sedimentation rates provide an overview of the sedimentation processes and reflect the land use pattern around the bay, which may help in understanding the history and distribution of material and nutrient inputs relative to the occurrence of harmful algal blooms in the sediment columns. (author)
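Under the CIC model, unsupported ²¹⁰Pb activity decays with depth z as A(z) = A₀·exp(-λz/S), so the sedimentation rate follows from the slope of ln A versus depth: S = -λ/slope. The core data below are synthetic, generated to match the Sorsogon City rate of 1.3 cm yr⁻¹.

```python
import numpy as np

# 210Pb decay constant, 1/yr (half-life ~ 22.3 yr).
PB210_LAMBDA = np.log(2) / 22.3

def cic_sedimentation_rate(depth_cm, unsupported_pb210):
    """CIC model: fit ln(activity) vs. depth, return S = -lambda/slope (cm/yr)."""
    slope, _ = np.polyfit(depth_cm, np.log(unsupported_pb210), 1)
    return -PB210_LAMBDA / slope

# Synthetic core with S = 1.3 cm/yr plus multiplicative measurement scatter:
rng = np.random.default_rng(0)
z = np.arange(1, 31, 2.0)                                    # depth, cm
a = 50.0 * np.exp(-PB210_LAMBDA * z / 1.3) * rng.lognormal(0, 0.05, z.size)

print(f"estimated S = {cic_sedimentation_rate(z, a):.2f} cm/yr")
```

The fit recovers the input rate to within the scatter, which is the same slope-of-log-activity calculation underlying the 0.8, 1.3 and 1.8 cm yr⁻¹ estimates reported above.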
International Nuclear Information System (INIS)
Dhawan, V.; Moeller, J.R.; Strother, S.C.; Evans, A.C.; Rottenberg, D.A.
1989-01-01
Several publications have discussed the estimation and physiologic significance of regional [¹⁸F]fluorodeoxyglucose (FDG) rate constants and metabolic rates. Most of these studies analyzed dynamic data collected over 45-60 min; three rate constants (k1-k3) and blood volume (Vb) were estimated and the regional cerebral metabolic rate for glucose (rCMRGlu) was subsequently derived using the measured blood glucose value and a regionally invariant value of the lumped constant (LC). The dephosphorylation rate constant (k4) was either neglected, or a fixed value was used in the estimation procedure to obtain the remaining parameters. To compare the rate constants obtained by different authors using different values of k4 is impossible without knowledge of the effect of selecting different fixed values of k4 (including zero) on the estimated rate constants and rCMRGlu. Based on our analysis of FDG/PET data from nine normal volunteer subjects, we conclude that inclusion of a fixed value for k4, in spite of a scaling effect on the absolute values of model parameters, has no effect on the coefficient of variation (CV) of within- and between-subject parameter estimates and glucose metabolic rates
Estimation of Leak Flow Rate during Post-LOCA Using Cascaded Fuzzy Neural Networks
Energy Technology Data Exchange (ETDEWEB)
Kim, Dong Yeong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Na, Man Gyun [Chosun University, Gwangju (Korea, Republic of)
2016-10-15
Important parameters of loss of coolant accidents (LOCAs), such as the break position, size, and leak flow rate, provide operators with essential information for recovering the cooling capability of the nuclear reactor core, for preventing the reactor core from melting down, and for managing severe accidents effectively. The leak flow rate depends on the break size, the differential pressure, the temperature, and so on (where differential pressure means the difference between internal and external reactor vessel pressure). The leak flow rate is strongly dependent on the break size and the differential pressure, but the break size is not measured and the integrity of pressure sensors is not assured in severe circumstances. In this paper, a cascaded fuzzy neural network (CFNN) model is proposed to estimate the leak flow rate out of the break, which has a direct impact on important times (the time at which the core exit temperature exceeds 1200 °F, the core uncovery time, the reactor vessel failure time, etc.). The CFNN is a data-based model and requires data for its development and verification. Because few actual severe accident data exist, it is essential to obtain the required data using numerical simulations. In this study, a CFNN model was developed to predict the leak flow rate before proceeding to severe LOCAs. The simulations showed that the developed CFNN model accurately predicted the leak flow rate with an error of less than 0.5%. The CFNN model is much better than the FNN model under the same conditions, such as the same fuzzy rules: in the comparison, the RMS errors of the CFNN model were reduced by approximately 82-97% relative to those of the FNN model.
A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets.
Chen, Jie-Hao; Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong
2017-01-01
In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationship between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the advantage of simplicity of traditional Logistic Regression models. Based on a training dataset with the information of over 40 million mobile advertisements collected over a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and the Support Vector Regression (SVR) model by 5.80%.
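The logistic-regression head that the model combines with the DBN can be sketched on synthetic click data. The features here are random stand-ins for the DBN-extracted representation, and none of the numbers come from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ad features; in the paper these would come from the DBN.
X = rng.normal(size=(1000, 5))
true_w = np.array([0.8, -0.5, 0.3, 0.0, 1.2])
p_true = 1.0 / (1.0 + np.exp(-(X @ true_w - 1.0)))
clicks = rng.random(1000) < p_true  # simulated click labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic-regression head trained by plain gradient descent.
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - clicks) / len(X))
    b -= 0.5 * float(np.mean(p - clicks))

pred_ctr = sigmoid(X @ w + b)
print(f"mean predicted CTR: {pred_ctr.mean():.3f}, observed: {clicks.mean():.3f}")
```

With an intercept term, a converged logistic model matches the observed mean CTR, which is one reason an LR head is a convenient calibration layer on top of learned features.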
A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets
Directory of Open Access Journals (Sweden)
Jie-Hao Chen
2017-01-01
In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationship between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the advantage of simplicity of traditional Logistic Regression models. Based on a training dataset with the information of over 40 million mobile advertisements collected over a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and the Support Vector Regression (SVR) model by 5.80%.
A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets
Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong
2017-01-01
In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationship between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the advantage of simplicity of traditional Logistic Regression models. Based on a training dataset with the information of over 40 million mobile advertisements collected over a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and the Support Vector Regression (SVR) model by 5.80%. PMID:29209363
International Nuclear Information System (INIS)
Killough, G.G.; Rope, S.K.; Shleien, B.; Voilleque, P.G.
1999-01-01
A dynamic mass-balance model has been calibrated by a nonlinear parameter estimation method, using time-series measurements of uranium in surface soil near the former Feed Materials Production Center (FMPC) near Fernald, Ohio, USA. The time-series data, taken at six locations near the site boundary since 1971, show a statistically significant downtrend of above-background uranium concentration in surface soil for all six locations. The dynamic model is based on first-order kinetics in a surface-soil compartment 10 cm in depth. Median estimates of weathering rate coefficients for insoluble uranium in this soil compartment range from about 0.065-0.14 yr⁻¹, corresponding to mean transit times of about 7-15 years, depending on the location sampled. The model, calibrated by methods similar to those discussed in this paper, has been used to simulate surface soil kinetics of uranium for a dose reconstruction study. It was also applied, along with other data, to make confirmatory estimates of airborne releases of uranium from the FMPC between 1951 and 1988. Two soil-column models (one diffusive and one advective, the latter similar to a catenary first-order kinetic box model) were calibrated to profile data taken at one of the six locations in 1976. The temporal predictions of the advective model approximate the trend of the time series data for that location. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
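For first-order kinetics the mean transit time is simply the reciprocal of the rate coefficient, which reproduces the 7-15 year range quoted above:

```python
import numpy as np

# First-order surface-soil compartment: dC/dt = -k * C.
# The mean transit time of a first-order process is 1/k.
for k in (0.065, 0.14):  # yr^-1, the range quoted in the abstract
    mean_transit = 1.0 / k
    half_life = np.log(2) / k
    print(f"k = {k:.3f}/yr -> mean transit {mean_transit:.1f} yr, "
          f"half-life {half_life:.1f} yr")
```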
Gupta, Puneet; Bhowmick, Brojeshwar; Pal, Arpan
2017-07-01
Camera-equipped devices are ubiquitous and proliferating in day-to-day life. Accurate heart rate (HR) estimation from face videos acquired with low-cost cameras in a non-contact manner can be used in many real-world scenarios and hence requires rigorous exploration. This paper presents an accurate and near real-time HR estimation system using these face videos. It is based on the phenomenon that the color and motion variations in the face video are closely related to the heart beat. The variations also contain noise due to facial expressions, respiration, eye blinking and environmental factors, which are handled by the proposed system. Neither Eulerian nor Lagrangian temporal signals can provide accurate HR in all cases. The cases where Eulerian temporal signals perform spuriously are determined using a novel poorness measure, and then both the Eulerian and Lagrangian temporal signals are employed for better HR estimation. Such a fusion is referred to as serial fusion. Experimental results reveal that the error of the proposed algorithm is 1.8±3.6, significantly lower than that of existing well-known systems.
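A minimal frequency-domain HR estimate from a face-derived temporal signal can be sketched as follows. The signal here is simulated; a real pipeline would extract it from the face region of the video, and the band limits are common assumptions rather than the paper's specific design:

```python
import numpy as np

fs = 30.0                     # camera frame rate (Hz)
t = np.arange(0, 20, 1 / fs)  # 20 s of video
hr_hz = 1.2                   # 72 beats per minute

# Simulated pulse signal plus noise standing in for the color/motion
# variations a real system would extract from the face video.
signal = (np.sin(2 * np.pi * hr_hz * t)
          + 0.5 * np.random.default_rng(2).normal(size=t.size))

# Estimate HR as the dominant spectral peak in a plausible band (0.7-4 Hz).
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
band = (freqs >= 0.7) & (freqs <= 4.0)
hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {hr_bpm:.0f} bpm")
```

The 20 s window gives a frequency resolution of 0.05 Hz, i.e. 3 bpm, which is one reason longer windows (or interpolation around the peak) are used in practice.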
Cardiel, N.; Elbaz, D.; Schiavon, R. P.; Willmer, C. N. A.; Koo, D. C.; Phillips, A. C.; Gallego, J.
2003-02-01
We use a sample of seven starburst galaxies at intermediate redshifts (z~0.4 and 0.8), with observations ranging from the observed ultraviolet to 1.4 GHz, to compare the star formation rate (SFR) estimators used in the different wavelength regimes. We find that extinction-corrected Hα underestimates the SFR, and the degree of this underestimation increases with the infrared luminosity of the galaxies. Galaxies with very different levels of dust extinction, as measured by SFR(IR)/SFR(Hα, uncorrected for extinction), present a similar attenuation A[Hα], as if the Balmer lines probed a different region of the galaxy than the one responsible for the bulk of the IR luminosity for large SFRs. In addition, SFR estimates derived from [O II] λ3727 match very well those inferred from Hα after applying the metallicity correction derived from local galaxies. SFRs estimated from the UV luminosities show a dichotomic behavior, similar to that previously reported by other authors in galaxies at z [text truncated]. [...] financial support of the W. M. Keck Foundation. Based in part on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. Based in part on observations with the Infrared Space Observatory (ISO), an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, Netherlands, and United Kingdom) with the participation of ISAS and NASA.
Directory of Open Access Journals (Sweden)
Andrew Lunn
2016-07-01
Creatinine, although widely used as a biomarker of renal function, has long been known to be an insensitive marker of renal impairment. Patients with reduced renal function can have a creatinine level within the normal range, with a rapid rise once renal function is significantly reduced. The correlation between height, the reciprocal of creatinine, and measured glomerular filtration rate (GFR) in children has been described since 1976. It has been used to derive a simple formula for estimated glomerular filtration rate (eGFR) that can be used at the bedside as a more sensitive method of identifying children with renal impairment. Formulae based on this association, with modifications over time as creatinine assay methods have changed, are still widely used clinically at the bedside and in research studies to assess the degree of renal impairment in children. Adult practice in many countries has moved to computer-generated results that report eGFR alongside creatinine results, using more complex, but potentially more accurate, estimates of GFR that are independent of height. This permits early identification of patients with chronic kidney disease. This review assesses the feasibility of automated reporting of eGFR in children and the advantages and disadvantages of this approach.
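One widely used bedside height/creatinine formula of the kind described above is the updated ("bedside") Schwartz equation. The sketch below is illustrative only, not clinical guidance, and the review does not necessarily endorse this specific constant:

```python
def egfr_bedside_schwartz(height_cm: float, creatinine_mg_dl: float) -> float:
    """Bedside Schwartz estimate of GFR in children (mL/min/1.73 m^2).

    eGFR = 0.413 * height (cm) / serum creatinine (mg/dL).
    Illustrative only; not for clinical use.
    """
    return 0.413 * height_cm / creatinine_mg_dl

# A 120 cm child with a serum creatinine of 0.5 mg/dL:
print(f"{egfr_bedside_schwartz(120, 0.5):.0f} mL/min/1.73m2")
```

The formula's dependence on height is exactly what makes fully automated laboratory reporting harder in children, since the laboratory typically does not know the child's current height.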
Estimating Reaction Rate Coefficients Within a Travel-Time Modeling Framework
Energy Technology Data Exchange (ETDEWEB)
Gong, R. [Georgia Institute of Technology]; Lu, C. [Georgia Institute of Technology]; Luo, Jian [Georgia Institute of Technology]; Wu, Wei-min [Stanford University]; Cheng, H. [Stanford University]; Criddle, Craig [Stanford University]; Kitanidis, Peter K. [Stanford University]; Gu, Baohua [ORNL]; Watson, David B. [ORNL]; Jardine, Philip M. [ORNL]; Brooks, Scott C. [ORNL]
2011-03-01
A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics.
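For the first-order case, each water parcel arriving at travel time tau is attenuated by exp(-k*tau), so the rate coefficient can be recovered from the two BTCs by a log-ratio regression. The sketch below uses synthetic curves, not the field data:

```python
import numpy as np

# Travel times and a synthetic conservative-tracer BTC, which serves as
# the travel-time distribution; units are arbitrary.
t = np.linspace(0.1, 20, 200)
conservative = np.exp(-((t - 5.0) ** 2) / 4.0)

k_true = 0.25                                  # first-order rate coefficient
reactive = conservative * np.exp(-k_true * t)  # reactive-tracer BTC

# For first-order kinetics, ln(reactive/conservative) = -k * t, so k is
# the (negated) slope of a linear fit to the log-ratio.
mask = conservative > 1e-6
log_ratio = np.log(reactive[mask] / conservative[mask])
k_fit = -np.polyfit(t[mask], log_ratio, 1)[0]
print(f"fitted rate coefficient: {k_fit:.3f}")
```

Zero-order, nth-order, and Michaelis-Menten kinetics need the full integration over the travel-time distribution described in the abstract rather than this closed-form shortcut.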
Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates
Directory of Open Access Journals (Sweden)
Zhenyu Mei
2014-10-01
Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors have two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to “purify” multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high-frequency noise signals.
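A minimal version of a dynamically varying validity window might look like the following. The acceptance rule and constants are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def filter_travel_times(times, threshold=0.5, base_window=3):
    """Keep travel-time records near a running median of recent valid data.

    The validity window grows with the number of preceding records that
    were rejected, loosely following the adaptive-window idea; details
    here are illustrative, not the paper's procedure.
    """
    valid, misses = [], 0
    for x in times:
        window = base_window + misses
        recent = valid[-window:]
        if not recent or abs(x - np.median(recent)) <= threshold * np.median(recent):
            valid.append(x)
            misses = 0
        else:
            misses += 1
    return valid

# Bicycle link travel times in minutes, with two outliers (e.g. riders
# who stopped en route) that the filter should reject.
raw = [4.1, 4.3, 12.0, 4.0, 4.4, 15.5, 4.2]
print(filter_travel_times(raw))
```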
Gamma exposure rate estimation in irradiation facilities of nuclear research reactors
International Nuclear Information System (INIS)
Daoud, Adrian
2009-01-01
There are experimental situations in the nuclear field in which dose estimations due to energy-dependent radiation fields are required. Nuclear research reactors provide such fields under normal operation or due to radioactive disintegration of fission products and activation of structural materials. In such situations, it is necessary to know the gamma radiation exposure rate to which the different materials under experimentation are subject. Delayed-reading detectors are usually used for this purpose. Direct evaluation using portable monitors is not always possible, because in some facilities entering with such devices is often impracticable and also unsafe. Besides, these devices only provide information about the place where the measurement was performed, not about the temporal and spatial fluctuations the radiation fields may have. In this work a direct evaluation method was developed for the in-situ gamma exposure rate in the irradiation facilities of the RA-1 reactor. The method is also applicable in any similar installation, and may be complemented by delayed evaluations without difficulty. On the other hand, it is well known that the residual effect of radiation modifies some properties of the organic materials used in reactors, such as density, colour, viscosity, and oxidation level, among others. In such cases, a correct dosimetric evaluation enables in-service estimation of how long a material retains its properties. This evaluation is useful, for instance, when applied to lubricating oils for the primary circuit pumps in nuclear power plants, thus minimizing waste generation. In this work the elements required to estimate the in-situ time- and space-integrated dose are also established for a gamma-irradiated sample in an irradiation channel of a nuclear facility with zero neutron flux. (author)
Estimation of illitization rate of smectite from the thermal history of Murakami deposit, Japan
International Nuclear Information System (INIS)
Kamei, G.; Arai, T.; Yusa, Y.; Sasaki, N.; Sakuramoto, Y.
1990-01-01
The research on illitization of smectite in the natural environment affords information on the long-term durability of bentonite, which is the candidate buffer material for high-level radioactive waste disposal facilities. The Murakami bentonite deposit in central Japan, where bentonite and a rhyolitic intrusive rock are distributed, was surveyed, and the lateral variation from smectite to illite in the aureole of the rhyolite was studied. The radiometric ages of some minerals from the intrusive rock and the clay deposit were determined. Comparison of the mineral ages with the closure temperatures estimated for the various isotopic systems allowed the thermal history of the area to be reconstructed. The age of the intrusion was 7.1 ± 0.5 Ma, and the cooling rate of the intrusive rock was estimated to be approximately 45°C/Ma. Sedimentation ages of the clay bed were mostly within the range from 18 to 14 Ma. However, the fission-track age of zircon in the clay containing illite/smectite mixed layers was 6.4 ± 0.4 Ma, close to that of the intrusion. The latter value can be explained as the result of annealing of fission tracks in the zircon. The presence of annealing phenomena and the estimated cooling rate lead to the conclusion that illitization occurred over a period of at least 3.4 Ma under temperatures ranging from above 240 ± 50°C down to 105°C. Illite-smectite mixed layers formed from smectite in the process; the proportion of illite was about 40%. An activation energy of approximately 29 kcal/mol was calculated for the illitization.
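The quoted activation energy implies a strong temperature dependence of the illitization rate via the Arrhenius law. A quick, purely illustrative sketch of the rate ratio across the inferred temperature range:

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal/(mol*K)
Ea = 29.0     # activation energy quoted in the abstract, kcal/mol

def relative_rate(t_hot_c: float, t_cold_c: float) -> float:
    """Arrhenius ratio of reaction rates at two temperatures (Celsius)."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return np.exp(-Ea / (R * t_hot)) / np.exp(-Ea / (R * t_cold))

# Rate at the hot end of the inferred range (240 C) vs the cool end (105 C):
print(f"rate ratio 240C/105C: {relative_rate(240, 105):.0f}x")
```

The ratio is on the order of 10^4, which is why most of the illite conversion would have occurred early in the cooling history, while the rock was still near its peak temperature.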
A practical way to estimate retail tobacco sales violation rates more accurately.
Levinson, Arnold H; Patnaik, Jennifer L
2013-11-01
U.S. states annually estimate retailer propensity to sell adolescents cigarettes, which is a violation of law, by staging a single purchase attempt among a random sample of tobacco businesses. The accuracy of single-visit estimates is unknown. We examined this question using a novel test-retest protocol. Supervised minors attempted to purchase cigarettes at all retail tobacco businesses located in 3 Colorado counties. The attempts observed federal standards: minors were aged 15-16 years, were nonsmokers, and were free of visible tattoos and piercings; they were allowed to enter stores alone or in pairs, to purchase a small item while asking for cigarettes, and to show or not show genuine identification (ID, e.g., driver's license). Unlike under federal standards, stores received a second purchase attempt within a few days unless minors were firmly told not to return. Separate violation rates were calculated for first visits, second visits, and either visit. Eleven minors attempted to purchase cigarettes 1,079 times from 671 retail businesses. One sixth of first visits (16.8%) resulted in a violation; the rate was similar for second visits (15.7%). Considering either visit, 25.3% of businesses failed the test. Factors predictive of violation were whether clerks asked for ID, whether clerks closely examined IDs, and whether minors included snacks or soft drinks in cigarette purchase attempts. A test-retest protocol for estimating underage cigarette sales detected half again as many businesses in violation as the federally approved one-test protocol. Federal policy makers should consider using the test-retest protocol to increase accuracy and awareness of widespread adolescent access to cigarettes through retail businesses.
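A quick check of the reported rates: if the two visits had been independent, the either-visit rate would be somewhat higher than the observed 25.3%, consistent with some stores failing repeatedly:

```python
# Violation rates reported in the abstract.
first_visit = 0.168
second_visit = 0.157
either_visit = 0.253

# Expected either-visit rate if the two visits were independent:
# P(fail at least once) = 1 - P(pass both).
expected_independent = 1 - (1 - first_visit) * (1 - second_visit)
print(f"observed either-visit rate: {either_visit:.1%}")
print(f"expected under independence: {expected_independent:.1%}")
```

The observed rate falls below the independence prediction of about 29.9%, suggesting a positive correlation between visits, i.e. a subset of stores is persistently more likely to sell.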
Estimates of ground-water recharge rates for two small basins in central Nevada
International Nuclear Information System (INIS)
Lichty, R.W.; McKinley, P.W.
1995-01-01
Estimates of ground-water recharge rates developed from hydrologic modeling studies are presented for 3-Springs and East Stewart basins, two small basins (analog sites) located in central Nevada. The analog-site studies were conducted to aid in estimating recharge to the paleohydrologic regime associated with ground water in the vicinity of Yucca Mountain under wetter climatic conditions. The two analog sites are located to the north of, and at higher elevations than, Yucca Mountain, and the prevailing (current) climatic conditions at these sites are thought to be representative of the possible range of paleoclimatic conditions in the general area of Yucca Mountain during the Quaternary. Two independent modeling approaches were applied at each of the analog sites, using observed hydrologic data on precipitation, temperature, solar radiation, stream discharge, and chloride-ion water chemistry for a 6-year study period (October 1986 through September 1992). Both models quantify the hydrologic water-balance equation and yield estimates of ground-water recharge, given appropriate input data. Results of the modeling approaches support the conclusion that reasonable estimates of average-annual recharge to ground water range from about 1 to 3 centimeters per year for 3-Springs basin (the drier site), and from about 30 to 32 centimeters per year for East Stewart basin (the wetter site). The most reliable results are those derived from a reduced form of the chloride-ion model, because they reflect integrated, basinwide processes in terms of only three measured variables: precipitation amount, precipitation chemistry, and streamflow chemistry.
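A reduced chloride mass-balance calculation uses only the three measured variables named above. The numbers below are hypothetical, chosen only to land in the reported range for the drier site:

```python
def chloride_recharge(precip_cm_yr: float,
                      cl_precip_mg_l: float,
                      cl_water_mg_l: float) -> float:
    """Chloride mass-balance recharge: R = P * Cl_precip / Cl_water.

    Assumes all chloride arrives in precipitation and is concentrated by
    evapotranspiration; a simplified form of a reduced chloride-ion model.
    """
    return precip_cm_yr * cl_precip_mg_l / cl_water_mg_l

# Hypothetical values for illustration (not the study's measurements):
# 30 cm/yr precipitation at 0.4 mg/L chloride, streamflow at 6.0 mg/L.
print(f"{chloride_recharge(30.0, 0.4, 6.0):.1f} cm/yr")
```

The chloride enrichment factor between precipitation and streamflow directly sets the recharge fraction, which is why the method needs so few inputs.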
Efficacy of using data from angler-caught Burbot to estimate population rate functions
Brauer, Tucker A.; Rhea, Darren T.; Walrath, John D.; Quist, Michael C.
2018-01-01
The effective management of a fish population depends on the collection of accurate demographic data from that population. Since demographic data are often expensive and difficult to obtain, developing cost‐effective and efficient collection methods is a high priority. This research evaluates the efficacy of using angler‐supplied data to monitor a nonnative population of Burbot Lota lota. Age and growth estimates were compared between Burbot collected by anglers and those collected in trammel nets from two Wyoming reservoirs. Collection methods produced different length‐frequency distributions, but no difference was observed in age‐frequency distributions. Mean back‐calculated lengths at age revealed that netted Burbot grew faster than angled Burbot in Fontenelle Reservoir. In contrast, angled Burbot grew slightly faster than netted Burbot in Flaming Gorge Reservoir. Von Bertalanffy growth models differed between collection methods, but differences in parameter estimates were minor. Estimates of total annual mortality (A) of Burbot in Fontenelle Reservoir were comparable between angled (A = 35.4%) and netted fish (33.9%); similar results were observed in Flaming Gorge Reservoir for angled (29.3%) and netted fish (30.5%). Beverton–Holt yield‐per‐recruit models were fit using data from both collection methods. Estimated yield differed by less than 15% between data sources and reservoir. Spawning potential ratios indicated that an exploitation rate of 20% would be required to induce recruitment overfishing in either reservoir, regardless of data source. Results of this study suggest that angler‐supplied data are useful for monitoring Burbot population dynamics in Wyoming and may be an option to efficiently monitor other fish populations in North America.
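The growth and mortality quantities compared in the study can be sketched as follows. The von Bertalanffy parameters are hypothetical; the conversion A = 1 - exp(-Z) between total annual mortality and instantaneous mortality is standard:

```python
import numpy as np

def von_bertalanffy(age, l_inf, k, t0):
    """Von Bertalanffy length-at-age: L(t) = L_inf * (1 - exp(-k*(t - t0)))."""
    return l_inf * (1 - np.exp(-k * (age - t0)))

# Hypothetical Burbot-like parameters for illustration (not the study's fits):
ages = np.arange(1, 11)
lengths = von_bertalanffy(ages, l_inf=900.0, k=0.18, t0=-0.5)

# Total annual mortality A relates to instantaneous mortality Z by
# A = 1 - exp(-Z); e.g. the abstract's A = 35.4% implies:
Z = -np.log(1 - 0.354)
print(f"Z = {Z:.3f}/yr")
```

Comparing angler and net data then amounts to fitting the same model to each length-at-age set and contrasting the parameter estimates, as the study does.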
Estimating Finite Rate of Population Increase for Sharks Based on Vital Parameters.
Directory of Open Access Journals (Sweden)
Kwang-Ming Liu
The vital parameter data for 62 stocks, covering 38 species, collected from the literature, including parameters of age, growth, and reproduction, were log-transformed and analyzed using multivariate analyses. Three groups were identified and empirical equations were developed for each to describe the relationships between the predicted finite rate of population increase (λ') and the vital parameters: maximum age (Tmax), age at maturity (Tm), annual fecundity (f/Rc), size at birth (Lb), size at maturity (Lm), and asymptotic length (L∞). Group (1) included species with slow growth rates (0.034 yr⁻¹ < k < 0.103 yr⁻¹) and extended longevity (26 yr < Tmax < 81 yr), e.g., the shortfin mako Isurus oxyrinchus, the dusky shark Carcharhinus obscurus, etc.; Group (2) included species with fast growth rates (0.103 yr⁻¹ < k < 0.358 yr⁻¹) and short longevity (9 yr < Tmax < 26 yr), e.g., the starspotted smoothhound Mustelus manazo, the gray smoothhound M. californicus, etc.; Group (3) included late-maturing species (Lm/L∞ ≧ 0.75) with moderate longevity (Tmax < 29 yr), e.g., the pelagic thresher Alopias pelagicus and the sevengill shark Notorynchus cepedianus. An empirical equation for all data pooled was also developed. The λ' values estimated by these empirical equations showed good agreement with those calculated using conventional demographic analysis. The predictability was further validated with an independent data set of three species. The empirical equations developed in this study not only reduce the uncertainties in estimation but also account for the differences in life history among groups. This method therefore provides an efficient and effective approach to the implementation of precautionary shark management measures.
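The conventional demographic analysis against which the empirical equations were checked solves the Euler-Lotka equation for λ. A sketch with a made-up, shark-like life table (late maturity, low fecundity, high survival); none of the values come from the paper:

```python
import numpy as np

def finite_rate_of_increase(lx, mx):
    """Solve the Euler-Lotka equation sum(lx * mx * lam**-x) = 1 for lam.

    lx: survivorship to age x; mx: female offspring at age x (x = 1..n).
    The left-hand side decreases in lam, so bisection suffices.
    """
    ages = np.arange(1, len(lx) + 1).astype(float)
    f = lambda lam: float(np.sum(lx * mx * lam ** (-ages))) - 1.0
    lo, hi = 0.5, 3.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Toy life table: maturity at age 3, one female pup per year thereafter.
lx = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
mx = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
lam = finite_rate_of_increase(lx, mx)
print(f"lambda = {lam:.3f}")
```

A λ slightly above 1 (slow population growth) is typical of the life histories in the abstract's Group (1), which is what motivates precautionary management.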
Directory of Open Access Journals (Sweden)
Marcelo Henrique de Carvalho
2006-05-01
Chrysomya megacephala (Fabricius) (Diptera, Calliphoridae) is a blowfly species of considerable medical and sanitary importance that was accidentally introduced into Brazil in the 1970s. The objective of this work was to evaluate some aspects of the population ecology of this species, analyzing demographic aspects of adults kept under experimental conditions. Cages of C. megacephala adults were prepared with four different larval densities (100, 200, 400 and 800). For each cage, two tables were made: one with demographic parameters for estimating the life expectancy at the initial age (e0), and another with estimates of the reproductive rate and mean age of reproduction. Population parameters such as the intrinsic rate of increase (r) and the finite rate of increase (lambda) were calculated as well.
Ion response to relativistic electron bunches in the blowout regime of laser-plasma accelerators.
Popov, K I; Rozmus, W; Bychenkov, V Yu; Naseri, N; Capjack, C E; Brantov, A V
2010-11-05
The ion response to relativistic electron bunches in the so-called bubble or blowout regime of a laser-plasma accelerator is discussed. In response to the strong fields of the accelerated electrons, the ions form a central filament along the laser axis that can be compressed to densities 2 orders of magnitude higher than the initial particle density. A theory of the filament formation and a model of ion self-compression are proposed. It is also shown that in the case of a sharp rear plasma-vacuum interface the ions can be accelerated by a combination of three basic mechanisms. The long-time ion evolution that results from the strong electrostatic fields of an electron bunch provides a unique diagnostic of laser-plasma accelerators.
Short-range wakefields generated in the blowout regime of plasma-wakefield acceleration
Stupakov, G.
2018-04-01
In the past, calculation of wakefields generated by an electron bunch propagating in a plasma has been carried out in linear approximation, where the plasma perturbation can be assumed small and plasma equations of motion linearized. This approximation breaks down in the blowout regime where a high-density electron driver expels plasma electrons from its path and creates a cavity void of electrons in its wake. In this paper, we develop a technique that allows us to calculate short-range longitudinal and transverse wakes generated by a witness bunch being accelerated inside the cavity. Our results can be used for studies of the beam loading and the hosing instability of the witness bunch in plasma-wakefield and laser-wakefield acceleration.
Longitudinal phase space characterization of the blow-out regime of rf photoinjector operation
Directory of Open Access Journals (Sweden)
J. T. Moody
2009-07-01
Using an experimental scheme based on a vertically deflecting rf deflector and a horizontally dispersing dipole, we characterize the longitudinal phase space of the beam in the blow-out regime at the UCLA Pegasus rf photoinjector. Because of the achievement of unprecedented resolution in both time (50 fs) and energy (1.0 keV), we are able to demonstrate some important properties of the beams created in this regime, such as extremely low longitudinal emittance, large temporal energy chirp, and the degrading effects of the cathode image charge in the longitudinal phase space, which eventually lead to poorer beam quality. All of these results have been found in good agreement with simulations.
Using Pulse Rate in Estimating Workload Evaluating a Load Mobilizing Activity
Directory of Open Access Journals (Sweden)
Juan Alberto Castillo
2014-06-01
Introduction: The pulse rate is a direct indicator of the state of the cardiovascular system, in addition to being an indirect indicator of the energy expended in performing a task. The pulse of a person is the number of pulses recorded in a peripheral artery per unit time; the pulse appears as a pressure wave moving along the blood vessels, which are flexible, at a speed of 7-10 m/s in large arterial branches and 15 to 35 m/s in the small arteries. Materials and methods: The aim of this study was to assess heart rate, using the technique of recording the pulse frequency, oxygen consumption, and observation of work activity, in estimating the workload in a load-handling task for three situations: lift/transfer/deposit. The pulse rate was recorded before, during and after the task for 24 young volunteers (10 women and 14 men) under laboratory conditions. We performed a gesture analysis of the work activity and of the lifting and handling strategies. Results: We observed an increase between initial and final pulse rate in both groups and for the two tasks, with a difference of 17.5 beats/min recorded in the heart-rate increase for the carrying task; 75% of the participants experienced an increase in pulse rate above 100 beats/min. For the 25 kg load, recorded values were greater than 114 beats/min, and for the 17.5 kg load greater than 128 beats/min. Discussion: The pulse rate method is recommended for its simplicity of use by operational staff, supervisors, managers and industrial engineers not trained in physiology; the method can also be used by industrial hygienists.
Diagnosis of blow-out fracture of the orbital floor using a computed tomography
International Nuclear Information System (INIS)
Ishikawa, Yoshimi; Aoki, Shinjiroh; Ono, Shigeru; Fujita, Kiyohide
1986-01-01
Diagnosis of a blow-out fracture of the orbital floor is relatively easy from clinical and roentgen findings, but the information from these findings is not adequate for deciding on surgical indication. Until recently, computed tomography of the orbital region was generally performed only in the axial plane. The axial plane, however, is parallel to the orbital structures, so it is difficult to visualize the orbital floor and the inferior rectus muscle. The coronal plane, by contrast, cuts across the orbital structures, allowing all the orbital soft tissues, including the extraocular muscles, to be visualized. We examined 3 cases diagnosed as blow-out fractures of the orbital floor by conventional roentgen films and clinical findings. After axial CT was performed, coronal CT was scanned at 60 to 70 degrees from the OM line, with the patient in the hanging-head position. Case 1 used coronal reformation of the axial CT data; cases 2 and 3 used direct coronal scanning. By bi-plane CT we could diagnose whether the inferior rectus muscle was entrapped or not, and confirm the surgical indication. Using a cadaver, we studied the mechanism limiting eye movement. These studies revealed that the inferior rectus muscle is rarely entrapped directly by fractured segments at the anterior orbital floor, because there is abundant orbital fat-pad between the inferior rectus muscle and the orbital floor, which may play an important role. The limited mobility of the inferior rectus muscle may instead result from increased tension of the fibrous bands that attach to the muscle sheath from the prolapsed fat-pad, and from contracture due to secondary scar formation. The location of a fracture in the orbital floor and the cause of the limitation of eye movement must be diagnosed as early as possible, and from this information the surgical indication can be confirmed. (J.P.N.)
DeepBlow - a Lagrangian plume model for deep water blowouts
International Nuclear Information System (INIS)
Johansen, Oeistein
2000-01-01
This paper presents a sub-sea blowout model designed with special emphasis on deep-water conditions. The model is an integral plume model based on a Lagrangian concept, applied to multiphase discharges of water, oil and gas in a stratified water column with variable currents. The gas may be converted to hydrate in combination with seawater, dissolved into the plume water, or lost from the plume due to the slip between rising gas bubbles and the plume trajectory. Non-ideal behaviour of the gas is accounted for by introducing a pressure- and temperature-dependent compressibility (z) factor in the equation of state. A number of case studies are presented in the paper. One of the cases (a blowout from 100 m depth) is compared with observations from a field experiment conducted in Norwegian waters in June 1996. The model results compare favourably with the field observations when dissolution of gas into seawater is accounted for in the model. For discharges at intermediate to shallow depths (100-250 m), the two major processes limiting plume rise are (a) dissolution of gas into ambient water and (b) bubbles rising out of the inclined plume. These processes tend to be self-reinforcing, i.e., when gas is lost by either of them, plume rise slows down and more time becomes available for dissolution. For discharges in deep waters (700-1500 m depth), hydrate formation is found to be the dominating process limiting plume rise. (Author)
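The self-reinforcing slowdown described above can be illustrated with a toy calculation: a single Lagrangian plume element whose buoyancy comes from a gas fraction that dissolves at a first-order rate. This is a deliberately minimal sketch with illustrative coefficients, not the DeepBlow model; the entrainment and dissolution constants are assumptions.

```python
import math

def plume_rise(depth, gas_frac0, k_dissolve, entrain=0.1, dt=1.0):
    """Toy Lagrangian plume element: gas-driven buoyancy with first-order
    gas dissolution. Illustrative only -- not the DeepBlow model."""
    z = depth          # metres below surface
    w = 0.1            # initial vertical velocity, m/s
    gas = gas_frac0    # gas volume fraction driving buoyancy
    t = 0.0
    while z > 0 and gas > 1e-6 and t < 1e5:
        buoyancy = 9.81 * gas               # reduced gravity ~ g * gas fraction
        w += (buoyancy - entrain * w) * dt  # accelerate, lose momentum to entrainment
        w = max(w, 0.0)
        z -= w * dt
        gas *= math.exp(-k_dissolve * dt)   # first-order dissolution into seawater
        t += dt
    return z, t  # terminal depth (<= 0 means surfaced) and elapsed time

# Faster dissolution strands the plume at depth; slower dissolution lets it surface.
stalled = plume_rise(800.0, 0.05, k_dissolve=1e-2)
surfaced = plume_rise(800.0, 0.05, k_dissolve=1e-4)
```

With the faster dissolution rate the gas (and hence buoyancy) is exhausted after rising roughly 500 m, so the plume terminates at depth; with the slower rate the element reaches the surface.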
Estimating bioerosion rate on fossil corals: a quantitative approach from Oligocene reefs (NW Italy)
Silvestri, Giulia
2010-05-01
Bioerosion of coral reefs, especially when related to the activity of macroborers, is considered to be one of the major processes influencing framework development in present-day reefs. Macroboring communities affecting both living and dead corals are widely distributed in the fossil record as well, and their role is presumed to be similarly important in determining the flourishing vs. demise of coral bioconstructions. Nevertheless, many aspects concerning the environmental factors controlling the incidence of bioerosion, shifts in the composition of macroboring communities, and the estimation of bioerosion rates in different contexts are still poorly documented and understood. This study presents an attempt to quantify the bioerosion rate on reef limestones characteristic of some Oligocene outcrops of the Tertiary Piedmont Basin (NW Italy), deposited under terrigenous sedimentation within prodelta and delta fan systems. Branching coral rubble-dominated facies have been recognized as prevailing in this context. Depositional patterns, textures, and the generally low incidence of taphonomic features such as fragmentation and abrasion suggest relatively quiet waters where coral remains were deposited almost in situ. Thus, taphonomic signatures occurring on corals can reliably be used to reconstruct the environmental parameters affecting these particular branching coral assemblages during their life and to compare them with those typical of classical clear-water reefs. Bioerosion is sparsely distributed within the coral facies and consists of a limited suite of traces, mostly referable to clionid sponges and polychaete and sipunculid worms. The incidence of boring bivalves seems to be generally lower. Together with semi-quantitative analysis of the bioerosion rate along vertical logs and horizontal levels, two quantitative methods have been assessed and compared. These consist of the processing of high-resolution scans of thin sections with image-analysis software (Photoshop CS3) and point
Zhou, Xinyu
2016-03-15
A Gaussian multiple-input single-output (MISO) fading channel is considered. We assume that the transmitter, in addition to the statistics of all channel gains, is aware instantaneously of a noisy version of the channel to the legitimate receiver. On the other hand, the legitimate receiver is aware instantaneously of its channel to the transmitter, whereas the eavesdropper instantaneously knows all channel gains. We evaluate an achievable rate using a Gaussian input without indexing an auxiliary random variable. A sufficient condition for beamforming to be optimal is provided. When the number of transmit antennas is large, beamforming also turns out to be optimal. In this case, the maximum achievable rate can be expressed in a simple closed form and scales with the logarithm of the number of transmit antennas. Furthermore, in the case when a noisy estimate of the eavesdropper’s channel is also available at the transmitter, we introduce the SNR difference and the SNR ratio criteria and derive the related optimal transmission strategies and the corresponding achievable rates.
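The claimed logarithmic scaling with the number of transmit antennas can be checked numerically for the main-channel term: under i.i.d. Rayleigh fading, maximum-ratio beamforming yields a rate log2(1 + SNR·||h||²), and ||h||² concentrates around the antenna count. This Monte Carlo sketch illustrates only that scaling, not the paper's exact achievable secrecy-rate expression.

```python
import numpy as np

def beamforming_rate(n_tx, snr, trials=2000, rng=None):
    """Average main-channel rate log2(1 + SNR*||h||^2) under maximum-ratio
    beamforming with i.i.d. Rayleigh fading. Illustrative Monte Carlo,
    not the paper's achievable secrecy rate."""
    rng = rng or np.random.default_rng(0)
    h = (rng.standard_normal((trials, n_tx))
         + 1j * rng.standard_normal((trials, n_tx))) / np.sqrt(2)
    gain = np.sum(np.abs(h) ** 2, axis=1)  # ||h||^2, mean equal to n_tx
    return float(np.mean(np.log2(1 + snr * gain)))

# Doubling the antenna count adds roughly one bit at high SNR:
r8, r16, r32 = (beamforming_rate(n, snr=10.0) for n in (8, 16, 32))
```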
Zhou, Xinyu; Rezki, Zouheir; Alomair, Basel; Alouini, Mohamed-Slim
2016-01-01
A Gaussian multiple-input single-output (MISO) fading channel is considered. We assume that the transmitter, in addition to the statistics of all channel gains, is aware instantaneously of a noisy version of the channel to the legitimate receiver. On the other hand, the legitimate receiver is aware instantaneously of its channel to the transmitter, whereas the eavesdropper instantaneously knows all channel gains. We evaluate an achievable rate using a Gaussian input without indexing an auxiliary random variable. A sufficient condition for beamforming to be optimal is provided. When the number of transmit antennas is large, beamforming also turns out to be optimal. In this case, the maximum achievable rate can be expressed in a simple closed form and scales with the logarithm of the number of transmit antennas. Furthermore, in the case when a noisy estimate of the eavesdropper’s channel is also available at the transmitter, we introduce the SNR difference and the SNR ratio criteria and derive the related optimal transmission strategies and the corresponding achievable rates.
Estimating the basic reproduction rate of HFMD using the time series SIR model in Guangdong, China.
Directory of Open Access Journals (Sweden)
Zhicheng Du
Full Text Available Hand, foot, and mouth disease (HFMD) has caused a substantial burden of disease in China, especially in Guangdong Province. Based on notifiable cases, we use the time series Susceptible-Infected-Recovered model to estimate the basic reproduction rate (R0) and the herd immunity threshold, to understand the transmission and persistence of HFMD more completely for efficient intervention in this province. The standardized difference between the reported and fitted time series of HFMD was 0.009 (<0.2). The median basic reproduction rates of total, enterovirus 71, and coxsackievirus A16 cases in Guangdong were 4.621 (IQR: 3.907-5.823), 3.023 (IQR: 2.289-4.292) and 7.767 (IQR: 6.903-10.353), respectively. The heatmap of R0 showed semiannual peaks of activity, including a major peak in spring and early summer (about the 12th week) followed by a smaller peak in autumn (about the 36th week). The county-level model showed that Longchuan (R0 = 33), Gaozhou (R0 = 24), Huazhou (R0 = 23) and Qingxin (R0 = 19) counties had higher basic reproduction rates than other counties in the province. The epidemic of HFMD in Guangdong Province is still serious, and strategies like the World Health Organization's expanded program on immunization need to be implemented. Elimination of HFMD in Guangdong might require a herd immunity threshold of 78%.
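The link between the reported median R0 and the ~78% herd-immunity threshold follows from the standard SIR result HIT = 1 − 1/R0. A minimal discrete-generation SIR simulation (a simplification of the time series SIR model, with illustrative population numbers rather than the Guangdong data) shows both the threshold and the epidemic take-off:

```python
def tsir_simulate(r0, n_pop=100000, i0=10, generations=60):
    """Discrete-generation SIR: infections per generation follow
    I_{t+1} = R0 * I_t * S_t / N. Returns the final attack fraction.
    A simplification of the time series SIR model, illustrative only."""
    s, i, cum = n_pop - i0, i0, i0
    for _ in range(generations):
        new_i = min(s, r0 * i * s / n_pop)
        s -= new_i
        i = new_i
        cum += new_i
    return cum / n_pop

# Herd-immunity threshold for the reported median R0 of ~4.6:
hit = 1 - 1 / 4.6          # ~0.78, matching the abstract's 78%
attack = tsir_simulate(4.6) # near-complete attack without intervention
```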
Uncertainty estimation with bias-correction for flow series based on rating curve
Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta
2014-03-01
Streamflow discharge constitutes one of the fundamental data sets required to perform water balance studies and develop hydrological models. A rating curve, designed from a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate a complete discharge time series of reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available, even though it has a significant impact on hydrological modelling. In this paper, we identify discrepancies in the hydrographers' rating curves used to derive the historical discharge series and propose a bias-correction modification which, like the traditional rating curve, takes the form of a power function. In order to obtain the uncertainty estimation, we further propose a two-sided Box-Cox transformation to bring the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to gauging stations on the Flinders and Gilbert rivers in north-west Queensland, Australia.
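The power-function form of the rating curve mentioned above can be fitted by ordinary log-log least squares. This is a minimal sketch assuming the cease-to-flow stage h0 is known; it does not reproduce the paper's bias-correction or Box-Cox uncertainty procedure.

```python
import numpy as np

def fit_rating_curve(stage, discharge, h0=0.0):
    """Fit Q = a * (h - h0)^b by linear regression in log space.
    h0 (cease-to-flow stage) is assumed known here."""
    x = np.log(np.asarray(stage) - h0)
    y = np.log(np.asarray(discharge))
    b, log_a = np.polyfit(x, y, 1)  # slope = exponent, intercept = log(a)
    return np.exp(log_a), b

# Synthetic check: noiseless data generated with a=5, b=1.6 is recovered exactly.
h = np.linspace(0.5, 3.0, 20)
q = 5.0 * h ** 1.6
a_hat, b_hat = fit_rating_curve(h, q)
```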
Directory of Open Access Journals (Sweden)
T.M. Elshiekh
2015-12-01
Full Text Available The presence of hydrogen sulfide in hydrocarbon fluids is a well-known problem in many oil and gas fields. Hydrogen sulfide is an undesirable contaminant that presents many environmental and safety hazards: it is corrosive, malodorous, and toxic. Accordingly, a need has long been felt in the industry for a process that can successfully remove hydrogen sulfide from the hydrocarbons, or at least reduce its level during production, storage or processing to one that satisfies safety and product specification requirements. The common method used to remove or reduce the concentration of hydrogen sulfide in hydrocarbon production fluids is to inject a hydrogen sulfide scavenger into the hydrocarbon stream. One of the chemicals produced by the Egyptian Petroleum Research Institute (EPRI) is the EPRI H2S scavenger, which is used in some of the Egyptian petroleum producing companies. The injection dose rate of H2S scavenger is usually determined by experimental lab tests and field trials. In this work, the injection dose rate is estimated mathematically by modeling and simulation of an oil producing field belonging to the Petrobel Company in Egypt, which uses the EPRI H2S scavenger. Comparison between the calculated and practical values of the injection dose rate confirms the real ability of the proposed equation.
Wagner, Brian J.; Harvey, Judson W.
1997-01-01
Tracer experiments are valuable tools for analyzing the transport characteristics of streams and their interactions with shallow groundwater. The focus of this work is the design of tracer studies in high-gradient stream systems subject to advection, dispersion, groundwater inflow, and exchange between the active channel and zones in surface or subsurface water where flow is stagnant or slow moving. We present a methodology for (1) evaluating and comparing alternative stream tracer experiment designs and (2) identifying those combinations of stream transport properties that pose limitations to parameter estimation and therefore a challenge to tracer test design. The methodology uses the concept of global parameter uncertainty analysis, which couples solute transport simulation with parameter uncertainty analysis in a Monte Carlo framework. Two general conclusions resulted from this work. First, the solute injection and sampling strategy has an important effect on the reliability of transport parameter estimates. We found that constant injection with sampling through concentration rise, plateau, and fall provided considerably more reliable parameter estimates than a pulse injection across the spectrum of transport scenarios likely encountered in high-gradient streams. Second, for a given tracer test design, the uncertainties in mass transfer and storage-zone parameter estimates are strongly dependent on the experimental Damkohler number, DaI, which is a dimensionless combination of the rates of exchange between the stream and storage zones, the stream-water velocity, and the stream reach length of the experiment. Parameter uncertainties are lowest at DaI values on the order of 1.0. When DaI values are much less than 1.0 (owing to high velocity, long exchange timescale, and/or short reach length), parameter uncertainties are high because only a small amount of tracer interacts with storage zones in the reach. For the opposite conditions (DaI ≫ 1.0), solute
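The experimental Damkohler number discussed above is commonly written DaI = α(1 + A/As)L/u, with α the stream-storage exchange coefficient, A and As the stream and storage-zone cross-sectional areas, u the stream velocity, and L the reach length. A small helper makes the design trade-off concrete; the parameter values below are illustrative, not from the study.

```python
def damkohler(alpha, area_stream, area_storage, velocity, reach_length):
    """Experimental Damkohler number DaI = alpha * (1 + A/As) * L / u.
    Parameter uncertainties are lowest when DaI is of order 1."""
    return alpha * (1.0 + area_stream / area_storage) * reach_length / velocity

# Short reach + fast stream: DaI << 1, little tracer interacts with storage.
low = damkohler(alpha=1e-5, area_stream=1.0, area_storage=0.2,
                velocity=0.5, reach_length=200.0)
# A longer reach at lower velocity pushes DaI toward the optimum of ~1.
good = damkohler(alpha=1e-4, area_stream=1.0, area_storage=0.2,
                 velocity=0.3, reach_length=500.0)
```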
Chen, Te; Xu, Xing; Chen, Long; Jiang, Haobing; Cai, Yingfeng; Li, Yong
2018-02-01
Accurate estimation of longitudinal force, lateral vehicle speed and yaw rate is of great significance to torque allocation and stability control for four-wheel independently driven electric vehicles (4WID-EVs). A fusion method is proposed to estimate these three quantities for 4WID-EVs. The electric driving wheel model (EDWM) is introduced into the longitudinal force estimation; a longitudinal force observer (LFO) is first designed based on an adaptive high-order sliding mode observer (HSMO), and the convergence of the LFO is analyzed and proved. Based on the estimated longitudinal force, an estimation strategy is then presented in which a strong tracking filter (STF) is used to estimate lateral vehicle speed and yaw rate simultaneously. Finally, co-simulation via CarSim and Matlab/Simulink is carried out to demonstrate the effectiveness of the proposed method, and the practical performance of the LFO is verified by experiments on a chassis dynamometer bench.
Estimating Collisionally-Induced Escape Rates of Light Neutrals from Early Mars
Gacesa, M.; Zahnle, K. J.
2016-12-01
Collisions of atmospheric gases with hot oxygen atoms constitute an important non-thermal mechanism of escape of light atomic and molecular species at Mars. In this study, we present revised theoretical estimates of non-thermal escape rates of neutral O, H, He, and H2 based on recent atmospheric density profiles obtained from the NASA Mars Atmosphere and Volatile Evolution (MAVEN) mission and related theoretical models. As primary sources of hot oxygen, we consider dissociative recombination of O2+ and CO2+ molecular ions. We also consider hot oxygen atoms energized in primary and secondary collisions with energetic neutral atoms (ENAs) produced in charge-exchange of solar wind H+ and He+ ions with atmospheric gases [1,2]. Scattering of hot oxygen off the atmospheric species of interest is modeled using a fully-quantum reactive scattering formalism [3]. This approach allows us to construct distributions of vibrationally and rotationally excited states and predict the products' emission spectra. In addition, we estimate formation rates of excited, translationally hot hydroxyl molecules in the upper atmosphere of Mars. The escape rates are calculated from the kinetic energy distributions of the reaction products using an enhanced 1D model of the atmosphere for a range of orbital and solar parameters. Finally, by considering different scenarios, we estimate the influence of these escape mechanisms on the evolution of Mars's atmosphere throughout previous epochs and their impact on the atmospheric D/H ratio. M.G.'s research was supported by an appointment to the NASA Postdoctoral Program at the NASA Ames Research Center, administered by Universities Space Research Association under contract with NASA. [1] N. Lewkow and V. Kharchenko, "Precipitation of Energetic Neutral Atoms and Escape Fluxes induced from the Mars Atmosphere", Astrophys. J., 790, 98 (2014). [2] M. Gacesa, N. Lewkow, and V. Kharchenko, "Non-thermal production and escape of OH from the upper atmosphere of Mars", arXiv:1607
Ceilometer-based Rainfall Rate estimates in the framework of VORTEX-SE campaign: A discussion
Barragan, Ruben; Rocadenbosch, Francesc; Waldinger, Joseph; Frasier, Stephen; Turner, Dave; Dawson, Daniel; Tanamachi, Robin
2017-04-01
During Spring 2016 the first season of the Verification of the Origins of Rotation in Tornadoes EXperiment-Southeast (VORTEX-SE) was conducted in the Huntsville, AL environs. Foci of VORTEX-SE include the characterization of the tornadic environments specific to the Southeast US as well as the societal response to forecasts and warnings. Among several experiments, a research team from Purdue University and the University of Massachusetts Amherst deployed a mobile S-band Frequency-Modulated Continuous-Wave (FMCW) radar and a co-located Vaisala CL31 ceilometer for a period of eight weeks near Belle Mina, AL. Portable disdrometers (DSDs) were also deployed in the same area by Purdue University, occasionally co-located with the radar and lidar. The NOAA National Severe Storms Laboratory also deployed the Collaborative Lower Atmosphere Mobile Profiling System (CLAMPS), consisting of a Doppler lidar, a microwave radiometer, and an infrared spectrometer. The purpose of these profiling instruments was to characterize the atmospheric boundary-layer evolution over the course of the experiment. In this paper we focus on the lidar-based retrieval of rainfall rate (RR) and its limitations, using observations from two intensive observation periods of the experiment: 31 March and 29 April 2016. Following Lewandowski et al. (2009), the RR was estimated from the Vaisala CL31 ceilometer by applying the slope method (Kunz and Leeuw, 1993) to invert the extinction caused by the rain. Extinction retrievals are fitted against RR estimates from the disdrometer in order to derive a correlation model that allows us to estimate the RR from the ceilometer in similar situations without a permanently deployed disdrometer. The problem of extinction retrieval is also studied from the perspective of Klett-Fernald-Sasano's (KFS) lidar inversion algorithm (Klett, 1981; 1985), which requires the assumption of an aerosol extinction-to-backscatter ratio (the so-called lidar ratio) and calibration in a
Estimates of ground-water recharge rates for two small basins in central Nevada
Lichty, R.W.; McKinley, P.W.
1995-01-01
Estimates of ground-water recharge rates developed from hydrologic modeling studies are presented for 3-Springs and East Stewart basins, two small basins (analog sites) located in central Nevada. The analog-site studies were conducted to aid in estimating recharge to the paleohydrologic regime associated with ground water in the vicinity of Yucca Mountain under wetter climatic conditions. The two analog sites are located to the north of and at higher elevations than Yucca Mountain, and the prevailing (current) climatic conditions at these sites are thought to be representative of the possible range of paleoclimatic conditions in the general area of Yucca Mountain during the Quaternary. Two independent modeling approaches were applied at each of the analog sites using observed hydrologic data on precipitation, temperature, solar radiation, stream discharge, and chloride-ion water chemistry for a 6-year study period (October 1986 through September 1992). Both models quantify the hydrologic water-balance equation and yield estimates of ground-water recharge, given appropriate input data. The first model uses a traditional approach to quantify watershed hydrology through a precipitation-runoff modeling system that accounts for the spatial variability of hydrologic inputs, processes, and responses (outputs) using a daily computational time step. The second model is based on the conservative nature of the dissolved chloride ion in selected hydrologic environments; its use as a natural tracer allows the computation of a coupled water and chloride-ion mass-balance system of equations to estimate available water (the sum of surface runoff and ground-water recharge). Results of the modeling approaches support the conclusion that reasonable estimates of average-annual recharge to ground water range from about 1 to 3 centimeters per year for 3-Springs basin (the drier site), and from about 30 to 32 centimeters per year for East Stewart basin (the wetter site). The most
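The chloride mass-balance idea behind the second model reduces, at steady state and neglecting runoff export of chloride, to recharge ≈ precipitation × (Cl in precipitation / Cl in ground water). The sketch below uses illustrative concentrations, not the study's measured values, and is far simpler than the coupled system actually solved.

```python
def recharge_chloride(precip_cm, cl_precip, cl_groundwater):
    """Steady-state chloride mass balance: R = P * Cl_p / Cl_gw.
    Assumes chloride enters only via precipitation and leaves only via
    recharge (no runoff term) -- a simplification of the study's coupled
    water/chloride-ion system. Concentrations in consistent units (mg/L)."""
    return precip_cm * cl_precip / cl_groundwater

# Illustrative values: 25 cm/yr precipitation, 0.4 mg/L chloride in rain,
# 5 mg/L in ground water -> 2 cm/yr, within the 1-3 cm/yr reported for the drier site.
r = recharge_chloride(25.0, 0.4, 5.0)
```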
Directory of Open Access Journals (Sweden)
J. D. Nichols
2006-03-01
Full Text Available We make the first quantitative estimates of the magnetopause reconnection rate at Jupiter using extended in situ data sets, building on simple order-of-magnitude estimates made some thirty years ago by Brice and Ioannidis (1970) and Kennel and Coroniti (1975, 1977). The jovian low-latitude magnetopause (open flux production) reconnection voltage is estimated using the Jackman et al. (2004) algorithm, validated at Earth, previously applied to Saturn, and here adapted to Jupiter. The high-latitude (lobe) magnetopause reconnection voltage is similarly calculated using the related Gérard et al. (2005) algorithm, also previously used for Saturn. We employ data from the Ulysses spacecraft obtained during periods when it was located near 5 AU and within 5° of the ecliptic plane (January to June 1992, January to August 1998, and April to October 2004), along with data from the Cassini spacecraft obtained during the Jupiter flyby in 2000/2001. We include the effect of magnetospheric compression through dynamic pressure modulation, and also examine the effect of variations in the direction of Jupiter's magnetic axis throughout the jovian day and year. The intervals of data considered represent different phases of the solar cycle, such that we are also able to examine solar cycle dependency. The overall average low-latitude reconnection voltage is estimated to be ~230 kV, such that the average amount of open flux created over one solar rotation is ~500 GWb. We thus estimate the average time to replenish Jupiter's magnetotail, which contains ~300-500 GWb of open flux, to be ~15-25 days, corresponding to a tail length of ~3.8-6.5 AU. The average high-latitude reconnection voltage is estimated to be ~130 kV, associated with lobe "stirring". Within these averages, however, the estimated voltages undergo considerable variation. Generally, the low-latitude reconnection voltage exhibits a "background" of ~100 kV that is punctuated by one or two significant
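The replenishment-time estimate above follows directly from the fact that a reconnection voltage is a rate of magnetic flux transfer (1 V = 1 Wb/s), so t = Φ/V:

```python
def replenish_time_days(open_flux_gwb, voltage_kv):
    """Time to open a given amount of magnetotail flux at a steady
    reconnection voltage: t = Phi / V (1 GWb = 1e9 Wb, 1 kV = 1e3 V)."""
    seconds = open_flux_gwb * 1e9 / (voltage_kv * 1e3)
    return seconds / 86400.0

# ~300-500 GWb of open flux at the ~230 kV average voltage:
t_min = replenish_time_days(300, 230)  # ~15 days
t_max = replenish_time_days(500, 230)  # ~25 days
```

These reproduce the ~15-25 day range quoted in the abstract.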
Directory of Open Access Journals (Sweden)
Liu X
2012-10-01
Full Text Available Xun Liu,1,2,* Mu-hua Cheng,3,* Cheng-gang Shi,1 Cheng Wang,1 Cai-lian Cheng,1 Jin-xia Chen,1 Hua Tang,1 Zhu-jiang Chen,1 Zeng-chun Ye,1 Tan-qi Lou1 (1Division of Nephrology, Department of Internal Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; 2College of Biology Engineering, South China University of Technology, Guangzhou, China; 3Department of Nuclear Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China; *these authors contributed equally to this paper). Background: Chronic kidney disease (CKD) is recognized worldwide as a public health problem, and its prevalence increases as the population ages. However, the applicability of formulas for estimating the glomerular filtration rate (GFR) based on serum creatinine (SC) levels in elderly Chinese patients with CKD is limited. Materials and methods: Based on values obtained with the technetium-99m diethylenetriaminepentaacetic acid (99mTc-DTPA) renal dynamic imaging method, 319 elderly Chinese patients with CKD were enrolled in this study. Serum creatinine was determined by the enzymatic method. The GFR was estimated using the Cockcroft-Gault (CG) equation, the Modification of Diet in Renal Disease (MDRD) equations, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, the Jelliffe-1973 equation, and the Hull equation. Results: The median difference ranged from -0.3 to 4.3 mL/min/1.73 m2. The interquartile range (IQR) of differences ranged from 13.9 to 17.6 mL/min/1.73 m2. Accuracy with a deviation less than 15% ranged from 27.6% to 32.9%. Accuracy with a deviation less than 30% ranged from 53.6% to 57.7%. Accuracy with a deviation less than 50% ranged from 74.9% to 81.5%. None of the equations achieved accuracy up to the 70% level with a deviation less than 30% from the standard glomerular filtration rate (sGFR). Bland-Altman analysis demonstrated that the mean difference ranged from -3.0 to 2.4 mL/min/1.73 m2. However, the
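Of the equations compared above, the Cockcroft-Gault formula has a simple, standard published form, sketched below (note it estimates creatinine clearance in mL/min, not GFR normalized to 1.73 m2 body surface area; the patient values are illustrative):

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance estimate (mL/min):
    CrCl = (140 - age) * weight / (72 * SCr), * 0.85 if female.
    Serum creatinine in mg/dL."""
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

# Illustrative example: a 70-year-old, 60 kg woman with SCr 1.2 mg/dL
crcl = cockcroft_gault(70, 60, 1.2, female=True)  # ~41 mL/min
```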
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Kim, Kyu-Myong
2004-01-01
Over the tropical land regions, observations of the 85 GHz brightness temperature (T(sub 85v)) made by the TRMM Microwave Imager (TMI) radiometer, when analyzed with the help of rain rates (R(sub PR)) deduced from the TRMM Precipitation Radar (PR), indicate that there are two maxima in rain rate. One strong maximum occurs when T(sub 85v) has a value of about 220 K, and the other, weaker one when T(sub 85v) is much colder, approx. 150 K. With the help of earlier studies based on airborne Doppler radar observations and radiative transfer simulations, we infer that the maximum near 220 K results from relatively weak scattering due to supercooled rain drops and water-coated ice hydrometeors associated with a developing thunderstorm (Cb) that has a strong updraft. The other maximum is associated with strong scattering due to ice particles that form when the updraft collapses and the rain from the Cb is transitioning from convective to stratiform type. Incorporating these ideas, and with a view to improving the estimation of rain rate from the existing operational method applicable to tropical land areas, we have developed a rain retrieval model. This model utilizes two parameters, with a horizontal scale of approx. 20 km, deduced from the TMI measurements at 19, 21 and 37 GHz (T(sub 19v), T(sub 21v), T(sub 37v)). The third parameter in the model, namely the horizontal gradient of brightness temperature within the 20 km scale, is deduced from TMI measurements at 85 GHz. Utilizing these parameters, our retrieval model is formulated to yield instantaneous rain rates on a scale of 20 km, and seasonal averages on a mesoscale, that agree well with those of the PR.
Microbial uptake of radiolabeled substrates: estimates of growth rates from time course measurements
International Nuclear Information System (INIS)
Li, W.K.W.
1984-01-01
The uptake of [3H]glucose and a mixture of 3H-labeled amino acids was measured, in time course fashion, in planktonic microbial assemblages of the eastern tropical Pacific Ocean. The average generation times of those portions of the assemblages able to utilize these substrates were estimated from a simple exponential growth model. Other workers have independently used this model in its integrated or differential form; a mathematical verification and an experimental demonstration of the equivalence of the two approaches are presented. A study was made of the size distribution of heterotrophic activity, using time course measurements. It was found that the size distribution and the effect of sample filtration before radiolabeling were dependent on the time of incubation. In principle, these time dependences could be ascribed to differences in the specific growth rate and initial standing stock of the microbial assemblages. 33 references
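One simple reading of the exponential growth model, assuming the uptake rate is proportional to an exponentially growing active biomass, is to fit v(t) = v0·exp(μt) to the time course and report the generation time ln 2/μ. The sketch below uses log-linear regression on a synthetic time course; it is not the paper's exact integrated or differential treatment.

```python
import math

def generation_time(times, uptake_rates):
    """Fit v(t) = v0 * exp(mu * t) by log-linear least squares and return
    the generation time ln(2)/mu in the same time units as `times`.
    A simple reading of the exponential growth model, illustrative only."""
    n = len(times)
    xs, ys = times, [math.log(v) for v in uptake_rates]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    mu = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    return math.log(2) / mu

# Synthetic time course (hours) doubling every 12 h:
t = [0, 6, 12, 18, 24]
v = [2 ** (h / 12) for h in t]
g = generation_time(t, v)  # recovers the 12 h generation time
```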
Al Hassan, Mohammad; Novack, Steven D.; Hatfield, Glen S.; Britton, Paul
2017-01-01
Today's launch vehicles' complex electronic and avionics systems make heavy use of the Field Programmable Gate Array (FPGA) integrated circuit (IC). FPGAs are prevalent ICs in communication protocols such as MIL-STD-1553B and in control signal commands such as solenoid/servo valve actuations. This paper demonstrates guidelines for estimating FPGA failure rates for a launch vehicle; the guidelines account for hardware, firmware, and radiation-induced failures. The hardware portion of the approach accounts for physical failures of the IC, FPGA memory and clock. The firmware portion provides guidelines on the high-level FPGA programming language and ways to account for software/code reliability growth. The radiation portion provides guidelines on environment susceptibility as well as on tailoring other launch vehicle programs' historical data to a specific launch vehicle.
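A common way to combine the three contributions, assuming independent failure modes with constant (exponential) rates, is simply to add them and compute mission reliability as exp(−λt). The rates below are illustrative placeholders, not assessed FPGA data:

```python
import math

def mission_reliability(lambda_hw, lambda_fw, lambda_rad, mission_hours):
    """Series combination of independent constant failure rates (per hour):
    the total rate is the sum, and reliability is exp(-lambda_total * t).
    Rates here are illustrative placeholders, not assessed FPGA data."""
    lam = lambda_hw + lambda_fw + lambda_rad
    return math.exp(-lam * mission_hours)

# 100 FIT hardware, 50 FIT firmware, 20 FIT radiation-induced (1 FIT = 1e-9/h)
r = mission_reliability(100e-9, 50e-9, 20e-9, mission_hours=1.0)
```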
Estimating steady-state evaporation rates from bare soils under conditions of high water table
Ripple, C.D.; Rubin, J.; Van Hylckama, T. E. A.
1970-01-01
A procedure that combines meteorological and soil equations of water transfer makes it possible to estimate approximately the steady-state evaporation from bare soils under conditions of high water table. Field data required include soil-water retention curves, water table depth and a record of air temperature, air humidity and wind velocity at one elevation. The procedure takes into account the relevant atmospheric factors and the soil's capability to conduct water in liquid and vapor forms. It neglects the effects of thermal transfer (except in the vapor case) and of salt accumulation. Homogeneous as well as layered soils can be treated. Results obtained with the method demonstrate how the soil evaporation rates depend on potential evaporation, water table depth, vapor transfer and certain soil parameters.
In-Vessel Coil Material Failure Rate Estimates for ITER Design Use
Energy Technology Data Exchange (ETDEWEB)
L. C. Cadwallader
2013-01-01
The ITER international project design teams are working to produce an engineering design for construction of this large tokamak fusion experiment. One of the design issues is ensuring proper control of the fusion plasma. In-vessel magnet coils may be needed for plasma control, especially the control of edge localized modes (ELMs) and plasma vertical stabilization (VS). These coils will be lifetime components that reside inside the ITER vacuum vessel behind the blanket modules. As such, their reliability is an important design issue since access will be time consuming if any type of repair were necessary. The following chapters give the research results and estimates of failure rates for the coil conductor and jacket materials to be used for the in-vessel coils. Copper and CuCrZr conductors, and stainless steel and Inconel jackets are examined.
Conversion factors for estimating release rate of gaseous radioactivity by an aerial survey
International Nuclear Information System (INIS)
Saito, Kimiaki; Moriuchi, Shigeru
1988-02-01
Conversion factors necessary for estimating the release rate of gaseous radioactivity by an aerial survey are presented. The conversion factors were determined from calculations assuming a Gaussian plume model, as a function of atmospheric stability, downwind distance and flight height. First, conversion factors for plumes emitting mono-energy gamma rays were calculated; then conversion factors were constructed through convolution for the radionuclides important in a nuclear reactor accident, and for mixtures of these radionuclides, taking into account elapsed time after shutdown. These conversion factors are shown in figures, and polynomial expressions for the conversion factors as a function of height have also been determined by the least-squares method. A user can easily obtain proper conversion factors from the data shown here. (author)
Directory of Open Access Journals (Sweden)
Kwasi Torpey
Full Text Available Mother-to-child transmission of HIV (MTCT) remains the most prevalent source of pediatric HIV infection. Most PMTCT (prevention of mother-to-child transmission of HIV) programs have concentrated monitoring and evaluation efforts on process rather than on outcome indicators. In this paper, we review service data from 28,320 children born to HIV-positive mothers to estimate MTCT rates. This study analyzed DNA PCR results and PMTCT data from perinatally exposed children zero to 12 months of age from five Zambian provinces between September 2007 and July 2010. The majority of children (58.6%) had a PCR test conducted between age six weeks and six months. Exclusive breastfeeding (56.8%) was the most frequent feeding method. An estimated 45.9% of mothers were below 30 years old and 93.3% had disclosed their HIV status. In terms of ARV regimen for PMTCT, 32.7% received AZT plus single-dose NVP (sdNVP), 30.9% received highly active antiretroviral treatment (HAART), 19.6% received sdNVP only and 12.9% received no ARVs. Transmission rates at six weeks when ARVs were received by both mother and baby, mother only, baby only, and neither were 5.8%, 10.5%, 15.8% and 21.8%, respectively. Transmission rates at six weeks where the mother received HAART, AZT plus sdNVP, sdNVP, and no intervention were 4.2%, 6.8%, 8.7% and 20.1%, respectively. Based on adjusted analysis including ARV exposures and non-ARV-related parameters, lower rates of positive PCR results were associated with (1) both mother and infant receiving prophylaxis, (2) children never breastfed, and (3) the mother being 30 years old or greater. Overall, between September 2007 and July 2010, 12.2% of PCR results were HIV positive. Between September 2007 and January 2009, then between February 2009 and July 2010, the proportions of positive PCR results were 15.1% and 11%, respectively, a significant difference. The use of ARV drugs reduces vertical transmission of HIV in a program setting. Non-chemoprophylactic factors also play a significant
Directory of Open Access Journals (Sweden)
Trpeski Predrag
2015-01-01
Full Text Available The main goal of the paper is to estimate the NAIRU for the Macedonian economy and to discuss the applicability of this indicator. The paper provides time-varying estimates for the period 1998-2012, which are obtained using the Ball and Mankiw (2002) approach, supplemented with the iterative procedure proposed by Ball (2009). The results reveal that the Macedonian NAIRU has a hump-shaped path. The estimation is based on both the LFS unemployment rate and the LFS unemployment rate corrected for employment in the grey economy. The dynamics of the estimated NAIRU demonstrate its ability to capture the cyclical imbalances in the national economy.
Assessment of Estimation Methods for Stage-Discharge Rating Curve in Rippled Bed Rivers
Directory of Open Access Journals (Sweden)
P. Maleki
2016-02-01
in a flume located at the hydraulic laboratory of Shahrekord University, Iran. Bass (1993) [reported in Joep (1999)] determined an empirical relation between the median grain size, D50, and the equilibrium ripple length, l: l = 75.4 log(D50) + 197 (Eq. 1), where l and D50 are both given in millimeters. Raudkivi (1997) [reported in Joep (1999)] proposed another empirical relation to estimate the ripple length, with D50 given in millimeters: l = 245 (D50)^0.35 (Eq. 2). Flemming (1988) [reported in Joep (1999)] derived an empirical relation between mean ripple length and mean ripple height based on a large dataset: hm = 0.0677 l^0.8098 (Eq. 3), where hm is the mean ripple height (m) and l is the mean ripple length (m). Ikeda and Asaeda (1983) investigated the characteristics of flow over ripples. They found that there are separation areas and vortices in the lee of ripples and that maximum turbulent diffusion occurs in these areas. Materials and Methods: In this research, the effects of two different types of ripples on the hydraulic characteristics of flow were studied experimentally in a flume located at the hydraulic laboratory of Shahrekord University, Iran. The flume is 0.4 m in width and depth and 12 m in length. In total, 48 tests covering slopes of 0.0005 to 0.003 and discharges of 10 to 40 L/s were conducted. Velocity and shear stress were measured using an Acoustic Doppler Velocimeter (ADV). Two different types of ripples (parallel and flake ripples) were used. The stage-discharge rating curve was then estimated in different ways, such as Einstein-Barbarossa, Shen, and White et al. Results and Discussion: Statistical methods were used to evaluate the test results. The White method gave the maximum values of α, RMSE, and average absolute error among the methods, and the Einstein method underestimated the discharge. Evaluation of the stage-discharge rating curve methods based on the results of this research showed that the Shen method had the highest accuracy for developing the
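As a sketch, the three empirical ripple-geometry relations quoted in the abstract (Eqs. 1-3) can be evaluated directly. The D50 value below is illustrative, and the base-10 logarithm in Eq. 1 is an assumption about the form of the Bass relation as reconstructed from the garbled source.

```python
import math

def ripple_length_bass(d50_mm):
    """Bass (1993): equilibrium ripple length from median grain size (Eq. 1).
    Both l and D50 in millimeters; log assumed base-10."""
    return 75.4 * math.log10(d50_mm) + 197

def ripple_length_raudkivi(d50_mm):
    """Raudkivi (1997): ripple length from median grain size (Eq. 2), mm."""
    return 245 * d50_mm ** 0.35

def ripple_height_flemming(l_m):
    """Flemming (1988): mean ripple height from mean ripple length (Eq. 3).
    Both in meters."""
    return 0.0677 * l_m ** 0.8098

# Illustrative example: medium sand with D50 = 0.5 mm
d50 = 0.5
l_bass = ripple_length_bass(d50)          # ripple length, mm
l_raud = ripple_length_raudkivi(d50)      # ripple length, mm
h_mean = ripple_height_flemming(l_raud / 1000.0)  # height, m (mm -> m first)
```

The two length predictors disagree by roughly 10% for this grain size, which is typical of the scatter among such empirical fits.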
Improvement of force-sensor-based heart rate estimation using multichannel data fusion.
Bruser, Christoph; Kortelainen, Juha M; Winter, Stefan; Tenhunen, Mirja; Parkka, Juha; Leonhardt, Steffen
2015-01-01
The aim of this paper is to present and evaluate algorithms for heartbeat interval estimation from multiple spatially distributed force sensors integrated into a bed. Moreover, the benefit of using multichannel systems as opposed to a single sensor is investigated. While it might seem intuitive that multiple channels are superior to a single channel, the main challenge lies in finding suitable methods to actually leverage this potential. To this end, two algorithms for heart rate estimation from multichannel vibration signals are presented and compared against a single-channel sensing solution. The first method operates by analyzing the cepstrum computed from the average spectra of the individual channels, while the second method applies Bayesian fusion to three interval estimators, such as the autocorrelation, which are applied to each channel. This evaluation is based on 28 night-long sleep lab recordings during which an eight-channel polyvinylidene fluoride-based sensor array was used to acquire cardiac vibration signals. The recruited patients suffered from different sleep disorders of varying severity. From the sensor array data, a virtual single-channel signal was also derived for comparison by averaging the channels. The single-channel results achieved a beat-to-beat interval error of 2.2% with a coverage (i.e., percentage of the recording which could be analyzed) of 68.7%. In comparison, the best multichannel results attained a mean error and coverage of 1.0% and 81.0%, respectively. These results present statistically significant improvements of both metrics over the single-channel results (p < 0.05).
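The first fusion method described above (a cepstrum computed from the average of the per-channel spectra) could be sketched roughly as follows. The search band and all numeric details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cepstrum_interval(channels, fs):
    """Estimate the dominant beat-to-beat interval (seconds) from
    multichannel vibration signals: average the per-channel power
    spectra, take the cepstrum, and locate its peak in a plausible
    heartbeat-interval range (0.4-1.5 s, i.e. 40-150 bpm)."""
    spectra = [np.abs(np.fft.rfft(ch)) ** 2 for ch in channels]
    avg_spec = np.mean(spectra, axis=0)          # channel fusion step
    cep = np.abs(np.fft.irfft(np.log(avg_spec + 1e-12)))
    lo, hi = int(0.4 * fs), int(1.5 * fs)        # quefrency search band
    quefrency = lo + np.argmax(cep[lo:hi])
    return quefrency / fs
```

A periodic signal produces a cepstral peak at the fundamental period, so the averaged-spectrum cepstrum rewards intervals that are consistent across channels.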
Estimation of methane emission rate changes using age-defined waste in a landfill site.
Ishii, Kazuei; Furuichi, Toru
2013-09-01
Long term methane emissions from landfill sites are often predicted by first-order decay (FOD) models, in which the default coefficients of the methane generation potential and the methane generation rate given by the Intergovernmental Panel on Climate Change (IPCC) are usually used. However, previous studies have demonstrated the large uncertainty in these coefficients because they are derived from a calibration procedure under ideal steady-state conditions, not actual landfill site conditions. In this study, the coefficients in the FOD model were estimated by a new approach to predict long term methane generation more precisely by considering region-specific conditions. In the new approach, age-defined waste samples, which had been under actual landfill site conditions, were collected in Hokkaido, Japan (a cold region), and the time series data on the age-defined waste samples' methane generation potential were used to estimate the coefficients in the FOD model. The degradation coefficients were 0.0501/y and 0.0621/y for paper and food waste, and the methane generation potentials were 214.4 mL/g-wet waste and 126.7 mL/g-wet waste for paper and food waste, respectively. These coefficients were compared with the default coefficients given by the IPCC. Although the degradation coefficient for food waste was smaller than the default value, the other coefficients were within the range of the default coefficients. With these new coefficients to calculate methane generation, the long term methane emissions from the landfill site were estimated at 1.35×10^4 m^3-CH4, which corresponds to approximately 2.53% of the total carbon dioxide emissions in the city (5.34×10^5 t-CO2/y). Copyright © 2013 Elsevier Ltd. All rights reserved.
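A minimal sketch of the standard FOD form using the study's estimated coefficients for paper waste. The exact model formulation used in the paper (e.g. time-lagged or multi-phase variants) may differ, and the waste mass below is illustrative.

```python
import math

def fod_methane(mass_g, l0_ml_per_g, k_per_y, years):
    """First-order decay (FOD) model: cumulative methane (mL) generated
    after `years` from waste of generation potential L0 (mL/g-wet)
    decaying at rate k (1/y): CH4(t) = m * L0 * (1 - exp(-k t))."""
    return mass_g * l0_ml_per_g * (1.0 - math.exp(-k_per_y * years))

# Study-estimated coefficients for paper waste: k = 0.0501 1/y, L0 = 214.4 mL/g-wet
ch4_10y = fod_methane(1000.0, 214.4, 0.0501, 10.0)  # mL from 1 kg wet paper, 10 years
```

As t grows the cumulative yield approaches m * L0, the total generation potential of the sample.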
ESTIMATION OF FLEXIBILITY OF AN ORGANIZATION ON THE GROUND OF THE CALCULATION OF PROFIT MARGIN RATE
Directory of Open Access Journals (Sweden)
Olga Gennadevna Rybakova
2016-12-01
Full Text Available The article deals with the problem of the flexibility of an organization as the ability to adapt effectively to the external environment. The authors have identified and investigated different approaches to estimating the flexibility of an organization based on flexibility grading, calculation of a general index of flexibility, and calculation of a flexibility ranking score. We have identified the advantages and disadvantages of these approaches. A new method for estimating an organization's flexibility based on the calculation of the relative profit margin has been developed. This method is a multifunctional tool for assessing an enterprise's ability to function in the current context of a difficult and volatile economic environment. It makes it possible to identify negative trends in production and financial figures at an early stage and thus enables the organization's leadership to take steps in advance to avert a crisis in its activity. Keeping the profit margin at the same rate during a forced contraction of output caused by the negative impact of external factors confirms that the organization has adapted to the external environment and is therefore flexible. An organization whose margin rate is declining toward zero can be considered to have an insufficient level of flexibility: it is in the "zone of crisis" and is characterized by the depletion of reserve funds and the reduction of current assets. A loss-making organization is not flexible; the presence of a loss means that the organization shows an evident sign of crisis and may go bankrupt.
Hodille, E. A.; Bernard, E.; Markelj, S.; Mougenot, J.; Becquart, C. S.; Bisson, R.; Grisolia, C.
2017-12-01
Based on macroscopic rate equation simulations of tritium migration in an actively cooled tungsten (W) plasma facing component (PFC) using the code MHIMS (migration of hydrogen isotopes in metals), an estimation has been made of the tritium retention in the ITER W divertor target under a non-uniform exponential distribution of particle fluxes. Two grades of materials are considered to be exposed to tritium ions: an undamaged W and a damaged W exposed to fast fusion neutrons. Due to the strong temperature gradient in the PFC, the impact of the Soret effect on tritium retention is also evaluated for both cases. Thanks to the simulations, the evolutions of the tritium retention and the tritium migration depth are obtained as functions of the implanted flux and the number of cycles. From these evolutions, extrapolation laws are built to estimate the number of cycles needed for tritium to permeate from the implantation zone to the cooled surface and to quantify the corresponding retention of tritium throughout the W PFC.
A method of estimating inspiratory flow rate and volume from an inhaler using acoustic measurements
International Nuclear Information System (INIS)
Holmes, Martin S; D'Arcy, Shona; O'Brien, Ultan; Reilly, Richard B; Seheult, Jansen N; Geraghty, Colm; Costello, Richard W; Crispino O'Connell, Gloria
2013-01-01
Inhalers are devices employed to deliver medication to the airways in the treatment of respiratory diseases such as asthma and chronic obstructive pulmonary disease. A dry powder inhaler (DPI) is a breath actuated inhaler that delivers medication in dry powder form. When used correctly, DPIs improve patients' clinical outcomes. However, some patients are unable to reach the peak inspiratory flow rate (PIFR) necessary to fully extract the medication. Presently, clinicians have no reliable method of objectively measuring PIFR in inhalers. In this study, we propose a novel method of estimating PIFR and also the inspiratory capacity (IC) of patients' inhalations from a commonly used DPI, using acoustic measurements. With a recording device, the acoustic signal of 15 healthy subjects using a DPI over a range of varying PIFR and IC values was obtained. Temporal and spectral signal analysis revealed that the inhalation signal contains sufficient information to estimate PIFR and IC. It was found that the average power (P_ave) in the frequency band 300-600 Hz had the strongest correlation with PIFR (R^2 = 0.9079), while the power in the same frequency band was also highly correlated with IC (R^2 = 0.9245). This study has several clinical implications as it demonstrates the feasibility of using acoustics to objectively monitor inhaler use. (paper)
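The acoustic feature identified above, average power in the 300-600 Hz band, could be computed along these lines. This is an FFT-based sketch; the authors' exact signal processing pipeline is not specified in the abstract.

```python
import numpy as np

def band_power(signal, fs, f_lo=300.0, f_hi=600.0):
    """Average spectral power of an inhalation recording in the
    f_lo-f_hi band (Hz), the feature reported to correlate most
    strongly with PIFR. Uses a simple periodogram estimate."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()
```

In practice this scalar would feed a regression against measured PIFR (or IC) to obtain the calibrated estimator the study describes.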
Total Body Capacitance for Estimating Human Basal Metabolic Rate in an Egyptian Population
M. Abdel-Mageed, Samir; I. Mohamed, Ehab
2016-01-01
Determining basal metabolic rate (BMR) is important for estimating total energy needs in the human being; yet, concerns have been raised regarding the suitability of sex-specific equations based on age and weight for its calculation on an individual or population basis. It has been shown that body cell mass (BCM) is the body compartment responsible for BMR. The objectives of this study were to investigate the relationship between total body capacitance (TBC), which is considered an expression of BCM, and BMR, and to develop a formula for calculating BMR in comparison with widely used equations. Fifty healthy nonsmoking male volunteers [mean age (± SD): 24.93 ± 4.15 years and body mass index (BMI): 25.63 ± 3.59 kg/m2] and an equal number of healthy nonsmoking females matched for age and BMI were recruited for the study. TBC and BMR were measured for all participants using octopolar bioelectric impedance analysis and indirect calorimetry techniques, respectively. A significant regression equation based on the covariates sex, weight, and TBC for estimating BMR was derived (R=0.96, SEE=48.59 kcal, and P<0.0001), which will be useful for nutritional and health status assessment for both individuals and populations. PMID:27127453
Multisensor data fusion for enhanced respiratory rate estimation in thermal videos.
Pereira, Carina B; Xinchi Yu; Blazek, Vladimir; Venema, Boudewijn; Leonhardt, Steffen
2016-08-01
Scientific studies have demonstrated that an atypical respiratory rate (RR) is frequently one of the earliest and major indicators of physiological distress. However, it is also described in the literature as "the neglected vital parameter", mainly due to shortcomings of clinically available monitoring techniques, which require attachment of sensors to the patient's body. The current paper introduces a novel approach that uses multisensor data fusion for enhanced RR estimation in thermal videos. It considers not only the temperature variation around the nostrils and mouth, but also the upward and downward movement of both shoulders. In order to analyze the performance of our approach, two experiments were carried out on five healthy candidates. While during phase A the subjects breathed normally, during phase B they simulated different breathing patterns. Thoracic effort was the gold standard selected to validate our algorithm. Our results show an excellent agreement between infrared thermography (IRT) and ground truth. While in phase A a mean correlation of 0.983 and a root-mean-square error of 0.240 bpm (breaths per minute) were obtained, in phase B they hovered around 0.995 and 0.890 bpm, respectively. In sum, IRT may be a promising clinical alternative to conventional sensors. Additionally, multisensor data fusion contributes to an enhancement of RR estimation and robustness.
Simple estimate of entrainment rate of pollutants from a coastal discharge into the surf zone.
Wong, Simon H C; Monismith, Stephen G; Boehm, Alexandria B
2013-10-15
Microbial pollutants from coastal discharges can increase illness risks for swimmers and cause beach advisories. There is presently no predictive model for estimating the entrainment of pollution from coastal discharges into the surf zone. We present a novel, quantitative framework for estimating surf zone entrainment of pollution at a wave-dominated open beach. Using physical arguments, we identify a dimensionless parameter equal to the quotient of the surf zone width l_sz and the cross-flow length scale of the discharge l_a = M_j^(1/2)/U_sz, where M_j is the discharge's momentum flux and U_sz is a representative alongshore velocity in the surf zone. We conducted numerical modeling of a nonbuoyant discharge at an alongshore-uniform beach with constant slope using a wave-resolving hydrodynamic model. Using results from 144 numerical experiments, we develop an empirical relationship between the surf zone entrainment rate α and l_sz/l_a. The empirical relationship can reasonably explain seven measurements of surf zone entrainment at three diverse coastal discharges. This predictive relationship can be a useful tool in coastal water quality management and can be used to develop predictive beach water quality models.
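The dimensionless parameter can be computed directly from its definition; the numeric values below are illustrative, not taken from the study.

```python
import math

def crossflow_length(momentum_flux, u_sz):
    """Cross-flow length scale of a coastal discharge,
    l_a = M_j**0.5 / U_sz, with symbols as defined in the abstract:
    M_j is the discharge momentum flux (m^4/s^2) and U_sz a
    representative alongshore surf zone velocity (m/s)."""
    return math.sqrt(momentum_flux) / u_sz

# Illustrative inputs: M_j = 0.04 m^4/s^2, U_sz = 0.2 m/s, surf zone width 50 m
l_sz = 50.0
l_a = crossflow_length(0.04, 0.2)
ratio = l_sz / l_a  # the dimensionless parameter l_sz / l_a
```

The empirical relation between the entrainment rate α and this ratio is fitted from the 144 numerical experiments and is not reproduced here.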