Methodology for estimation of time-dependent surface heat flux due to cryogen spray cooling.
Tunnell, James W; Torres, Jorge H; Anvari, Bahman
2002-01-01
Cryogen spray cooling (CSC) is an effective technique to protect the epidermis during cutaneous laser therapies. Spraying a cryogen onto the skin surface creates a time-varying heat flux, effectively cooling the skin during and following the cryogen spurt. In previous studies mathematical models were developed to predict the human skin temperature profiles during the cryogen spraying time. However, no studies have accounted for the additional cooling due to residual cryogen left on the skin surface following the spurt termination. We formulate and solve an inverse heat conduction (IHC) problem to predict the time-varying surface heat flux both during and following a cryogen spurt. The IHC formulation uses measured temperature profiles from within a medium to estimate the surface heat flux. We implement a one-dimensional sequential function specification method (SFSM) to estimate the surface heat flux from internal temperatures measured within an in vitro model in response to a cryogen spurt. Solution accuracy and experimental errors are examined using simulated temperature data. Heat flux following spurt termination appears substantial; however, it is less than that during the spraying time. The estimated time-varying heat flux can subsequently be used in forward heat conduction models to estimate temperature profiles in skin during and following a cryogen spurt and predict appropriate timing for onset of the laser pulse.
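The estimated surface flux ultimately feeds a forward heat conduction model like the one described above. A minimal sketch of such a one-dimensional forward model (explicit finite differences; the thermal properties, spurt duration and flux magnitude are illustrative assumptions, not values from the paper):

```python
import numpy as np

def forward_conduction(q_surface, t_end, dx=1e-4, nx=50,
                       k=0.5, rho_c=4.0e6, T0=30.0):
    """Explicit 1-D conduction with a prescribed time-varying surface flux.

    q_surface : function of time t returning the flux into the surface
                in W/m^2 (negative = cooling).
    k, rho_c  : thermal conductivity and volumetric heat capacity
                (rough skin-like values, assumed for illustration).
    """
    alpha = k / rho_c
    dt = 0.25 * dx**2 / alpha           # well under the FTCS stability limit
    T = np.full(nx, T0)
    t = 0.0
    while t < t_end:
        Tn = T.copy()
        # interior nodes: standard FTCS update
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
        # surface node: energy balance on a half cell of width dx/2
        T[0] = Tn[0] + 2 * dt / (rho_c * dx) * (q_surface(t) + k * (Tn[1] - Tn[0]) / dx)
        # deep boundary held at the initial temperature
        T[-1] = T0
        t += dt
    return T

# hypothetical 100 ms cryogen spurt with a constant -300 kW/m^2 cooling flux
T = forward_conduction(lambda t: -3.0e5 if t < 0.1 else 0.0, t_end=0.15)
```

With a cooling (negative) flux, the surface node drops sharply while deep nodes stay near the initial temperature; in practice the flux function would be the one estimated by the inverse method rather than a constant.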
Estimate of the global-scale joule heating rates in the thermosphere due to time mean currents
International Nuclear Information System (INIS)
Roble, R.G.; Matsushita, S.
1975-01-01
An estimate of the global-scale joule heating rates in the thermosphere is made based on derived global equivalent overhead electric current systems in the dynamo region during geomagnetically quiet and disturbed periods. The equivalent total electric field distribution is calculated from Ohm's law. The global-scale joule heating rates are calculated for various monthly average periods in 1965. The calculated joule heating rates maximize at high latitudes in the early evening and postmidnight sectors. During geomagnetically quiet times the daytime joule heating rates are considerably lower than heating by solar EUV radiation. However, during geomagnetically disturbed periods the estimated joule heating rates increase by an order of magnitude and can locally exceed the solar EUV heating rates. The results show that joule heating is an important and at times the dominant energy source at high latitudes. However, the global mean joule heating rates calculated near solar minimum are generally small compared to the global mean solar EUV heating rates. (auth)
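The heating rate follows directly from Ohm's law: with current density J = σE, the volumetric Joule heating rate is q = J·E = σE². A toy illustration (the conductivity and field values are invented, chosen only to show the quadratic scaling):

```python
def joule_heating_rate(sigma, E):
    """Volumetric Joule heating rate q = sigma * E**2 in W/m^3,
    for conductivity sigma (S/m) and electric field E (V/m)."""
    return sigma * E**2

# hypothetical dynamo-region values, for illustration only
q_quiet = joule_heating_rate(1e-4, 5e-3)      # quiet-time field
q_disturbed = joule_heating_rate(1e-4, 5e-2)  # field x10 during a disturbance
```

Because heating scales with E², a tenfold field increase during disturbed periods raises the local rate by a factor of 100, consistent with the order-of-magnitude increases described above.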
Litt, Jonathan S.; Simo, Donald L.
2007-01-01
This paper presents a preliminary demonstration of an automated health assessment tool, capable of real-time on-board operation using existing engine control hardware. The tool allows operators to discern how rapidly individual turboshaft engines are degrading. As the compressor erodes, performance is lost, and with it the ability to generate power. Thus, such a tool would provide an instant assessment of the engine's fitness to perform a mission, and would help to pinpoint any abnormal wear or performance anomalies before they became serious, thereby decreasing uncertainty and enabling improved maintenance scheduling. The research described in the paper utilized test stand data from a T700-GE-401 turboshaft engine that underwent sand-ingestion testing to scale a model-based compressor efficiency degradation estimation algorithm. This algorithm was then applied to real-time Health Usage and Monitoring System (HUMS) data from a T700-GE-701C to track compressor efficiency on-line. The approach uses an optimal estimator called a Kalman filter. The filter is designed to estimate the compressor efficiency using only data from the engine's sensors as input.
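The scalar case of such a filter can be sketched as follows: efficiency is modelled as a slowly drifting state observed through noisy sensor-derived readings. All numbers here are illustrative assumptions, not values from the T700 work:

```python
import numpy as np

def kalman_track(z, q=1e-5, r=1e-2, x0=1.0, p0=1.0):
    """Scalar Kalman filter: random-walk state (efficiency) observed
    through direct noisy measurements z; q and r are the process and
    measurement noise variances."""
    x, p = x0, p0
    est = []
    for zk in z:
        p = p + q                  # predict: state modelled as a random walk
        k_gain = p / (p + r)       # Kalman gain
        x = x + k_gain * (zk - x)  # update with the measurement residual
        p = (1 - k_gain) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
true_eff = np.linspace(0.85, 0.80, 500)    # slow hypothetical degradation
z = true_eff + rng.normal(0, 0.1, 500)     # noisy "sensor" observations
est = kalman_track(z)
```

The filtered trace follows the degradation trend far more tightly than the raw readings, which is what makes on-line efficiency tracking from sensor data feasible.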
Paci, Eugenio; Miccinesi, Guido; Puliti, Donella; Baldazzi, Paola; De Lisi, Vincenzo; Falcini, Fabio; Cirilli, Claudia; Ferretti, Stefano; Mangone, Lucia; Finarelli, Alba Carola; Rosso, Stefano; Segnan, Nereo; Stracci, Fabrizio; Traina, Adele; Tumino, Rosario; Zorzi, Manuel
2006-01-01
Introduction Excess of incidence rates is the expected consequence of service screening. The aim of this paper is to estimate the quota attributable to overdiagnosis in the breast cancer screening programmes in Northern and Central Italy. Methods All patients with breast cancer diagnosed between 50 and 74 years who were resident in screening areas in the six years before and five years after the start of the screening programme were included. We calculated a corrected-for-lead-time number of observed cases for each calendar year. The number of observed incident cases was reduced by the number of screen-detected cases in that year and incremented by the estimated number of screen-detected cases that would have arisen clinically in that year. Results In total we included 13,519 and 13,999 breast cancer cases diagnosed in the pre-screening and screening years, respectively. In total, the excess ratio of observed to predicted in situ and invasive cases was 36.2%. After correction for lead time the excess ratio was 4.6% (95% confidence interval 2 to 7%) and for invasive cases only it was 3.2% (95% confidence interval 1 to 6%). Conclusion The remaining excess of cancers after individual correction for lead time was lower than 5%. PMID:17147789
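The lead-time correction described above reduces to simple bookkeeping per calendar year: observed cases, minus screen-detected cases, plus the screen-detected cases estimated to have surfaced clinically in that year. A toy sketch with invented counts (not the study's data):

```python
def corrected_cases(observed, screen_detected, would_surface):
    """Lead-time-corrected case count for one calendar year:
    remove screen-detected cases, add back those estimated to
    have arisen clinically in the same year."""
    return observed - screen_detected + would_surface

def excess_ratio(corrected_total, predicted_total):
    """Excess of corrected observed incidence over the predicted
    background incidence, as a fraction."""
    return corrected_total / predicted_total - 1.0

# invented single-year example
corrected = corrected_cases(observed=1200, screen_detected=400, would_surface=250)
ratio = excess_ratio(corrected, predicted_total=1000)   # 5% residual excess
```

In the study this residual excess, summed over years, is what is interpreted as overdiagnosis.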
Beckmann, Kerri; Duffy, Stephen W; Lynch, John; Hiller, Janet; Farshid, Gelareh; Roder, David
2015-09-01
To estimate over-diagnosis due to population-based mammography screening using a lead time adjustment approach, with lead time measures based on symptomatic cancers only. Women aged 40-84 in 1989-2009 in South Australia eligible for mammography screening. Numbers of observed and expected breast cancer cases were compared, after adjustment for lead time. Lead time effects were modelled using age-specific estimates of lead time (derived from interval cancer rates and predicted background incidence, using maximum likelihood methods) and screening sensitivity, projected background breast cancer incidence rates (in the absence of screening), and proportions screened, by age and calendar year. Lead time estimates were 12, 26, 43 and 53 months, for women aged 40-49, 50-59, 60-69 and 70-79 respectively. Background incidence rates were estimated to have increased by 0.9% and 1.2% per year for invasive and all breast cancer. Over-diagnosis among women aged 40-84 was estimated at 7.9% (0.1-12.0%) for invasive cases and 12.0% (5.7-15.4%) when including ductal carcinoma in-situ (DCIS). We estimated 8% over-diagnosis for invasive breast cancer and 12% inclusive of DCIS cancers due to mammography screening among women aged 40-84. These estimates may overstate the extent of over-diagnosis if the increasing prevalence of breast cancer risk factors has led to higher background incidence than projected. © The Author(s) 2015.
Energy Technology Data Exchange (ETDEWEB)
Nishiwaki, Yasushi [Nuclear Reactor Laboratory, Tokyo Institute of Technology, Tokyo (Japan); Nuclear Reactor Laboratory, Kinki University, Fuse City, Osaka Prefecture (Japan)
1961-11-25
Since it was observed in the spring of 1954 that a considerable amount of fission products mixture fell with the rain following a large-scale nuclear detonation conducted in the Bikini area in the South Pacific by the United States Atomic Energy Commission, it has become important, especially from the health physics standpoint, to estimate the effective average age of the fission products mixture after the nuclear detonation. If the energy transferred to the atmospheric air at the time of nuclear detonation is large enough (of the order of megatons at a distance of about 4000 km), the probable time and test site of the nuclear detonation may be estimated with considerable accuracy from the records of the pressure wave caused by the detonation in the microbarographs at different meteorological stations. Even in this case, in order to estimate the possible correlation between the artificial radioactivity observed in the rain and the probable detonation, it is oftentimes desirable to estimate the effective age of the fission products mixture in the rain from the decay measurement of the radioactivity.
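One standard route from decay measurements to an age estimate is the empirical Way-Wigner law, under which the gross activity of a fission-product mixture decays roughly as t^(-1.2). Given two activity measurements a known interval apart, the age at the first measurement follows algebraically. This is a generic sketch of that idea, not the author's specific procedure:

```python
def fission_product_age(a1, a2, dt_hours, exponent=1.2):
    """Age in hours since detonation at the first measurement, assuming
    gross activity A(t) ~ t**(-exponent) (Way-Wigner approximation).

    a1, a2 : activities measured dt_hours apart (a1 is the earlier count).
    """
    # a1/a2 = ((t0 + dt)/t0)**exponent  =>  t0 = dt / (r - 1)
    r = (a1 / a2) ** (1.0 / exponent)
    return dt_hours / (r - 1.0)

# synthetic check: a sample 48 h old at the first count, recounted 24 h later
t0 = 48.0
a1, a2 = t0 ** -1.2, (t0 + 24.0) ** -1.2
age = fission_product_age(a1, a2, 24.0)
```

With synthetic data the 48-hour age is recovered exactly; with real counts, deviations from the t^(-1.2) law limit the accuracy.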
Wyss, M.
2012-12-01
Estimating human losses worldwide within less than an hour requires assumptions and simplifications. Earthquakes for which losses are accurately recorded after the event provide clues concerning the influence of error sources. If final observations and real-time estimates differ significantly, the data and methods used to calculate losses may be modified or calibrated. In the case of the M5.9 earthquake in the Emilia Romagna region on May 20th, the real-time epicenter estimates of the GFZ and the USGS differed from the ultimate location given by the INGV by 6 and 9 km, respectively. Fatalities estimated within an hour of the earthquake by the loss-estimating tool QLARM, based on these two epicenters, numbered 20 and 31, whereas 7 were reported in the end, and 12 would have been calculated if the ultimate epicenter released by INGV had been used. These four numbers, being small, do not differ statistically; thus, the epicenter errors in this case did not appreciably influence the results. The QUEST team of INGV has reported intensities of I ≥ 5 at 40 locations with accuracies of 0.5 units, and QLARM estimated I > 4.5 at 224 locations. The differences between the observed and calculated values at the 23 common locations show that the calculations in the 17 instances with significant differences were too high on average by one unit. By assuming higher-than-average attenuation within standard bounds for worldwide loss estimates, the calculated intensities model the observed ones better: for 57% of the locations, the difference was not significant; for the others, the calculated intensities were still somewhat higher than the observed ones. Using a generic attenuation law with higher-than-average attenuation, but not tailored to the region, the number of estimated fatalities becomes 12, compared to 7 reported. Thus, attenuation in this case decreased the discrepancy between estimated and reported deaths by approximately a factor of two. The source of the fatalities is
Estimating Global Burden of Disease due to congenital anomaly
DEFF Research Database (Denmark)
Boyle, Breidge; Addor, Marie-Claude; Arriola, Larraitz
2018-01-01
OBJECTIVE: To validate the estimates of Global Burden of Disease (GBD) due to congenital anomaly for Europe by comparing infant mortality data collected by EUROCAT registries with the WHO Mortality Database, and by assessing the significance of stillbirths and terminations of pregnancy for fetal...... the burden of disease due to congenital anomaly, and thus declining YLL over time may obscure lack of progress in primary, secondary and tertiary prevention....
Estimating mortality due to cigarette smoking
DEFF Research Database (Denmark)
Brønnum-Hansen, H; Juel, K
2000-01-01
We estimated the mortality from various diseases caused by cigarette smoking using two methods and compared the results. In one method, the "Prevent" model is used to simulate the effect on mortality of the prevalence of cigarette smoking derived retrospectively. The other method, suggested by R....... Peto et al (Lancet 1992;339:1268-1278), requires data on mortality from lung cancer among people who have never smoked and among smokers, but it does not require data on the prevalence of smoking. In the Prevent model, 33% of deaths among men and 23% of those among women in 1993 from lung cancer...... are small and appear to be explicable. The Prevent model can be used for more general scenarios of effective health promotion, but it requires more data than the Peto et al method, which can be used only to estimate mortality related to smoking....
Travel time estimation using Bluetooth.
2015-06-01
The objective of this study was to investigate the feasibility of using a Bluetooth Probe Detection System (BPDS) to estimate travel time in an urban area. Specifically, the study investigated the possibility of measuring overall congestion, the ...
Hygienic estimation of population doses due to stratospheric fallout
International Nuclear Information System (INIS)
Marej, A.N.; Knizhnikov, V.A.
1980-01-01
A hygienic estimation of external and internal irradiation of the USSR population due to stratospheric global fallout of fission products after nuclear explosions and weapon tests is carried out. Numerical values which characterize the dose-effect dependence in the case of irradiation of marrow, bone tissue and the whole body are presented. Values of mean individual and population doses of irradiation due to global fallout within 1963-1975, types of injury and the number of fatal cases due to malignant neoplasms are presented. The conclusion is drawn that the contribution of radiation due to stratospheric fallout to mortality from malignant neoplasms is insignificant. Annual radiation doses conditioned by global fallout within the period 1963-1975 constitute only several percent of the dose from the natural radiation background. Results of an estimation of the genetic consequences of irradiation due to atmospheric fallout are presented.
Sensitivity of Process Design due to Uncertainties in Property Estimates
DEFF Research Database (Denmark)
Hukkerikar, Amol; Jones, Mark Nicholas; Sarup, Bent
2012-01-01
The objective of this paper is to present a systematic methodology for performing analysis of sensitivity of process design due to uncertainties in property estimates. The methodology provides the following results: a) list of properties with critical importance on design; b) acceptable levels of...... in chemical processes. Among others vapour pressure accuracy for azeotropic mixtures is critical and needs to be measured or estimated with a ±0.25% accuracy to satisfy acceptable safety levels in design....
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. The method has been validated in experiments, in which it provided much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to significant frequencies.
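The underlying delay estimate is the peak of the cross-correlation between the two sensor signals, and the leak position then follows from the wave speed and sensor spacing. A minimal un-windowed sketch (the paper's contribution, the maximum likelihood window, is not reproduced here; the sampling rate, wave speed and spacing below are invented):

```python
import numpy as np

def delay_samples(s1, s2):
    """Delay of s2 relative to s1, in samples, taken from the
    cross-correlation peak."""
    c = np.correlate(s2, s1, mode="full")
    return int(np.argmax(c)) - (len(s1) - 1)

def leak_position(tau, wave_speed, sensor_spacing):
    """Distance of the leak from sensor 1, where tau is the arrival-time
    difference t2 - t1 and tau = (L - 2x) / c for a leak at distance x."""
    return 0.5 * (sensor_spacing - wave_speed * tau)

rng = np.random.default_rng(1)
n = rng.normal(size=2000)                    # broadband "leak noise"
d = 37                                       # true delay in samples
s1 = n
s2 = np.concatenate([np.zeros(d), n[:-d]])   # delayed copy at sensor 2
tau = delay_samples(s1, s2)                  # -> 37
```

A zero delay places the leak at the midpoint between sensors; the ML window in the paper sharpens this correlation peak by weighting significant frequencies.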
HIV due to female sex work: regional and global estimates.
Directory of Open Access Journals (Sweden)
Annette Prüss-Ustün
Female sex workers (FSWs) are at high risk of HIV infection. Our objective was to determine the proportion of HIV prevalence in the general female adult population that is attributable to the occupational exposure of female sex work, due to unprotected sexual intercourse. Population attributable fractions of HIV prevalence due to female sex work were estimated for 2011. A systematic search was conducted to retrieve required input data from available sources. Data gaps of HIV prevalence in FSWs for 2011 were filled using multilevel modeling and multivariate linear regression. The fraction of HIV attributable to female sex work was estimated as the excess HIV burden in FSWs, deducting the HIV burden in FSWs due to injecting drug use. An estimated fifteen percent of HIV in the general female adult population is attributable to (unsafe) female sex work. The region with the highest attributable fraction is Sub-Saharan Africa, but the burden is also substantial for the Caribbean, Latin America and South and Southeast Asia. We estimate that 106,000 deaths from HIV are a result of female sex work globally, 98,000 of which occur in Sub-Saharan Africa. If HIV prevalence in other population groups originating from sexual contact with FSWs had been considered, the overall attributable burden would probably be much larger. Female sex work is an important contributor to HIV transmission and the global HIV burden. Effective HIV prevention measures exist and have been successfully targeted at key populations in many settings. These must be scaled up. FSWs suffer from a high HIV burden and are a crucial core population for HIV transmission. Surveillance, prevention and treatment of HIV in FSWs should benefit both this often neglected vulnerable group and the general population.
HIV Due to Female Sex Work: Regional and Global Estimates
Prüss-Ustün, Annette; Wolf, Jennyfer; Driscoll, Tim; Degenhardt, Louisa; Neira, Maria; Calleja, Jesus Maria Garcia
2013-01-01
Introduction Female sex workers (FSWs) are at high risk of HIV infection. Our objective was to determine the proportion of HIV prevalence in the general female adult population that is attributable to the occupational exposure of female sex work, due to unprotected sexual intercourse. Methods Population attributable fractions of HIV prevalence due to female sex work were estimated for 2011. A systematic search was conducted to retrieve required input data from available sources. Data gaps of HIV prevalence in FSWs for 2011 were filled using multilevel modeling and multivariate linear regression. The fraction of HIV attributable to female sex work was estimated as the excess HIV burden in FSWs deducting the HIV burden in FSWs due to injecting drug use. Results An estimated fifteen percent of HIV in the general female adult population is attributable to (unsafe) female sex work. The region with the highest attributable fraction is Sub Saharan Africa, but the burden is also substantial for the Caribbean, Latin America and South and Southeast Asia. We estimate 106,000 deaths from HIV are a result of female sex work globally, 98,000 of which occur in Sub-Saharan Africa. If HIV prevalence in other population groups originating from sexual contact with FSWs had been considered, the overall attributable burden would probably be much larger. Discussion Female sex work is an important contributor to HIV transmission and the global HIV burden. Effective HIV prevention measures exist and have been successfully targeted at key populations in many settings. These must be scaled up. Conclusion FSWs suffer from high HIV burden and are a crucial core population for HIV transmission. Surveillance, prevention and treatment of HIV in FSWs should benefit both this often neglected vulnerable group and the general population. PMID:23717432
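The attributable fraction described above amounts to simple proportions once prevalences are in hand: the excess burden among FSWs, less the part attributed to injecting drug use, divided by the total burden in the female adult population. A toy numeric sketch (all figures invented, not the study's estimates):

```python
def paf_female_sex_work(n_fsw, prev_fsw, n_women, prev_women, idu_share=0.0):
    """Population attributable fraction of female-adult HIV prevalence
    due to sex work.

    idu_share: fraction of the FSW excess burden attributed to injecting
    drug use rather than to sex work itself."""
    excess_cases = n_fsw * (prev_fsw - prev_women)   # burden beyond background
    attributable = excess_cases * (1.0 - idu_share)  # deduct the IDU portion
    total_cases = n_women * prev_women
    return attributable / total_cases

# invented example: 1% of women are FSWs with 20x the background prevalence,
# 10% of the excess attributed to injecting drug use
paf = paf_female_sex_work(n_fsw=100_000, prev_fsw=0.20,
                          n_women=10_000_000, prev_women=0.01,
                          idu_share=0.1)
```

The study's real computation additionally fills prevalence gaps by regression and aggregates by region, but the final fraction has this form.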
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
Estimates of expansion time scales
International Nuclear Information System (INIS)
Jones, E.M.
1979-01-01
Monte Carlo simulations of the expansion of a spacefaring civilization show that descendants of that civilization should be found near virtually every useful star in the Galaxy in a time much less than the current age of the Galaxy. Only extreme assumptions about local population growth rates, emigration rates, or ship ranges can slow or halt an expansion. The apparent absence of extraterrestrials from the solar system suggests that no such civilization has arisen in the Galaxy. 1 figure
Fatigue life estimation on coke drum due to cycle optimization
Siahaan, Andrey Stephan; Ambarita, Himsar; Kawai, Hideki; Daimaruya, Masashi
2018-04-01
In the last decade, due to the increasing demand for petroleum products, the necessity of converting heavy oil has been increasing. Thus, demand for installing coke drums worldwide will increase. The coke drum undergoes cyclic high temperature and sudden cooling, but is in fact not designed to withstand that kind of cycle; thus the operational life of a coke drum is much shorter in comparison to other equipment in an oil refinery. Various factors matter in improving reliability and minimizing downtime, and it is found that cycle optimization with respect to cycle time, temperature, and pressure plays an important role. From this research it is found that the fatigue life under the short cycle decreases by half compared to the normal cycle. It is also found that in the preheating stage the stress peak far exceeds the yield strength of the coke drum material, causing plastic deformation. This happens because of the temperature leap in the preheating stage, which causes thermal shock in the upper part of the skirt of the coke drum.
Parameter Estimation in Continuous Time Domain
Directory of Open Access Journals (Sweden)
Gabriela M. ATANASIU
2016-12-01
This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in a highly seismic risk area are considered, for which the structural parameters for mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
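For a single-degree-of-freedom idealization, continuous-time estimation reduces to fitting m·a(t) + c·v(t) + k·x(t) = f(t) by least squares over sampled records. A synthetic sketch (the parameter values and excitation are invented, not the bridge data):

```python
import numpy as np

# true parameters of a hypothetical pile (SI units)
m_true, c_true, k_true = 2.0e4, 1.5e3, 8.0e6

# two-frequency excitation so that displacement and acceleration
# are not collinear regressor columns
t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2.0 * t) + 0.5 * np.sin(5.0 * t)              # displacement
v = 2.0 * np.cos(2.0 * t) + 2.5 * np.cos(5.0 * t)        # velocity
a = -4.0 * np.sin(2.0 * t) - 12.5 * np.sin(5.0 * t)      # acceleration
f = m_true * a + c_true * v + k_true * x                 # noise-free "force"

# least-squares fit of f = m*a + c*v + k*x
A = np.column_stack([a, v, x])
(m_est, c_est, k_est), *_ = np.linalg.lstsq(A, f, rcond=None)
```

With noise-free synthetic records the parameters are recovered essentially exactly; with measured data, noise and the quality of the velocity/acceleration records govern the accuracy, which is where validation against measurements comes in.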
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large-magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product of the seismic source tensor and the medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate
Global Population Density Grid Time Series Estimates
National Aeronautics and Space Administration — Global Population Density Grid Time Series Estimates provide a back-cast time series of population density grids based on the year 2000 population grid from SEDAC's...
Dynamic travel time estimation using regression trees.
2008-10-01
This report presents a methodology for travel time estimation by using regression trees. The dissemination of travel time information has become crucial for effective traffic management, especially under congested road conditions. In the absence of c...
Accuracy of prehospital transport time estimation.
Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W
2014-01-01
Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to the system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
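The simplest of the three estimators, linear arc distance, can be sketched with the haversine great-circle formula plus an assumed mean road speed (the speed value below is an illustrative assumption, not the study's calibration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle ("linear arc") distance between two points in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    h = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(h))

def transport_time_min(lat1, lon1, lat2, lon2, speed_kmh=48.0):
    """Naive transport-time estimate at an assumed mean speed."""
    return haversine_km(lat1, lon1, lat2, lon2) / speed_kmh * 60.0

# sanity check: one degree of longitude on the equator is ~111.2 km
d = haversine_km(0.0, 0.0, 0.0, 1.0)
```

Ignoring the road network is precisely why this method had the largest mean error in the comparison above; route-based services trade simplicity for accuracy.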
Accurate estimation of indoor travel times
DEFF Research Database (Denmark)
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose...... the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime...... allows to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
Estimation of Concrete Corrosion Due to Attack of Chloride Salt
Directory of Open Access Journals (Sweden)
V. V. Babitski
2005-01-01
The paper provides results of experimental research on concrete under the action of concentrated chloride salt solutions. General principles of forecasting concrete corrosion resistance under physical salt corrosion are given in the paper. Analytical dependences for quantitative estimation of corroded concrete have been obtained.
Estimated Incident Cost Savings in Shipping Due to Inspections
S. Knapp (Sabine); G.E. Bijwaard (Govert); C. Heij (Christiaan)
2010-01-01
The effectiveness of safety inspections has been analysed from various angles, but until now, relatively little attention has been given to translating risk reduction into incident cost savings. This paper quantifies estimated cost savings based on port state control inspections and
Response of orthotropic micropolar elastic medium due to time ...
Indian Academy of Sciences (India)
Dynamic response of anisotropic continuum has received the attention of ... linear theory of micropolar elasticity and bending of orthotropic micropolar ... medium due to time-harmonic concentrated load, the continuum is divided into two half-spaces ...
Time Estimation Deficits in Childhood Mathematics Difficulties
Hurks, Petra P. M.; van Loosbroek, Erik
2014-01-01
Time perception has not been comprehensively examined in mathematics difficulties (MD). Therefore, verbal time estimation, production, and reproduction were tested in 13 individuals with MD and 16 healthy controls, matched for age, sex, and intellectual skills. Individuals with MD performed comparably to controls in time reproduction, but showed a…
Estimated incident cost savings in shipping due to inspections.
Knapp, Sabine; Bijwaard, Govert; Heij, Christiaan
2011-07-01
The effectiveness of safety inspections of ships has been analysed from various angles, but until now, relatively little attention has been given to translate risk reduction into incident cost savings. This paper provides a monetary quantification of the cost savings that can be attributed to port state control inspections and industry vetting inspections. The dataset consists of more than half a million ship arrivals between 2002 and 2007 and contains inspections of port state authorities in the USA and Australia and of three industry vetting regimes. The effect of inspections in reducing the risk of total loss accidents is estimated by means of duration models, in terms of the gained probability of survival. The monetary benefit of port state control inspections is estimated to range, on average, from about 70 to 190 thousand dollars, with median values ranging from about 20 to 45 thousand dollars. Industry inspections have even higher benefits, especially for tankers. The savings are in general higher for older and larger vessels, and also for vessels with undefined flag and unknown classification society. As inspection costs are relatively low in comparison to potential cost savings, the results underline the importance of determining ships with relatively high risk of total loss. Copyright © 2011 Elsevier Ltd. All rights reserved.
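The monetary quantification described above multiplies the inspection-induced gain in total-loss survival probability by the cost of a total loss. A minimal sketch of that arithmetic (the probabilities and the vessel-loss cost are invented placeholders, not the paper's duration-model estimates):

```python
def inspection_benefit(p_loss_uninspected, p_loss_inspected, total_loss_cost):
    """Expected saving from one inspection: the reduction in total-loss
    probability times the monetary cost of a total loss."""
    risk_reduction = p_loss_uninspected - p_loss_inspected
    return risk_reduction * total_loss_cost

# e.g. a 0.1 percentage-point risk reduction on a $50M vessel loss
saving = inspection_benefit(0.002, 0.001, 50_000_000)
```

Because inspection costs are small relative to such figures, even modest risk reductions on high-risk vessels yield a positive expected net benefit.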
Estimating Formwork Striking Time for Concrete Mixes
African Journals Online (AJOL)
In this study, we estimated the time for strength development in concrete cured up to 56 days. ... Regression analysis using MS Excel 2016 software was performed on the ...
Highway travel time estimation with data fusion
Soriguera Martí, Francesc
2016-01-01
This monograph presents a simple, innovative approach for the measurement and short-term prediction of highway travel times based on the fusion of inductive loop detector and toll ticket data. The methodology is generic and not technologically captive, allowing it to be easily generalized for other equivalent types of data. The book shows how Bayesian analysis can be used to obtain fused estimates that are more reliable than the original inputs, overcoming some of the drawbacks of travel-time estimations based on unique data sources. The developed methodology adds value and obtains the maximum (in terms of travel time estimation) from the available data, without recurrent and costly requirements for additional data. The application of the algorithms to empirical testing in the AP-7 toll highway in Barcelona proves that it is possible to develop an accurate real-time, travel-time information system on closed-toll highways with the existing surveillance equipment, suggesting that highway operators might provide...
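A common way to fuse two independent travel-time estimates, in the spirit of the Bayesian analysis described above, is inverse-variance weighting, under which the fused estimate is always at least as precise as either input. A sketch under that assumption (the numbers are illustrative, not AP-7 data):

```python
def fuse_estimates(t1, var1, t2, var2):
    """Precision-weighted (inverse-variance) fusion of two independent
    travel-time estimates; returns the fused time and its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * t1 + w2 * t2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# loop-detector estimate 12 min (variance 4), toll-ticket estimate 10 min (variance 1)
t, v = fuse_estimates(12.0, 4.0, 10.0, 1.0)
```

The fused variance (0.8 here) is below both inputs, which is the sense in which fusion yields estimates "more reliable than the original inputs".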
Time Delay Estimation Algorithms for Echo Cancellation
Directory of Open Access Journals (Sweden)
Kirill Sakhnov
2011-01-01
The following case study describes how to eliminate echo in a VoIP network using delay estimation algorithms. It is known that echo with long transmission delays becomes more noticeable to users. Thus, time delay estimation, as a part of echo cancellation, is an important topic during transmission of voice signals over packet-switching telecommunication systems. An echo delay problem associated with IP-based transport networks is discussed in the following text. The paper introduces a comparative study of time delay estimation algorithms used for estimation of the true time delay between two speech signals. Experimental results of MATLAB simulations that describe the performance of several methods based on cross-correlation, normalized cross-correlation and generalized cross-correlation are also presented in the paper.
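A basic cross-correlation delay estimator of the kind compared in the paper can be sketched as follows; the toy signals stand in for sampled speech, and a real echo canceller would of course operate on much longer frames:

```python
def estimate_delay(x, y, max_lag):
    """Estimate the delay of y relative to x by maximising the
    cross-correlation over integer lags in [0, max_lag]."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(max_lag + 1):
        corr = sum(x[n] * y[n + lag] for n in range(len(x) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# a toy "speech" burst and its echo delayed by 3 samples
x = [0.0, 1.0, 0.5, -0.3, 0.0, 0.0, 0.0, 0.0]
y = [0.0, 0.0, 0.0, 0.0, 1.0, 0.5, -0.3, 0.0]
delay = estimate_delay(x, y, 5)
```

Normalized and generalized cross-correlation variants differ only in how `corr` is scaled or pre-filtered before the maximum is taken.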
Accuracy of Travel Time Estimation using Bluetooth Technology
DEFF Research Database (Denmark)
Araghi, Bahar Namaki; Skoven Pedersen, Kristian; Tørholm Christensen, Lars
2012-01-01
Short-term travel time information plays a critical role in Advanced Traffic Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS). In this context, the need for accurate and reliable travel time information sources is becoming increasingly important. Bluetooth Technology (BT) has been used as a relatively new cost-effective source of travel time estimation. However, due to the low sampling rate of BT compared to other sensor technologies, the existence of outliers may significantly affect the accuracy and reliability of the travel time estimates obtained using BT. In this study, the concept of outliers and corresponding impacts on travel time accuracy are discussed. Four different estimators, named Min-BT, Max-BT, Med-BT and Avg-BT, with different outlier detection logic are presented in this paper. These methods are used to estimate travel times using a BT-derived dataset. In order...
Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A
2017-09-01
Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (r_S > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
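The complete-history exposure described above is a duration-weighted average over addresses. A minimal sketch (the day counts, concentrations, and function name are invented, not study data):

```python
def pregnancy_exposure(residence_history):
    """Time-weighted average PM2.5 exposure over a pregnancy, given
    (days_at_address, annual_avg_pm25) pairs from a residential history."""
    total_days = sum(days for days, _ in residence_history)
    return sum(days * conc for days, conc in residence_history) / total_days

# mother moved once: 180 days at 9.0 ug/m3, then 90 days at 12.0 ug/m3
full_history = pregnancy_exposure([(180, 9.0), (90, 12.0)])
birth_only = 12.0  # the proxy: concentration at the delivery address alone
```

The gap between `full_history` and `birth_only` is the per-child measurement error the study quantifies.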
Induced voltage due to time-dependent magnetisation textures
International Nuclear Information System (INIS)
Kudtarkar, Santosh Kumar; Dhadwal, Renu
2010-01-01
We determine the induced voltage generated by spatial and temporal magnetisation textures (inhomogeneities) in metallic ferromagnets due to the spin diffusion of non-equilibrium electrons. Using time dependent semi-classical theory as formulated in Zhang and Li and the drift-diffusion model of transport it is shown that the voltage generated depends critically on the difference in the diffusion constants of up and down spins. Including spin relaxation results in a crucial contribution to the induced voltage. We also show that the presence of magnetisation textures results in the modification of the conductivity of the system. As an illustration, we calculate the voltage generated due to a time dependent field driven helimagnet by solving the Landau-Lifshitz equation with Gilbert damping and explicitly calculate the dependence on the relaxation and damping parameters.
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
SU-E-T-95: Delivery Time Estimator
International Nuclear Information System (INIS)
Kantor, M; Balter, P; Ohrt, J
2014-01-01
Purpose: The development and testing of a tool for the inclusion of delivery time as a parameter in plan optimization. Methods: We developed an algorithm that estimates the time required for the machine and personnel movements required to deliver a treatment plan on a linear accelerator. We included dose rate, leaf motion, collimator motion, gantry motion, and couch motions (including time to enter the room to rotate the couch safely). Vault-specific parameters to account for time to enter and perform couch angle adjustments were also included. This algorithm works for static, step-and-shoot IMRT, and VMAT photon beams and for fixed electron beams. It was implemented as a script in our treatment planning system. We validated the estimator against actual recorded delivery times from our R and V system as well as recorded times from our IMRT QA delivery. Results: Data were collected (Figure 1) for 12 treatment plans by examining the R and V beam start times and by manually timing the QA treatment for reference, though the QA measurements were recorded only to the nearest minute. The average difference between the estimated and R and V times was 15%, and 11% when excluding the major outliers. Outliers arose due to respiratory aids and gating techniques which could not be accounted for in the estimator. Conclusion: Non-mechanical factors, such as the time a therapist needs to walk in and out of the room to adjust the couch, needed to be fine-tuned and cycled back into the algorithm to improve the estimate. The algorithm has been demonstrated to provide reasonable and useful estimates of delivery time. This estimate has provided a useful additional input for clinical decision-making when comparing several potential radiation treatment options.
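The core of such an estimator is that mechanical axes move concurrently, so each inter-field transition is dominated by the slowest axis, plus beam-on time from monitor units and dose rate. A hedged sketch only: the axis speeds, the room-entry allowance for couch rotation, and the example plan are assumptions, not the authors' machine parameters:

```python
def beam_on_minutes(mu, dose_rate_mu_per_min):
    """Beam-on time for one field: monitor units over dose rate."""
    return mu / dose_rate_mu_per_min

def transition_minutes(gantry_deg, coll_deg, couch_deg,
                       gantry_speed=360.0, coll_speed=360.0,
                       couch_entry_min=1.5):
    """Axes move concurrently, so the slowest axis dominates; any couch
    rotation adds a fixed room-entry allowance (all values illustrative)."""
    t = max(gantry_deg / gantry_speed, coll_deg / coll_speed)
    if couch_deg > 0:
        t += couch_entry_min
    return t

# two-field plan: 100 MU at 600 MU/min per field, one 180-degree gantry move
total = 2 * beam_on_minutes(100, 600) + transition_minutes(180.0, 0.0, 0.0)
```

Respiratory gating, which produced the outliers above, would add a beam-hold duty-cycle factor that this sketch omits.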
International Nuclear Information System (INIS)
Yamaguchi, Ichiro
2012-01-01
Explained are the purpose of dose assessment, its methods, actual radionuclide levels in food, amounts of food intake, doses estimated to date, future doses, doses estimated by total diet studies, and problems of assessing the dose from food, all of which the Tokyo Electric Power Company (TEPCO) Power Station Accident has raised. Dose derived from food can be estimated from the radioactivity measured in each food material and its consumed amounts, or in actually cooked food. Amounts of radioactive materials ingested in the body can be measured externally or by bioassay. The Japan MHLW published levels of radioactivity in vegetables, fruits, marine products and meats from Mar. 2011, whose time-course patterns differed from each other within and between months. Dose due to early exposure in the Accident can be estimated from the radioactivity levels above and data concerning the amounts of food intake summarized by the National Institute of Health and Nutrition in 2010 and other institutions. For instance, the thyroid tissue equivalent dose from I-131 in a 1-year-old child in the first month after the Accident is estimated to be 1.1-5 mSv, depending on the assumed data for calculation, when the ICRP tissue equivalent dose coefficient 3.7 x 10^-6 Sv/Bq is used. In the future (later than Apr. 2012), new standard limits of radiocesium levels in milk/its products and foods for infants, and in other general foods, are to be defined as 50 and 100 Bq/kg, respectively. The distribution of committed effective doses from radiocesium (mSv/y food intake) is presented as an instance, estimated by 1 million stochastic simulations using the 2 covariates of Cs-134 and Cs-137 levels (as representative nuclides under regulation) in food and of daily food intake. In dose prediction, predicting the behavior of environmental radionuclides and the time of resumption of primary industries would be necessary. (T.T.)
Estimation of Surface Deformation due to Pasni Earthquake Using SAR Interferometry
Ali, M.; Shahzad, M. I.; Nazeer, M.; Kazmi, J. H.
2018-04-01
Earthquakes cause ground deformation in sedimented surface areas like Pasni, and that is a hazard: such earthquake-induced ground displacements can seriously damage building structures. On 7 February 2017, an earthquake of magnitude 6.3 struck near Pasni. We have successfully distinguished widely spread ground displacements for the Pasni earthquake by using InSAR-based analysis with Sentinel-1 satellite C-band data. Maps of the surface displacement field resulting from the earthquake were generated. Sentinel-1 Wide Swath data acquired from 9 December 2016 to 28 February 2017 were used to generate the displacement map. The interferogram revealed the area of deformation. A comparison map of interferometric vertical displacement over different time periods was treated as evidence of deformation caused by the earthquake. Profile graphs of the interferogram were created to estimate the vertical displacement range and trend. Pasni lies in an area subject to strong earthquakes. The major surface deformation areas are divided into different zones based on the significance of deformation. The average displacement in Pasni is estimated at about 250 mm. Most of the Pasni area was uplifted by the earthquake, with maximum uplift of about 1200 mm. Some areas subsided, such as those near the shoreline, with maximum subsidence estimated at about 1500 mm. Pasni faces many problems due to increasing sea water intrusion under prevailing climatic change, and land deformation due to a strong earthquake can augment its vulnerability.
REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO
Directory of Open Access Journals (Sweden)
M. S. Temiz
2012-07-01
In this paper, detailed studies performed in developing a real-time system for surveillance of traffic flow, using monocular video cameras to find the speeds of vehicles for secure travelling, are presented. We assume that the studied road segment is planar and straight, the camera is tilted downward from a bridge, and the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose a sufficient number of points from the vehicle is selected, and these points must be accurately tracked on at least two successive video frames. In the second step, by using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. The computed velocity vectors are defined in the video image coordinate system, and displacement vectors are measured in pixel units. Then the magnitudes of the computed vectors in image space are transformed to object space to find the absolute values of these magnitudes. The accuracy of the estimated speed is approximately ±1-2 km/h. In order to solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
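After rectification, the conversion from tracked pixel displacement to ground speed reduces to scale and frame-rate arithmetic. A minimal sketch (the metres-per-pixel calibration and the sample numbers are assumptions, not values from the paper):

```python
def speed_kmh(pixel_disp, frames_elapsed, fps, metres_per_pixel):
    """Convert a tracked point's displacement on rectified video frames
    into ground speed in km/h. metres_per_pixel is the calibration from
    the known length of one line segment in the image."""
    metres = pixel_disp * metres_per_pixel
    seconds = frames_elapsed / fps
    return (metres / seconds) * 3.6

# 25 px of displacement over 5 frames at 25 fps, with 0.1 m per pixel
v = speed_kmh(25, 5, 25.0, 0.1)
```

With the stated ±1-2 km/h accuracy target, the calibration term dominates the error budget, which is why the known line-segment length in the scene matters.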
Estimating Global Burden of Disease due to congenital anomaly: an analysis of European data
Boyle, Breidge; Addor, Marie-Claude; Arriola, Larraitz; Barisic, Ingeborg; Bianchi, Fabrizio; Csáky-Szunyogh, Melinda; de Walle, Hermien E K; Dias, Carlos Matias; Draper, Elizabeth; Gatt, Miriam; Garne, Ester; Haeusler, Martin; Källén, Karin; Latos-Bielenska, Anna; McDonnell, Bob; Mullaney, Carmel; Nelen, Vera; Neville, Amanda J; O’Mahony, Mary; Queisser-Wahrendorf, Annette; Randrianaivo, Hanitra; Rankin, Judith; Rissmann, Anke; Ritvanen, Annukka; Rounding, Catherine; Tucker, David; Verellen-Dumoulin, Christine; Wellesley, Diana; Wreyford, Ben; Zymak-Zakutnia, Natalia; Dolk, Helen
2018-01-01
Objective To validate the estimates of Global Burden of Disease (GBD) due to congenital anomaly for Europe by comparing infant mortality data collected by EUROCAT registries with the WHO Mortality Database, and by assessing the significance of stillbirths and terminations of pregnancy for fetal anomaly (TOPFA) in the interpretation of infant mortality statistics. Design, setting and outcome measures EUROCAT is a network of congenital anomaly registries collecting data on live births, fetal deaths from 20 weeks' gestation and TOPFA. Data from 29 registries in 19 countries were analysed for 2005-2009, and infant mortality (deaths of live births at age under 1 year) with congenital anomaly was compared with the WHO Mortality Database. In 11 EUROCAT countries, average infant mortality with congenital anomaly was 1.1 per 1000 births, with higher rates where TOPFA is illegal (Malta 3.0, Ireland 2.1). The rate of stillbirths with congenital anomaly was 0.6 per 1000. The average TOPFA prevalence was 4.6 per 1000, nearly three times more prevalent than stillbirths and infant deaths combined. TOPFA also impacted on the prevalence of postneonatal survivors with non-lethal congenital anomaly. Conclusions By excluding TOPFA and stillbirths from GBD years of life lost (YLL) estimates, GBD underestimates the burden of disease due to congenital anomaly, and thus declining YLL over time may obscure lack of progress in primary, secondary and tertiary prevention. PMID:28667189
Estimating Global Burden of Disease due to congenital anomaly: an analysis of European data.
Boyle, Breidge; Addor, Marie-Claude; Arriola, Larraitz; Barisic, Ingeborg; Bianchi, Fabrizio; Csáky-Szunyogh, Melinda; de Walle, Hermien E K; Dias, Carlos Matias; Draper, Elizabeth; Gatt, Miriam; Garne, Ester; Haeusler, Martin; Källén, Karin; Latos-Bielenska, Anna; McDonnell, Bob; Mullaney, Carmel; Nelen, Vera; Neville, Amanda J; O'Mahony, Mary; Queisser-Wahrendorf, Annette; Randrianaivo, Hanitra; Rankin, Judith; Rissmann, Anke; Ritvanen, Annukka; Rounding, Catherine; Tucker, David; Verellen-Dumoulin, Christine; Wellesley, Diana; Wreyford, Ben; Zymak-Zakutnia, Natalia; Dolk, Helen
2018-01-01
To validate the estimates of Global Burden of Disease (GBD) due to congenital anomaly for Europe by comparing infant mortality data collected by EUROCAT registries with the WHO Mortality Database, and by assessing the significance of stillbirths and terminations of pregnancy for fetal anomaly (TOPFA) in the interpretation of infant mortality statistics. EUROCAT is a network of congenital anomaly registries collecting data on live births, fetal deaths from 20 weeks' gestation and TOPFA. Data from 29 registries in 19 countries were analysed for 2005-2009, and infant mortality (deaths of live births at age under 1 year) with congenital anomaly was compared with the WHO Mortality Database. In 11 EUROCAT countries, average infant mortality with congenital anomaly was 1.1 per 1000 births, with higher rates where TOPFA is illegal (Malta 3.0, Ireland 2.1). The rate of stillbirths with congenital anomaly was 0.6 per 1000. The average TOPFA prevalence was 4.6 per 1000, nearly three times more prevalent than stillbirths and infant deaths combined. TOPFA also impacted on the prevalence of postneonatal survivors with non-lethal congenital anomaly. By excluding TOPFA and stillbirths from GBD years of life lost (YLL) estimates, GBD underestimates the burden of disease due to congenital anomaly, and thus declining YLL over time may obscure lack of progress in primary, secondary and tertiary prevention. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Time-Distance Helioseismology: Noise Estimation
Gizon, L.; Birch, A. C.
2004-10-01
As in global helioseismology, the dominant source of noise in time-distance helioseismology measurements is realization noise due to the stochastic nature of the excitation mechanism of solar oscillations. Characterizing noise is important for the interpretation and inversion of time-distance measurements. In this paper we introduce a robust definition of travel time that can be applied to very noisy data. We then derive a simple model for the full covariance matrix of the travel-time measurements. This model depends only on the expectation value of the filtered power spectrum and assumes that solar oscillations are stationary and homogeneous on the solar surface. The validity of the model is confirmed through comparison with SOHO MDI measurements in a quiet-Sun region. We show that the correlation length of the noise in the travel times is about half the dominant wavelength of the filtered power spectrum. We also show that the signal-to-noise ratio in quiet-Sun travel-time maps increases roughly as the square root of the observation time and is at maximum for a distance near half the length scale of supergranulation.
Travel Time Estimation on Urban Street Segment
Directory of Open Access Journals (Sweden)
Jelena Kajalić
2018-02-01
Level of service (LOS) is used as the main indicator of transport quality on urban roads and is estimated based on travel speed. The main objective of this study is to determine which of the existing models for travel speed calculation is most suitable for local conditions. The study uses actual data gathered in a travel time survey on urban streets, recorded by applying second-by-second GPS data. The survey is limited to traffic flow in saturated conditions. The root mean square error (RMSE) method is used to compare the research results with relevant models: Akcelik, HCM (Highway Capacity Manual), the Singapore model, and the modified BPR (Bureau of Public Roads) function (Dowling-Skabardonis). The lowest deviation in local conditions for urban streets with standardized intersection distance (400-500 m) is demonstrated by the Akcelik model. However, for streets with lower signal density (<1 signal/km) the correlation between speed and degree of saturation is best represented by the HCM and Singapore models. According to the test results, the Akcelik model was adopted for travel speed estimation, which can be the basis for determining the level of service on urban streets with standardized intersection distance and coordinated signal timing under local conditions.
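The model comparison described above reduces to computing the RMSE between surveyed speeds and each candidate model's predictions and selecting the smallest. A sketch with invented numbers (not the study's survey data):

```python
import math

def rmse(observed, predicted):
    """Root mean square error between surveyed travel speeds and a
    model's predicted speeds."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

# observed GPS speeds (km/h) vs two hypothetical model outputs
obs = [22.0, 18.0, 25.0]
model_a = [21.0, 19.0, 24.0]   # off by 1 km/h everywhere
model_b = [25.0, 15.0, 25.0]
best = min(("model_a", rmse(obs, model_a)),
           ("model_b", rmse(obs, model_b)),
           key=lambda kv: kv[1])
```

In the study this selection, run on the real survey data, is what favoured the Akcelik model for standardized intersection spacing.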
Estimated time of arrival and debiasing the time saving bias.
Eriksson, Gabriella; Patten, Christopher J D; Svenson, Ola; Eriksson, Lars
2015-01-01
The time saving bias predicts that the time saved when increasing speed from a high speed is overestimated, and underestimated when increasing speed from a slow speed. In a questionnaire, time saving judgements were investigated when information of estimated time to arrival was provided. In an active driving task, an alternative meter indicating the inverted speed was used to debias judgements. The simulated task was to first drive a distance at a given speed, and then drive the same distance again at the speed the driver judged was required to gain exactly 3 min in travel time compared with the first drive. A control group performed the same task with a speedometer and saved less than the targeted 3 min when increasing speed from a high speed, and more than 3 min when increasing from a low speed. Participants in the alternative meter condition were closer to the target. The two studies corroborate a time saving bias and show that biased intuitive judgements can be debiased by displaying the inverted speed. Practitioner Summary: Previous studies have shown a cognitive bias in judgements of the time saved by increasing speed. This simulator study aims to improve driver judgements by introducing a speedometer indicating the inverted speed in active driving. The results show that the bias can be reduced by presenting the inverted speed and this finding can be used when designing in-car information systems.
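The bias arises because travel time is curvilinear in speed (t = d/v): the same speed increment saves far more time at low speeds than at high speeds, which the following arithmetic sketch illustrates (distances and speeds are illustrative, not the simulator settings):

```python
def minutes_saved(distance_km, v_from_kmh, v_to_kmh):
    """Exact travel-time saving from raising speed over a fixed distance."""
    return 60.0 * distance_km * (1.0 / v_from_kmh - 1.0 / v_to_kmh)

# the same +10 km/h increase over 10 km:
low = minutes_saved(10.0, 40.0, 50.0)     # from a low speed
high = minutes_saved(10.0, 110.0, 120.0)  # from a high speed
```

An inverted-speed meter (minutes per km) makes this relation linear, which is why displaying it debiases the judgement.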
Time estimation in mild Alzheimer's disease patients
Directory of Open Access Journals (Sweden)
Nichelli Paolo
2009-07-01
Background: Time information processing relies on memory, which greatly supports the operations of hypothetical internal timekeepers. Scalar Expectancy Theory (SET) postulates the existence of a memory component that is functionally separated from an internal clock and other processing stages. SET has devised several experimental procedures to map these cognitive stages onto cerebral regions and neurotransmitter systems. One of these, the time bisection procedure, has provided support for a dissociation between the clock stage, controlled by dopaminergic systems, and the memory stage, mainly supported by cholinergic neuronal networks. This study aimed at linking the specific memory processes predicted by SET to brain mechanisms, by submitting time bisection tasks to patients with probable Alzheimer's disease (AD), who are known to present substantial degeneration of the fronto-temporal regions underpinning memory. Methods: Twelve mild AD patients were required to make temporal judgments about intervals either ranging from 100 to 600 ms (short time bisection task) or from 1000 to 3000 ms (long time bisection task). Their performance was compared with that of a group of age-matched control participants and a group of young control subjects. Results: Long time bisection scores of AD patients were not significantly different from those of the two control groups. In contrast, AD patients showed increased variability (as indexed by increased WR values) in timing millisecond durations and a generalized inconsistency of responses over the same interval in both the short and long bisection tasks. A similar, though milder, decrease in millisecond interval sensitivity was found for elderly subjects. Conclusion: The present results, which are consistent with those of previous timing studies in AD, are interpreted within the SET framework as not selectively dependent on working or reference memory disruptions but as possibly due to distortions in different ...
On the fast estimation of transit times: application to BWR simulated data
International Nuclear Information System (INIS)
Antonopoulos-Domis, M.; Marseguerra, M.; Padovani, E.
1996-01-01
Real-time estimators of transit times are proposed. BWR noise is simulated, including a global component due to rod vibration. The time series obtained from the simulation is used to investigate the robustness and noise immunity of the estimators. It is found that, in the presence of a coincident (global) signal, the cross-correlation function is the worst estimator. (authors)
Estimating anesthesia and surgical procedure times from medicare anesthesia claims.
Silber, Jeffrey H; Rosenbaum, Paul R; Zhang, Xuemei; Even-Shoshan, Orit
2007-02-01
Procedure times are important variables that often are included in studies of quality and efficiency. However, due to the need for costly chart review, most studies are limited to single-institution analyses. In this article, the authors describe how well the anesthesia claim from Medicare can estimate chart times. The authors abstracted information on time of induction and entrance to the recovery room ("anesthesia chart time") from the charts of 1,931 patients who underwent general and orthopedic surgical procedures in Pennsylvania. The authors then merged the associated bills from claims data supplied from Medicare (Part B data) that included a variable denoting the time in minutes for the anesthesia service. The authors also investigated the time from incision to closure ("surgical chart time") on a subset of 1,888 patients. Anesthesia claim time from Medicare was highly predictive of anesthesia chart time (Kendall's rank correlation tau = 0.85, P < 0.0001, median absolute error = 5.1 min) but somewhat less predictive of surgical chart time (Kendall's tau = 0.73, P < 0.0001, median absolute error = 13.8 min). When predicting chart time from Medicare bills, variables reflecting procedure type, comorbidities, and hospital type did not significantly improve the prediction, suggesting that errors in predicting the chart time from the anesthesia bill time are not related to these factors; however, the individual hospital did have some influence on these estimates. Anesthesia chart time can be well estimated using Medicare claims, thereby facilitating studies with vastly larger sample sizes and much lower costs of data collection.
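The agreement measures used here, Kendall's rank correlation and the median absolute error between paired times, can be sketched on toy data as follows (the values are invented, not the Pennsylvania chart abstractions):

```python
import statistics

def kendall_tau(a, b):
    """Kendall rank correlation (tau-a, no tie correction) between
    paired time measurements."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# chart-abstracted anesthesia times vs claim-derived times (toy values, minutes)
chart = [62.0, 95.0, 41.0, 120.0]
claim = [60.0, 99.0, 45.0, 111.0]
tau = kendall_tau(chart, claim)
mae = statistics.median(abs(c - k) for c, k in zip(chart, claim))
```

High tau with a small median absolute error is exactly the pattern that lets claim times substitute for costly chart review.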
International Nuclear Information System (INIS)
Overcamp, T.J.; Fjeld, R.A.
1987-01-01
A simple approximation for estimating the centerline gamma absorbed dose rates due to a continuous Gaussian plume was developed. To simplify the integration of the dose integral, this approach makes use of the Gaussian cloud concentration distribution. The solution is expressed in terms of the I1 and I2 integrals which were developed for estimating long-term dose due to a sector-averaged Gaussian plume. Estimates of tissue absorbed dose rates for the new approach and for the uniform cloud model were compared to numerical integration of the dose integral over a Gaussian plume distribution
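The dose integral in such models is taken over the standard Gaussian plume concentration field. A sketch of that textbook concentration formula with ground reflection (inputs are illustrative, and this computes concentration only, not the gamma dose integral the abstract approximates):

```python
import math

def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
    """Gaussian plume concentration at crosswind offset y and height z
    for effective release height h, with a ground-reflection image term.
    q: emission rate, u: wind speed, sigma_y/sigma_z: dispersion widths."""
    coeff = q / (2.0 * math.pi * u * sigma_y * sigma_z)
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
    return coeff * lateral * vertical

# centerline, ground level, ground release: reflection doubles the direct term
c = plume_concentration(q=1.0, u=5.0, sigma_y=50.0, sigma_z=20.0,
                        y=0.0, z=0.0, h=0.0)
```

The I1 and I2 integrals mentioned above arise when a gamma dose kernel is integrated against a sector-averaged version of this distribution.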
Error due to unresolved scales in estimation problems for atmospheric data assimilation
Janjic, Tijana
The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite- dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two- dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt- Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only
Estimation of health damage due to emission of air pollutants by cars: the canyon effect
Energy Technology Data Exchange (ETDEWEB)
Spadaro, J.V. [Ecole des Mines, Centre d' Energetique, Paris, 75 (France); Rabl, A.
1999-07-01
Since current epidemiological evidence suggests that air pollution has harmful effects even at typical ambient concentrations and the dispersion is significant over hundreds to thousands of km, the estimation of total health damage involves consideration of local and regional effects. In recent years, several estimates have been published of health damage due to air pollution from cars, in particular by Delucchi et al. of UC Davis and by the ExternE Project of the European Commission. To capture the geographic extent of pollutant dispersion, local and regional models have been used in combination. The present paper addresses a potentially significant contribution to the total damage, not yet taken into account in these studies: the increased concentration of pollutants inside urban street canyons. This canyon effect is appreciable only for primary pollutants, the time constants for the formation of secondary pollutants being long compared to the residence time in the canyon. We assumed linearity of incremental health impact with incremental concentration, in view of the lack of epidemiological evidence for no-effect thresholds or significant deviations from linearity at typical ambient concentrations; therefore, only long term average concentrations matter. We use the FLUENT software to model the dispersion inside a street canyon for a wide range of rectangular geometries and wind velocities. Our results suggest that the canyon effect is of marginal significance for total damages, the contribution of the canyon effect being roughly 10 to 20% of the total. The relative importance of the canyon effect is, of course, highly variable with local conditions; it could be much smaller but it is unlikely to add more than 100% to the flat terrain estimate. (Author)
International Nuclear Information System (INIS)
Kalef-Ezra, J.A.
1997-01-01
The 1986 nuclear reactor accident at Chernobyl resulted in widespread internal contamination by radioactive caesium. The aim of the present study was to estimate the doses to embryos/fetuses in Greece attributed to maternal ¹³⁴Cs and ¹³⁷Cs intake and the consequent health risks to their offspring. In pregnant women the concentration of total-body caesium (TBCs) was lower than in age-matched non-pregnant women measured during the same month. A detailed study of intake and retention in the members of one family, carried out during the three years that followed the accident, indicated that the biological half-time of caesium in the women decreased by a factor of two shortly after conception. Then, at partus, there was an increase in the biological half-time, reaching a value similar to that before conception. The total-body potassium concentration was constant over the entire period. Doses to the embryo/fetus due to maternal intake were estimated to be at most about 150 μGy in those conceived between November 1986 and March 1987. When conception took place later, the prenatal dose followed an exponential reduction with a half-time of about 170 d. These prenatal doses do not exceed the doses from either natural internal potassium or the usual external background sources. The risks attributed to maternal ¹³⁴Cs and ¹³⁷Cs intake were considerably lower than levels that would justify consideration of termination of a pregnancy. In the absence of these data, however, 2500 otherwise wanted pregnancies in Greece were terminated following the Chernobyl accident. (author)
Estimation of fuel loss due to idling of vehicles at a signalized intersection in Chennai, India
Vasantha Kumar, S.; Gulati, Himanshu; Arora, Shivam
2017-11-01
Vehicles waiting at signalized intersections are generally found to be in idling condition, i.e., drivers do not switch off their engines during red times. This idling of vehicles during red times at signalized intersections can lead to a huge economic loss, as a lot of fuel is consumed by vehicles in idling condition. The situation may be even worse in countries like India, as different vehicle types consume varying amounts of fuel. Only limited studies have been reported on the estimation of fuel loss due to idling of vehicles in India. In the present study, one of the busiest intersections in Chennai, namely Tidel Park Junction on Rajiv Gandhi Salai, was considered. Data collection was carried out on one approach road of the intersection during morning and evening peak hours on a typical working day by manually noting down the red times of each cycle and the corresponding numbers of two-wheelers, three-wheelers, passenger cars, light commercial vehicles (LCV) and heavy motorized vehicles (HMV) in idling mode. Using the idling fuel consumption values for various vehicle types suggested by the Central Road Research Institute (CRRI), the total fuel loss during the study period was found to be Rs. 4,93,849/-. The installation of red timers, synchronization of signals, use of non-motorized transport for short trips and public awareness are some of the measures on which the government needs to focus to save the fuel wasted at signalized intersections in major cities of India.
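The fuel-loss arithmetic described (idling counts per vehicle class, multiplied by class-specific idle consumption rates and red time, summed over cycles) can be sketched as follows. The rates below are hypothetical placeholders, not the CRRI values used in the study:

```python
# Hypothetical idle fuel-consumption rates (mL of fuel per second of idling);
# these are illustrative only, not the CRRI figures.
IDLE_RATE_ML_S = {"2W": 0.15, "3W": 0.25, "car": 0.50, "LCV": 0.60, "HMV": 0.90}

def idling_fuel_loss_ml(cycles):
    """Total idling fuel loss in mL.
    cycles: list of (red_time_s, {vehicle_type: idling_count}) tuples."""
    total = 0.0
    for red_s, counts in cycles:
        for vtype, n in counts.items():
            total += n * IDLE_RATE_ML_S[vtype] * red_s
    return total

# Two example cycles: 60 s red with 20 two-wheelers and 10 cars idling,
# then 45 s red with 5 cars and 2 heavy vehicles idling.
loss = idling_fuel_loss_ml([(60, {"2W": 20, "car": 10}), (45, {"car": 5, "HMV": 2})])
```

Multiplying the total volume by per-litre fuel prices for each class would give the monetary loss reported in the study.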
Covariance matrix estimation for stationary time series
Xiao, Han; Wu, Wei Biao
2011-01-01
We obtain a sharp convergence rate for banded covariance matrix estimates of stationary processes. A precise order of magnitude is derived for the spectral radius of sample covariance matrices. We also consider a thresholded covariance matrix estimator that can better characterize sparsity if the true covariance matrix is sparse. As our main tool, we implement Toeplitz's [Math. Ann. 70 (1911) 351–376] idea and relate eigenvalues of covariance matrices to the spectral densities or Fourier transforms...
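A minimal sketch of the banded estimator for a stationary series: compute sample autocovariances, zero out lags beyond the band, and assemble the Toeplitz matrix. This illustrates the general construction only, not the paper's rate-optimal choice of band:

```python
import numpy as np

def banded_cov(x, band):
    """Banded Toeplitz estimate of the covariance matrix of a stationary
    series: sample autocovariances at lags beyond `band` are set to zero."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    gamma = np.array([xc[:n - k] @ xc[k:] / n for k in range(n)])
    gamma[band + 1:] = 0.0                      # banding: truncate long lags
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return gamma[lags]                          # Sigma[i, j] = gamma[|i - j|]
```

A thresholded variant would instead zero out every autocovariance whose magnitude falls below a cutoff, which adapts better to sparse covariance structure.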
Estimation of economic losses due to Peste des Petits Ruminants in small ruminants in India
Directory of Open Access Journals (Sweden)
B. Singh
2014-04-01
Aim: To develop a simple mathematical model to assess the losses due to peste des petits ruminants (PPR) in small ruminants in India. Materials and Methods: The study was based on cases and deaths in goats and sheep due to PPR from the average combined data on ovines/caprines as published by the Government of India for the last 5 years (2008-2012). All possible direct and indirect losses due to the disease, viz. mortality losses, losses due to direct reduction in milk/wool yield, losses due to reproductive failure, body weight losses, treatment costs and opportunity costs, were considered to provide an estimate of the annual economic losses due to PPR in sheep and goats in India. Based on cases and deaths as reported in sample survey studies, the annual economic loss was also estimated. Results: On the basis of the data reported by the Government of India, the study has shown an average annual economic loss of Rs. 167.83 lacs, of which Rs. 125.67 lacs and Rs. 42.16 lacs respectively are due to the incidence of the disease in goats and sheep. Morbidity losses constituted the greater share of the total loss in both goats and sheep (56.99% and 61.34%, respectively). Among the different components of morbidity loss, direct body weight loss was the most significant in both goats and sheep. Based on cases and deaths as reported in sample survey studies, the estimated annual economic loss due to PPR in goats and sheep is Rs. 8895.12 crores, of which Rs. 5477.48 crores and Rs. 3417.64 crores respectively are due to the disease in goats and sheep. Conclusion: The low economic losses estimated from the Government of India data point towards underreporting of cases and deaths due to the disease. The study thus revealed a significant large-scale loss due to PPR in small ruminants.
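The additive structure of the loss model (mortality losses plus the morbidity components) can be illustrated with hypothetical figures; none of the numbers below are the study's reported values:

```python
# Hypothetical loss components (Rs. lacs), for illustration only.
components = {
    "mortality": 50.0,       # deaths
    "milk_wool": 10.0,       # direct reduction in milk/wool yield
    "reproduction": 15.0,    # reproductive failure
    "body_weight": 60.0,     # direct body weight loss
    "treatment": 20.0,       # treatment costs
    "opportunity": 12.0,     # opportunity costs
}

total = sum(components.values())
morbidity = total - components["mortality"]     # all non-death losses
morbidity_share = 100.0 * morbidity / total     # percent of total loss
```

With these placeholder values the morbidity components dominate the total, mirroring the pattern the study reports for both goats and sheep.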
Radiochemical separation and effective dose estimation due to ingestion of ⁹⁰Sr
International Nuclear Information System (INIS)
Ilic, Z.; Vidic, A.; Deljkic, D.; Sirko, D.; Zovko, E.; Samek, D.
2009-01-01
Since 2007, the Institute for Public Health of the Federation of Bosnia and Herzegovina-Radiation Protection Centre has, within the framework of environmental radioactivity monitoring, carried out measurements of the specific activity of ⁹⁰Sr in selected food and water samples. The paper describes the measurement methods and the radiochemical separation. The presented results, as average values of the specific activity of ⁹⁰Sr, were used to estimate the effective dose due to ingestion of ⁹⁰Sr for 2007 and 2008. The estimated effective dose for 2007 due to ingestion of ⁹⁰Sr was 1.36 μSv for adults and 2.03 μSv for children (10 years old); for 2008 it was 0.67 μSv (adults) and 1.01 μSv (children 10 years old). The estimated effective doses for 2007 and 2008 vary because of the different average specific activities of the radionuclide ⁹⁰Sr in the selected food samples, and their number, species and origin. (author)
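The dose estimate is, in essence, a sum over foodstuffs of activity concentration times annual intake times an ingestion dose coefficient. A sketch using the ICRP Publication 72 adult coefficient for ⁹⁰Sr (2.8e-8 Sv/Bq); the concentrations and intakes are made up for illustration:

```python
# Committed effective dose from ingestion: D = sum_i C_i * I_i * e_ing, where
# C_i is the activity concentration in foodstuff i (Bq/kg), I_i the annual
# intake (kg), and e_ing the ingestion dose coefficient (Sv/Bq).
E_ING_SR90_ADULT = 2.8e-8   # ICRP 72 adult coefficient for 90Sr, Sv/Bq

def ingestion_dose_sv(samples, e_ing=E_ING_SR90_ADULT):
    """samples: list of (concentration_Bq_per_kg, annual_intake_kg) pairs."""
    return sum(c * i * e_ing for c, i in samples)

# Illustrative inputs, e.g. a milk-like and a water-like item.
dose = ingestion_dose_sv([(0.1, 200.0), (0.05, 500.0)])
```

Age-dependent coefficients (larger for children) explain why the children's estimates in the abstract exceed the adult values for the same diet.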
System and method for traffic signal timing estimation
Dumazert, Julien; Claudel, Christian G.
2015-01-01
A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
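One simple way to score candidate cycle lengths against probe "start-moving" transition times is to measure how tightly those times cluster modulo each candidate; this stand-in scoring function is an assumption for illustration, not the patent's exact method:

```python
import numpy as np

def estimate_cycle(transition_times, candidates):
    """Pick the cycle length under which probe start-moving times cluster
    most tightly modulo the cycle, using the circular mean resultant
    length as a simple clustering score."""
    def concentration(t, c):
        ang = 2 * np.pi * (np.asarray(t) % c) / c
        # Resultant length: 1.0 for perfectly aligned times, ~0 for spread-out.
        return np.hypot(np.cos(ang).mean(), np.sin(ang).mean())
    return max(candidates, key=lambda c: concentration(transition_times, c))
```

With transitions a few seconds after each green onset, the true cycle length scores near 1 while mismatched candidates smear the times around the circle.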
Estimation of absorbed dose in cell nuclei due to DNA-bound ³H
Energy Technology Data Exchange (ETDEWEB)
Saito, M; Ishida, M R; Streffer, C; Molls, M
1985-04-01
The average absorbed dose due to DNA-bound ³H in a cell nucleus was estimated by a Monte Carlo simulation for a model nucleus which was assumed to be spheroidal. The volume of the cell nucleus was the major dose-determining factor for cell nuclei which have the same DNA content and the same specific activity of DNA. This result was applied to estimating the accumulated dose in the cell nuclei of organs of young mice born from mother mice which ingested ³H-thymidine with drinking water during pregnancy. The values of the dose-modifying factor for the accumulated dose due to DNA-bound ³H, compared to the dose due to an assumed homogeneous distribution of ³H in the organ, were found to be between about 2 and 6 for the various organs.
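Because the mean range of ³H betas in tissue is well under a micrometre, a back-of-envelope version of the volume dependence assumes each decay deposits the mean beta energy (about 5.7 keV) locally, so the dose per decay scales inversely with nuclear volume. The full Monte Carlo in the paper refines this by accounting for energy escaping the nucleus:

```python
import math

MEAN_BETA_ENERGY_J = 5.7e3 * 1.602e-19   # mean 3H beta energy (~5.7 keV) in joules

def dose_per_decay_gray(a_um, c_um, density_kg_m3=1.0e3):
    """Average absorbed dose per decay (Gy) in a spheroidal nucleus with
    semi-axes (a, a, c) in micrometres, assuming full local deposition of
    the mean beta energy (a crude stand-in for the Monte Carlo transport)."""
    volume_m3 = 4.0 / 3.0 * math.pi * (a_um * 1e-6) ** 2 * (c_um * 1e-6)
    return MEAN_BETA_ENERGY_J / (density_kg_m3 * volume_m3)
```

Doubling the polar semi-axis c halves the dose per decay, which is the sense in which nuclear volume is the dominant factor for nuclei of equal DNA content and specific activity.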
Lag space estimation in time series modelling
DEFF Research Database (Denmark)
Goutte, Cyril
1997-01-01
The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...
Accurate light-time correction due to a gravitating mass
Energy Technology Data Exchange (ETDEWEB)
Ashby, Neil [Department of Physics, University of Colorado, Boulder, CO (United States); Bertotti, Bruno, E-mail: ashby@boulder.nist.go [Dipartimento di Fisica Nucleare e Teorica, Universita di Pavia (Italy)
2010-07-21
This technical paper of mathematical physics arose as an aftermath of the 2002 Cassini experiment (Bertotti et al 2003 Nature 425 374-6), in which the PPN parameter γ was measured with an accuracy σ_γ = 2.3 × 10⁻⁵ and found consistent with the prediction γ = 1 of general relativity. The Orbit Determination Program (ODP) of NASA's Jet Propulsion Laboratory, which was used in the data analysis, is based on an expression (8) for the gravitational delay Δt that differs from the standard formula (2); this difference is of second order in powers of m, the gravitational radius of the Sun, but in Cassini's case it was much larger than the expected order of magnitude m²/b, where b is the distance of the closest approach of the ray. Since the ODP does not take into account any other second-order terms, it is necessary, also in view of future more accurate experiments, to revisit the whole problem, to systematically evaluate higher order corrections and to determine which terms, and why, are larger than the expected value. We note that light propagation in a static spacetime is equivalent to a problem in ordinary geometrical optics; Fermat's action functional at its minimum is just the light-time between the two end points A and B. A new and powerful formulation is thus obtained. This method is closely connected with the much more general approach of Le Poncin-Lafitte et al (2004 Class. Quantum Grav. 21 4463-83), which is based on Synge's world function. Asymptotic power series are necessary to provide a safe and automatic way of selecting which terms to keep at each order. Higher order approximations to the required quantities, in particular the delay and the deflection, are easily obtained. We also show that in a close superior conjunction, when b is much smaller than the distances of A and B from the Sun, say of order R, the second-order correction has an enhanced part of order m²R/b², which
Multidimensional scaling of musical time estimations.
Cocenas-Silva, Raquel; Bueno, José Lino Oliveira; Molin, Paul; Bigand, Emmanuel
2011-06-01
The aim of this study was to identify the psycho-musical factors that govern time evaluation in Western music from the baroque, classic, romantic, and modern repertoires. The excerpts were previously found to represent variability in musical properties and to induce four main categories of emotions. 48 participants (musicians and nonmusicians) freely listened to 16 musical excerpts (lasting 20 sec. each) and grouped those that seemed to have the same duration. Then, participants associated each group of excerpts with one of a set of sine wave tones varying in duration from 16 to 24 sec. Multidimensional scaling analysis generated a two-dimensional solution for these time judgments. Musical excerpts with high arousal produced an overestimation of time, and affective valence had little influence on time perception. Duration was also overestimated when tempo and loudness were higher, and to a lesser extent, with greater timbre density. In contrast, musical tension had little influence.
Freeway travel-time estimation and forecasting.
2012-09-01
This project presents a microsimulation-based framework for generating short-term forecasts of travel time on freeway corridors. The microsimulation model that is developed (GTsim), replicates freeway capacity drop and relaxation phenomena critical f...
NOTE ON TRAVEL TIME SHIFTS DUE TO AMPLITUDE MODULATION IN TIME-DISTANCE HELIOSEISMOLOGY MEASUREMENTS
International Nuclear Information System (INIS)
Nigam, R.; Kosovichev, A. G.
2010-01-01
Correct interpretation of acoustic travel times measured by time-distance helioseismology is essential for an accurate understanding of the solar properties that are inferred from them. It has long been observed that sunspots suppress p-mode amplitude, but the implications for travel times have not been fully investigated so far. Test measurements using a 'masking' procedure, in which the solar Doppler signal in a localized quiet region of the Sun is artificially suppressed by a spatial function, together with numerical simulations, have shown that amplitude modulation in combination with phase-speed filtering may cause systematic shifts of acoustic travel times. To understand the properties of this procedure, we derive an analytical expression for the cross-covariance of a signal that has been modulated locally by a spatial function with azimuthal symmetry and then filtered by a phase-speed filter typically used in time-distance helioseismology. Comparing this expression to the Gabor wavelet fitting formula without this effect, we find that there is a shift in the travel times that is introduced by the amplitude modulation. The analytical model presented in this paper can also be useful for interpreting travel time measurements when the distribution of oscillation amplitude is non-uniform due to observational effects.
Estimation of the collective dose in the Portuguese population due to medical procedures in 2010
International Nuclear Information System (INIS)
Teles, Pedro; Vaz, Pedro; Sousa, M. Carmen de; Paulo, Graciano; Santos, Joana; Pascoal, Ana; Cardoso, Gabriela; Santos, Ana Isabel; Lanca, Isabel; Matela, Nuno; Janeiro, Luis; Sousa, Patrick; Carvoeiras, Pedro; Parafita, Rui; Simaozinho, Paula
2013-01-01
In a wide range of medical fields, technological advancements have led to an increase in the average collective dose in national populations worldwide. Periodic estimation of the average collective population dose due to medical exposure is therefore of utmost importance, and is now mandatory in countries within the European Union (article 12 of EURATOM directive 97/43). Presented in this work is a report on the estimation of the collective dose in the Portuguese population due to nuclear medicine diagnostic procedures and the Top 20 diagnostic radiology examinations, which represent the 20 exams that contribute the most to the total collective dose in diagnostic radiology and interventional procedures in Europe. This work involved the collaboration of a multidisciplinary taskforce comprising representatives of all major Portuguese stakeholders (universities, research institutions, public and private health care providers, administrative services of the National Healthcare System, scientific and professional associations and private service providers). This allowed us to gather a comprehensive amount of data necessary for a robust estimation of the collective effective dose to the Portuguese population. The methodology used for data collection and dose estimation was based on European Commission recommendations, as this work was performed in the framework of the Europe-wide Dose Datamed II project. This is the first study estimating the collective dose for the population in Portugal with such wide national coverage and range of procedures, and it constitutes important baseline reference data. The taskforce intends to continue developing periodic collective dose estimations in the future. The estimated annual average effective dose for the Portuguese population was 0.080±0.017 mSv caput⁻¹ for nuclear medicine exams and 0.96±0.68 mSv caput⁻¹ for the Top 20 diagnostic radiology exams. (authors)
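The per caput figure is the collective dose (the sum over exam types of annual frequency times typical effective dose per exam) divided by the population. The exam frequencies and doses below are hypothetical placeholders, not the Dose Datamed II data:

```python
def per_caput_dose_msv(exams, population):
    """Annual per caput effective dose (mSv).
    exams: list of (annual_number_of_exams, effective_dose_mSv_per_exam)."""
    collective_man_msv = sum(n * dose_msv for n, dose_msv in exams)
    return collective_man_msv / population

# Illustrative: one frequent low-dose exam and one rarer high-dose exam
# for a population of 10 million.
p = per_caput_dose_msv([(1000000, 0.1), (200000, 7.0)], 10000000)
```

This structure makes clear why a handful of high-dose, high-frequency exam types (the "Top 20") can dominate the national total.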
The clock that times us : Electromagnetic signatures of time estimation
Kononowicz, Tadeusz Władysław
2015-01-01
As time is a fundamental dimension of our existence, perceiving the flow of time is a ubiquitous experience of our everyday life. This so-called sense of time is utilized in our everyday activities, for example, when we expect some events to happen, but it also prevents us from taking a morning
Algorithms for Brownian first-passage-time estimation
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
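A generic discrete-space, continuous-time first-passage estimator, with exponential waiting times and the common exponential (detailed-balance) splitting of hop rates, can be sketched as follows; this illustrates the class of algorithms discussed, not Adib's specific lattice-spacing-exact scheme. For constant drift v > 0 toward a level L, the exact Brownian MFPT is L/v, which provides a check:

```python
import math
import random

def mfpt_ctrw(drift, D, h, L, n_samples=2000, seed=1):
    """Mean first-passage time to level L > 0 (starting from 0) of a
    discrete-space, continuous-time random walk approximating Brownian
    motion with constant drift, using exponential-splitting hop rates."""
    rng = random.Random(seed)
    base = D / h ** 2
    up = base * math.exp(drift * h / (2 * D))     # rate for x -> x + h
    dn = base * math.exp(-drift * h / (2 * D))    # rate for x -> x - h
    total = up + dn
    target = round(L / h)                         # lattice index of the barrier
    times = []
    for _ in range(n_samples):
        i, t = 0, 0.0
        while i < target:
            t += rng.expovariate(total)           # exponential waiting time
            i += 1 if rng.random() < up / total else -1
        times.append(t)
    return sum(times) / len(times)
```

For a linear potential (constant drift) this splitting is accurate to O(h²) in the MFPT; the point of the paper's algorithm is to remove even that lattice-spacing dependence.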
Freeway travel-time estimation and forecasting.
2013-03-01
Real-time traffic information provided by GDOT has proven invaluable for commuters in the : Georgia freeway network. The increasing number of Variable Message Signs, addition of : services such as My-NaviGAtor, NaviGAtor-to-go etc. and the advancemen...
A method for the estimation of the probability of damage due to earthquakes
International Nuclear Information System (INIS)
Alderson, M.A.H.G.
1979-07-01
The available information on seismicity within the United Kingdom has been combined with building damage data from the United States to produce a method of estimating the probability of damage to structures due to the occurrence of earthquakes. The analysis has been based on the use of site intensity as the major damage producing parameter. Data for structural, pipework and equipment items have been assumed and the overall probability of damage calculated as a function of the design level. Due account is taken of the uncertainties of the seismic data. (author)
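The core computation pairs a site hazard description with assumed fragilities: the annual damage probability is the sum over site-intensity levels of the annual rate of that intensity times the conditional probability of damage given that intensity. Both curves below are purely illustrative:

```python
def annual_damage_probability(intensity_rates, fragility):
    """Annual probability of damage: sum over intensity levels of
    (annual rate of that site intensity) * P(damage | that intensity)."""
    return sum(rate * fragility[mmi] for mmi, rate in intensity_rates.items())

# Illustrative hazard (annual rates of intensity VI and VII at the site)
# and fragility (damage probability given each intensity).
p = annual_damage_probability({6: 1e-3, 7: 1e-4}, {6: 0.01, 7: 0.1})
```

Separate fragility curves for structural, pipework and equipment items, evaluated against the same hazard, would give the item-by-item probabilities as a function of design level.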
Investigation of metabolites for estimating blood deposition time.
Lech, Karolina; Liu, Fan; Davies, Sarah K; Ackermann, Katrin; Ang, Joo Ern; Middleton, Benita; Revell, Victoria L; Raynaud, Florence J; Hoveijn, Igor; Hut, Roelof A; Skene, Debra J; Kayser, Manfred
2018-01-01
Trace deposition timing reflects a novel concept in forensic molecular biology involving the use of rhythmic biomarkers for estimating the time within a 24-h day/night cycle a human biological sample was left at the crime scene, which in principle allows verifying a sample donor's alibi. Previously, we introduced two circadian hormones for trace deposition timing and recently demonstrated that messenger RNA (mRNA) biomarkers significantly improve time prediction accuracy. Here, we investigate the suitability of metabolites measured using a targeted metabolomics approach, for trace deposition timing. Analysis of 171 plasma metabolites collected around the clock at 2-h intervals for 36 h from 12 male participants under controlled laboratory conditions identified 56 metabolites showing statistically significant oscillations, with peak times falling into three day/night time categories: morning/noon, afternoon/evening and night/early morning. Time prediction modelling identified 10 independently contributing metabolite biomarkers, which together achieved prediction accuracies expressed as AUC of 0.81, 0.86 and 0.90 for these three time categories respectively. Combining metabolites with previously established hormone and mRNA biomarkers in time prediction modelling resulted in an improved prediction accuracy reaching AUCs of 0.85, 0.89 and 0.96 respectively. The additional impact of metabolite biomarkers, however, was rather minor as the previously established model with melatonin, cortisol and three mRNA biomarkers achieved AUC values of 0.88, 0.88 and 0.95 for the same three time categories respectively. Nevertheless, the selected metabolites could become practically useful in scenarios where RNA marker information is unavailable such as due to RNA degradation. This is the first metabolomics study investigating circulating metabolites for trace deposition timing, and more work is needed to fully establish their usefulness for this forensic purpose.
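Rhythmic biomarkers of this kind are commonly summarized by a cosinor (cosine regression) fit, whose acrophase gives the marker's peak time. A least-squares sketch of a standard single-component cosinor, not the study's specific prediction model:

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Least-squares fit of y ~ M + A*cos(2*pi*t/period - phi).
    Returns the mesor M, amplitude A, and acrophase (peak time, hours)."""
    w = 2 * np.pi * np.asarray(t_hours) / period
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    m, b, c = np.linalg.lstsq(X, np.asarray(y), rcond=None)[0]
    amplitude = np.hypot(b, c)
    acrophase_h = (np.arctan2(c, b) * period / (2 * np.pi)) % period
    return m, amplitude, acrophase_h
```

Markers whose acrophases fall in different parts of the day are what allow a panel of them to discriminate morning/noon, afternoon/evening and night/early-morning deposition windows.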
International Nuclear Information System (INIS)
Takemura, T.; Taniguchi, T.
2004-01-01
The purpose of this paper is to offer a new method for detecting stress in wood due to moisture, along the lines of a theory reported previously. According to the theory, the stress in wood can be estimated from the moisture content of the wood and the power voltage of a microwave moisture meter (i.e., the attenuation of the projected microwave). This suggests the possibility of utilizing microwaves in the field of stress detection. To develop this idea, the stress formulas were first rewritten as a single-variable function of power voltage, and the application of the formulas to detection was tested. Finally, these results were applied to the data for sugi (Cryptomeria japonica) lumber from the previous experiment. The estimated strains showed fairly good agreement with those observed. It can be concluded from this study that the proposed method may be useful for detecting stress in wood due to moisture.
Analytical estimation shows low, depth-independent water loss due to vapor flux from deep aquifers
Selker, John S.
2017-06-01
Recent articles have provided estimates of evaporative flux from water tables in deserts that span 5 orders of magnitude. In this paper, we present an analytical calculation that indicates aquifer vapor flux to be limited to 0.01 mm/yr for sites where there is negligible recharge and the water table is well over 20 m below the surface. This value arises from the geothermal gradient, and therefore, is nearly independent of the actual depth of the aquifer. The value is in agreement with several numerical studies, but is 500 times lower than recently reported experimental values, and 100 times larger than an earlier analytical estimate.
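The analytical argument can be reproduced as a back-of-envelope calculation: the geothermal gradient sets a gradient in saturated vapor density, and Fick's law with an effective soil diffusivity bounds the upward vapor flux. The parameter values below are typical assumptions for illustration, not necessarily Selker's exact inputs:

```python
# Back-of-envelope version of the geothermal-gradient bound on vapor flux.
GEOTHERMAL_K_PER_M = 0.025     # typical geothermal temperature gradient, K/m
SAT_VAPOR_G_M3_20C = 17.3      # saturated vapor density at 20 C, g/m^3
CC_FRACTION_PER_K = 0.06       # ~6 % rise in saturation density per kelvin
D_EFF_M2_S = 1.0e-6            # effective vapor diffusivity in soil, m^2/s
SECONDS_PER_YEAR = 3.156e7

# Vapor-density gradient induced by the geothermal gradient (g/m^4).
drho_dz = SAT_VAPOR_G_M3_20C * CC_FRACTION_PER_K * GEOTHERMAL_K_PER_M

# Fick's law flux, converted to mm of liquid water per year
# (1 g/m^2 of water corresponds to 1e-3 mm of depth).
flux_g_m2_s = D_EFF_M2_S * drho_dz
flux_mm_per_yr = flux_g_m2_s * SECONDS_PER_YEAR / 1000.0
```

With these values the flux comes out below 0.01 mm/yr, and because it is set by the geothermal gradient rather than by the distance to the water table, it is nearly independent of aquifer depth.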
Comparative study for the estimation of T0 shift due to irradiation embrittlement
International Nuclear Information System (INIS)
Lee, Jin Ho; Park, Youn won; Choi, Young Hwan; Kim, Seok Hun; Revka, Volodymyr
2002-01-01
Recently, an approach called the 'Master Curve' method was proposed which has opened a new means to acquire a directly measured, material-specific fracture toughness curve. For the full application of the Master Curve method, several technical issues should be resolved. One of them is how to utilize existing Charpy impact test data in the evaluation of the fracture transition temperature shift due to irradiation damage. In the U.S. and most Western countries, Charpy impact test data have been used to estimate the irradiation effects on fracture toughness changes of RPV materials. For the determination of the irradiation shift, the indexing energy level of 41 joules is used irrespective of the material yield strength. The Russian Code also requires Charpy impact test data to determine the extent of radiation embrittlement. Unlike the U.S. Code, however, the Russian approach uses an indexing energy level that varies according to the material strength. The objective of this study is to determine a method by which the reference transition temperature shift (ΔT0) due to irradiation can be estimated. By comparing the irradiation shift estimated according to the U.S. procedure (ΔT41J) with that estimated according to the Russian procedure (ΔTF), it was found that a one-to-one relation exists between ΔT0 and ΔTF
Estimating bus passenger waiting times from incomplete bus arrivals data
McLeod, F.N.
2007-01-01
This paper considers the problem of estimating bus passenger waiting times at bus stops using incomplete bus arrivals data. This is of importance to bus operators and regulators as passenger waiting time is a key performance measure. Average waiting times are usually estimated from bus headways, that is, time gaps between buses. It is both time-consuming and expensive to measure bus arrival times manually so methods using automatic vehicle location systems are attractive; however, these syste...
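For passengers arriving at random, the expected wait follows from the headway distribution as E[W] = E[H²]/(2E[H]), equivalently (E[H]/2)(1 + CV²), which is why irregular headways raise waits even at the same average frequency. A minimal sketch:

```python
def mean_wait_from_headways(headways):
    """Expected passenger wait under random (Poisson) passenger arrivals:
    E[W] = E[H^2] / (2 E[H]), which equals E[H]/2 only when headways are
    perfectly regular and grows with headway variability."""
    n = len(headways)
    mean_h = sum(headways) / n
    mean_h2 = sum(h * h for h in headways) / n
    return mean_h2 / (2.0 * mean_h)
```

For example, three regular 10-minute headways give a 5-minute expected wait, while alternating 5- and 15-minute headways with the same mean give 6.25 minutes. Missing arrivals in the data bias E[H] and E[H²] differently, which is the estimation difficulty the paper addresses.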
Estimating High-Dimensional Time Series Models
DEFF Research Database (Denmark)
Medeiros, Marcelo C.; Mendes, Eduardo F.
We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
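A small sketch of the adaptive LASSO: penalty weights from a pilot OLS fit, then coordinate descent on the weighted L1 problem. This illustrates the estimator only; the paper's setting (more candidate variables than observations, dependent and heteroskedastic errors) requires a different pilot estimator and careful tuning:

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator S(z, g) = sign(z) * max(|z| - g, 0)."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def adalasso(X, y, lam, gamma=1.0, iters=200):
    """Adaptive LASSO sketch: weights w_j = 1/|pilot_j|^gamma from an OLS
    pilot fit (requires n > p here), then coordinate descent on
    (1/2n)||y - Xb||^2 + lam * sum_j w_j |b_j|."""
    n, p = X.shape
    pilot = np.linalg.lstsq(X, y, rcond=None)[0]
    w = 1.0 / (np.abs(pilot) ** gamma + 1e-8)
    beta = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]        # residual excluding j
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam * w[j]) / (X[:, j] @ X[:, j] / n)
    return beta
```

The data-driven weights penalize likely-irrelevant variables heavily while barely shrinking strong ones, which is the mechanism behind the oracle property.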
Estimation of train dwell time at short stops based on track occupation event data
Li, D.; Daamen, W.; Goverde, R.M.P.
2015-01-01
Train dwell time is one of the most unpredictable components of railway operations mainly due to the varying volumes of alighting and boarding passengers. For reliable estimations of train running times and route conflicts on main lines it is however necessary to obtain accurate estimations of dwell
Dose estimation from food intake due to the Fukushima Daiichi nuclear power plant accident
International Nuclear Information System (INIS)
Yamaguchi, Ichiro; Terada, Hiroshi; Kunugita, Naoki; Takahashi, Kunihiko
2013-01-01
Since the Fukushima Daiichi nuclear power plant accident, concerns about the radiation safety of food have been raised at home and abroad, and many measures have been taken to address them. To evaluate the effectiveness of these measures, dose estimation due to food consumption has been attempted by various methods. In this paper, we show the results of dose estimation based on the monitoring data of radioactive materials in food published by the Ministry of Health, Labour and Welfare. The Radioactive Material Response Working Group in the Food Sanitation Subcommittee of the Pharmaceutical Affairs and Food Sanitation Council reported such dose estimation results on October 31, 2011, using monitoring data from immediately after the accident through September 2011. The results presented in this paper are the effective dose and thyroid equivalent dose integrated from immediately after the accident up to December 2012. The estimates of committed effective dose by age group derived from the radioiodine and radiocesium in food after the Fukushima Daiichi nuclear power plant accident showed the highest median value (0.19 mSv) in children 13-18 years of age. The highest 95th percentile value, 0.33 mSv, was found in the 1-6 years age range. These dose estimations from food can be useful for the evaluation of radiation risk for individuals or populations and for radiation protection measures. They would also be helpful for the study of risk management of food in the future. (author)
Estimated value of insurance premium due to Citarum River flood by using Bayesian method
Sukono; Aisah, I.; Tampubolon, Y. R. H.; Napitupulu, H.; Supian, S.; Subiyanto; Sidi, P.
2018-03-01
Citarum river flooding in South Bandung, West Java, Indonesia, occurs almost every year. It causes property damage, producing economic loss. The risk of loss can be mitigated by participating in a flood insurance program. In this paper, we discuss the estimation of insurance premiums due to Citarum river floods using the Bayesian method. It is assumed that the risk data for flood losses follow a Pareto distribution with a heavy right tail. The estimation of the distribution model parameters is done using the Bayesian method. First, parameter estimation is done under the assumption that the prior comes from the Gamma distribution family, while the observation data follow the Pareto distribution. Second, flood loss data are simulated based on the probability of damage in each flood-affected area. The result of the analysis shows that the estimated premium values based on the pure premium principle are as follows: a premium of IDR 338.63 million for a loss value of IDR 629.65 million; IDR 314.24 million for a loss of IDR 584.30 million; and IDR 308.95 million for a loss value of IDR 574.53 million. The premium estimator can be used as a reference for setting a reasonable premium, one that neither overburdens the insured nor results in a loss for the insurer.
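For a Pareto loss model with known scale and unknown tail index, a Gamma prior is conjugate, and the pure premium follows by plugging the posterior mean of the index into the Pareto mean. This is a sketch of the general approach; the paper's exact prior and loss specification may differ:

```python
import math

def pareto_posterior(losses, x_m, a0=2.0, b0=1.0):
    """Gamma(a0, b0) prior on the Pareto tail index alpha (scale x_m known)
    is conjugate: the posterior is Gamma(a0 + n, b0 + sum log(x_i / x_m)).
    Returns the posterior (shape, rate)."""
    n = len(losses)
    s = sum(math.log(x / x_m) for x in losses)
    return a0 + n, b0 + s

def pure_premium(a_post, b_post, x_m):
    """Pure premium = plug-in of the posterior mean of alpha into the
    Pareto mean E[X] = alpha * x_m / (alpha - 1), valid for alpha > 1."""
    alpha_hat = a_post / b_post
    if alpha_hat <= 1.0:
        raise ValueError("infinite mean: estimated alpha <= 1")
    return alpha_hat * x_m / (alpha_hat - 1.0)
```

Heavier observed tails pull the posterior toward smaller alpha, raising the premium, which matches the ordering of the loss/premium pairs reported in the abstract.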
Inferring Saving in Training Time From Effect Size Estimates
National Research Council Canada - National Science Library
Burright, Burke
2000-01-01
.... Students' time saving represents a major potential benefit of using them. This paper fills a methodology gap in estimating the students' timesaving benefit of asynchronous training technologies...
Real Time Seismic Loss Estimation in Italy
Goretti, A.; Sabetta, F.
2009-04-01
For more than 15 years the Seismic Risk Office has been able to perform a real-time evaluation of the potential earthquake loss in any part of Italy. Once the epicentre and the magnitude of the earthquake are made available by the National Institute for Geophysics and Volcanology, the model, based on the Italian Geographic Information Systems, is able to evaluate the extent of the damaged area and the consequences on the built environment. In recent years the model has been significantly improved with new methodologies able to condition the uncertainties using observations coming from the field during the first days after the event. However, it is believed that the main challenges in loss analysis are related to the input data more than to the methodologies. Unlike the urban scenario, where missing data can be collected with enough accuracy, country-wide analysis requires the use of existing databases, often collected for purposes other than seismic scenario evaluation, and hence in some ways lacking completeness and homogeneity. Soil properties, building inventory and population distribution are the main input data that must be known for any site in the whole Italian territory. To this end the National Census on Population and Dwellings has provided information on the residential building types and the population that lives in those building types. Critical buildings, such as hospitals, fire brigade stations and schools, are not included in the inventory, since the national plan for seismic risk assessment of critical buildings is still under way. The choice of a proper ground motion parameter, its attenuation with distance and the building type fragility are important ingredients of the model as well. The presentation will focus on the above mentioned issues, highlighting the different data sets used and their accuracy, and comparing the model, input data and results when geographical areas of different extent are considered: from the urban scenarios
DUE GlobBiomass - Estimates of Biomass on a Global Scale
Eberle, J.; Schmullius, C.
2017-12-01
For the last three years, a new ESA Data User Element (DUE) project has focused on creating improved knowledge about the Essential Climate Variable biomass. The main purpose of the DUE GlobBiomass project is to better characterize and reduce uncertainties of above-ground biomass (AGB) estimates by developing an innovative synergistic mapping approach in five regional sites (Sweden, Poland, Mexico, Kalimantan, South Africa) for the epochs 2005, 2010 and 2015, and one global map for the year 2010. The project team includes leading Earth observation experts in Europe and is linked through partnership agreements with further national bodies from Brazil, Canada, China, Russia and South Africa. GlobBiomass has demonstrated how Earth observation data can be integrated with in situ measurements and ecological understanding to provide improved biomass estimates that can be effectively exploited by users. The target users were mainly drawn from the climate and carbon cycle modelling communities and included users concerned with carbon emissions and uptake due to biomass changes within initiatives such as REDD+. GlobBiomass provides a harmonised structure that can be exploited to address user needs for biomass information, and is capable of being progressively refined as new data and methods become available. This presentation will give an overview of the technical prerequisites and final results of the GlobBiomass project.
Bias Errors due to Leakage Effects When Estimating Frequency Response Functions
Directory of Open Access Journals (Sweden)
Andreas Josefsson
2012-01-01
Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data are contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data are segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper the bias errors in the H1 and H2 estimates are studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations and very good agreement is found between the results from the proposed bias expressions and the empirical results.
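The Welch-style H1 machinery the abstract refers to can be sketched in pure Python. This is a hedged illustration, not the paper's method: a static gain system is used so the estimate is exact at every bin, whereas a resonant system excited across block boundaries would exhibit exactly the leakage bias the paper analyzes. The block size and function names are assumptions.

```python
import cmath
import math
import random

def dft(x):
    """Naive discrete Fourier transform (no FFT, for clarity)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def h1_welch(x, y, nseg, nblocks):
    """H1 = Gxy / Gxx, averaged over Hann-windowed non-overlapping blocks."""
    w = [0.5 - 0.5 * math.cos(2 * math.pi * t / nseg) for t in range(nseg)]  # Hann
    gxx = [0.0] * nseg
    gxy = [0j] * nseg
    for b in range(nblocks):
        xs = [x[b * nseg + t] * w[t] for t in range(nseg)]
        ys = [y[b * nseg + t] * w[t] for t in range(nseg)]
        X, Y = dft(xs), dft(ys)
        for k in range(nseg):
            gxx[k] += abs(X[k]) ** 2                 # auto-spectrum accumulation
            gxy[k] += X[k].conjugate() * Y[k]        # cross-spectrum accumulation
    return [gxy[k] / gxx[k] for k in range(nseg)]

random.seed(1)
x = [random.gauss(0, 1) for _ in range(1024)]        # stochastic excitation
y = [2.0 * v for v in x]                             # static system, gain 2: no leakage bias
H1 = h1_welch(x, y, nseg=64, nblocks=16)
print(abs(H1[0]))                                    # ≈ 2.0 at every bin for a static gain
```

Replacing the static gain with a lightly damped resonance would make `H1` underestimate the peak, which is the bias the proposed window-autocorrelation expressions quantify.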
International Nuclear Information System (INIS)
Partridge, J.E.; Horton, T.R.; Sensintaffar, E.L.; Boysen, G.A.
1978-06-01
The EPA Office of Radiation Programs has conducted a series of studies to determine the radiological impact of the phosphate mining and milling industry. This report describes the efforts to estimate the radiation doses due to airborne emissions of particulates from selected phosphate milling operations in Florida. Two wet process phosphoric acid plants and one ore drying facility were selected for this study. The 1976 Annual Operations/Emissions Report, submitted by each facility to the Florida Department of Environmental Regulation, and a field survey trip by EPA personnel to each facility were used to develop data for dose calculations. The field survey trip included sampling for stack emissions and ambient air samples collected in the general vicinity of each plant. Population and individual radiation dose estimates are made based on these sources of data
Years of life gained due to leisure-time physical activity in the U.S.
Janssen, Ian; Carson, Valerie; Lee, I-Min; Katzmarzyk, Peter T; Blair, Steven N
2013-01-01
Physical inactivity is an important modifiable risk factor for noncommunicable disease. The degree to which physical activity affects the life expectancy of Americans is unknown. This study estimated the potential years of life gained due to leisure-time physical activity in the U.S. Data from the National Health and Nutrition Examination Survey (2007-2010); National Health Interview Study mortality linkage (1990-2006); and U.S. Life Tables (2006) were used to estimate and compare life expectancy at each age of adult life for inactive (no moderate to vigorous physical activity); somewhat-active (some moderate to vigorous activity but <500 MET minutes/week); and active (≥500 MET minutes/week of moderate to vigorous activity) adults. Analyses were conducted in 2012. Somewhat-active and active non-Hispanic white men had a life expectancy at age 20 years that was ~2.4 years longer than that for inactive men; this life expectancy advantage was 1.2 years at age 80 years. Similar observations were made in non-Hispanic white women, with a higher life expectancy within the active category of 3.0 years at age 20 years and 1.6 years at age 80 years. In non-Hispanic black women, as many as 5.5 potential years of life were gained due to physical activity. Significant increases in longevity were also observed within somewhat-active and active non-Hispanic black men; however, among Hispanics the years-of-life-gained estimates were not significantly different from 0 years gained. Leisure-time physical activity is associated with increases in longevity. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Years of Life Gained Due to Leisure-Time Physical Activity in the United States
Janssen, Ian; Carson, Valerie; Lee, I-Min; Katzmarzyk, Peter T.; Blair, Steven N.
2013-01-01
Background Physical inactivity is an important modifiable risk factor for non-communicable disease. The degree to which physical activity affects the life expectancy of Americans is unknown. This study estimated the potential years of life gained due to leisure-time physical activity across the adult lifespan in the United States. Methods Data from the National Health and Nutrition Examination Survey (2007–2010), National Health Interview Study mortality linkage (1990–2006), and US Life Tables (2006) were used to estimate and compare life expectancy at each age of adult life for inactive (no moderate-to-vigorous physical activity), somewhat active (some moderate-to-vigorous activity but <500 metabolic equivalent min/week), and active (≥500 metabolic equivalent min/week of moderate-to-vigorous activity) adults. Analyses were conducted in 2012. Results Somewhat active and active non-Hispanic white men had a life expectancy at age 20 that was around 2.4 years longer than that of inactive men; this life expectancy advantage was 1.2 years at age 80. Similar observations were made in non-Hispanic white women, with a higher life expectancy within the active category of 3.0 years at age 20 and 1.6 years at age 80. In non-Hispanic black women, as many as 5.5 potential years of life were gained due to physical activity. Significant increases in longevity were also observed within somewhat active and active non-Hispanic black men; however, among Hispanics the years-of-life-gained estimates were more variable and not significantly different from 0 years gained. Conclusions Leisure-time physical activity is associated with increases in longevity in the United States. PMID:23253646
Estimation of organ and effective dose due to Compton backscatter security scans
International Nuclear Information System (INIS)
Hoppe, Michael E.; Schmidt, Taly Gilat
2012-01-01
Purpose: To estimate organ and effective radiation doses due to backscatter security scanners using Monte Carlo simulations and a voxelized phantom set. Methods: Voxelized phantoms of male and female adults and children were used with the GEANT4 toolkit to simulate a backscatter security scan. The backscatter system was modeled based on specifications available in the literature. The simulations modeled a 50 kVp spectrum with 1.0 mm-aluminum-equivalent filtration and a previously measured exposure of approximately 4.6 μR at 30 cm from the source. Photons and secondary interactions were tracked from the source until they reached zero kinetic energy or exited from the simulation’s boundaries. The energy deposited in the phantoms’ respective organs was tallied and used to calculate total organ dose and total effective dose for frontal, rear, and full scans with subjects located 30 and 75 cm from the source. Results: For a full screen, all phantoms’ total effective doses were below the established 0.25 μSv standard, with an estimated maximum total effective dose of 0.07 μSv for full screen of a male child. The estimated maximum organ dose due to a full screen was 1.03 μGy, deposited in the adipose tissue of the male child phantom when located 30 cm from the source. All organ dose estimates had a coefficient of variation of less than 3% for a frontal scan and less than 11% for a rear scan. Conclusions: Backscatter security scanners deposit dose in organs beyond the skin. The effective dose is below recommended standards set by the Health Physics Society (HPS) and the American National Standards Institute (ANSI) assuming the system provides a maximum exposure of approximately 4.6 μR at 30 cm.
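The total effective dose reported above is, in the end, a tissue-weighted sum of organ equivalent doses, E = Σ w_T · H_T. The final weighting step can be sketched directly; this is an illustration of the formula, not the paper's GEANT4 pipeline, and only a small ICRP-style subset of tissue weighting factors is listed, so the model is deliberately incomplete and the organ doses are hypothetical.

```python
# Effective dose E = sum over tissues T of w_T * H_T.
# Illustrative subset of ICRP-103-style tissue weighting factors; a real
# calculation sums over all tissues defined by the standard.
TISSUE_WEIGHTS = {
    "skin": 0.01,
    "lung": 0.12,
    "stomach": 0.12,
    "remainder": 0.12,
}

def effective_dose(organ_doses_uSv):
    """Weighted sum of organ equivalent doses (values in micro-sievert)."""
    return sum(TISSUE_WEIGHTS[t] * h for t, h in organ_doses_uSv.items())

# hypothetical organ doses from a single backscatter scan (not measured data)
doses = {"skin": 1.0, "lung": 0.05, "stomach": 0.04, "remainder": 0.05}
print(round(effective_dose(doses), 4))  # → 0.0268
```

Because the skin weighting factor is small, a comparatively large skin dose contributes little to the effective dose, which is consistent with the abstract's point that organs beyond the skin matter for the total.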
Al-Jumaily, Ahmed; Chen, Leizhi
2012-10-07
This paper presents a novel approach to estimate stiffness changes in airway smooth muscles due to external oscillation. Artificial neural networks are used to model the stiffness changes due to cyclic stretches of the smooth muscles. The nonlinear relationship between stiffness ratios and oscillation frequencies is modeled by a feed-forward neural network (FNN) model. The structure of the FNN is selected through the training and validation using literature data from 11 experiments with different muscle lengths, muscle masses, oscillation frequencies and amplitudes. Data pre-processing methods are used to improve the robustness of the neural network model to match the non-linearity. The validation results show that the FNN model can predict the stiffness ratio changes with a mean square error of 0.0042. Copyright © 2012 Elsevier Ltd. All rights reserved.
Nonparametric volatility density estimation for discrete time models
Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.
2005-01-01
We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed
Challenges in automated estimation of capillary refill time in dogs
Cugmas, Blaž; Spigulis, Janis
2018-02-01
Capillary refill time (CRT) is a part of the cardiorespiratory examination in dogs. Changes in CRT can reflect pathological conditions like shock or anemia. Visual CRT estimation has low repeatability; therefore, optical systems for automated estimation have recently appeared. Since existing systems are unsuitable for use in dogs, we designed a simple, small and portable device which could be easily used at a veterinary clinic. The device was preliminarily tested on several measurement sites in two dogs. Not all measurement sites were suitable for CRT measurements, due to the optical and mechanical properties of the underlying tissue. CRT measurements were possible on the labial mucosa, above the sternum and on the digit, where CRT was in the range of values retrieved from the color video of the visual CRT measurement. It seems that light penetration predominantly governs the tissue optical response when pressure is applied. Therefore, it is important to select a proper light source, which reaches only superficial capillaries and does not penetrate deeper. Blue or green light is probably suitable for light skin or mucosa; on the other hand, red or near-infrared light might be used for skin with a pigmented or thick epidermis. Additionally, further improvements of the device design are considered, like adding a calibrated spring, which would ensure application of consistent pressure.
A Dynamic Travel Time Estimation Model Based on Connected Vehicles
Directory of Open Access Journals (Sweden)
Daxin Tian
2015-01-01
With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.
Estimation of integrity of cast-iron cask against impact due to free drop test, (1)
International Nuclear Information System (INIS)
Itoh, Chihiro
1988-01-01
Ductile cast iron is being examined for use in shipping and storage casks from an economic point of view. However, ductile cast iron is generally considered to be a brittle material. Therefore, it is very important to estimate the integrity of a cast iron cask against brittle failure due to impact load in the 9 m drop test and the 1 m drop test onto a pin. An F.E.M. analysis which takes the nonlinearity of materials into account, together with an estimation against brittle failure by the method proposed in this report, was carried out. From the analysis, it is made clear that the critical flaw depth (the minimum depth to initiate brittle failure) is 21.1 mm and 13.1 mm for the 9 m drop test and the 1 m drop test onto a pin, respectively. These flaw depths can be detected by ultrasonic testing. Thus, the cask is assured against brittle failure due to impact load in the 9 m drop test and the 1 m drop test onto a pin. (author)
GLODEP2: a computer model for estimating gamma dose due to worldwide fallout of radioactive debris
International Nuclear Information System (INIS)
Edwards, L.L.; Harvey, T.F.; Peterson, K.R.
1984-03-01
The GLODEP2 computer code provides estimates of the surface deposition of worldwide radioactivity and the gamma-ray dose to man from intermediate and long-term fallout. The code is based on empirical models derived primarily from injection-deposition experience gained from the US and USSR nuclear tests in 1958. Under the assumption that a nuclear power facility is destroyed and that its debris behaves in the same manner as the radioactive cloud produced by the nuclear weapon that attacked the facility, predictions are made for the gamma dose from this source of radioactivity. As a comparison study, the gamma dose due to the atmospheric nuclear tests from the period 1951 to 1962 has been computed. The computed and measured values from Grove, UK and Chiba, Japan agree to within a few percent. The global deposition of radioactivity and the resultant gamma dose from a hypothetical strategic nuclear exchange between the US and the USSR are reported. Of the assumed 5300 Mton in the exchange, 2031 Mton of radioactive debris is injected into the atmosphere. The highest estimated average whole-body total integrated dose over 50 years (assuming no reduction by sheltering or weathering) is 23 rem, in the 30 to 50 degree latitude band. If the attack included a 100 GW(e) nuclear power industry as targets in the US, this dose increases to 84.6 rem. Hotspots due to rainfall could increase these values by factors of 10 to 50.
Mathur, P K; Herrero-Medrano, J M; Alexandri, P; Knol, E F; ten Napel, J; Rashidi, H; Mulder, H A
2014-12-01
A method was developed and tested to estimate challenge load due to disease outbreaks and other challenges in sows using reproduction records. The method was based on reproduction records from a farm with known disease outbreaks. It was assumed that the reduction in weekly reproductive output within a farm is proportional to the magnitude of the challenge. As the challenge increases beyond a certain threshold, it is manifested as an outbreak. The reproduction records were divided into 3 datasets. The first, the training dataset, consisted of 57,135 reproduction records from 10,901 sows from 1 farm in Canada with several outbreaks of porcine reproductive and respiratory syndrome (PRRS). The known disease status of sows was regressed on the traits number born alive, number of losses as a combination of stillborn and mummified piglets, and number of weaned piglets. The regression coefficients from this analysis were then used as weighting factors for the derivation of an index measure called the challenge load indicator. These weighting factors were derived with i) a two-step approach using residuals or year-week solutions estimated from a previous step, and ii) a single-step approach using the trait values directly. Two types of models were used for each approach: a logistic regression model and a general additive model. The estimates of the challenge load indicator were then compared based on their ability to detect PRRS outbreaks in a test dataset consisting of records from 65,826 sows from 15 farms in the Netherlands. These farms differed from the Canadian farm with respect to PRRS virus strains and the severity and frequency of outbreaks. The single-step approach using a general additive model performed best and detected 14 out of the 15 outbreaks. This approach was then further validated using the third dataset, consisting of reproduction records of 831,855 sows in 431 farms located in different countries in Europe and America. A total of 41 out of 48 outbreaks were detected.
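The single-step idea — regress known outbreak status on reproduction traits, then reuse the fitted coefficients as weights for a challenge load indicator — can be sketched with synthetic data. This is an illustrative reconstruction, not the authors' model: their best variant was a general additive model fitted to farm records, whereas here a plain logistic regression on invented weekly herd summaries stands in, and all trait values and thresholds are assumptions.

```python
import math
import random

def standardize(X):
    """Column-wise z-scores (simple data pre-processing)."""
    p = len(X[0])
    means = [sum(r[j] for r in X) / len(X) for j in range(p)]
    stds = [(sum((r[j] - means[j]) ** 2 for r in X) / len(X)) ** 0.5 for j in range(p)]
    return [[(r[j] - means[j]) / stds[j] for j in range(p)] for r in X]

def train_logistic(X, y, lr=0.5, epochs=300):
    """Gradient-descent logistic regression; w[0] is the intercept."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)
    for _ in range(epochs):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1 / (1 + math.exp(-z)) - yi
            grad[0] += err
            for j in range(p):
                grad[j + 1] += err * xi[j]
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def challenge_load(w, traits):
    """Indicator: regression weights applied to a week's reproduction traits."""
    return w[0] + sum(wj * t for wj, t in zip(w[1:], traits))

# synthetic weekly records [born alive, stillborn+mummified, weaned];
# outbreak weeks show reduced output (all numbers purely illustrative)
random.seed(0)
normal = [[random.gauss(13, 1), random.gauss(1, 0.5), random.gauss(11, 1)]
          for _ in range(100)]
outbreak = [[random.gauss(10, 1), random.gauss(3, 0.5), random.gauss(8, 1)]
            for _ in range(100)]
Z = standardize(normal + outbreak)
y = [0] * 100 + [1] * 100
w = train_logistic(Z, y)
scores = [challenge_load(w, zi) for zi in Z]        # high score = likely outbreak week
accuracy = sum((s > 0) == (lab == 1) for s, lab in zip(scores, y)) / len(y)
print(round(accuracy, 2))
```

Thresholding the indicator (here at 0) mirrors the paper's notion that a challenge beyond a certain threshold manifests as a detectable outbreak.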
Fuzzy logic estimator of rotor time constant in induction motors
Energy Technology Data Exchange (ETDEWEB)
Alminoja, J. [Tampere University of Technology (Finland). Control Engineering Laboratory; Koivo, H. [Helsinki University of Technology, Otaniemi (Finland). Control Engineering Laboratory
1997-12-31
Vector control of AC machines is a well-known and widely used technique in induction machine control. It offers an exact method for speed control of induction motors, but it is also sensitive to changes in machine parameters. For example, the rotor time constant has a strong dependence on temperature. In this paper a fuzzy logic estimator is developed with which the rotor time constant can be estimated when the machine has a load. It is simpler than the estimators proposed in the literature. The fuzzy estimator is tested by simulation with step-wise abrupt changes and slow drifting. (orig.) 7 refs.
Improvement of radiation dose estimation due to nuclear accidents using deep neural network and GPU
Energy Technology Data Exchange (ETDEWEB)
Desterro, Filipe S.M.; Almeida, Adino A.H.; Pereira, Claudio M.N.A., E-mail: filipesantana18@gmail.com, E-mail: adino@ien.gov.br, E-mail: cmcoelho@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)
2017-07-01
Recently, the use of mobile devices has been proposed for dose assessment during nuclear accidents. The idea is to support field teams by providing an approximate estimate of the dose distribution map in the vicinity of the nuclear power plant (NPP), without needing to be connected to the NPP systems. In order to provide such stand-alone execution, the use of artificial neural networks (ANN) has been proposed in substitution of the complex and time-consuming physical models executed by the atmospheric dispersion radionuclide (ADR) system. One limitation observed in this approach is the very time-consuming training of the ANNs. Moreover, as the number of input parameters increases, the performance of standard ANNs, like the Multilayer Perceptron (MLP) with backpropagation training, degrades, leading to unreasonable training times. To improve learning and allow better dose estimates, more complex ANN architectures are required. ANNs with many layers (many more than the typical number), referred to as Deep Neural Networks (DNN), for example, have been demonstrated to achieve better results. On the other hand, the training of such ANNs is very slow. In order to allow the use of such DNNs with reasonable training times, a parallel programming solution using Graphics Processing Units (GPU) and the Compute Unified Device Architecture (CUDA) is proposed. This work focuses on the study of computational technologies for improvement of the ANNs to be used in the mobile application, as well as their training algorithms. (author)
Improvement of radiation dose estimation due to nuclear accidents using deep neural network and GPU
International Nuclear Information System (INIS)
Desterro, Filipe S.M.; Almeida, Adino A.H.; Pereira, Claudio M.N.A.
2017-01-01
Recently, the use of mobile devices has been proposed for dose assessment during nuclear accidents. The idea is to support field teams by providing an approximate estimate of the dose distribution map in the vicinity of the nuclear power plant (NPP), without needing to be connected to the NPP systems. In order to provide such stand-alone execution, the use of artificial neural networks (ANN) has been proposed in substitution of the complex and time-consuming physical models executed by the atmospheric dispersion radionuclide (ADR) system. One limitation observed in this approach is the very time-consuming training of the ANNs. Moreover, as the number of input parameters increases, the performance of standard ANNs, like the Multilayer Perceptron (MLP) with backpropagation training, degrades, leading to unreasonable training times. To improve learning and allow better dose estimates, more complex ANN architectures are required. ANNs with many layers (many more than the typical number), referred to as Deep Neural Networks (DNN), for example, have been demonstrated to achieve better results. On the other hand, the training of such ANNs is very slow. In order to allow the use of such DNNs with reasonable training times, a parallel programming solution using Graphics Processing Units (GPU) and the Compute Unified Device Architecture (CUDA) is proposed. This work focuses on the study of computational technologies for improvement of the ANNs to be used in the mobile application, as well as their training algorithms. (author)
Empirical Study of Travel Time Estimation and Reliability
Li, Ruimin; Chai, Huajun; Tang, Jin
2013-01-01
This paper explores the travel time distribution of different types of urban roads, the link and path average travel time, and variance estimation methods by analyzing the large-scale travel time dataset detected from automatic number plate readers installed throughout Beijing. The results show that the best-fitting travel time distribution for different road links in 15 min time intervals differs for different traffic congestion levels. The average travel time for all links on all days can b...
Mode choice endogeneity in value of travel time estimation
DEFF Research Database (Denmark)
Mabit, Stefan Lindhard; Fosgerau, Mogens
The current way to estimate the value of travel time is to use a mode-specific sample and hence to estimate mode-specific values of travel time. This approach raises certain questions concerning how to generalise the values to a population. A problem would be if there is an uncontrolled sample selection mechanism. This is the case if there is correlation between mode choice and the value of travel time that is not controlled for by explanatory variables. What could confuse the estimated values is the difficulty of separating mode effects from user effects. An example would be the effect of income … of travel time we use a stated choice dataset. These data include binary choice within mode for car and bus. The first approach is to use a probit model to model mode choice using instruments and then use this in the estimation of the value of travel time. The second approach is based on the use of a very …
Optical losses due to tracking error estimation for a low concentrating solar collector
International Nuclear Information System (INIS)
Sallaberry, Fabienne; García de Jalón, Alberto; Torres, José-Luis; Pujol-Nadal, Ramón
2015-01-01
Highlights: • A solar thermal collector with low concentration and one-axis tracking was tested. • A quasi-dynamic testing procedure for the IAM was defined for a tracking collector. • The adequacy of the concentrator optics to the tracking accuracy was checked. • The maximum and long-term optical losses due to tracking error were calculated. - Abstract: The determination of the accuracy of a solar tracker used in domestic hot water solar collectors is not yet standardized. However, when using optical concentration devices, it is important to use a solar tracker with adequate precision with regard to the specific optical concentration factor. Otherwise, the concentrator would sustain high optical losses due to inadequate focusing of the solar radiation onto its receiver, despite having a good quality. This study focuses on the estimation of the long-term optical losses due to the tracking error of a low-temperature collector using low-concentration optics. For this purpose, a testing procedure for the incidence angle modifier on the tracking plane is proposed to determine the acceptance angle of its concentrator, even with different longitudinal incidence angles along the focal line plane. Then, the impact of the maximum tracking error angle upon the optical efficiency has been determined. Finally, the calculation of the long-term optical error due to the tracking errors, using the design angular tracking error declared by the manufacturer, is carried out. The maximum tracking error calculated for this collector implies an optical loss of about 8.5%, which is high, but the average long-term optical loss calculated for one year was about 1%, which is reasonable for such collectors used for domestic hot water.
Real-Time Head Pose Estimation on Mobile Platforms
Directory of Open Access Journals (Sweden)
Jianfeng Ren
2010-06-01
Many computer vision applications such as augmented reality require head pose estimation. As far as the real-time implementation of head pose estimation on relatively resource-limited mobile platforms is concerned, it is necessary to satisfy real-time constraints while maintaining reasonable head pose estimation accuracy. The head pose estimation approach introduced in this paper is an attempt to meet this objective. The approach consists of the following components: Viola-Jones face detection, color-based face tracking using an online calibration procedure, and head pose estimation using Hu moment features and Fisher linear discriminant analysis. Experimental results running on an actual mobile device are reported, exhibiting both the real-time and accuracy aspects of the developed approach.
Estimating a population cumulative incidence under calendar time trends
DEFF Research Database (Denmark)
Hansen, Stefan N; Overgaard, Morten; Andersen, Per K
2017-01-01
BACKGROUND: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date … by calendar time trends, the total sample Kaplan-Meier and Aalen-Johansen estimators do not provide useful estimates of the general risk in the target population. We present some alternatives to this type of analysis. RESULTS: We show how a proportional hazards model may be used to extrapolate disease risk estimates if proportionality is a reasonable assumption. If not reasonable, we instead advocate that a more useful description of the disease risk lies in the age-specific cumulative incidence curves across strata given by time of entry, or perhaps just the end-of-follow-up estimates across all strata …
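For reference, the total-sample Kaplan-Meier estimator the abstract critiques is easy to sketch. This is a generic textbook implementation with an invented three-subject dataset, not the authors' method; it simply makes concrete what "the estimator" computes before any calendar-time adjustment is considered.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from (time, event) pairs.
    events[i] == 1 for an observed event, 0 for right-censoring."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, s = [], 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = at = 0
        while i < len(data) and data[i][0] == t:   # group tied times
            at += 1
            d += data[i][1]
            i += 1
        if d:                                      # step only at event times
            s *= 1 - d / n_at_risk
            surv.append((t, s))
        n_at_risk -= at
    return surv

# worked toy example: event at t=1, censoring at t=2, event at t=3;
# survival drops to 2/3 at t=1 and to 0 at t=3
print(kaplan_meier([1, 2, 3], [1, 0, 1]))
```

The cumulative incidence is then 1 minus this curve (or, with competing risks, the Aalen-Johansen generalization); the paper's point is that pooling subjects recruited at different calendar times into one such curve can be misleading when risk trends over calendar time.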
International Nuclear Information System (INIS)
Oliveira, G.M. de; Leitao, M. de M.V.B.R.
2000-01-01
The objective of this study was to analyze the consequences for evapotranspiration (ET) estimates, during the growing cycle of a peanut crop, of errors committed in the determination of the radiation balance (Rn), as well as of those caused by advective effects. This research was conducted at the Experimental Station of CODEVASF in an irrigated perimeter located in the city of Rodelas, BA, during the period of September to December 1996. The results showed that errors of the order of 2.2 MJ m-2 d-1 in the calculation of Rn, and consequently in the estimate of ET, can occur depending on the time considered for the daily total of Rn. It was verified that the areas surrounding the experimental field, as well as the areas of exposed soil within the field, contributed significantly to the generation of local advection of sensible heat, which resulted in an increase of the evapotranspiration.
A general approach for the estimation of loss of life due to natural and technological disasters
International Nuclear Information System (INIS)
Jonkman, S.N.; Lentz, A.; Vrijling, J.K.
2010-01-01
In assessing the safety of engineering systems in the context of quantitative risk analysis one of the most important consequence types concerns the loss of life due to accidents and disasters. In this paper, a general approach for loss of life estimation is proposed which includes three elements: (1) the assessment of physical effects associated with the event; (2) determination of the number of exposed persons (taking into account warning and evacuation); and (3) determination of mortality amongst the population exposed. The typical characteristics of and modelling approaches for these three elements are discussed. This paper focuses on 'small probability-large consequences' events within the engineering domain. It is demonstrated how the proposed approach can be applied to various case studies, such as tunnel fires, earthquakes and flood events.
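The three-element structure described above multiplies through directly: physical effects define an exposed zone, warning and evacuation reduce the number of people in it, and a mortality fraction applies to those who remain. A deliberately simple sketch, with every scenario number hypothetical:

```python
def loss_of_life(population, f_exposed, f_evacuated, mortality):
    """Three-element estimate:
    (1) physical effects fix the fraction of the population in the exposed zone,
    (2) warning/evacuation removes a fraction of those exposed,
    (3) mortality applies to the people who remain exposed."""
    exposed = population * f_exposed * (1 - f_evacuated)
    return exposed * mortality

# hypothetical flood scenario: 100,000 residents, 40% in the flooded zone,
# 90% evacuated in time, 1% mortality among those still exposed
print(round(loss_of_life(100_000, 0.40, 0.90, 0.01), 2))  # → 40.0
```

In a full quantitative risk analysis each factor would itself be an uncertain, event-dependent model (flood depth, warning time, shelter availability) rather than a single number, which is what the paper's three elements formalize.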
Energy Technology Data Exchange (ETDEWEB)
Pineda Porras, Omar Andrey [Los Alamos National Laboratory]
2009-01-01
Over the past three decades, seismic fragility formulations for buried pipeline systems have been developed following two tendencies: the use of earthquake damage scenarios from several pipeline systems to create general pipeline fragility functions; and the use of damage scenarios from one pipeline system to create system-specific fragility functions. In this paper, the advantages and disadvantages of both tendencies are analyzed and discussed; in addition, a summary of what can be considered the new challenges for developing better pipeline seismic fragility formulations is presented. The most important conclusion of this paper is that more effort is needed to improve the estimation of transient ground strain, the main cause of pipeline damage due to seismic wave propagation; with relevant advances in that research field, new and better fragility formulations could be developed.
Estimating Time-to-Collision with Retinitis Pigmentosa
Jones, Tim
2006-01-01
This article reports on the ability of observers who are sighted and those with low vision to make time-to-collision (TTC) estimations using video. The TTC estimations made by the observers with low vision were comparable to those made by the sighted observers, and both groups made underestimation errors that were similar to those that were…
Time Skew Estimator for Dual-Polarization QAM Transmitters
DEFF Research Database (Denmark)
Medeiros Diniz, Júlio César; Da Ros, Francesco; Jones, Rasmus Thomas
2017-01-01
A simple method for joint estimation of transmitter's in-phase/quadrature and inter-polarization time skew is proposed and experimentally demonstrated. The method is based on clock tone extraction of a photodetected signal and a genetic algorithm. The maximum estimation error was 0.5 ps.
Estimation of committed effective dose due to tritium in ground water in some places of Maharashtra
International Nuclear Information System (INIS)
Reddy, P.J.; Bhade, S.P.D.; Kolekar, R.V.; Singh, Rajvir; Pradeepkumar, K.S.
2014-01-01
In the present study, tritium concentrations in well and borewell water samples collected from villages of Pune, Kolhapur and Ratnagiri were analyzed. The activity concentration ranged from 0.55 to 3.66 Bq L⁻¹. The associated age-dependent dose from water ingestion in the study area was estimated. The committed effective dose recorded for different age classes is negligible compared to World Health Organization and U.S. Environmental Protection Agency dose guidelines. The Minimum Detectable Activity achieved was 1.5 Bq L⁻¹ for a total counting time of 500 minutes. (author)
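A committed-dose estimate of this kind is essentially annual intake multiplied by a dose-per-unit-intake coefficient. A minimal sketch, assuming an adult drinking-water intake of 2 L per day and the ICRP-style ingestion coefficient for tritiated water (1.8e-11 Sv/Bq); both figures are assumptions for illustration, not values quoted in this record:

```python
DOSE_COEFF_HTO = 1.8e-11  # Sv/Bq, assumed ICRP-style ingestion coefficient for tritiated water (adult)
DAILY_INTAKE_L = 2.0      # assumed adult drinking-water intake, L/day

def annual_committed_dose_uSv(conc_bq_per_l):
    """Committed effective dose (microsievert) from one year of ingestion."""
    intake_bq = conc_bq_per_l * DAILY_INTAKE_L * 365.0
    return intake_bq * DOSE_COEFF_HTO * 1e6  # Sv -> microSv

# highest concentration reported in the study area
dose = annual_committed_dose_uSv(3.66)
```

Even at the maximum measured concentration the result is a small fraction of a microsievert per year, consistent with the record's conclusion that the dose is negligible against the WHO guideline (100 microSv/yr).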
Estimating spatial travel times using automatic vehicle identification data
2001-01-01
Prepared ca. 2001. The paper describes an algorithm that was developed for estimating reliable and accurate average roadway link travel times using Automatic Vehicle Identification (AVI) data. The algorithm presented is unique in two aspects. First, ...
Directory of Open Access Journals (Sweden)
R. Moratiel
2013-06-01
In agricultural ecosystems the use of evapotranspiration (ET) to improve irrigation water management is widespread. Commonly, the crop ET (ETc) is estimated by multiplying the reference crop evapotranspiration (ETo) by a crop coefficient (Kc). Accurate estimation of ETo is critical because it is the main factor affecting the calculation of crop water use and water management. ETo is generally estimated from meteorological variables recorded at reference weather stations. The main objective of this paper was to assess the effect of uncertainty due to random noise in the sensors used for measurement of meteorological variables on the estimation of ETo, crop ET and net irrigation requirements of grain corn and alfalfa in three irrigation districts of the middle Ebro River basin. Five scenarios were simulated, four of them considering each recorded meteorological variable individually (temperature, relative humidity, solar radiation and wind speed) and a fifth combining the uncertainty of all sensors. The uncertainty in relative humidity for irrigation districts Riegos del Alto Aragón (RAA) and Bardenas (BAR), and in temperature for irrigation district Canal de Aragón y Cataluña (CAC), were the two most important factors affecting the estimation of ETo, corn ET (ETc_corn), alfalfa ET (ETc_alf), net corn irrigation water requirements (IRncorn) and net alfalfa irrigation water requirements (IRnalf). Nevertheless, this effect was never greater than ±0.5% on an annual time scale. Wind speed (Scenario 3) was the third most influential variable in the fluctuations of evapotranspiration, followed by solar radiation. Considering the accuracy of all sensors on an annual time scale, the variation was about ±1% of ETo, ETc_corn, ETc_alf, IRncorn and IRnalf. The fluctuations of evapotranspiration were higher at shorter time scales. Daily ETo fluctuation remained lower than 5% during the growing season of corn and
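The sensor-noise propagation in this study can be mimicked by Monte Carlo perturbation of an ET formula. The sketch below uses the simpler Hargreaves equation as a stand-in for the paper's ETo model, and the noise level, temperatures and radiation value are assumed, so the spread it reports only illustrates the mechanism:

```python
import math
import random
import statistics

def eto_hargreaves(tmax, tmin, ra=25.0):
    """Hargreaves reference ET (mm/day); ra is extraterrestrial radiation in
    equivalent mm/day. A stand-in for the Penman-Monteith model of the paper."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

def noise_effect(tmax, tmin, sigma=0.3, n=5000, seed=1):
    """Relative spread (%) of ETo when both temperature readings carry
    independent Gaussian sensor noise of standard deviation sigma (deg C)."""
    random.seed(seed)
    base = eto_hargreaves(tmax, tmin)
    samples = [eto_hargreaves(tmax + random.gauss(0.0, sigma),
                              tmin + random.gauss(0.0, sigma))
               for _ in range(n)]
    return base, statistics.stdev(samples) / base * 100.0
```

Running all-sensor scenarios over a full year, as the paper does, amounts to repeating this perturbation for every variable on every day and aggregating.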
A mathematical model for estimating the vibration due to blasting in Sarcheshmeh Copper Mine
International Nuclear Information System (INIS)
Hossaini, M. F.; Javaherian, A.; Pourghasemi Sagand, M.
2002-01-01
Ground vibration due to blasting is the subject of many investigations, and much work has been done to estimate the quality and quantity of this blasting outcome. Mathematical models proposed by various investigators are a notable result of these investigations. In this paper, the origins of these mathematical models are studied and their shortcomings are pointed out. With the aid of real data a new empirical model is proposed; it is a modification of the previous ones with some different parameters. The investigation is based on analyzing data obtained from 14 blasts in Sarcheshmeh Copper Mine, collected by seismographs installed in the area. In the proposed model, instead of the amount of charge exploded in each delay, the amount of charge the vibration due to which …

… is designed to facilitate application of a radial confining pressure to a grouted bolt while pulling it axially. During the test, both the axial displacement of the bolt and the radial dilation of the grout were monitored. A few deformed bolts were designed and manufactured to study the effect of the shape of the ribs. During pull-out, the cement captured between lugs shears, which emphasizes the importance of the shear strength of the grout annulus. In this report, details of the laboratory test results are presented and conclusions are given based on the obtained results
International Nuclear Information System (INIS)
Schwarz, G.; Dunning, D.E. Jr.
1982-01-01
An attempt has been made to quantify the variability in human biological parameters determining the dose to man from ingestion of a unit activity of soluble ¹³⁷Cs and the resulting imprecision in the predicted total-body dose commitment. The analysis is based on an extensive review of the literature along with the application of statistical methods to determine parameter variability, correlations between parameters, and predictive imprecision. The variability in the principal biological parameters involved (biological half-time and total-body mass) can be described by a geometric standard deviation of 1.2-1.5 for adults and 1.6-1.9 for children/adolescents of age 0.1-18 yr. The estimated predictive imprecision (using a Monte Carlo technique) in the total-body dose commitment from ingested ¹³⁷Cs can be described by a geometric standard deviation on the order of 1.3-1.4, meaning that the 99th percentile of the predicted distribution of dose is within approximately 2.1 times the mean value. The mean dose estimate is 0.009 Sv/MBq (34 mrem/μCi) for children/adolescents and 0.01 Sv/MBq (38 mrem/μCi) for adults. Little evidence of age dependence in the total-body dose from ingested ¹³⁷Cs is observed. (author)
On Assessment and Estimation of Potential Losses due to Land Subsidence in Urban Areas of Indonesia
Abidin, Hasanuddin Z.; Andreas, Heri; Gumilar, Irwan; Sidiq, Teguh P.
2016-04-01
subsidence also have relations among each other, so the accurate quantification of the potential losses caused by land subsidence in urban areas is not an easy task to accomplish. The direct losses are easier to estimate than the indirect losses. For example, the direct losses due to land subsidence in Bandung were estimated to be at least 180 million USD, but the indirect losses are still unknown.
Single-machine common/slack due window assignment problems with linear decreasing processing times
Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia
2017-08-01
This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
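For a fixed job sequence, the objective described above (earliness, tardiness, due window location and due window size costs under linearly decreasing processing times) can be evaluated directly. The sketch below is a hypothetical evaluation routine, not the paper's O(n log n) optimization algorithms, and the coefficient names are assumptions:

```python
def schedule_cost(jobs, b, d1, d2, alpha=1.0, beta=1.0, gamma=1.0, delta=1.0):
    """Cost of one sequence under actual processing time p_j = a_j - b*t
    (t = start time) and a common due window [d1, d2]. jobs lists the basic
    processing times a_j in sequence order; coefficients are illustrative."""
    t = 0.0
    total = gamma * d1 + delta * (d2 - d1)   # window location cost + window size cost
    for a in jobs:
        t += a - b * t                       # completion time with decreasing processing time
        total += alpha * max(0.0, d1 - t)    # earliness penalty
        total += beta * max(0.0, t - d2)     # tardiness penalty
    return total
```

With b = 0 this reduces to the classical fixed-processing-time due window problem; the optimization task is to choose the sequence and the window [d1, d2] minimizing this total.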
Limitations of the time slide method of background estimation
International Nuclear Information System (INIS)
Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis
2010-01-01
Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.
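The time-slide idea itself is compact: shift one detector's event times by multiples of a stride much larger than the signal coincidence window, and count the accidental coincidences that result. A minimal sketch with circular shifts and a naive O(n²) coincidence test; all parameters are illustrative:

```python
def coincidences(t1, t2, window):
    """Count event pairs coincident within |t1_i - t2_j| < window (naive O(n^2))."""
    return sum(1 for a in t1 for b in t2 if abs(a - b) < window)

def background_estimate(t1, t2, window, stride, n_slides, duration):
    """Mean accidental-coincidence count over time-shifted copies of detector 2.
    Shifts are circular over the observation duration, as in a standard slide."""
    counts = []
    for k in range(1, n_slides + 1):
        shifted = [(b + k * stride) % duration for b in t2]
        counts.append(coincidences(t1, shifted, window))
    return sum(counts) / n_slides
```

The paper's point concerns the limits of this estimator: with non-stationary data the slides are not independent samples of the background, and the variance of the false alarm estimate saturates as n_slides grows.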
Consequences of Secondary Calibrations on Divergence Time Estimates.
Directory of Open Access Journals (Sweden)
John J Schenk
Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper nodes than with shallower ones, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error: applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision, and the distribution of age estimates shifts away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.
The time aspect of bioenergy. Climate impacts of bioenergy due to differences in carbon uptake rates
Energy Technology Data Exchange (ETDEWEB)
Zetterberg, Lars [IVL Swedish Environmental Research Institute, Stockholm (Sweden); Chen, Deliang [Dept. of Earth Sciences, Univ. of Gothenburg, Gothenburg (Sweden)
2011-07-01
This paper investigates the climate impacts of bioenergy due to how different biofuels influence carbon stocks over time, and more specifically how fast combustion-related carbon emissions are compensated by uptake of atmospheric carbon. A set of fuel types representing different uptake rates is investigated, namely willow, branches and tops, stumps and coal. Net emissions are defined as emissions from utilizing the fuel minus emissions from a reference case of no utilisation. In the case of forest residues, the compensating 'uptake' is avoided emissions from the reference case of leaving the residues to decompose on the ground. Climate impacts are estimated using the measures radiative forcing and global average surface temperature, which have been calculated by an energy balance climate model. We conclude that there is a climate impact from using bioenergy due to how fast the emission pulse is compensated by uptake of atmospheric carbon (or avoided emissions). Biofuels with slower uptake rates have a stronger climate impact than fuels with a faster uptake rate, assuming all other parameters equal. The time perspective over which the analysis is done is crucial for the climate impact of biofuels. If only biogenic fluxes are considered, our results show that over a 100 year perspective branches and tops are better for climate mitigation than stumps, which in turn are better than coal. Over a 20 year time perspective this conclusion holds, but the differences between these fuels are relatively smaller. Establishing willow on earlier crop land may reduce atmospheric carbon, provided new land is available. However, these results are inconclusive since we have not considered the effects, if needed, of producing the traditional agricultural crops elsewhere. The analysis is not a life cycle assessment of different fuels and does therefore not consider the use of fossil fuels for logging, transportation and refining, other greenhouse gases than carbon or energy
International Nuclear Information System (INIS)
Brigido Flores, O.; Montalvan Estrada, A.; Fabelo Bonet, O.; Barreras Caballero, A.
2015-01-01
Cigarette smoking is one of the pathways that might contribute significantly to the increase in the radiation dose reaching man, due to the relatively large concentrations of polonium-210 found in tobacco leaves. The results of ²¹⁰Po determination on the 11 most frequently smoked brands of cigarettes and cigars, which constitute over 75% of the total cigarette consumption in Cuba, are presented and discussed. Moreover, the polonium content in cigarette smoke was estimated on the basis of its activity in cigarettes, ash, fresh filters and post-smoking filters. ²¹⁰Po was determined by gas flow proportional detector after spontaneous deposition of ²¹⁰Po on a high copper-content disk. The annual committed equivalent dose for lungs and the annual effective dose for smokers between 12-17 years old and for adults were calculated on the basis of ²¹⁰Po inhalation through cigarette smoke. The results showed concentrations ranging from 9.3 to 14.4 mBq per cigarette with a mean value of 11.8 ± 0.6 mBq per cigarette. The results of this work indicate that Cuban smokers who smoke one pack (20 cigarettes) per day inhale 62 to 98 mBq d⁻¹ of ²¹⁰Po, and smokers between 12-17 years old who consume 10 cigarettes daily inhale 30-50 mBq d⁻¹. The average committed equivalent dose for lungs is estimated to be 466 ± 36 and 780 ± 60 μSv yr⁻¹ for young and adult smokers, respectively, and the annual committed effective dose is calculated to be 60 ± 5 and 100 ± 8 μSv for these two groups of smokers, respectively. (Author)
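The dose arithmetic in this record is inhaled activity multiplied by an inhalation dose coefficient. The sketch below back-solves an effective coefficient of roughly 3.4e-6 Sv/Bq from the abstract's own numbers (about 100 μSv/yr from about 80 mBq/d inhaled); that coefficient is an assumption for illustration, not a quoted ICRP value:

```python
INHALATION_DOSE_COEFF = 3.4e-6  # Sv/Bq; assumed effective-dose coefficient for inhaled Po-210

def annual_effective_dose_uSv(inhaled_mbq_per_day):
    """Annual committed effective dose (microsievert) from daily Po-210 inhalation."""
    inhaled_bq_per_year = inhaled_mbq_per_day * 1e-3 * 365.0
    return inhaled_bq_per_year * INHALATION_DOSE_COEFF * 1e6  # Sv -> microSv

# mid-range adult smoker from the study: roughly 80 mBq inhaled per day
dose = annual_effective_dose_uSv(80.0)
```

The result lands near the record's 100 ± 8 μSv figure for adult pack-a-day smokers, which is the consistency check the back-solved coefficient was chosen to satisfy.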
Directory of Open Access Journals (Sweden)
Vicari Kristin J
2012-04-01
Background: Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results: We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions: We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of
Multiple Estimation Architecture in Discrete-Time Adaptive Mixing Control
Directory of Open Access Journals (Sweden)
Simone Baldi
2013-05-01
Adaptive mixing control (AMC) is a recently developed control scheme for uncertain plants, where the control actions coming from a bank of precomputed controllers are mixed based on the parameter estimates generated by an on-line parameter estimator. Although the stability of the control scheme, also in the presence of modeling errors and disturbances, has been shown analytically, its transient performance might be sensitive to the initial conditions of the parameter estimator. In particular, for some initial conditions, transient oscillations may not be acceptable in practical applications. In order to account for such a possible phenomenon and to improve the learning capability of the adaptive scheme, in this paper a new mixing architecture is developed, involving the use of parallel parameter estimators, or multi-estimators, each one working on a small subset of the uncertainty set. A supervisory logic, using performance signals based on past and present estimation errors, selects the parameter estimate used to determine the mixing of the controllers. The stability and robustness properties of the resulting approach, referred to as multi-estimator adaptive mixing control (Multi-AMC), are analytically established. In addition, extensive simulations demonstrate that the scheme improves the transient performance of the original AMC with a single estimator. The control scheme and the analysis are carried out in a discrete-time framework, for easier implementation of the method in digital control.
Energy Technology Data Exchange (ETDEWEB)
Lee, Kyung Hoon; Park, Ho Jin; Lee, Chung Chan; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
The purpose of this paper is to study the effect on output parameters in the lattice physics calculation of input uncertainties such as manufacturing deviations from nominal values for material compositions and geometric dimensions. In nuclear design and analysis, lattice physics calculations are usually employed to generate lattice parameters for the nodal core simulation and pin power reconstruction. These lattice parameters, which consist of homogenized few-group cross-sections, assembly discontinuity factors, and form-functions, can be affected by input uncertainties arising from three different sources: 1) multi-group cross-section uncertainties, 2) uncertainties associated with methods and modeling approximations utilized in lattice physics codes, and 3) fuel/assembly manufacturing uncertainties. In this paper, data provided by the light water reactor (LWR) uncertainty analysis in modeling (UAM) benchmark have been used as the manufacturing uncertainties. First, the effect of each input parameter has been investigated through sensitivity calculations at the fuel assembly level. Then, the uncertainty in the prediction of the peaking factor due to the most sensitive input parameter has been estimated using the statistical sampling method, often called the brute force method. For our analysis, the two-dimensional transport lattice code DeCART2D and its ENDF/B-VII.1 based 47-group library were used to perform the lattice physics calculation. Sensitivity calculations have been performed in order to study the influence of manufacturing tolerances on the lattice parameters. The manufacturing tolerance that has the largest influence on the k-inf is the fuel density. The second most sensitive parameter is the outer clad diameter.
Joint Estimation and Decoding of Space-Time Trellis Codes
Directory of Open Access Journals (Sweden)
Zhang Jianqiu
2002-01-01
We explore the possibility of using an emerging tool in statistical signal processing, sequential importance sampling (SIS), for joint estimation and decoding of space-time trellis codes (STTC). First, we provide background on SIS, and then we discuss its application to STTC systems. It is shown through simulations that SIS is suitable for joint estimation and decoding of STTC with time-varying flat-fading channels when phase ambiguity is avoided. We used a design criterion for STTCs and temporally correlated channels that combats phase ambiguity without pilot signaling. We have shown by simulations that the design is valid.
Time improvement of photoelectric effect calculation for absorbed dose estimation
International Nuclear Information System (INIS)
Massa, J M; Wainschenker, R S; Doorn, J H; Caselli, E E
2007-01-01
Ionizing radiation therapy is a very useful tool in cancer treatment. It is very important to determine the absorbed dose in human tissue to accomplish an effective treatment. A mathematical model based on affected areas is the most suitable tool to estimate the absorbed dose. Lately, Monte Carlo based techniques have become the most reliable, but they are computationally expensive. Absorbed-dose calculation programs using different strategies must choose between estimation quality and computation time. This paper describes an optimized method for calculating the photoelectron polar angle in the photoelectric effect, which is significant for estimating deposited energy in human tissue. In the case studies, the time cost reduction nearly reached 86%, meaning that the time needed for the calculation is approximately one-seventh that of the non-optimized approach. This was achieved while keeping precision unchanged.
Reliability of Bluetooth Technology for Travel Time Estimation
DEFF Research Database (Denmark)
Araghi, Bahar Namaki; Olesen, Jonas Hammershøj; Krishnan, Rajesh
2015-01-01
… However, their corresponding impacts on the accuracy and reliability of estimated travel time have not been evaluated. In this study, a controlled field experiment was conducted to collect both Bluetooth and GPS data for 1000 trips to be used as the basis for evaluation. Data obtained by GPS logger were used to calculate actual travel time, referred to as ground truth, and to geo-code the Bluetooth detection events. In this setting, reliability is defined as the percentage of devices captured per trip during the experiment. It was found that, on average, Bluetooth-enabled devices are detected 80% of the time. … Short-range antennae detect Bluetooth-enabled devices at a location closer to the sensor, thus providing a more accurate travel time estimate. However, the smaller the size of the detection zone, the lower the penetration rate, which could itself influence the accuracy of estimates. Therefore, there has to be a trade-off
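The travel-time computation underlying such Bluetooth studies reduces to matching device identifiers detected at two sensor stations and differencing the timestamps. A minimal sketch; the data layout and the penetration-rate definition are simplifications of the study's setup:

```python
def travel_times(upstream, downstream):
    """Seconds of travel per matched device between two roadside sensors.
    upstream/downstream map an (anonymized) device ID to its detection timestamp."""
    return {dev: downstream[dev] - upstream[dev]
            for dev in upstream if dev in downstream}

def penetration_rate(matched, total_trips):
    """Share of trips (%) for which a device was captured at both sensors."""
    return 100.0 * len(matched) / total_trips
```

Real deployments add filtering (outlier trips, multiple detections of one device per pass, detection-zone offsets), which is exactly where the antenna-range trade-off discussed above enters.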
Total sitting time, leisure time physical activity and risk of hospitalization due to low back pain
DEFF Research Database (Denmark)
Balling, Mie; Holmberg, Teresa; Petersen, Christina B
2018-01-01
AIMS: This study aimed to test the hypotheses that a high total sitting time and vigorous physical activity in leisure time increase the risk of low back pain and herniated lumbar disc disease. METHODS: A total of 76,438 adults answered questions regarding their total sitting time and physical activity during leisure time in the Danish Health Examination Survey 2007-2008. Information on low back pain diagnoses up to 10 September 2015 was obtained from the National Patient Register. The mean follow-up time was 7.4 years. Data were analysed using Cox regression analysis with adjustment … disc disease. However, moderate or vigorous physical activity, as compared to light physical activity, was associated with increased risk of low back pain (HR = 1.16, 95% CI: 1.03-1.30 and HR = 1.45, 95% CI: 1.15-1.83). Moderate, but not vigorous, physical activity was associated with increased risk
System-theoretic analysis of due-time performance in production systems
Directory of Open Access Journals (Sweden)
David Jacobs
1995-01-01
Along with the average production rate, the due-time performance is an important characteristic of manufacturing systems. Unlike the production rate, the due-time performance has received relatively little attention in the literature, especially in the context of large volume production. This paper is devoted to this topic. Specifically, the notion of due-time performance is formalized as the probability that the number of parts produced during the shipping period reaches the required shipment size. This performance index is analyzed for both lean and mass manufacturing environments. In particular, it is shown that, to achieve a high due-time performance in a lean environment, the production system should be scheduled for a sufficiently small fraction of its average production rate. In mass production, due-time performance arbitrarily close to one can be achieved under any scheduling practice, up to the average production rate.
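The formalized index, P(parts produced during the shipping period ≥ shipment size), can be illustrated with a deliberately simplified production model in which each machine cycle independently yields a part with some probability; the Bernoulli assumption is ours for illustration, not the paper's system-theoretic model:

```python
from math import comb

def due_time_performance(cycles, p_success, shipment_size):
    """P(number of parts produced in the shipping period >= shipment size),
    modelling each of `cycles` machine cycles as an independent Bernoulli
    trial with success probability p_success (binomial upper tail)."""
    return sum(comb(cycles, k) * p_success**k * (1.0 - p_success)**(cycles - k)
               for k in range(shipment_size, cycles + 1))
```

The lean-versus-mass observation shows up even here: scheduling the shipment size well below the expected output (cycles times p_success) pushes the tail probability toward one, while scheduling at capacity leaves it far below one.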
Analytical model for time to cover cracking in RC structures due to rebar corrosion
International Nuclear Information System (INIS)
Bhargava, Kapilesh; Ghosh, A.K.; Mori, Yasuhiro; Ramanujam, S.
2006-01-01
The structural degradation of concrete structures due to reinforcement corrosion is a major worldwide problem. Reinforcement corrosion causes a volume increase due to the oxidation of metallic iron, which is mainly responsible for exerting the expansive radial pressure at the steel-concrete interface and development of hoop tensile stresses in the surrounding concrete. Cracking occurs, once the maximum hoop tensile stress exceeds the tensile strength of the concrete. The cracking begins at the steel-concrete interface and propagates outwards and eventually results in the thorough cracking of the cover concrete and this would indicate the loss of service life for the corrosion affected structures. An analytical model is proposed to predict the time required for cover cracking and the weight loss of reinforcing bar in corrosion affected reinforced concrete structures. The modelling aspects of the residual strength of cracked concrete and the stiffness contribution from the combination of reinforcement and expansive corrosion products have also been incorporated in the model. The problem is modeled as a boundary value problem and the governing equations are expressed in terms of the radial displacement. The analytical solutions are presented considering a simple two-zone model for the cover concrete, viz. cracked or uncracked. Reasonable estimation of the various parameters in the model related to the composition and properties of expansive corrosion products based on the available published experimental data has also been discussed. The performance of the proposed corrosion cracking model is then investigated through its ability to reproduce available experimental trends. Reasonably good agreement between experimental results and the analytical predictions has been obtained. It has also been found that tensile strength and initial tangent modulus of cover concrete, annual mean corrosion rate and modulus of elasticity of reinforcement plus corrosion products combined
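The cracking onset described above corresponds to the maximum hoop stress in a thick-walled elastic cylinder (inner radius = bar radius, outer radius = bar radius + cover) reaching the tensile strength of the concrete. The sketch below gives that fully-elastic (Lamé) limit; treating the whole cover as uncracked is a simplification of the paper's two-zone model, and the numbers are illustrative:

```python
def critical_pressure_mpa(bar_radius_mm, cover_mm, tensile_strength_mpa):
    """Internal corrosion pressure at which the hoop stress at the steel-concrete
    interface reaches the concrete tensile strength (elastic thick-walled
    cylinder: max hoop stress = p * (b^2 + a^2) / (b^2 - a^2) at r = a)."""
    a = bar_radius_mm
    b = bar_radius_mm + cover_mm
    return tensile_strength_mpa * (b**2 - a**2) / (b**2 + a**2)
```

In the paper's model this is only the start of the process: cracking then propagates outward through the cover, with the cracked zone retaining residual strength, before the time to through-cracking is reached.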
Estimation of resource savings due to fly ash utilization in road construction
Energy Technology Data Exchange (ETDEWEB)
Kumar, Subodh; Patil, C.B. [Centre for Energy Studies, Indian Institute of Technology, New Delhi 110016 (India)
2006-08-15
A methodology for estimation of natural resource savings due to fly ash utilization in road construction in India is presented. Analytical expressions for the savings of various resources, namely soil, stone aggregate, stone chips, sand and cement in the embankment, granular sub-base (GSB), water bound macadam (WBM) and pavement quality concrete (PQC) layers of fly ash based road formation with flexible and rigid pavements of a given geometry, have been developed. The quantity of fly ash utilized in these layers of different pavements has also been quantified. In the present study, the maximum amount of resource savings is found in GSB, followed by WBM and the other layers of pavement. The soil quantity saved increases asymptotically with the rise in the embankment height. The results of financial analysis based on Indian fly ash based road construction cost data indicate that the savings in construction cost decrease with the lead, and the investment in this alternative is found to be financially attractive only for a lead less than 60 and 90 km for flexible and rigid pavements, respectively. (author)
Performance estimation of control rod position indicator due to aging of magnet
International Nuclear Information System (INIS)
Yu, Je Yong; Kim, Ji Ho; Huh, Hyung; Choi, Myoung Hwan; Sohn, Dong Seong
2009-01-01
The Control Element Drive Mechanism (CEDM) for the integral reactor is designed to raise and lower the control rod in steps of 2 mm in order to satisfy the design features of the integral reactor, namely soluble-boron-free operation and the use of nuclear heating for reactor start-up. The actual position of the control rod is obtained by sensing the magnet connected to the control rod with the position indicator around the upper pressure housing of the CEDM. Actual position information at 20 mm intervals from the position indicator is sufficient for the core safety analysis. As the magnet moves upward along the position indicator assembly from the bottom to the top of the upper pressure housing, the output voltage increases linearly step-wise in 0.2 VDC increments. Between steps there are transient regions produced by the contact closing of three reed switches, the 2-3-2 contact closing sequence. In this paper the output voltage signal corresponding to the position of the control rod is estimated for the 2-1-2 contact closing sequence that results from aging of the magnet.
Estimates of the dose due to 222Rn concentrations in bottled mineral waters in Iran
International Nuclear Information System (INIS)
Assadi, M. R.; Esmaealnejad, M.; Rahmatinejad, Z.
2006-01-01
Radon is the radionuclide that plays the main role in exposure. Radon in water exposes the whole body, with the largest dose received by the stomach; the EPA (Environmental Protection Agency) estimates that radon in drinking water causes about 168 cancer deaths per year: 89 percent from lung cancer caused by breathing radon released to indoor air from water, and 11 percent from stomach cancer caused by consuming water containing radon. Nowadays the consumption of bottled mineral waters has become very popular. As is known, some mineral waters contain naturally occurring radionuclides in higher concentrations than usual drinking (tap) water, and surveys report that radon in most surface waters is low compared with radon levels in groundwater and mineral water. In our work, the concentration of 222Rn was determined in some bottled mineral waters available in Iran, and the dose contribution due to ingestion was then estimated for a consumption of 1 L per day of bottled mineral water.
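The ingestion dose estimated here follows the standard pattern dose = concentration x annual intake x dose conversion factor. A minimal sketch, where the default dose conversion factor (3.5e-9 Sv/Bq) is an assumed illustrative value, not the coefficient used in the paper:

```python
def radon_ingestion_dose_sv(conc_bq_per_l, litres_per_day, dcf_sv_per_bq=3.5e-9):
    # Annual effective dose = concentration x yearly intake x dose
    # conversion factor.  The default DCF is an assumed illustrative
    # value, not the one used in the paper.
    return conc_bq_per_l * litres_per_day * 365.0 * dcf_sv_per_bq

# Hypothetical 10 Bq/L bottled water consumed at 1 L per day:
annual_dose = radon_ingestion_dose_sv(10.0, 1.0)
```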
International Nuclear Information System (INIS)
Bosko, A.; Croft, St.; Gulbransen, E.
2009-01-01
General purpose gamma scanners are often used to assay unknown drums that differ from those used to create the default calibration. This introduces a potential source of bias into the matrix correction when the correction is based on the estimation of the mean density of the drum contents from a weigh scale measurement. In this paper we evaluate the magnitude of the bias that may be introduced by performing assay measurements with a system whose matrix correction algorithm was calibrated with a set of standard drums but applied to a population of drums whose tare weight may be different. The matrix correction factors are perturbed in such cases because the unknown difference in tare weight gets reflected as a bias in the derived matrix density. This would be the only impact if the difference in tare weight were due solely to the weight of the lid or base, say. But in reality the difference may arise because the steel wall of the drum is of a different thickness. Thus, there is an opposing interplay at work which tends to compensate. The purpose of this work is to evaluate and bound the magnitude of the resulting assay uncertainty introduced by tare weight variation. We compare the results obtained using simple analytical models and 3-D ray tracing with the ISOCS software to illustrate and quantify the problem. The numerical results allow a contribution to the Total Measurement Uncertainty (TMU) to be propagated into the final assay result. (authors)
Time of arrival based location estimation for cooperative relay networks
Çelebi, Hasari Burak
2010-09-01
In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive Cramer-Rao lower bound (CRLB) for the location estimates using the relay network. The analysis is extended to obtain average CRLB considering the signal fluctuations in both relay and direct links. The effects of the channel fading of both relay and direct links and amplification factor and location of the relay node on average CRLB are investigated. Simulation results show that the channel fading of both relay and direct links and amplification factor and location of relay node affect the accuracy of TOA based location estimation. ©2010 IEEE.
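The CRLB for a single direct TOA link has a well-known closed form: the range error bound scales inversely with the effective bandwidth and with the square root of SNR. A minimal sketch of that single-link bound (the relay-network CRLB derived in the paper is more involved; function name and example values are ours):

```python
import math

def toa_range_crlb_m(snr_db, beta_hz, c=3.0e8):
    # Single-link bound: var(d) >= c^2 / (8 * pi^2 * beta^2 * SNR),
    # returned here as a standard deviation in metres.
    snr = 10.0 ** (snr_db / 10.0)
    return c / (2.0 * math.pi * beta_hz * math.sqrt(2.0 * snr))

lo = toa_range_crlb_m(snr_db=20.0, beta_hz=1e6)  # ~3.4 m
hi = toa_range_crlb_m(snr_db=20.0, beta_hz=2e6)  # doubling bandwidth halves it
```

The same scaling explains why fading on the relay and direct links, which lowers the effective SNR, degrades TOA location accuracy.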
Time of arrival based location estimation for cooperative relay networks
Çelebi, Hasari Burak; Abdallah, Mohamed M.; Hussain, Syed Imtiaz; Qaraqe, Khalid A.; Alouini, Mohamed-Slim
2010-01-01
In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive Cramer-Rao lower bound (CRLB) for the location estimates using the relay network. The analysis is extended to obtain average CRLB considering the signal fluctuations in both relay and direct links. The effects of the channel fading of both relay and direct links and amplification factor and location of the relay node on average CRLB are investigated. Simulation results show that the channel fading of both relay and direct links and amplification factor and location of relay node affect the accuracy of TOA based location estimation. ©2010 IEEE.
Estimating a population cumulative incidence under calendar time trends
DEFF Research Database (Denmark)
Hansen, Stefan N; Overgaard, Morten; Andersen, Per K
2017-01-01
BACKGROUND: The risk of a disease or psychiatric disorder is frequently measured by the age-specific cumulative incidence. Cumulative incidence estimates are often derived in cohort studies with individuals recruited over calendar time and with the end of follow-up governed by a specific date. ... It is common practice to apply the Kaplan-Meier or Aalen-Johansen estimator to the total sample and report either the estimated cumulative incidence curve or just a single point on the curve as a description of the disease risk. METHODS: We argue that, whenever the disease or disorder of interest is influenced
Eliminating bias in rainfall estimates from microwave links due to antenna wetting
Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch
2014-05-01
Commercial microwave links (MWLs) are point-to-point radio systems which are widely used in telecommunication systems. They operate at frequencies where the transmitted power is mainly disturbed by precipitation. Thus, signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising, since MWLs cover about 20% of the surface area. Unfortunately, MWL rainfall estimates are often positively biased due to additional attenuation caused by antenna wetting. To correct MWL observations a posteriori and reduce the wet antenna effect (WAE), both empirically and physically based models have been suggested. However, it is challenging to calibrate these models, because the wet antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding, etc.) and on different climatic factors (temperature, dew point, wind velocity and direction, etc.). Instead, it seems straightforward to keep antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding to model-based corrections in reducing the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85 km long commercial dual-polarization microwave link at 38 GHz and 5 optical disdrometers. The MWL was operated without shielding in the period from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, which can be computed as the difference between the measured and theoretical attenuation. The theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded periods, the total bias caused by the WAE was 0.74 dB, which was reduced by shielding to 0.39 dB for the horizontal polarization (vertical: reduction from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 2013) was more effective because it reduced
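The wet antenna attenuation described above is simply the measured attenuation minus the theoretical rain-induced attenuation along the path. A sketch of that bookkeeping, with placeholder power-law coefficients rather than calibrated 38 GHz constants:

```python
def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.4, alpha=1.0):
    # ITU-style power law A = k * R^alpha * L.  k and alpha here are
    # placeholders, not calibrated 38 GHz constants.
    return k * rain_rate_mm_h ** alpha * path_km

def wet_antenna_db(measured_db, rain_rate_mm_h, path_km):
    # Residual attenuation attributed to antenna wetting: measured
    # minus theoretical along-path rain attenuation.
    return measured_db - rain_attenuation_db(rain_rate_mm_h, path_km)

# Hypothetical observation on a 1.85 km link:
waa = wet_antenna_db(measured_db=9.0, rain_rate_mm_h=10.0, path_km=1.85)
```

In the experiment the theoretical term came from disdrometer-derived drop size distributions rather than a power law, but the subtraction is the same.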
Similarity estimators for irregular and age uncertain time series
Rehfeld, K.; Kurths, J.
2013-09-01
Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many datasets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age uncertain time series. We compare the Gaussian-kernel based cross correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity
Similarity estimators for irregular and age-uncertain time series
Rehfeld, K.; Kurths, J.
2014-01-01
Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many data sets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age-uncertain time series. We compare the Gaussian-kernel-based cross-correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case, coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity
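The Gaussian-kernel cross-correlation (gXCF) weights every pair of observations by how closely their time difference matches the requested lag, which avoids interpolation entirely. A simplified sketch for already standardized series (after Rehfeld et al., 2011; normalization details omitted):

```python
import math

def gxcf(tx, x, ty, y, lag, h):
    # Kernel-weighted cross-correlation: each product x_i * y_j is
    # weighted by a Gaussian kernel on (t_j - t_i - lag).  Assumes the
    # series are already standardized to mean zero, unit variance.
    num = den = 0.0
    for ti, xi in zip(tx, x):
        for tj, yj in zip(ty, y):
            w = math.exp(-((tj - ti - lag) ** 2) / (2.0 * h ** 2))
            num += w * xi * yj
            den += w
    return num / den if den > 0.0 else 0.0

# A series compared with itself at lag 0 gives correlation ~1:
r = gxcf([0, 1, 2, 3], [1, -1, 1, -1], [0, 1, 2, 3], [1, -1, 1, -1], 0.0, 0.1)
```

The kernel width h trades variance against temporal resolution, which is why age uncertainty (effectively blurring the observation times) dominates the error budget as the abstract reports.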
Bayesian Nonparametric Mixture Estimation for Time-Indexed Functional Data in R
Directory of Open Access Journals (Sweden)
Terrance D. Savitsky
2016-08-01
Full Text Available We present growfunctions for R, which offers Bayesian nonparametric estimation models for the analysis of dependent, noisy time series data indexed by a collection of domains. This data structure arises from combining periodically published government survey statistics, such as are reported in the Current Population Study (CPS). The CPS publishes monthly, by-state estimates of employment levels, where each state expresses a noisy time series. Published state-level estimates from the CPS are composed from household survey responses in a model-free manner and express high levels of volatility due to insufficient sample sizes. Existing software solutions borrow information over a modeled time-based dependence to extract a de-noised time series for each domain. These solutions, however, ignore the dependence among the domains that may be additionally leveraged to improve estimation efficiency. The growfunctions package offers two fully nonparametric mixture models that simultaneously estimate both a time- and domain-indexed dependence structure for a collection of time series: (1) a Gaussian process (GP) construction, which is parameterized through the covariance matrix, estimates a latent function for each domain. The covariance parameters of the latent functions are indexed by domain under a Dirichlet process prior that permits estimation of the dependence among functions across the domains; (2) an intrinsic Gaussian Markov random field prior construction provides an alternative to the GP that expresses different computation and estimation properties. In addition to performing denoised estimation of latent functions from published domain estimates, growfunctions allows estimation of collections of functions for observation units (e.g., households), rather than aggregated domains, by accounting for an informative sampling design under which the probabilities for inclusion of observation units are related to the response variable. growfunctions includes plot
International Nuclear Information System (INIS)
Mejia, A.A.; Nakamura, T.; Masatoshi, I.; Hatazawa, J.; Masaki, M.; Watanuki, S.
1991-01-01
Radiation absorbed doses due to intravenous administration of fluorine-18-fluorodeoxyglucose in positron emission tomography (PET) studies were estimated in normal volunteers. Time-activity curves were obtained for seven human organs (brain, heart, kidney, liver, lung, pancreas, and spleen) by using dynamic PET scans and for the bladder content by using a single detector. These time-activity curves were used to calculate the cumulated activity in these organs. Absorbed doses were calculated by the MIRD method using the absorbed dose per unit of cumulated activity, the 'S' value, transformed for the Japanese physique and the organ masses of the Japanese reference man. The bladder wall and the heart were the organs receiving the highest doses, 1.2 × 10⁻¹ and 4.5 × 10⁻² mGy/MBq, respectively. The brain received a dose of 2.9 × 10⁻² mGy/MBq, and the other organs received doses between 1.0 × 10⁻² and 3.0 × 10⁻² mGy/MBq. The effective dose equivalent was estimated to be 2.4 × 10⁻² mSv/MBq. These results are comparable to the absorbed dose values reported by other authors on the radiation dosimetry of this radiopharmaceutical
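The MIRD schema underlying these calculations is a weighted sum: the dose to a target organ is the cumulated activity in each source organ times the corresponding S value. A sketch with placeholder numbers, not the Japanese-reference-man values used in the paper:

```python
def mird_dose_mgy(cumulated_activity_mbq_s, s_values_mgy_per_mbq_s):
    # Target-organ dose = sum over source organs of cumulated activity
    # times the S value for that source-target pair (MIRD schema).
    return sum(a * s_values_mgy_per_mbq_s[organ]
               for organ, a in cumulated_activity_mbq_s.items())

# Purely illustrative cumulated activities and S values:
dose = mird_dose_mgy({"bladder_content": 2.0e3, "remainder": 5.0e4},
                     {"bladder_content": 5.0e-5, "remainder": 1.0e-7})
```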
Directory of Open Access Journals (Sweden)
Chuan-Li Zhao
2014-01-01
Full Text Available This paper considers single machine scheduling and due date assignment with setup time. The setup time is proportional to the length of the already processed jobs; that is, the setup time is past-sequence-dependent (p-s-d). It is assumed that a job's processing time depends on its position in the sequence. The objective functions include total earliness, the weighted number of tardy jobs, and the cost of due date assignment. We analyze these problems with two different due date assignment methods. We first consider the model with job-dependent position effects. For each case, by converting the problem to a series of assignment problems, we prove that the problems can be solved in O(n^4) time. For the model with job-independent position effects, we prove that the problems can be solved in O(n^3) time by providing a dynamic programming algorithm.
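The key step above is recasting each scheduling subproblem as a positional assignment problem: cost[j][r] is the cost of scheduling job j in position r. A brute-force sketch for tiny instances (the O(n^4) result comes from solving such problems efficiently, e.g. with the Hungarian method; the cost matrix below is invented):

```python
from itertools import permutations

def min_cost_assignment(cost):
    # Brute-force positional assignment: cost[j][r] is the cost of
    # placing job j at position r.  Exponential, but fine for tiny n;
    # an efficient assignment solver gives the paper's polynomial bound.
    n = len(cost)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[j][perm[j]] for j in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_cost, best_perm

# Hypothetical 3-job instance:
best_cost, best_perm = min_cost_assignment([[4, 2, 8], [4, 3, 7], [3, 1, 6]])
```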
Production loss due to new subclinical mastitis in Dutch dairy cows estimated with a test-day model
Halasa, T.; Nielen, M.; Roos, de S.; Hoorne, van R.; Jong, de G.; Lam, T.J.G.M.; Werven, van T.; Hogeveen, H.
2009-01-01
Milk, fat, and protein loss due to a new subclinical mastitis case may be economically important, and the objective of this study was to estimate this loss. The loss was estimated based on test-day (TD) cow records collected over a 1-yr period from 400 randomly selected Dutch dairy herds. After
Zhang, Kai; Batterman, Stuart A
2009-10-15
Traffic congestion increases air pollutant exposures of commuters and urban populations due to the increased time spent in traffic and the increased vehicular emissions that occur in congestion, especially "stop-and-go" traffic. Increased time in traffic also decreases time in other microenvironments, a trade-off that has not been considered in previous time activity pattern (TAP) analyses conducted for exposure assessment purposes. This research investigates changes in time allocations and exposures that result from traffic congestion. Time shifts were derived using data from the National Human Activity Pattern Survey (NHAPS), which was aggregated to nine microenvironments (six indoor locations, two outdoor locations and one transport location). After imputing missing values, handling outliers, and conducting other quality checks, these data were stratified by respondent age, employment status and period (weekday/weekend). Trade-offs or time-shift coefficients between time spent in vehicles and the eight other microenvironments were then estimated using robust regression. For children and retirees, congestion primarily reduced the time spent at home; for older children and working adults, congestion shifted the time spent at home as well as time in schools, public buildings, and other indoor environments. Changes in benzene and PM2.5 exposure were estimated for the current average travel delay in the U.S. (9 min day⁻¹) and other scenarios using the estimated time-shift coefficients, concentrations in key microenvironments derived from the literature, and a probabilistic analysis. Changes in exposures depended on the duration of the congestion and the pollutant. For example, a 30 min day⁻¹ travel delay was determined to account for 21 ± 12% of current exposure to benzene and 14 ± 8% of PM2.5 exposure. The time allocation shifts and the dynamic approach to TAPs improve estimates of exposure impacts from congestion and other recurring events.
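The exposure change from a travel delay is the vehicle-time exposure gained minus the exposure forgone in the donor microenvironments identified by the time-shift coefficients. A sketch with invented illustrative numbers, not the paper's estimates:

```python
def exposure_change(delay_min, conc_vehicle, donors):
    # donors: list of (fraction of the delay taken from a donor
    # microenvironment, pollutant concentration there); fractions sum to 1.
    lost = sum(frac * conc for frac, conc in donors)
    return delay_min * (conc_vehicle - lost)

# Hypothetical benzene scenario: 30 extra minutes in traffic at 10 ug/m3,
# drawn 80% from home (2 ug/m3) and 20% from other indoor (3 ug/m3).
delta = exposure_change(30.0, 10.0, [(0.8, 2.0), (0.2, 3.0)])  # ug*min/m3
```

Because in-vehicle concentrations typically exceed those in the donor microenvironments, the net exposure change is positive, as the abstract reports.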
Smooth time-dependent receiver operating characteristic curve estimators.
Martínez-Camblor, Pablo; Pardo-Fernández, Juan Carlos
2018-03-01
The receiver operating characteristic curve is a popular graphical method often used to study the diagnostic capacity of continuous (bio)markers. When the considered outcome is a time-dependent variable, two main extensions have been proposed: the cumulative/dynamic receiver operating characteristic curve and the incident/dynamic receiver operating characteristic curve. In both cases, the main problem for developing appropriate estimators is the estimation of the joint distribution of the variables time-to-event and marker. As usual, different approximations lead to different estimators. In this article, the authors explore the use of a bivariate kernel density estimator which accounts for censored observations in the sample and produces smooth estimators of the time-dependent receiver operating characteristic curves. The performance of the resulting cumulative/dynamic and incident/dynamic receiver operating characteristic curves is studied by means of Monte Carlo simulations. Additionally, the influence of the choice of the required smoothing parameters is explored. Finally, two real applications are considered. An R package is also provided as a complement to this article.
Estimated time spent on preventive services by primary care physicians
Directory of Open Access Journals (Sweden)
Gradison Margaret
2008-12-01
Full Text Available Abstract Background Delivery of preventive health services in primary care is lacking. One of the main barriers is lack of time. We estimated the amount of time primary care physicians spend on important preventive health services. Methods We analyzed a large dataset of primary care (family and internal medicine) visits using the National Ambulatory Medical Care Survey (2001–4); analyses were conducted 2007–8. Multiple linear regression was used to estimate the amount of time spent delivering each preventive service, controlling for demographic covariates. Results Preventive visits were longer than chronic care visits (M = 22.4, SD = 11.8 versus M = 18.9, SD = 9.2, respectively). New patients required more time from physicians. Services on which physicians spent relatively more time were prostate specific antigen (PSA), cholesterol, Papanicolaou (Pap) smear, mammography, exercise counseling, and blood pressure. Physicians spent less time than recommended on two "A" rated ("good evidence") services, tobacco cessation and Pap smear (in preventive visits), and one "B" rated ("at least fair evidence") service, nutrition counseling. Physicians spent substantial time on two services that have an "I" rating ("inconclusive evidence of effectiveness"), PSA and exercise counseling. Conclusion Even with limited time, physicians address many of the "A" rated services adequately. However, they may be spending less time than recommended on important services, especially smoking cessation, Pap smear, and nutrition counseling. Future research is needed to understand how physicians decide to allocate their time to address preventive health.
Template-Based Estimation of Time-Varying Tempo
Directory of Open Access Journals (Sweden)
Peeters Geoffroy
2007-01-01
Full Text Available We present a novel approach to automatic estimation of tempo over time. This method aims at detecting tempo at the tactus level for percussive and nonpercussive audio. The front-end of our system is based on a proposed reassigned spectral energy flux for the detection of musical events. The dominant periodicities of this flux are estimated by a proposed combination of the discrete Fourier transform and a frequency-mapped autocorrelation function. The most likely meter, beat, and tatum over time are then estimated jointly using proposed meter/beat subdivision templates and a Viterbi decoding algorithm. The performance of our system has been evaluated on four different test sets, three of which were used during the ISMIR 2004 tempo induction contest. The results obtained are close to the best results of this contest.
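The periodicity-detection step can be caricatured as picking the lag that maximizes the autocorrelation of the onset/energy flux. This is only a bare-bones stand-in for the paper's combined DFT and frequency-mapped autocorrelation estimator:

```python
def dominant_period(flux, min_lag, max_lag):
    # Return the lag (in samples) that maximizes the autocorrelation
    # of the mean-removed onset/energy flux.
    n = len(flux)
    mean = sum(flux) / n
    x = [v - mean for v in flux]

    def ac(lag):
        return sum(x[i] * x[i + lag] for i in range(n - lag))

    return max(range(min_lag, max_lag + 1), key=ac)

# A pulse train with one onset every 5 samples:
flux = [1.0 if i % 5 == 0 else 0.0 for i in range(50)]
period = dominant_period(flux, 2, 10)
```

Converting the winning lag to beats per minute (60 / (lag x hop duration)) and tracking it over time is where the templates and Viterbi decoding come in.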
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
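For a fully observed series, the nonparametric quantity the paper targets is the usual lag-k moment estimator of the autocovariance; the paper's contribution is recovering it when observations are censored, via Gaussian imputation (not shown here):

```python
def sample_autocovariance(x, max_lag):
    # Biased (divide-by-n) lag-k autocovariance estimates, k = 0..max_lag.
    n = len(x)
    mean = sum(x) / n
    return [sum((x[i] - mean) * (x[i + k] - mean) for i in range(n - k)) / n
            for k in range(max_lag + 1)]

# An alternating series has gamma(0) = 1 and strongly negative gamma(1):
gamma = sample_autocovariance([1.0, -1.0, 1.0, -1.0], 1)
```

Under censoring, values below the detection limit would first be imputed from a fitted Gaussian model before applying this estimator.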
Time estimation in Parkinson's disease and degenerative cerebellar disease
Beudel, Martijin; Galama, Sjoukje; Leenders, Klaus L.; de Jong, Bauke M.
2008-01-01
With functional MRI, we recently identified fronto-cerebellar activations in predicting time to reach a target and basal ganglia activation in velocity estimation, that is, small interval assessment. We now tested these functions in patients with Parkinson's disease (PD) and degenerative cerebellar
On algebraic time-derivative estimation and deadbeat state reconstruction
DEFF Research Database (Denmark)
Reger, Johann; Jouffroy, Jerome
2009-01-01
This paper places into perspective the so-called algebraic time-derivative estimation method recently introduced by Fliess and co-authors with standard results from linear statespace theory for control systems. In particular, it is shown that the algebraic method can essentially be seen...
Only through perturbation can relaxation times be estimated
Czech Academy of Sciences Publication Activity Database
Ditlevsen, S.; Lánský, Petr
2012-01-01
Roč. 86, č. 5 (2012), 050102-5 ISSN 1539-3755 R&D Projects: GA ČR(CZ) GAP103/11/0282; GA ČR(CZ) GBP304/12/G069 Institutional support: RVO:67985823 Keywords : stochastic diffusion * parameter estimation * time constant Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.313, year: 2012
Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution
Directory of Open Access Journals (Sweden)
Emmanuel Kidando
2017-01-01
Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that the finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to an unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
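The stick-breaking representation mentioned above generates mixture weights by repeatedly breaking Beta-distributed fractions off a unit stick. A truncated sketch (capped at six components, mirroring the paper's limit; the Beta(1, alpha) parameterization is the generic one, not necessarily the paper's exact prior):

```python
import random

def stick_breaking_weights(alpha, n_max, seed=0):
    # Truncated stick-breaking construction of DP mixture weights:
    # break a Beta(1, alpha) fraction off the remaining stick at each
    # step, and give the last component whatever is left.
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_max - 1):
        b = rng.betavariate(1.0, alpha)
        weights.append(b * remaining)
        remaining *= 1.0 - b
    weights.append(remaining)
    return weights

w = stick_breaking_weights(alpha=1.0, n_max=6)
```

Components whose sampled weights shrink toward zero are effectively pruned, which is how the DPMM "chooses" the number of travel-time states.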
Aircraft Fault Detection Using Real-Time Frequency Response Estimation
Grauer, Jared A.
2016-01-01
A real-time method for estimating time-varying aircraft frequency responses from input and output measurements was demonstrated. The Bat-4 subscale airplane was used with NASA Langley Research Center's AirSTAR unmanned aerial flight test facility to conduct flight tests and collect data for dynamic modeling. Orthogonal phase-optimized multisine inputs, summed with pilot stick and pedal inputs, were used to excite the responses. The aircraft was tested in its normal configuration and with emulated failures, which included a stuck left ruddervator and an increased command path latency. No prior knowledge of a dynamic model was used or available for the estimation. The longitudinal short period dynamics were investigated in this work. Time-varying frequency responses and stability margins were tracked well using a 20 second sliding window of data, as compared to a post-flight analysis using output error parameter estimation and a low-order equivalent system model. This method could be used in a real-time fault detection system, or for other applications of dynamic modeling such as real-time verification of stability margins during envelope expansion tests.
Estimate of Passive Time Reversal Communication Performance in Shallow Water
Directory of Open Access Journals (Sweden)
Sunhyo Kim
2017-12-01
Full Text Available Time reversal processes have been used to improve communication performance in severe underwater communication environments characterized by significant multipath channels, by reducing inter-symbol interference and increasing the signal-to-noise ratio. In general, the performance of time reversal is strongly related to the behavior of the q-function, which is estimated as a sum of the autocorrelations of the channel impulse responses of the channels in the receiver array. The q-function depends on the complexity of the communication channel, the number of channel elements and their spacing. A q-function with a high side-lobe level and a main-lobe width wider than the symbol duration creates residual ISI (inter-symbol interference), which makes communication difficult even after time reversal is applied. In this paper, we propose a new parameter, Eq, to describe the performance of time reversal communication. Eq is an estimate of how much of the q-function lies within one symbol duration. The values of Eq were estimated using communication data acquired at two different sites: one in which the sound speed ratio of sediment to water was less than unity and one where the ratio was higher than unity. Finally, the parameter Eq was compared to the bit error rate and the output signal-to-noise ratio obtained after the time reversal operation. The results show that these quantities are strongly correlated with the parameter Eq.
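The q-function and the proposed Eq parameter can be sketched directly from their descriptions in the abstract: sum the per-channel autocorrelations, then measure how much of the result lies within one symbol duration of lag zero. The channel taps below are invented examples, not measured impulse responses:

```python
def q_function(channels):
    # Sum over array channels of each channel's autocorrelation,
    # evaluated at every integer lag.
    length = max(len(h) for h in channels)
    lags = list(range(-(length - 1), length))
    q = []
    for lag in lags:
        s = 0.0
        for h in channels:
            for i, hi in enumerate(h):
                j = i + lag
                if 0 <= j < len(h):
                    s += hi * h[j]
        q.append(s)
    return lags, q

def eq_metric(lags, q, symbol_len):
    # Fraction of the |q| mass within one symbol duration of lag zero.
    total = sum(abs(v) for v in q)
    inside = sum(abs(v) for lag, v in zip(lags, q) if abs(lag) <= symbol_len)
    return inside / total if total else 0.0

# Two hypothetical two-tap channels, symbol duration of one sample:
lags, q = q_function([[1.0, 0.5], [1.0, 0.25]])
eq = eq_metric(lags, q, 0)
```

Stronger multipath spreads q away from lag zero, lowering Eq and leaving more residual ISI after time reversal.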
Time Series Decomposition into Oscillation Components and Phase Estimation.
Matsuda, Takeru; Komaki, Fumiyasu
2017-02-01
Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished by this model like the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
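Each oscillation component in such a state-space model is a noisy damped rotation of a two-dimensional state, so its instantaneous frequency jitters with the noise. A simulation sketch of one component (parameter names and values are ours, not the paper's):

```python
import math
import random

def simulate_oscillator(a, freq_hz, dt, n, sigma, seed=1):
    # One linear-Gaussian oscillator: a damped 2-D rotation driven by
    # noise; the observed series is the first state coordinate.
    rng = random.Random(seed)
    theta = 2.0 * math.pi * freq_hz * dt
    x, y = 1.0, 0.0
    out = []
    for _ in range(n):
        x, y = (a * (x * math.cos(theta) - y * math.sin(theta))
                + rng.gauss(0.0, sigma),
                a * (x * math.sin(theta) + y * math.cos(theta))
                + rng.gauss(0.0, sigma))
        out.append(x)
    return out

series = simulate_oscillator(a=0.99, freq_hz=10.0, dt=0.001, n=500, sigma=0.01)
```

In the paper's setting the phase of a component would be read off as atan2(y, x) of the filtered two-dimensional state, which is what gives the method its advantage over the Hilbert transform.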
Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2016-07-01
This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design a robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square and the prescribed H∞ performance constraint is met. By utilizing difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on this condition, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
Real-time fault-tolerant moving horizon air data estimation for the RECONFIGURE benchmark
Wan, Y.; Keviczky, T.
2018-01-01
This paper proposes a real-time fault-tolerant estimation approach for combined sensor fault diagnosis and air data reconstruction. Due to the simultaneous influence of winds and latent faults on the monitored sensors, it is challenging to address the tradeoff between robustness to wind disturbances and
International Nuclear Information System (INIS)
Bourgois, L.
2011-01-01
When handling radioactive β emitters, measurements in terms of the personal dose equivalent H_p(0.07) are used to estimate the equivalent dose limit to the skin or extremities given by regulations. First, analytical expressions for the individual dose equivalent H_p(0.07) and the equivalent dose to the extremities H_skin are given for a point source and for contamination with a β-emitting radionuclide. Second, operational quantities and protection quantities are compared. It is shown that in this case the operational quantities significantly overestimate the protection quantities. For a skin contamination, the ratio between operational and protection quantities is 2 for a maximum β energy of 3 MeV and 90 for a maximum β energy of 150 keV. (author)
Soft sensor for real-time cement fineness estimation.
Stanišić, Darko; Jorgovanović, Nikola; Popov, Nikola; Čongradac, Velimir
2015-03-01
This paper describes the design and implementation of soft sensors to estimate cement fineness. Soft sensors are mathematical models that use available data to provide real-time information on process variables when that information, for whatever reason, is not available by direct measurement. In this application, soft sensors are used to provide information on a process variable normally obtained from off-line laboratory tests performed at large time intervals. Cement fineness is one of the crucial parameters that define the quality of the produced cement. Providing real-time information on cement fineness using soft sensors can overcome the limitations and problems that originate from the lack of information between two laboratory tests. The model inputs were selected from candidate process variables using an information-theoretic approach. Models based on multi-layer perceptrons were developed, and their ability to estimate the cement fineness of laboratory samples was analyzed. The models with the best performance and the capacity to adapt to changes in the cement grinding circuit were selected to implement the soft sensors. The soft sensors were tested using data from continuous cement production to demonstrate their use in real-time fineness estimation. Their performance was highly satisfactory, and the sensors proved capable of providing valuable information on cement grinding circuit performance. After successful off-line tests, the soft sensors were implemented and installed in the control room of a cement factory. Results on site confirm those obtained during soft sensor development. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC.
Nombela, Francisco; García, Enrique; Mateos, Raúl; Hernández, Álvaro
2015-08-21
Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the fields of sensory and automation systems, multimedia connectivity, and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires highly accurate symbol timing estimation for reliable recovery of the transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation in an FPGA. The results show a 90% success rate in symbol timing estimation for a certain PLC channel model, and a reduced resource consumption for the implementation in a Xilinx Kintex FPGA.
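The cross-correlation stage can be sketched as follows, using a Zadoff-Chu preamble; the multilevel complementary sequences and the FPGA datapath from the paper are not reproduced, and the sequence length and offset below are arbitrary:

```python
import numpy as np

def zadoff_chu(u, n_zc):
    """Root-u Zadoff-Chu sequence of odd length n_zc: constant amplitude
    and a sharp autocorrelation peak, which suits timing estimation."""
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

def estimate_symbol_start(rx, preamble):
    """Symbol timing = lag maximizing |cross-correlation| with the preamble
    (np.correlate conjugates its second argument, i.e. a matched filter)."""
    corr = np.correlate(rx, preamble, mode="valid")
    return int(np.argmax(np.abs(corr)))

# toy usage: preamble embedded at offset 37 in complex noise
rng = np.random.default_rng(0)
zc = zadoff_chu(1, 63)
rx = 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
rx[37:37 + 63] += zc
```

In this toy setup, `estimate_symbol_start(rx, zc)` recovers the embedded offset of 37.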
Density perturbations due to the inhomogeneous discrete spatial structure of space-time
International Nuclear Information System (INIS)
Wolf, C.
1998-01-01
For the case that space-time permits an inhomogeneous discrete spatial structure due to varying gravitational fields or a foam-like structure of space-time, it is demonstrated that thermodynamic reasoning implies that matter-density perturbations will arise in the early universe.
Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems
Directory of Open Access Journals (Sweden)
H. Vincent Poor
2008-05-01
In cooperative localization systems, wireless nodes need to exchange accurate position-related information, such as time-of-arrival (TOA) and angle-of-arrival (AOA), in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB) signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. In order to speed up the estimation process, the first step estimates a coarse TOA of the received signal based on the received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by means of a hypothesis-testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation in reasonable time intervals. Simulation results are presented to analyze the performance of the estimator.
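The two steps can be sketched as below: a coarse energy-based search over blocks, then a fine first-path search near the coarse estimate. The threshold rule in the second step is a simple stand-in for the paper's hypothesis-testing stage, and all window sizes are illustrative:

```python
import numpy as np

def coarse_toa(rx, block):
    """Step 1: coarse TOA from received-signal energy, block by block."""
    n_blocks = len(rx) // block
    energy = np.add.reduceat(np.abs(rx[:n_blocks * block]) ** 2,
                             np.arange(0, n_blocks * block, block))
    return int(np.argmax(energy)) * block      # start of strongest block

def fine_toa(rx, template, coarse, search):
    """Step 2: first-path arrival inside the coarse window, here via a
    simple threshold on the template correlation (placeholder for the
    paper's hypothesis-testing stage)."""
    lo = max(0, coarse - search)
    hi = min(coarse + search, len(rx) - len(template) + 1)
    c = np.array([np.abs(np.dot(rx[k:k + len(template)], template))
                  for k in range(lo, hi)])
    thresh = 0.9 * c.max()                     # placeholder decision rule
    return lo + int(np.argmax(c >= thresh))    # earliest lag above threshold
```

For a clean pulse at sample 120, the coarse step returns the enclosing block start and the fine step refines it to 120.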
Time-to-contact estimation modulated by implied friction.
Yamada, Yuki; Sasaki, Kyoshiro; Miura, Kayo
2014-01-01
The present study demonstrated that friction cues for target motion affect time-to-contact (TTC) estimation. A circular target moved in a linear path with a constant velocity and was gradually occluded by a static rectangle. The target moved with forward and backward spins or without spin. Observers were asked to respond at the time when the moving target appeared to pass the occluder. The results showed that TTC was significantly longer in the backward spin condition than in the forward and without-spin conditions. Moreover, similar results were obtained when a sound was used to imply friction. Our findings indicate that the observer's experiential knowledge of motion coupled with friction intuitively modulated their TTC estimation.
Seasonal adjustment methods and real time trend-cycle estimation
Bee Dagum, Estela
2016-01-01
This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...
Time Delay Estimation in Room Acoustic Environments: An Overview
Directory of Open Access Journals (Sweden)
Benesty Jacob
2006-01-01
Time delay estimation has been a research topic of significant practical importance in many fields (radar, sonar, seismology, geophysics, ultrasonics, hands-free communications, etc.). It is a first stage that feeds into subsequent processing blocks for identifying, localizing, and tracking radiating sources. This area has made remarkable advances in the past few decades, and is continuing to progress, with an aim to create processors that are tolerant to both noise and reverberation. This paper presents a systematic overview of the state of the art of time-delay-estimation algorithms, ranging from the simple cross-correlation method to advanced blind channel identification based techniques. We discuss the pros and cons of each individual algorithm, and outline their inherent relationships. We also provide experimental results to illustrate their performance differences in room acoustic environments, where reverberation and noise are commonly encountered.
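The simple end of the algorithm spectrum discussed above can be sketched in a few lines: plain cross-correlation, and its GCC-PHAT variant, whose cross-spectrum whitening is commonly used to sharpen the peak under reverberation. This is a generic sketch, not tied to the paper's experimental setup:

```python
import numpy as np

def tdoa_cc(x1, x2, fs):
    """Simple cross-correlation method: the delay of x2 relative to x1 is
    the lag of the correlation peak."""
    corr = np.correlate(x2, x1, mode="full")   # zero lag at index len(x1)-1
    return (int(np.argmax(np.abs(corr))) - (len(x1) - 1)) / fs

def tdoa_gcc_phat(x1, x2, fs):
    """GCC-PHAT: whiten the cross-spectrum before inverse transforming,
    keeping only phase information."""
    n = len(x1) + len(x2) - 1
    spec = np.fft.rfft(x2, n) * np.conj(np.fft.rfft(x1, n))
    spec /= np.abs(spec) + 1e-12               # phase transform weighting
    cc = np.fft.irfft(spec, n)
    lag = int(np.argmax(np.abs(cc)))
    if lag > n // 2:                           # map circular lag to signed lag
        lag -= n
    return lag / fs
```

Both recover a pure 10-sample delay exactly; their behavior diverges once noise and room reverberation are added.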
Schroedinger operators - geometric estimates in terms of the occupation time
International Nuclear Information System (INIS)
Demuth, M.; Kirsch, W.; McGillivray, I.
1995-01-01
The difference of Schroedinger and Dirichlet semigroups is expressed in terms of the Laplace transform of the Brownian motion occupation time. This implies quantitative upper and lower bounds for the operator norms of the corresponding resolvent differences. One spectral-theoretical consequence is an estimate for the eigenfunctions of a Schroedinger operator in a ball, where the potential is given as a cone indicator function. 12 refs
Estimation of Continuous Time Models in Economics: an Overview
Clifford R. Wymer
2009-01-01
The dynamics of economic behaviour is often developed in theory as a continuous time system. Rigorous estimation and testing of such systems, and the analysis of some aspects of their properties, is of particular importance in distinguishing between competing hypotheses and the resulting models. The consequences for the international economy during the past eighteen months of failures in the financial sector, and particularly the banking sector, make it essential that the dynamics of financia...
Van, Mien; Ge, Shuzhi Sam; Ren, Hongliang
2016-04-28
In this paper, a novel finite-time fault-tolerant control (FTC) scheme is proposed for uncertain robot manipulators with actuator faults. First, a finite-time passive FTC (PFTC) based on robust nonsingular fast terminal sliding mode control (NFTSMC) is investigated. To address the disadvantages of the PFTC, an active FTC (AFTC) is then developed by combining NFTSMC with a simple fault diagnosis scheme. In this scheme, an online fault estimation algorithm based on time delay estimation (TDE) is proposed to approximate actuator faults. The estimated fault information is used to detect, isolate, and accommodate the effect of the faults in the system. A robust AFTC law is then established by combining the obtained fault information with robust NFTSMC. Finally, a high-order sliding mode (HOSM) control based on the super-twisting algorithm is employed to eliminate chattering. In comparison to the PFTC and other state-of-the-art approaches, the proposed AFTC scheme possesses several advantages, such as high precision, strong robustness, no singularity, less chattering, and fast finite-time convergence due to the combined NFTSMC and HOSM control; it also requires no prior knowledge of the fault, thanks to the TDE-based fault estimation. Simulation results are presented to verify the effectiveness of the proposed strategy.
Tsunami Amplitude Estimation from Real-Time GNSS.
Jeffries, C.; MacInnes, B. T.; Melbourne, T. I.
2017-12-01
Tsunami early warning systems currently comprise modeling of observations from the global seismic network, deep-ocean DART buoys, and a global distribution of tide gauges. While these tools work well for tsunamis traveling teleseismic distances, saturation of seismic magnitude estimation in the near field can result in significant underestimation of tsunami excitation for local warning. Moreover, DART buoy and tide gauge observations cannot be used to rectify the underestimation in the available time, typically 10-20 minutes, before local runup occurs. Real-time GNSS measurements of coseismic offsets may be used to estimate finite faulting within 1-2 minutes and, in turn, tsunami excitation for local warning purposes. We describe here a tsunami amplitude estimation algorithm, implemented for the Cascadia subduction zone, that uses continuous GNSS position streams to estimate finite faulting. The system is based on a time-domain convolution of fault slip with a pre-computed catalog of hydrodynamic Green's functions generated with the GeoClaw shallow-water wave simulation software. It maps seismic slip along each section of the fault to points located off the Cascadia coast in 20 m of water depth, relying on the linearity of tsunami wave propagation. The system draws continuous slip estimates from a message broker and convolves the slip with the appropriate Green's functions, which are then superimposed to produce the wave amplitude at each coastal location. The maximum amplitude and its arrival time are then passed into a database for subsequent monitoring and display. We plan to test this system using a suite of synthetic earthquakes calculated for Cascadia, whose ground motions are simulated at 500 existing Cascadia GPS sites, as well as real earthquakes for which we have continuous GNSS time series and surveyed runup heights, including Maule, Chile 2010 and Tohoku, Japan 2011. This system has been implemented in the CWU Geodesy Lab for the Cascadia
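The convolution-and-superposition core of such a system can be sketched as follows. This is an illustrative stand-in: the patch indexing, Green's functions and slip series are placeholders, not the Cascadia catalog:

```python
import numpy as np

def coastal_amplitude(slip_rate, greens):
    """Superpose per-patch convolutions of slip rate with that patch's
    hydrodynamic Green's function; valid because tsunami propagation is
    treated as linear."""
    wave = None
    for patch, s in slip_rate.items():
        contrib = np.convolve(s, greens[patch])
        wave = contrib if wave is None else wave + contrib
    return wave                      # amplitude time series at one coastal point

def max_amplitude_and_arrival(wave, dt, frac=0.1):
    """Peak amplitude plus a simple arrival time: the first sample exceeding
    frac * peak (the fraction is an arbitrary choice for this sketch)."""
    peak = np.abs(wave).max()
    arrival = int(np.argmax(np.abs(wave) >= frac * peak)) * dt
    return peak, arrival
```

The peak and arrival per coastal point are the quantities the system stores for monitoring and display.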
Directory of Open Access Journals (Sweden)
Francesco Sarracino
2017-04-01
Recent studies have documented that survey data contain duplicate records. We assess how duplicate records affect regression estimates, and we evaluate the effectiveness of solutions for dealing with duplicate records. Results show that the chances of obtaining unbiased estimates when the data contain 40 doublets (about 5% of the sample) range between 3.5% and 11.5%, depending on the distribution of the duplicates. If 7 quintuplets are present in the data (2% of the sample), then the probability of obtaining biased estimates ranges between 11% and 20%. Weighting the duplicate records by the inverse of their multiplicity, or dropping superfluous duplicates, outperforms the other solutions in all considered scenarios. Our results illustrate the risk of using data in the presence of duplicate records and call for further research on strategies to analyze affected data.
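The two best-performing fixes named above can be sketched in a few lines of plain Python (records represented as hashable tuples; an illustration, not the authors' code):

```python
from collections import Counter

def duplicate_weights(records):
    """Inverse-multiplicity weights: a group of k identical records gets
    weight 1/k each, so it contributes like a single observation."""
    counts = Counter(records)
    return [1.0 / counts[r] for r in records]

def drop_superfluous(records):
    """Alternative fix: keep one copy per duplicate group, preserving order."""
    return list(dict.fromkeys(records))
```

Either transformation would be applied before fitting the regression whose bias is being studied.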
A procedure for estimation of pipe break probabilities due to IGSCC
International Nuclear Information System (INIS)
Bergman, M.; Brickstad, B.; Nilsson, F.
1998-06-01
A procedure has been developed for estimating the failure probability of weld joints in nuclear piping susceptible to intergranular stress corrosion cracking. The procedure aims at a robust and rapid estimate of the failure probability for a specific weld with a known stress state. The crack initiation rate, the initial crack length, the in-service inspection efficiency, and the leak rate are treated as random quantities. A computer realization of the procedure has been developed for user-friendly application by design engineers. Some examples are considered to investigate the sensitivity of the failure probability to different input quantities. (au)
Time-dependent inversion of surface subsidence due to dynamic reservoir compaction
Muntendam-Bos, A.G.; Kroon, I.C.; Fokker, P.A.
2008-01-01
We introduce a novel, time-dependent inversion scheme for resolving temporal reservoir pressure drop from surface subsidence observations (from leveling or GPS data, InSAR, tiltmeter monitoring) in a single procedure. The theory is able to accommodate both the absence of surface subsidence estimates
Time-dependent excitation and ionization modelling of absorption-line variability due to GRB080310
DEFF Research Database (Denmark)
Vreeswijk, P.M.; De Cia, A.; Jakobsson, P.
2013-01-01
.42743. To estimate the rest-frame afterglow brightness as a function of time, we use a combination of the optical VRI photometry obtained by the RAPTOR-T telescope array, which is presented in this paper, and Swift's X-Ray Telescope (XRT) observations. Excitation alone, which has been successfully applied...
International Nuclear Information System (INIS)
Fernandez Gomez, I.M.; Rodriguez Castro, G.; Perez Sanchez, D.
1996-01-01
The purpose of this paper is to study the radioactivity levels in the coffee produced and consumed in our country and to estimate the doses received by the Cuban population due to its consumption. The most relevant radionuclide found was 40K; given the concentration levels observed, only this radionuclide was considered when estimating the annual committed dose received through this pathway. The 40K concentration present in the infusion represented a dose to the consumer of 15.6 μSv/year.
Minimum Distance Estimation on Time Series Analysis With Little Data
National Research Council Canada - National Science Library
Tekin, Hakan
2001-01-01
.... Minimum distance estimation has been demonstrated to outperform standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters with very small data sets...
Directory of Open Access Journals (Sweden)
Razi Ahmed
2013-06-01
Estimates of above ground biomass density in forests are crucial for refining global climate models and understanding climate change. Although data from field studies can be aggregated to estimate carbon stocks on global scales, the sparsity of such field data, temporal heterogeneity and methodological variations introduce large errors. Remote sensing measurements from spaceborne sensors are a realistic alternative for global carbon accounting; however, the uncertainty of such measurements is not well known and remains an active area of research. This article describes an effort to collect field data at the Harvard and Howland Forest sites, set in the temperate forests of the Northeastern United States in an attempt to establish ground truth forest biomass for calibration of remote sensing measurements. We present an assessment of the quality of ground truth biomass estimates derived from three different sets of diameter-based allometric equations over the Harvard and Howland Forests to establish the contribution of errors in ground truth data to the error in biomass estimates from remote sensing measurements.
Estimation of the decrease of 137Cs sediment in the soil due to horizontal flowing
Directory of Open Access Journals (Sweden)
O. N. Prokof'ev
2008-01-01
The purpose of this work is to estimate the possible decrease of the density of 137Cs sediment in the soil influenced by horizontal flowing, based on the analysis of field observations of the density of 137Cs sediment in the soil after the Chernobyl accident.
Estimation of shutdown heat generation rates in GHARR-1 due to ...
African Journals Online (AJOL)
Fission products decay power and residual fission power generated after shutdown of Ghana Research Reactor-1 (GHARR-1) by reactivity insertion accident were estimated by solution of the decay and residual heat equations. A Matlab program code was developed to simulate the heat generation rates by fission product ...
Numerical Estimation of Fatigue Life of Wind Turbines due to Shadow Effect
DEFF Research Database (Denmark)
Thoft-Christensen, Palle; Pedersen, Ronnie; Nielsen, Søren R.K.
2009-01-01
The influence of tower design on damage accumulation in up-wind turbine blades during tower passage is discussed. The fatigue life of a blade is estimated for a tripod tower configuration and a standard mono-tower. The blade stresses are determined from a dynamic mechanical model with a delay...
Real-time estimation of differential piston at the LBT
Böhm, Michael; Pott, Jörg-Uwe; Sawodny, Oliver; Herbst, Tom; Kürster, Martin
2014-07-01
In this paper, we present and compare different strategies to minimize the effects of telescope vibrations to the differential piston (OPD) for LINC/NIRVANA at the LBT using an accelerometer feedforward compensation approach. We summarize why this technology is of importance for LINC/NIRVANA, but also for future telescopes and instruments. We outline the estimation problem in general and its specifics at the LBT. Model based estimation and broadband filtering techniques can be used to solve the estimation task, each having its own advantages and disadvantages, which will be discussed. Simulation results and measurements at the LBT are shown to motivate and support our choice of the estimation algorithm for the instrument LINC/NIRVANA. We explain our laboratory setup aimed at imitating the vibration behaviour at the LBT in general, and the M2 as main contributor in particular, and we demonstrate the controller's ability to suppress vibrations in the frequency range of 8 Hz to 60 Hz. In this range, telescope vibrations are the most dominant disturbance to the optical path. For our measurements, we introduce a disturbance time series which has a frequency spectrum comparable to what can be measured at the LBT on a typical night. We show promising experimental results, indicating the ability to suppress differential piston induced by telescope vibrations by a factor of about 5 (RMS), which is significantly better than any currently commissioned system.
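A broadband-filtering flavour of accelerometer feedforward can be sketched as frequency-domain double integration restricted to the 8-60 Hz band in which telescope vibrations dominate; the model-based estimator alternative and the actual LINC/NIRVANA signal chain are not represented here:

```python
import numpy as np

def accel_to_piston(acc, fs, f_lo=8.0, f_hi=60.0):
    """Estimate a displacement (piston) signal from accelerometer data by
    double integration in the frequency domain, keeping only the 8-60 Hz
    band: x(f) = -a(f) / (2*pi*f)^2."""
    n = len(acc)
    spec = np.fft.rfft(acc)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = (f >= f_lo) & (f <= f_hi)       # band limit avoids dividing by f ~ 0
    disp = np.zeros_like(spec)
    w = 2.0 * np.pi * f[keep]
    disp[keep] = -spec[keep] / w**2
    return np.fft.irfft(disp, n)
```

For a pure in-band vibration tone the displacement is recovered exactly, which is the idea behind feeding the integrated accelerometer signal forward to cancel OPD.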
International Nuclear Information System (INIS)
Sayed, N.S.; Salah Eldin, T.; Gomaa, M.A.; El Dosoky, T.M.
2011-01-01
Using the UNSCEAR 2000 report to the United Nations General Assembly and its appendices, the annual collective dose to Egyptian members of the public (75,097,301) was estimated to be 252.5 man·Sv; hence the average effective dose per airline passenger, assuming 10 million passengers, is estimated as 25.25 μSv. Furthermore, using a hypothetical approach for Egyptian passengers who fly locally, regionally and internationally, the collective dose was estimated to be 252.5 man·Sv; hence the average effective dose per Egyptian passenger due to aviation is 3.36 μSv.
Energy Technology Data Exchange (ETDEWEB)
Christofides, S [Medical Physics Department, Nicosia General Hospital (Cyprus)
1994-12-31
The Effective Dose Equivalent (EDE) to the Cypriot population due to Diagnostic Nuclear Medicine procedures has been estimated from data published by the Government of Cyprus, in its Health and Hospital Statistics Series for the years 1990, 1991, and 1992. The average EDE per patient was estimated to be 3,09, 3,75 and 4,01 microSievert for 1990, 1991 and 1992 respectively, while the per caput EDE was estimated to be 11,75, 15,16 and 17,09 microSieverts for 1990, 1991 and 1992 respectively, from the procedures in the public sector. (author). 11 refs, 4 tabs.
Religious affiliation at time of death - Global estimates and projections.
Skirbekk, Vegard; Todd, Megan; Stonawski, Marcin
2018-03-01
Religious affiliation influences societal practices regarding death and dying, including palliative care, religiously acceptable health service procedures, funeral rites, and beliefs about an afterlife. We aimed to estimate and project religious affiliation at the time of death globally, as this information has been lacking. We compiled data on demographic information and religious affiliation from more than 2500 surveys, registers and censuses covering 198 nations/territories. We present estimates of religious affiliation at the time of death as of 2010, and projections up to and including 2060, taking into account trends in mortality, religious conversion, intergenerational transmission of religion, differential fertility, and gross migration flows, by age and sex. We find that Christianity continues to be the most common religion at death, although its share will fall from 37% to 31% of global deaths between 2010 and 2060. The share of individuals identifying as Muslim at the time of death increases from 21% to 24%. The share of the religiously unaffiliated will peak at 17% in 2035, followed by a slight decline thereafter. In specific regions, such as Europe, the unaffiliated share will continue to rise, from 14% to 21%, throughout the period. Religious affiliation at the time of death is changing globally, with distinct regional patterns. This could affect spatial variation in healthcare and social customs relating to death and dying.
Real-time gaze estimation via pupil center tracking
Directory of Open Access Journals (Sweden)
Cazzato Dario
2018-02-01
Automatic gaze estimation that does not rely on commercial and expensive eye tracking hardware can enable several applications in the fields of human-computer interaction (HCI) and human behavior analysis. It is therefore not surprising that several related techniques and methods have been investigated in recent years. However, very few camera-based systems proposed in the literature are both real-time and robust. In this work, we propose a real-time, user-calibration-free gaze estimation system that does not need person-dependent calibration, can deal with illumination changes and head pose variations, and can work over a wide range of distances from the camera. Our solution is based on a 3-D appearance-based method that processes images from a built-in laptop camera. Real-time performance is obtained by combining head pose information with geometrical eye features to train a machine learning algorithm. Our method has been validated on a data set of images of users in natural environments, and shows promising results. The possibility of a real-time implementation, combined with the good quality of gaze tracking, makes this system suitable for various HCI applications.
Estimation of dynamic flux profiles from metabolic time series data
Directory of Open Access Journals (Sweden)
Chou I-Chun
2012-07-01
Background: Advances in modern high-throughput techniques of molecular biology have enabled top-down approaches for the estimation of parameter values in metabolic systems, based on time series data. Special among them is the recent method of dynamic flux estimation (DFE), which uses such data not only for parameter estimation but also for the identification of functional forms of the processes governing a metabolic system. DFE furthermore provides diagnostic tools for the evaluation of model validity and of the quality of a model fit beyond residual errors. Unfortunately, DFE works only when the data are more or less complete and the system contains as many independent fluxes as metabolites. These drawbacks may be ameliorated with other types of estimation and information. However, such supplementations incur their own limitations. In particular, assumptions must be made regarding the functional forms of some processes, and detailed kinetic information must be available in addition to the time series data. Results: The authors propose here a systematic approach that supplements DFE and overcomes some of its shortcomings. Like DFE, the approach is model-free and requires only minimal assumptions. If sufficient time series data are available, the approach allows the determination of a subset of fluxes that enables the subsequent applicability of DFE to the rest of the flux system. The authors demonstrate the procedure with three artificial pathway systems exhibiting distinct characteristics and with actual data of the trehalose pathway in Saccharomyces cerevisiae. Conclusions: The results demonstrate that the proposed method successfully complements DFE under various situations and without a priori assumptions regarding the model representation. The proposed method also permits an examination of whether at all, to what degree, or within what range the available time series data can be validly represented in a particular functional format of
Sensitivity of APSIM/ORYZA model due to estimation errors in solar radiation
Directory of Open Access Journals (Sweden)
Alexandre Bryan Heinemann
2012-01-01
Crop models are ideally suited to quantify existing climatic risks. However, they require historical climate data as input. While daily temperature and rainfall data are often available, the lack of observed solar radiation (Rs) data severely limits site-specific crop modelling. The objective of this study was to estimate Rs using air-temperature-based solar radiation models and to quantify the propagation of errors in simulated radiation through several APSIM/ORYZA crop model seasonal outputs: yield, biomass, leaf area index (LAI) and total accumulated solar radiation (SRA) during the crop cycle. The accuracy of the five models for estimating daily solar radiation was similar, and it was not substantially different among sites. For water-limited environments (no irrigation), the crop model outputs yield, biomass and LAI were not sensitive to the uncertainties in the radiation models studied here.
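A representative air-temperature-based radiation model is the Hargreaves-Samani formula, assumed here purely for illustration; the abstract does not identify which five models the study tested:

```python
import math

def hargreaves_rs(tmax, tmin, ra, krs=0.16):
    """Daily solar radiation Rs (same units as ra, e.g. MJ m-2 day-1) from
    the diurnal temperature range: Rs = krs * sqrt(Tmax - Tmin) * Ra, where
    Ra is extraterrestrial radiation and krs is an empirical coefficient
    (~0.16 for interior sites, ~0.19 for coastal sites, per FAO-56)."""
    return krs * math.sqrt(max(tmax - tmin, 0.0)) * ra
```

Errors in such Rs estimates are what propagate into the APSIM/ORYZA seasonal outputs studied above.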
2009-01-01
Abstract This study estimates mercury and methylmercury levels in fish and fishery products commercialized in the city of Barcelona from 2001 to 2007. Combining food-level data with the consumption data of 2158 people (as the median of two 24-hour recalls), the total mercury intake of the Catalonian population was calculated. Mercury was detected in 32.8% of analyzed samples. The average weekly intake of total mercury in the Catalonian population was 0.783 µg/k...
One strategy for estimating the potential soil carbon storage due to CO2 fertilization
International Nuclear Information System (INIS)
Harrison, K.G.; Bonani, G.
1994-01-01
Soil radiocarbon measurements can be used to estimate soil carbon turnover rates and inventories. A labile component of soil carbon has the potential to respond to perturbations such as CO2 fertilization, changing climate, and changing land use. Soil carbon has influenced past and present atmospheric CO2 levels and will influence future levels. A model is used to calculate the amount of additional carbon stored in soil because of CO2 fertilization.
An estimate of energy dissipation due to soil-moisture hysteresis
McNamara, H.
2014-01-01
Processes of infiltration, transport, and outflow in unsaturated soil necessarily involve the dissipation of energy through various processes. Accounting for these energetic processes can contribute to modeling hydrological and ecological systems. The well-documented hysteretic relationship between matric potential and moisture content in soil suggests that one such mechanism of energy dissipation is associated with the cycling between wetting and drying processes, but it is challenging to estimate the magnitude of the effect in situ. The Preisach model, a generalization of the Independent Domain model, allows hysteresis effects to be incorporated into dynamical systems of differential equations. Building on earlier work using such systems with field data from the south-west of Ireland, this work estimates the average rate of hysteretic energy dissipation. Through some straightforward assumptions, the magnitude of this rate is found to be of O(10⁻⁵) W m⁻³. Key points: hysteresis in soil water dissipates energy; the rate of dissipation can be estimated directly from saturation data; the rate of heating caused is significant. ©2013. American Geophysical Union. All Rights Reserved.
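The energy dissipated per wetting-drying cycle equals the area enclosed by the hysteresis loop in the moisture-content/matric-potential plane. The toy sketch below integrates the gap between idealized linear wetting and drying branches; the curves, units, and magnitudes are illustrative assumptions, not the Preisach-based field estimate of the study.

```python
# Energy dissipated per wetting-drying cycle = area enclosed by the
# hysteresis loop in the (moisture content, matric potential) plane.
# Linear toy branches; values are NOT from the study.

def trapezoid(ys, xs):
    """Trapezoidal integration of y over x."""
    return sum((ys[i] + ys[i + 1]) / 2.0 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

n = 101
theta = [0.20 + 0.20 * i / (n - 1) for i in range(n)]  # moisture content
psi_wet = [-100.0 * t for t in theta]                  # wetting branch (toy units)
psi_dry = [-100.0 * t - 20.0 for t in theta]           # drying branch, offset by 20

# Area between the branches over one full cycle (energy per unit volume)
energy = trapezoid([w - d for w, d in zip(psi_wet, psi_dry)], theta)
print(energy)  # constant gap 20 over a 0.2 moisture swing -> 4.0
```

In the field case the branches come from measured saturation data rather than closed-form curves, but the loop-area interpretation of the dissipated energy is the same.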
Estimation of loss due to post harvest diseases of potato in markets ...
African Journals Online (AJOL)
Administrator
2011-09-26
Sep 26, 2011 ... percentage loss of potatoes due to important diseases occurring in the ... Survey of diseased potato and collection of disease sample ... alcohol swab. A piece of ... temperature of 17 to 25°C over 70% relative humidity and.
Parametric estimation of time varying baselines in airborne interferometric SAR
DEFF Research Database (Denmark)
Mohr, Johan Jacob; Madsen, Søren Nørvang
1996-01-01
A method for estimation of time-varying spatial baselines in airborne interferometric synthetic aperture radar (SAR) is described. The range and azimuth distortions between two images acquired with a non-linear baseline are derived. A parametric model of the baseline is then, in a least-squares sense, estimated from image shifts obtained by cross correlation of numerous small patches throughout the image. The method has been applied to airborne EMISAR imagery from the 1995 campaign over the Storstrommen Glacier in North East Greenland conducted by the Danish Center for Remote Sensing. This has reduced the baseline uncertainties from several meters to the centimeter level in a 36 km scene. Though developed for airborne SAR, the method can easily be adapted to satellite data.
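The image shifts that feed the least-squares baseline fit are obtained by cross-correlating small patches between the two images. A minimal 1-D integer-lag sketch with synthetic data (real processing works in 2-D at sub-pixel resolution on EMISAR patches):

```python
# Estimate the offset between two signal patches by maximizing their
# cross-correlation, the 1-D analogue of the patch-shift measurement
# that feeds the parametric baseline fit. Synthetic data only.

def best_lag(a, b, max_lag):
    """Return the lag that maximizes the cross-correlation of a and b."""
    def corr(lag):
        return sum(a[i] * b[i + lag]
                   for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

patch = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
shifted = [0, 0, 0, 0, 0, 1, 3, 1, 0, 0]  # same feature moved 3 samples

print(best_lag(patch, shifted, 5))  # 3
```

Fitting the parametric baseline model is then an ordinary least-squares problem over many such shift measurements distributed across the scene.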
Risk Reduction Effects Due to the Start Time Extension of EDGs in OPR-1000
International Nuclear Information System (INIS)
Lim, Ho-Gon; Yang, Joon-Eon; Hwang, Mee-Jeong
2006-01-01
Under the condition that the ECCS rule in Korea will be revised based on the new U.S. 10 CFR 50.46, the risk impact of the EDG start time extension is analyzed in the present study. This paper is composed of six sections. In Section 2, the LOCA break size that cannot be mitigated under the condition of an extended EDG start time is obtained from thermal-hydraulic analysis. Section 3 discusses the frequency of such non-mitigable LOCAs and the probability of a LOOP given a LOCA. In Section 4, the effect of the EDG start time extension on its failure probability is discussed in a qualitative manner. Finally, the overall risk change due to the EDG start time extension is calculated in Section 5, with conclusions given in Section 6.
Multimodel estimates of premature human mortality due to intercontinental transport of air pollution
Liang, C.; Silva, R.; West, J. J.; Sudo, K.; Lund, M. T.; Emmons, L. K.; Takemura, T.; Bian, H.
2015-12-01
Numerous modeling studies indicate that emissions from one continent influence air quality over others. Reducing air pollutant emissions from one continent can therefore benefit air quality and health on multiple continents. Here, we estimate the impacts of the intercontinental transport of ozone (O3) and fine particulate matter (PM2.5) on premature human mortality by using an ensemble of global chemical transport models coordinated by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP). We use simulations of 20% reductions of all anthropogenic emissions from 13 regions (North America, Central America, South America, Europe, Northern Africa, Sub-Saharan Africa, Former Soviet Union, Middle East, East Asia, South Asia, South East Asia, Central Asia, and Australia) to calculate their impact on premature mortality within each region and elsewhere in the world. To better understand the impact of potential control strategies, we also analyze premature mortality for global 20% perturbations from five sectors individually: power and industry, ground transport, forest and savannah fires, residential, and others (shipping, aviation, and agriculture). Following previous studies, premature human mortality resulting from each perturbation scenario is calculated using a health impact function based on a log-linear model for O3 and an integrated exposure response model for PM2.5 to estimate relative risk. The spatial distribution of the exposed population (adults aged 25 and over) is obtained from the LandScan 2011 Global Population Dataset. Baseline mortality rates for chronic respiratory disease, ischemic heart disease, cerebrovascular disease, chronic obstructive pulmonary disease, and lung cancer are estimated from the GBD 2010 country-level mortality dataset for the exposed population. Model results are regridded from each model's original grid to a common 0.5° × 0.5° grid used to estimate mortality. We perform uncertainty analysis and evaluate the sensitivity
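The log-linear health impact function for O3 takes the familiar attributable-mortality form ΔM = y0 · Pop · (1 − e^(−β·ΔX)). A minimal sketch with illustrative numbers (β derived from an assumed relative risk of 1.04 per 10 ppb; these are not the study's inputs):

```python
import math

def excess_mortality(y0, pop, beta, dx):
    """Log-linear health impact function:
    attributable deaths = baseline rate * population * (1 - exp(-beta * dX))."""
    return y0 * pop * (1.0 - math.exp(-beta * dx))

beta = math.log(1.04) / 10.0  # assumed RR = 1.04 per 10 ppb ozone (illustrative)
deaths = excess_mortality(y0=0.005, pop=1_000_000, beta=beta, dx=5.0)
print(round(deaths, 1))  # ~97 excess deaths for a 5 ppb concentration change
```

In the ensemble study this function is evaluated per grid cell with cause-specific baseline rates and model-simulated concentration changes, then summed over regions.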
An Efficient Code-Timing Estimator for DS-CDMA Systems over Resolvable Multipath Channels
Directory of Open Access Journals (Sweden)
Jian Li
2005-04-01
Full Text Available We consider the problem of training-based code-timing estimation for the asynchronous direct-sequence code-division multiple-access (DS-CDMA) system. We propose a modified large-sample maximum-likelihood (MLSML) estimator that can be used for code-timing estimation for DS-CDMA systems over resolvable multipath channels in closed form. Simulation results show that MLSML can be used to provide a high correct acquisition probability and a high estimation accuracy. Simulation results also show that MLSML can have very good near-far resistance capability, owing to employing a data model similar to that used in adaptive array processing, where strong interferences can be suppressed.
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to, but simpler than and with smaller bias than, the "multiple shooting" approach previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
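For small observational noise, the flavor of such fitting can be seen in a toy conditional-least-squares estimate of the logistic-map parameter from one-step pairs. This is a simplified sketch, not the paper's segmentation-fitting ML (which also estimates x1 and copes with larger noise):

```python
import random

# One-step least-squares estimate of r in x[n+1] = r*x[n]*(1-x[n]),
# from noisy observations y[n] = x[n] + noise. Toy illustration only.

random.seed(1)
r_true, x = 3.7, 0.3
xs = []
for _ in range(200):
    xs.append(x)
    x = r_true * x * (1.0 - x)          # exact chaotic trajectory
ys = [v + random.gauss(0.0, 0.001) for v in xs]  # small observational noise

# Closed-form estimator: r minimizing sum (y[n+1] - r*y[n]*(1-y[n]))^2
g = [y * (1.0 - y) for y in ys[:-1]]
r_hat = sum(y1 * gi for y1, gi in zip(ys[1:], g)) / sum(gi * gi for gi in g)
print(r_hat)  # close to the true value 3.7
```

With larger noise this naive estimator suffers the errors-in-variables bias the paper analyzes, which is what motivates the piece-wise ML formulation.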
Continuous Fine-Fault Estimation with Real-Time GNSS
Norford, B. B.; Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C.; Senko, J.; Larsen, D.
2017-12-01
Thousands of real-time telemetered GNSS stations operate throughout the circum-Pacific that may be used for rapid earthquake characterization and estimation of local tsunami excitation. We report on the development of a GNSS-based finite-fault inversion system that continuously estimates slip using real-time GNSS position streams from the Cascadia subduction zone and which is being expanded throughout the circum-Pacific. The system uses 1 Hz precise point position streams computed in the ITRF14 reference frame using clock and satellite orbit corrections from the IGS. The software is implemented as seven independent modules that filter time series using Kalman filters, trigger and estimate coseismic offsets, invert for slip using a non-negative least squares method developed by Lawson and Hanson (1974) and elastic half-space Green's Functions developed by Okada (1985), smooth the results temporally and spatially, and write the resulting streams of time-dependent slip to a RabbitMQ messaging server for use by downstream modules such as tsunami excitation modules. Additional fault models can be easily added to the system for other circum-Pacific subduction zones as additional real-time GNSS data become available. The system is currently being tested using data from well-recorded earthquakes including the 2011 Tohoku earthquake, the 2010 Maule earthquake, the 2015 Illapel earthquake, the 2003 Tokachi-oki earthquake, the 2014 Iquique earthquake, the 2010 Mentawai earthquake, the 2016 Kaikoura earthquake, the 2016 Ecuador earthquake, the 2015 Gorkha earthquake, and others. Test data will be fed to the system and the resultant earthquake characterizations will be compared with published earthquake parameters. Seismic events will be assumed to occur on major faults, so, for example, only the San Andreas fault will be considered in Southern California, while the hundreds of other faults in the region will be ignored. Rake will be constrained along each subfault to be
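The non-negative least-squares step of the slip inversion can be sketched with a tiny projected-gradient solver. This is a toy two-subfault example with a made-up Green's function matrix, not Cascadia geometry; a production system would use a Lawson-Hanson NNLS routine as cited in the abstract:

```python
# Tiny projected-gradient non-negative least squares: minimize |A x - b|^2
# subject to x >= 0, the constraint used when inverting GNSS offsets for slip.

def nnls_pg(A, b, step=1.0, iters=2000):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(b))]
        grad = [sum(A[i][j] * r[i] for i in range(len(b))) for j in range(n)]
        x = [max(0.0, x[j] - step * grad[j]) for j in range(n)]  # project onto x >= 0
    return x

# Two GNSS offsets, two subfaults (toy Green's function matrix)
A = [[1.0, 0.2],
     [0.2, 1.0]]
b = [1.0, -0.5]       # the unconstrained solution would need negative slip
slip = nnls_pg(A, b)
print(slip)           # ~[0.865, 0.0]: back-slip is clamped to zero
```

The non-negativity constraint is what keeps the estimated slip physically one-sided (thrust motion on the megathrust) even with noisy position streams.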
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
Single-machine scheduling with release dates, due dates and family setup times
Schutten, Johannes M.J.; van de Velde, S.L.; Zijm, Willem H.M.
1996-01-01
We address the NP-hard problem of scheduling n independent jobs with release dates, due dates, and family setup times on a single machine to minimize the maximum lateness. This problem arises from the constant tug-of-war going on in manufacturing between efficient production and delivery
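For a fixed job sequence, the maximum-lateness objective with release dates and family setups is straightforward to evaluate; optimizing over sequences is the NP-hard part the paper addresses. A toy sketch that simulates one sequence (here, the earliest-due-date order) on made-up data:

```python
# Evaluate maximum lateness of a job sequence on a single machine with
# release dates, due dates, and family setup times. Toy instance only.

def max_lateness(seq, jobs, setup):
    t, fam, lmax = 0, None, float("-inf")
    for j in seq:
        r, p, d, f = jobs[j]
        t = max(t, r)            # wait for the job's release date
        if f != fam:             # a family change incurs a setup
            t += setup
            fam = f
        t += p                   # process the job
        lmax = max(lmax, t - d)  # lateness = completion time - due date
    return lmax

jobs = {                         # job: (release, processing, due, family)
    "A": (0, 2, 5, 1),
    "B": (1, 2, 6, 2),
    "C": (0, 1, 4, 1),
}
edd = sorted(jobs, key=lambda j: jobs[j][2])  # earliest-due-date heuristic
print(edd, max_lateness(edd, jobs, setup=1))  # ['C', 'A', 'B'] 1
```

EDD is only a heuristic here: grouping jobs by family to avoid setups can conflict with due-date order, which is exactly the production-versus-delivery tug-of-war the abstract mentions.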
Empirical estimation of the arrival time of ICME Shocks
Shaltout, Mosalam
Empirical estimation of the arrival time of ICME Shocks. Mosalam Shaltout1, M. Youssef1 and R. Mawad2. 1 National Research Institute of Astronomy and Geophysics (NRIAG), Helwan, Cairo, Egypt. Email: mosalamshaltout@hotmail.com. 2 Faculty of Science, Menoufia University, Physics Department, Shiben Al-Koum, Menoufia, Egypt. We obtained the data on SSC events from the Preliminary Reports of the ISGI (Institut de Physique du Globe, France), and selected CMEs over the same interval, 1996-2005, from SOHO/LASCO/C2. We estimated the arrival times of ICME shocks during solar cycle 23 (1996-2005), taking the sudden storm commencement (SSC) as an indicator of the arrival of CMEs at the Earth's magnetosphere (ICME). Under our model, we selected 203 ICME shock-SSC associated events and obtained an empirical relation between CME velocity and travel time, with a high correlation between them, R = 0.75.
International Nuclear Information System (INIS)
Teles, Pedro; Vaz, Pedro; Paulo, Graciano; Santos, Joana; Pascoal, Ana; Lanca, Isabel; Matela, Nuno; Sousa, Patrick; Carvoeiras, Pedro; Parafita, Rui; Simaozinho, Paula
2013-01-01
In order to assess the exposure of the Portuguese population to ionizing radiation due to medical examinations in diagnostic radiology and nuclear medicine, a working group consisting of 40 institutions, public and private, was created to evaluate the collective dose in the Portuguese population in 2010. This work was conducted in collaboration with the Dose Datamed European consortium, which aims to assess the exposure of the European population to ionizing radiation due to the 20 diagnostic radiology examinations most frequent in Europe (the 'TOP 20') and nuclear medicine examinations. We obtained an average collective dose of ≈ 1 mSv/caput, which puts Portugal in the category of countries with medium-to-high exposure in Europe. We hope that this work can be a starting point to bridge the persistent lack of studies in these areas in Portugal, and to enable periodic characterization of the exposure of the Portuguese population to ionizing radiation in the context of medical applications
ESTIMATION OF PHASE DELAY DUE TO PRECIPITABLE WATER FOR DINSAR-BASED LAND DEFORMATION MONITORING
Directory of Open Access Journals (Sweden)
J. Susaki
2017-09-01
Full Text Available In this paper, we present a method for using the estimated precipitable water (PW to mitigate atmospheric phase delay in order to improve the accuracy of land-deformation assessment with differential interferometric synthetic aperture radar (DInSAR. The phase difference obtained from multi-temporal synthetic aperture radar images contains errors of several types, and the atmospheric phase delay can be an obstacle to estimating surface subsidence. In this study, we calculate PW from external meteorological data. Firstly, we interpolate the data with regard to their spatial and temporal resolutions. Then, assuming a range direction between a target pixel and the sensor, we derive the cumulative amount of differential PW at the height of the slant range vector at pixels along that direction. The atmospheric phase delay of each interferogram is acquired by taking a residual after a preliminary determination of the linear deformation velocity and digital elevation model (DEM error, and by applying high-pass temporal and low-pass spatial filters. Next, we estimate a regression model that connects the cumulative amount of PW and the atmospheric phase delay. Finally, we subtract the contribution of the atmospheric phase delay from the phase difference of the interferogram, and determine the linear deformation velocity and DEM error. The experimental results show a consistent relationship between the cumulative amount of differential PW and the atmospheric phase delay. An improvement in land-deformation accuracy is observed at a point at which the deformation is relatively large. Although further investigation is necessary, we conclude at this stage that the proposed approach has the potential to improve the accuracy of the DInSAR technique.
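The regression step linking cumulative PW to atmospheric phase delay, and its subtraction from the interferogram phase, can be sketched as follows. The numbers are synthetic and the relation is deliberately noise-free; they are not the meteorological data or fitted coefficients of the study:

```python
# Fit phase_delay ~ a * PW + c by least squares, then subtract the
# modelled atmospheric contribution from the interferogram phases.
# Synthetic values; units and coefficients are illustrative only.

def linfit(xs, ys):
    """Closed-form simple linear regression returning (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

pw = [1.0, 2.0, 3.0, 4.0]              # cumulative precipitable water per pixel
phase = [2.5 * w + 0.3 for w in pw]    # atmospheric phase delay (synthetic)

a, c = linfit(pw, phase)
corrected = [p - (a * w + c) for p, w in zip(phase, pw)]
print(a, c)  # recovers the synthetic slope 2.5 and intercept 0.3
```

In the actual processing chain, the phase used to fit this regression is the residual left after removing the preliminary deformation velocity and DEM error and applying the temporal/spatial filters described above.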
Estimation of health effects due to elevated radiation exposure levels in structures
International Nuclear Information System (INIS)
Marks, S.; Cross, F.T.; Denham, D.H.; Kennedy, W.E. Jr.
1985-02-01
Uranium mill tailings were used as landfill for many years in the United States before the health risk associated with such use was recognized. Occupants of buildings erected on or adjacent to contaminated landfills may experience radiation exposures sufficient to warrant remedial action. Estimates of the cost-effectiveness of the remedial measures may be provided using a combination of occupancy data, appropriate risk coefficients and projected costs. This effort is in support of decisions by the US Department of Energy (DOE) to conduct remedial action at such locations. The methods used in this project, with examples of their application, will be presented in this paper
An Iterative Method for Estimating Airfoil Deformation due to Solid Particle Erosion
Directory of Open Access Journals (Sweden)
Valeriu DRAGAN
2014-04-01
Full Text Available Helicopter blades are currently constructed with composite materials enveloping honeycomb cores, with only the leading and trailing edges made of metal alloys. In some cases, the erosive wear of the bond between the composite skin and the metallic leading edge leads to full blade failure. It is therefore the goal of this paper to provide a method for simulating the way an airfoil is deformed through the erosion process. The method involves computational fluid dynamics simulations, scripts for automatic meshing and spreadsheet calculators for estimating the erosion and, ultimately, the airfoil deformation. Further work could include more complex meshing scripts allowing the use of similar methods for turbomachinery.
Estimation of vegetation cover resilience from satellite time series
Directory of Open Access Journals (Sweden)
T. Simoniello
2008-07-01
Full Text Available Resilience is a fundamental concept for understanding vegetation as a dynamic component of the climate system. It expresses the ability of ecosystems to tolerate disturbances and to recover their initial state. Recovery times are basic parameters of the vegetation's response to forcing and, therefore, are essential for describing realistic vegetation within dynamical models. Healthy vegetation tends to rapidly recover from shock and to persist in growth and expansion. On the contrary, climatic and anthropic stress can reduce resilience thus favouring persistent decrease in vegetation activity.
In order to characterize resilience, we analyzed the time series 1982–2003 of 8 km GIMMS AVHRR-NDVI maps of the Italian territory. Persistence probability of negative and positive trends was estimated according to the vegetation cover class, altitude, and climate. Generally, mean recovery times from negative trends were shorter than those estimated for positive trends, as expected for vegetation of healthy status. Some signatures of inefficient resilience were found in high-level mountainous areas and in the Mediterranean sub-tropical ones. This analysis was refined by aggregating pixels according to phenology. This multitemporal clustering synthesized information on vegetation cover, climate, and orography rather well. The consequent persistence estimations confirmed and detailed hints obtained from the previous analyses. Under the same climatic regime, different vegetation resilience levels were found. In particular, within the Mediterranean sub-tropical climate, clustering was able to identify features with different persistence levels in areas that are liable to different levels of anthropic pressure. Moreover, it was capable of enhancing reduced vegetation resilience also in the southern areas under Warm Temperate sub-continental climate. The general consistency of the obtained results showed that, with the help of suited analysis
Estimation of modal parameters using bilinear joint time frequency distributions
Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.
2007-07-01
In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. The smoothed pseudo Wigner-Ville distribution, a member of Cohen's class of distributions, is used to decouple vibration modes completely in order to study each mode separately. This distribution reduces the cross-terms that are troublesome in the Wigner-Ville distribution, while retaining its resolution. The method was applied to highly damped systems, and results were superior to those obtained via other conventional methods.
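Once a mode has been decoupled, its damping ratio can be read off the free-decay envelope; the classical logarithmic decrement is a simpler alternative to the time-frequency machinery for that last step. An illustrative sketch with analytic peak amplitudes (not the paper's SPWV method):

```python
import math

# Logarithmic-decrement damping estimate from two successive free-decay
# peaks of a single (already decoupled) mode. Illustrative values.

zeta_true, wn = 0.05, 2.0 * math.pi        # damping ratio, natural freq (rad/s)
wd = wn * math.sqrt(1.0 - zeta_true ** 2)  # damped frequency
Td = 2.0 * math.pi / wd                    # damped period

a1 = math.exp(-zeta_true * wn * 0.0)       # peak amplitude at t = 0
a2 = math.exp(-zeta_true * wn * Td)        # next peak, one period later

delta = math.log(a1 / a2)                  # logarithmic decrement
zeta_est = delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
print(zeta_est)  # recovers ~0.05
```

For highly damped or closely spaced modes this envelope-based estimate degrades, which is precisely where the mode-decoupling power of the smoothed pseudo Wigner-Ville distribution pays off.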
Truong, Khoa D; Reifsnider, Odette S; Mayorga, Maria E; Spitler, Hugh
2013-05-01
The objective of this study was to estimate the aggregate burden of maternal binge drinking on preterm birth (PTB) and low birth weight (LBW) across American sociodemographic groups in 2008. A simulation model was developed to estimate the number of PTB and LBW cases due to maternal binge drinking. Data inputs for the model included number of births and rates of preterm and LBW from the National Center for Health Statistics; female population by childbearing age groups from the U.S. Census; increased relative risks of preterm and LBW deliveries due to maternal binge drinking extracted from the literature; and adjusted prevalence of binge drinking among pregnant women estimated in a multivariate logistic regression model using the Behavioral Risk Factor Surveillance System survey. The most conservative estimates attributed maternal binge drinking to 8,701 (95% CI: 7,804-9,598) PTBs (1.75% of all PTBs) and 5,627 (95% CI 5,121-6,133) LBW deliveries in 2008, with 3,708 (95% CI: 3,375-4,041) cases of both PTB and LBW. The estimated rate of PTB due to maternal binge drinking was 1.57% among all PTBs to White women, 0.69% among Black women, 3.31% among Hispanic women, and 2.35% among other races. Compared to other age groups, women ages 40-44 had the highest adjusted binge drinking rate and highest PTB rate due to maternal binge drinking (4.33%). Maternal binge drinking contributed significantly to PTB and LBW differentially across sociodemographic groups.
Haugen, Anne Julsrud; Grøvle, Lars; Brox, Jens Ivar; Natvig, Bård; Keller, Anne; Soldal, Dag; Grotle, Margreth
2011-10-01
The objectives were to estimate the cut-off points for success on different sciatica outcome measures and to determine the success rate after an episode of sciatica by using these cut-offs. A 12-month multicenter observational study was conducted on 466 patients with sciatica and lumbar disc herniation. The cut-off values were estimated by ROC curve analyses using 'Completely recovered' or 'Much better' on a 7-point global change scale as the external criterion for success. The cut-off values (scale ranges in brackets) at 12 months were leg pain VAS 17.5 (0-100), back pain VAS 22.5 (0-100), Sciatica Bothersomeness Index 6.5 (0-24), Maine-Seattle Back Questionnaire 4.5 (0-12), and the SF-36 subscales bodily pain 51.5, and physical functioning 81.7 (0-100, higher values indicate better health). In conclusion, the success rates at 12 months varied from 49 to 58% depending on the measure used. The proposed cut-offs may facilitate the comparison of success rates across studies.
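ROC-based cut-off estimation against the global-change criterion amounts to scanning candidate thresholds and keeping the one that maximizes a criterion such as Youden's J (sensitivity + specificity − 1). A toy sketch with synthetic scores (not the study's patient data, which may also have used other optimality criteria):

```python
# ROC-style cut-off estimation: scan candidate thresholds, keep the one
# maximizing Youden's J = sensitivity + specificity - 1.
# Lower score = better outcome (as for a pain VAS); synthetic data.

def best_cutoff(scores, success):
    cand = sorted(set(scores))
    mids = [(a + b) / 2.0 for a, b in zip(cand, cand[1:])]  # thresholds between scores
    best = None
    for c in mids:
        tp = sum(1 for s, y in zip(scores, success) if y and s <= c)
        fn = sum(1 for s, y in zip(scores, success) if y and s > c)
        tn = sum(1 for s, y in zip(scores, success) if not y and s > c)
        fp = sum(1 for s, y in zip(scores, success) if not y and s <= c)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if best is None or j > best[1]:
            best = (c, j)
    return best

scores = [5, 10, 15, 20, 30, 40, 50, 60]  # e.g. 12-month leg pain VAS
success = [1, 1, 1, 1, 0, 0, 0, 0]        # external global-change criterion
print(best_cutoff(scores, success))       # (25.0, 1.0): perfect separation here
```

With real, overlapping score distributions the maximal J is well below 1, and the chosen cut-off trades sensitivity against specificity rather than separating the groups cleanly.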
International Nuclear Information System (INIS)
Lihavainen, Heikki; Asmi, Eija; Aaltonen, Veijo; Makkonen, Ulla; Kerminen, Veli-Matti
2015-01-01
We used more than five years of continuous aerosol measurements to estimate the direct radiative feedback parameter associated with the formation of biogenic secondary organic aerosol (BSOA) at a remote continental site at the edge of the boreal forest zone in Northern Finland. Our upper-limit estimate for this feedback parameter during the summer period (ambient temperatures above 10 °C) was −97 ± 66 mW m⁻² K⁻¹ (mean ± STD) when using measurements of the aerosol optical depth (f_AOD) and −63 ± 40 mW m⁻² K⁻¹ when using measurements of the 'dry' aerosol scattering coefficient at the ground level (f_σ). Here STD represents the variability in f caused by the observed variability in the quantities used to derive the value of f. Compared with our measurement site, the magnitude of the direct radiative feedback associated with BSOA is expected to be larger in warmer continental regions with more abundant biogenic emissions, and even larger in regions where biogenic emissions are mixed with anthropogenic pollution. (letter)
Time-to-impact estimation in passive missile warning systems
Şahıngıl, Mehmet Cihan
2017-05-01
A missile warning system can detect the incoming missile threat(s) and automatically cue the other Electronic Attack (EA) systems in the suite, such as a Directed Infrared Counter Measure (DIRCM) system and/or a Counter Measure Dispensing System (CMDS). Most missile warning systems are currently based on passive sensor technology operating in either the Solar Blind Ultraviolet (SBUV) or Midwave Infrared (MWIR) band, in which the exhaust plume of the threatening missile emits intensely. Although passive missile warning systems have some clear advantages over pulse-Doppler radar (PDR) based active missile warning systems, they show poorer performance in terms of time-to-impact (TTI) estimation, which is critical for optimizing the countermeasures and also for "passive kill assessment". In this paper, we consider this problem, namely, TTI estimation from passive measurements, and present a TTI estimation scheme that can be used in passive missile warning systems. Our problem formulation is based on the Extended Kalman Filter (EKF). The algorithm uses the area parameter of the threat plume, which is derived from the processed image frames.
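The reason plume area carries TTI information is the classical "looming" cue: if apparent area A scales as 1/range², then for a constant closing speed TTI = 2A/(dA/dt), with no range measurement needed. A toy constant-velocity sketch of that cue alone (the paper embeds it in an EKF; all kinematic values here are made up):

```python
# Passive time-to-impact from image 'looming': with apparent area
# A ∝ 1/R^2 and constant closing speed, TTI = 2*A / (dA/dt).
# Toy kinematics; illustrative only, not the paper's EKF formulation.

R0, v, c = 2000.0, 100.0, 1.0e6   # initial range (m), closing speed (m/s), scale

def area(t):
    return c / (R0 - v * t) ** 2  # apparent plume area (arbitrary units)

t, dt = 10.0, 0.01                # evaluate where range R = 1000 m
dAdt = (area(t + dt) - area(t - dt)) / (2.0 * dt)  # central difference
tti = 2.0 * area(t) / dAdt
print(tti)  # ~10 s, the true (1000 m) / (100 m/s)
```

An EKF wraps this cue in a dynamic model so that frame-to-frame noise in the measured area is filtered rather than differentiated directly.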
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
Pulsar timing residuals due to individual non-evolving gravitational wave sources
International Nuclear Information System (INIS)
Tong Ming-Lei; Zhao Cheng-Shi; Yan Bao-Rong; Yang Ting-Gao; Gao Yu-Ping
2014-01-01
The pulsar timing residuals induced by gravitational waves from non-evolving single binary sources are affected by many parameters related to the relative positions of the pulsar and the gravitational wave sources. We will analyze the various effects due to different parameters. The standard deviations of the timing residuals will be calculated with a variable parameter fixing a set of other parameters. The orbits of the binary sources will be generally assumed to be elliptical. The influences of different eccentricities on the pulsar timing residuals will also be studied in detail. We find that the effects of the related parameters are quite different, and some of them display certain regularities
International Nuclear Information System (INIS)
Jung, Won Dae; Park, Jink Yun
2012-01-01
It is important to understand the amount of time required to execute an emergency procedural task in a high-stress situation for managing human performance under emergencies in a nuclear power plant. However, the time to execute an emergency procedural task is highly dependent upon expert judgment due to the lack of actual data. This paper proposes an analytical method to estimate the operator's performance time (OPT) of a procedural task, which is based on a measure of the task complexity (TACOM). The proposed method for estimating an OPT is an equation that uses the TACOM as a variable, and the OPT of a procedural task can be calculated if its relevant TACOM score is available. The validity of the proposed equation is demonstrated by comparing the estimated OPTs with the observed OPTs for emergency procedural tasks in a steam generator tube rupture scenario.
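In form, such a TACOM-to-OPT equation is a log-linear regression: the logarithm of the performance time grows linearly with the complexity score. A sketch with HYPOTHETICAL coefficients (the paper's fitted constants are not reproduced here):

```python
import math

# Log-linear mapping from a task-complexity score to an estimated
# operator performance time: OPT = exp(a*TACOM + b).
# The coefficients a and b below are HYPOTHETICAL, for illustration only.

def estimated_opt(tacom, a=0.6, b=1.2):
    """Return an estimated performance time (seconds) for a TACOM score."""
    return math.exp(a * tacom + b)

print(round(estimated_opt(4.0), 1))  # exp(3.6) ~ 36.6 s for a score of 4.0
```

The practical appeal of this form is that once a and b are fitted on observed task times, an OPT estimate requires nothing beyond the procedural task's TACOM score.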
Estimation of errors due to inhomogeneous distribution of radionuclides in lungs
International Nuclear Information System (INIS)
Pelled, O.; German, U.; Pollak, G.; Alfassi, Z.B.
2006-01-01
The uncertainty in the activity determination of uranium contamination due to a real inhomogeneous distribution, when a homogeneous distribution is assumed, can exceed one order of magnitude when using one detector in a set of 4 detectors covering most of the lungs. Using the information from several detectors may improve the accuracy, as obtained by summing the responses from the 3 or 4 detectors. However, even with this improvement the errors remain very large: up to almost a factor of 10 when the analysis is based on the 92 keV energy peak, and up to a factor of 7 for the 185 keV peak.
Estimates of thermal fatigue due to beam interruptions for an ALMR-type ATW
International Nuclear Information System (INIS)
Dunn, F. E.; Wade, D. C.
1999-01-01
Thermal fatigue due to beam interruptions has been investigated in a sodium-cooled ATW using the Advanced Liquid Metal Reactor (ALMR) mod B design as a basis for the subcritical source-driven reactor. A k-eff of 0.975 was used for the reactor. The temperature response in the primary coolant system was calculated, using the SASSYS-1 code, for a drop in beam current from full power to zero in 1 microsecond. The temperature differences were used to calculate thermal stresses. Fatigue curves from the American Society of Mechanical Engineers Boiler and Pressure Vessel Code were used to determine the number of cycles for which various components should be designed, based on these thermal stresses.
Vero, S E; Ibrahim, T G; Creamer, R E; Grant, J; Healy, M G; Henry, T; Kramers, G; Richards, K G; Fenton, O
2014-12-01
The true efficacy of a programme of agricultural mitigation measures within a catchment to improve water quality can be determined only after a certain hydrologic time lag (subsequent to implementation) has elapsed. As the biophysical response to policy is not synchronous, accurate estimates of total time lag (unsaturated and saturated) become critical to manage the expectations of policy makers. The estimation of the vertical unsaturated-zone component of time lag is vital, as it indicates early trends (initial breakthrough), bulk (centre of mass) and total (exit) travel times. Typically, estimation of time lag through the unsaturated zone is poor, due to the lack of site-specific soil physical data or to the assumption of saturated conditions. Numerical models (e.g. Hydrus 1D) enable estimates of time lag with varied levels of input data. The current study examines the consequences of varied soil hydraulic and meteorological complexity on unsaturated-zone time lag estimates using simulated and actual soil profiles. Results indicated that: greater temporal resolution (from daily to hourly) of meteorological data became more critical as the saturated hydraulic conductivity of the soil decreased; high-clay-content soils failed to converge, reflecting the prevalence of the lateral component as a contaminant pathway; elucidation of soil hydraulic properties was influenced by the complexity of the soil physical data employed (textural menu, ROSETTA, full and partial soil water characteristic curves), which consequently affected time lag ranges; and as the importance of the unsaturated zone increases with respect to total travel times, the requirements for high-complexity/resolution input data become greater. The methodology presented herein demonstrates that decisions made regarding input data and landscape position will have consequences for the estimated range of vertical travel times. Insufficiencies or inaccuracies regarding such input data can therefore mislead policy makers regarding
Preliminary Estimation of Radioactive Cesium Concentration due to Hypothetical Accident in East Sea
Energy Technology Data Exchange (ETDEWEB)
Min, Byung-Il; Kim, Sora; Park, Kihyun; Suh, Kyung-suk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
The sea has no large islands, bays or capes. Its water balance is mostly determined by the inflow (Korea Strait) and outflow (Tsugaru Strait and Soya Strait) through the straits connecting it to the neighboring seas and the Pacific Ocean. All of the Korean nuclear power plants are located in coastal areas, 3 sites on the east coast and 1 site on the west coast, so there is a possibility that dangerous substances released from them could spread throughout the East Sea. The East Sea is a fertile fishing ground for the surrounding countries, and estimating environmental radionuclide concentrations is important because fish and sea plants may be contaminated by radioactive materials. In order to simplify the problem, the simulation adopted several simplifying assumptions: the bed sediments are uniform over the whole model domain, a monthly mean ocean current data set is used, and the effect of damage-prevention facilities is ignored.
Kolo, Matthew Tikpangi; Khandaker, Mayeen Uddin; Amin, Yusoff Mohd; Abdullah, Wan Hasiah Binti
2016-01-01
Following the increasing demand of coal for power generation, activity concentrations of primordial radionuclides were determined in Nigerian coal using the gamma spectrometric technique with the aim of evaluating the radiological implications of coal utilization and exploitation in the country. Mean activity concentrations of 226Ra, 232Th, and 40K were 8.18±0.3, 6.97±0.3, and 27.38±0.8 Bq kg-1, respectively. These values were compared with those of similar studies reported in literature. The mean estimated radium equivalent activity was 20.26 Bq kg-1 with corresponding average external hazard index of 0.05. Internal hazard index and representative gamma index recorded mean values of 0.08 and 0.14, respectively. These values were lower than their respective precautionary limits set by UNSCEAR. Average excess lifetime cancer risk was calculated to be 0.04×10−3, which was insignificant compared with 0.05 prescribed by ICRP for low level radiation. Pearson correlation matrix showed significant positive relationship between 226Ra and 232Th, and with other estimated hazard parameters. Cumulative mean occupational dose received by coal workers via the three exposure routes was 7.69 ×10−3 mSv y-1, with inhalation pathway accounting for about 98%. All radiological hazard indices evaluated showed values within limits of safety. There is, therefore, no likelihood of any immediate radiological health hazards to coal workers, final users, and the environment from the exploitation and utilization of Maiganga coal. PMID:27348624
Detilleux, J; Kastelic, J P; Barkema, H W
2015-03-01
Milk losses associated with mastitis can be attributed to either effects of pathogens per se (i.e., direct losses) or effects of the immune response triggered by intramammary infection (indirect losses). The distinction is important in terms of mastitis prevention and treatment. Regardless, the number of pathogens is often unknown (particularly in field studies), making it difficult to estimate direct losses, whereas indirect losses can be approximated by measuring the association between increased somatic cell count (SCC) and milk production. An alternative is to perform a mediation analysis in which changes in milk yield are allocated into their direct and indirect components. We applied this method on data for clinical mastitis, milk and SCC test-day recordings, results of bacteriological cultures (Escherichia coli, Staphylococcus aureus, Streptococcus uberis, coagulase-negative staphylococci, Streptococcus dysgalactiae, and streptococci other than Strep. dysgalactiae and Strep. uberis), and cow characteristics. Following a diagnosis of clinical mastitis, the cow was treated and changes (increase or decrease) in milk production before and after diagnosis were interpreted counterfactually. On a daily basis, indirect changes, mediated by SCC increase, were significantly different from zero for all bacterial species, with a milk yield decrease (ranging among species from 4 to 33 g, mediated by an increase of 1,000 SCC/mL per day) before and a daily milk increase (ranging among species from 2 to 12 g, mediated by a decrease of 1,000 SCC/mL per day) after detection. Direct changes, not mediated by SCC, were only different from zero for coagulase-negative staphylococci before diagnosis (72 g per day). We concluded that mixed structural equation models were useful to estimate direct and indirect effects of the presence of clinical mastitis on milk yield. Copyright © 2015 Elsevier B.V. All rights reserved.
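The mediation decomposition described above can be sketched with ordinary least squares on synthetic data: one regression estimates the mastitis-to-SCC path, a second jointly estimates the direct path and the SCC-to-milk path, and the indirect (mediated) effect is the product of the two mediator paths. The effect sizes and the data are invented for illustration and are not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
mastitis = rng.integers(0, 2, n).astype(float)                 # exposure (0/1)
scc = 2.0 * mastitis + rng.normal(0.0, 1.0, n)                 # mediator (path a)
milk = -0.5 * mastitis - 1.5 * scc + rng.normal(0.0, 1.0, n)   # outcome (c' and b)

# Path a: effect of mastitis on the mediator (SCC).
a = np.linalg.lstsq(np.column_stack([np.ones(n), mastitis]), scc, rcond=None)[0][1]

# Paths c' (direct) and b: regress milk on mastitis and SCC jointly.
coef = np.linalg.lstsq(np.column_stack([np.ones(n), mastitis, scc]), milk, rcond=None)[0]
direct, b = coef[1], coef[2]

indirect = a * b            # milk change mediated by the SCC increase
total = direct + indirect   # total association of mastitis with milk yield
```

With the planted coefficients, the recovered indirect effect is close to 2.0 × (−1.5) = −3.0 and the direct effect close to −0.5, mirroring how the study separates SCC-mediated losses from pathogen effects per se.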
Critical time delay of the pineal melatonin rhythm in humans due to weak electromagnetic exposure.
Halgamuge, Malka N
2013-08-01
Electromagnetic fields (EMFs) can increase free radicals, activate the stress response and alter enzyme reactions. Intracellular signalling is mediated by free radicals, and enzyme kinetics is affected by radical pair recombination rates. The magnetic field component of an external EMF can delay the recombination of free radical pairs, thus increasing radical lifetimes in biological systems. Although measured in nanoseconds, this extra time increases the potential to do more damage. Melatonin regulates the body's sleep-wake cycle, or circadian rhythm. The World Health Organization (WHO) has confirmed that prolonged alterations in sleep patterns suppress the body's ability to make melatonin. Elevated cancer rates have been attributed to the reduction of melatonin production as a result of jet lag and night-shift work. In this study, changes in circadian rhythm and melatonin concentration are observed due to the external perturbation of chemical reaction rates. We further analyze the pineal melatonin rhythm and investigate the critical time delay, or maturation time, of radical pair recombination rates, exploring the impact of the mRNA degradation rate on the critical time delay. The results show that significant melatonin interruption and changes to the circadian rhythm occur due to the perturbation of chemical reaction rates, as also reported in previous studies. The results also show the influence of the mRNA degradation rate on the circadian rhythm's critical time delay or maturation time. The results support the hypothesis that exposure to weak EMFs can, via melatonin disruption, adversely affect human health.
Life time estimation for irradiation assisted mechanical cracking of PWR RCCA rodlets
Energy Technology Data Exchange (ETDEWEB)
Matsuoka, Takanori; Yamaguchi, Youichirou [Nuclear Development Corp., Tokai, Ibaraki (Japan)
1999-09-01
Intergranular cracks of cladding tubes had been observed at the tips of the rodlets of PWR rod cluster control assemblies (RCCAs). Because RCCAs are important core components, an investigation was carried out to estimate their service lifetime. This paper reviews the cracking mechanism and presents the lifetime estimation. The summaries are as follows. (1) The mechanism of the intergranular cracking of the cladding tube was not IASCC but irradiation assisted mechanical cracking (IAMC), caused by an increase in hoop strain due to the swelling of the absorber and a decrease in elongation due to neutron irradiation. (2) The crack initiation limit of cylindrical shells made of low-ductility material and subjected to internal pressure was determined in relation to the uniform strain of the material and was in accordance with that of the RCCA rodlets in an actual plant. (3) From the above investigation, a method of estimating the lifetime and countermeasures for its extension were obtained. (author)
Beckmann, Kerri R; Lynch, John W; Hiller, Janet E; Farshid, Gelareh; Houssami, Nehmat; Duffy, Stephen W; Roder, David M
2015-03-15
Debate about the extent of breast cancer over-diagnosis due to mammography screening has continued for over a decade, without consensus. Estimates range from 0 to 54%, but many studies have been criticized for having flawed methodology. In this study we used a novel study design to estimate over-diagnosis due to organised mammography screening in South Australia (SA). To estimate breast cancer incidence at and following screening we used a population-based, age-matched case-control design involving 4,931 breast cancer cases and 22,914 controls to obtain odds ratios (ORs) for yearly time intervals since women's last screening mammogram. The level of over-diagnosis was estimated by comparing the cumulative breast cancer incidence with and without screening. The former was derived by applying ORs for each time window to incidence rates in the absence of screening, and the latter, by projecting pre-screening incidence rates. Sensitivity analyses were undertaken to assess potential biases. Over-diagnosis was estimated to be 8% (95%CI 2-14%) and 14% (95%CI 8-19%) among SA women aged 45 to 85 years from 2006-2010, for invasive breast cancer and all breast cancer respectively. These estimates were robust when applying various sensitivity analyses, except for adjustment for potential confounding assuming higher risk among screened than non-screened women, which reduced levels of over-diagnosis to 1% (95%CI 5-7%) and 8% (95%CI 2-14%) respectively when incidence rates for screening participants were adjusted by 10%. Our results indicate that the level of over-diagnosis due to mammography screening is modest and considerably lower than many previous estimates, including others for Australia. © 2014 UICC.
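The cumulative-incidence comparison underlying the over-diagnosis estimate can be sketched as follows: per-window ORs scale a projected no-screening incidence, and the excess of the resulting cumulative incidence over the projection is the over-diagnosis fraction. All numbers below are invented for illustration and are not the study's estimates.

```python
# Projected incidence without screening (per 1000 women-years) for five
# yearly windows since the last screen, and case-control odds ratios
# that scale it; both series are illustrative.
baseline = [2.5, 2.5, 2.5, 2.5, 2.5]
odds_ratios = [1.6, 1.2, 1.0, 0.9, 0.9]   # elevated at screening, lower after

cum_with_screening = sum(r * o for r, o in zip(baseline, odds_ratios))
cum_without = sum(baseline)
overdiagnosis_pct = 100.0 * (cum_with_screening - cum_without) / cum_without
```

With these toy inputs the screened cohort accumulates 14.0 versus 12.5 cases per 1000, an excess of 12%, illustrating how modest per-window ORs translate into the single over-diagnosis percentage the study reports.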
Carbon dioxide emissions due to Swedish imports and consumption: estimates using different methods
International Nuclear Information System (INIS)
Carlsson-Kanyama, Annika; Assefa, Getachew; Wadeskog, Anders
2007-04-01
Global trade of products and services challenges the traditional way in which emissions of carbon dioxide are declared and accounted for. Instead of considering only territorial emissions, there are now strong reasons to determine how the carbon dioxide emitted in the production of imports is partitioned around the world and how the total emissions for a country's final consumption compare to those of its final production. In this report, results from four different methods of calculating the total carbon dioxide emissions from Sweden's overall consumption are presented. Total carbon dioxide emissions for Sweden's final consumption vary from 57 to 109 Mton per year depending on the methodology: the four methods give results of 57, 61, 68 and 109 Mton of carbon dioxide. Two methods are based on information concerning Sweden's imports and national production of goods and services excluding production that is exported, while two methods are based on final consumer expenditures. Three of the methods use mainly emission data from Sweden, while one method depends entirely upon emission data from Sweden's trading partners; this last method also gives the highest emissions level, 109 Mton of carbon dioxide. The calculations performed here can be compared to the emissions reported by Sweden, 54 Mton of carbon dioxide per year. Our estimates give per capita emission levels of between 6.3 and 12 tons of carbon dioxide per year; the estimate of 12 tons per capita results from using emissions data from Sweden's trading partners. The total emissions as a result of Sweden's imports are 26 or 74 Mton of carbon dioxide depending on how they are calculated. The lower figure is based upon the imports of today but with emissions as if everything were produced as in Sweden; the higher level is based upon existing but partly inadequate international emission statistics. These levels can be compared to the about 35 Mton of carbon dioxide
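The consumption-based accounting identity behind these comparisons can be written out directly. The territorial figure matches the 54 Mton reported above; the embodied-trade figures and the population are illustrative round numbers, not the report's data.

```python
# Consumption-based emissions = territorial emissions
#   - emissions embodied in exports + emissions embodied in imports.
territorial = 54.0            # Mton CO2 per year, reported territorial emissions
embodied_in_exports = 20.0    # Mton CO2, illustrative
embodied_in_imports = 30.0    # Mton CO2, illustrative

consumption_based = territorial - embodied_in_exports + embodied_in_imports
per_capita_tonnes = consumption_based * 1e6 / 9.0e6   # assuming 9 million people
```

Swapping in different estimates of the emissions embodied in imports (e.g. Swedish versus trading-partner emission intensities) is exactly what produces the 57 to 109 Mton spread across the four methods.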
Risk estimates of stochastic effects due to exposure to radiation - a stochastic harm index
International Nuclear Information System (INIS)
Gonen, Y.G.
1980-01-01
The effects of exposure to low-level radiation on survival probability and life expectancy were investigated. The 1977 vital statistics of Jewish males in Israel were used as a baseline, mainly the data on normalized survival probability and life expectation as functions of age. Assumed effects of exposure were superposed on these data and the net differences calculated. It was found that the realistic effects of exposure to radiation are generally smaller than those calculated by multiplying the collective dose by the risk factor. The effects are strongly age-dependent, decreasing sharply with age at exposure. The assumed harm due to exposure can be more than offset by improvements in medical care and safety. (H.K.)
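The survival-probability calculation sketched in the abstract can be illustrated with a toy life table: an assumed excess death probability is superposed per age interval and the change in expected remaining life is read off. The table and the excess value are invented numbers, not the 1977 Israeli data.

```python
# Toy per-decade death probabilities for a cohort (illustrative only).
baseline_q = [0.01, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.60, 1.00]

def life_expectancy(qs, excess=0.0):
    """Expected remaining life in decades, crediting half a decade in the
    interval of death; `excess` is an additive per-interval death
    probability standing in for radiation-induced risk."""
    alive, total = 1.0, 0.0
    for q in qs:
        q = min(q + excess, 1.0)
        total += alive * (1.0 - q / 2.0)
        alive *= 1.0 - q
    return total

loss = life_expectancy(baseline_q) - life_expectancy(baseline_q, excess=0.001)
```

Applying the excess only from a given age onward (rather than from birth) shrinks the loss, which is the age dependence the abstract highlights.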
Estimating Orion Heat Shield Failure Due To Ablator Cracking During The EFT-1 Mission
Vander Kam, Jeremy C.; Gage, Peter
2016-01-01
The Orion EFT-1 heatshield suffered from two major certification challenges: first, the mechanical properties used in design were not evident in the flight hardware, and second, the flight article itself cracked during fabrication. The combination of these events motivated the Orion Program to pursue an engineering-level Probabilistic Risk Assessment (PRA) as part of the heatshield certification rationale. The PRA provided Loss of Mission (LOM) likelihoods considering the probability of a crack occurring during the mission and the likelihood of subsequent structure over-temperature. The methods and input data for the PRA are presented along with a discussion of the test data used to anchor the results. The Orion program accepted an EFT-1 Loss of Vehicle (LOV) risk of 1-in-160,000 due to in-mission Avcoat cracking based on the results of this analysis. Conservatisms in the result, along with future considerations for Exploration Missions (EM), are also addressed.
International Nuclear Information System (INIS)
Kobayashi, Ken-ichi; Yamada, Jun-ichi
2010-01-01
Simplified inelastic design procedures for elevated-temperature components are required to reduce simulation cost and to shorten the development time of new projects. The stress redistribution locus (SRL) method has been proposed to provide a reasonable estimate employing both elastic FEM analysis and a unique hyperbolic curve, ε̃ = (1/σ̃ + (κ − 1)σ̃)/κ, where ε̃ and σ̃ are the dimensionless strain and stress normalized by their elastic counterparts. This method is based on the observation that the stress distribution in heavily deformed or high-temperature components changes with deformation or time, and that the relation between the dimensionless stress and strain traces a kind of elastic follow-up locus regardless of the material constitutive equation and loading mode. In this paper, FEM analyses incorporating plasticity and creep were performed for a tapered nozzle in a reactor vessel under thermal transient loads through the nozzle thickness. The normalized stress and strain were compared with the proposed SRL curve. The calculation results revealed that the critical point in the tapered nozzle under thermal transient load depended on the rate at which the temperature descended from the higher temperature in the operation cycle. Since the inelastic behavior in the nozzle was confined to a restricted area, the relationship between the normalized stress and strain fell inside the proposed SRL curve: the coefficient κ of the SRL in the analyses is greater than the proposed one, and the present criterion guarantees robust structures for complicated components involving inelastic deformation. (author)
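The hyperbolic SRL curve quoted in the abstract is easy to evaluate directly; the value κ = 3 used below is only an illustrative choice, not a recommended design value.

```python
def srl_strain(sigma, kappa=3.0):
    """Dimensionless SRL strain for a dimensionless elastic stress sigma:
    eps = (1/sigma + (kappa - 1) * sigma) / kappa.
    The curve passes through (1, 1), the purely elastic point."""
    return (1.0 / sigma + (kappa - 1.0) * sigma) / kappa
```

Reading the curve at the normalized elastic stress from an elastic FEM run gives the inelastic strain estimate without a full inelastic analysis, which is the cost saving the method targets.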
Lubey, D.; Scheeres, D.
Tracking objects in Earth orbit is fraught with complications. This is due to the large population of orbiting spacecraft and debris that continues to grow, passive (i.e. no direct communication) and data-sparse observations, and the presence of maneuvers and dynamics mismodeling. Accurate orbit determination in this environment requires an algorithm to capture both a system's state and its state dynamics in order to account for mismodelings. Previous studies by the authors yielded an algorithm called the Optimal Control Based Estimator (OCBE) - an algorithm that simultaneously estimates a system's state and optimal control policies that represent dynamic mismodeling in the system for an arbitrary orbit-observer setup. The stochastic properties of these estimated controls are then used to determine the presence of mismodelings (maneuver detection), as well as characterize and reconstruct the mismodelings. The purpose of this paper is to develop the OCBE into an accurate real-time orbit tracking and maneuver detection algorithm by automating the algorithm and removing its linear assumptions. This results in a nonlinear adaptive estimator. In its original form the OCBE had a parameter called the assumed dynamic uncertainty, which is selected by the user with each new measurement to reflect the level of dynamic mismodeling in the system. This human-in-the-loop approach precludes real-time application to orbit tracking problems due to their complexity. This paper focuses on the Adaptive OCBE, a version of the estimator where the assumed dynamic uncertainty is chosen automatically with each new measurement using maneuver detection results to ensure that state uncertainties are properly adjusted to account for all dynamic mismodelings. The paper also focuses on a nonlinear implementation of the estimator. Originally, the OCBE was derived from a nonlinear cost function then linearized about a nominal trajectory, which is assumed to be ballistic (i.e. the nominal optimal
Nonlinear response of vessel walls due to short-time thermomechanical loading
International Nuclear Information System (INIS)
Pfeiffer, P.A.; Kulak, R.F.
1994-01-01
Maintaining structural integrity of the reactor pressure vessel (RPV) during a postulated core melt accident is an important safety consideration in the design of the vessel. This study addresses failure predictions of the vessel due to the thermal and pressure loadings from molten core debris depositing on the lower head of the vessel. Different loading combinations were considered based on the dead load, yield stress assumptions, material response and internal pressurization. The analyses considered only short-term (quasi-static) failure modes; long-term failure modes were not considered. Short-term failure modes include plastic instabilities of the structure and failure due to exceeding the failure strain. Long-term failure modes would be caused by creep rupture that leads to plastic instability of the structure. Due to the short time durations analyzed, creep was not considered in the analyses presented.
Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion
Blomquist, Mats; Wernersson, Ake V.
1999-11-01
When range cameras are used for analyzing irregular material on a conveyor belt there will be complications like missing segments caused by occlusion, and a number of range discontinuities will be present. In a framework based on stochastic geometry, conditions are found for the cases when range discontinuities take place. The test objects in this paper are pellets for the steel industry. An illuminating laser plane gives range discontinuities at the edges of each individual object. These discontinuities are used to detect and measure the chord created by the intersection of the laser plane and the object. From the measured chords we derive the average diameter and its variance. An improved method is to use a pair of parallel illuminating light planes to extract two chords. The estimation error for this method is no larger than the natural shape fluctuations (the difference in diameter) of the pellets. The laser-camera optronics is sensitive enough both for material on a conveyor belt and for free-falling material leaving the conveyor.
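The single-plane chord estimator can be sketched with the classic geometric fact that a circle of diameter D, sliced at a uniformly random offset from its centre, has mean chord length πD/4. The simulation below, with an invented 12 mm pellet, shows the resulting diameter estimate converging; it is a sketch of the principle, not the paper's calibrated optronics pipeline.

```python
import math
import random

def estimate_diameter(chords):
    """Estimate object diameter from chord lengths cut by a light plane.

    For a circle of diameter D sliced at a uniformly random offset from
    its centre, the mean chord length is pi * D / 4, so D = 4 * mean / pi.
    """
    return 4.0 * (sum(chords) / len(chords)) / math.pi

# Simulate chords of a pellet with true diameter 12 mm (illustrative).
random.seed(1)
true_d = 12.0
chords = []
for _ in range(20000):
    offset = random.uniform(0.0, true_d / 2.0)   # plane offset from centre
    chords.append(2.0 * math.sqrt(true_d ** 2 / 4.0 - offset ** 2))
```

The spread of the chord sample also yields the variance figure discussed in the abstract, and the two-plane variant reduces it further by pairing chords from the same object.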
Estimation of wildfire size and risk changes due to fuels treatments
Cochrane, M.A.; Moran, C.J.; Wimberly, M.C.; Baer, A.D.; Finney, M.A.; Beckendorf, K.L.; Eidenshink, J.; Zhu, Z.
2012-01-01
Human land use practices, altered climates, and shifting forest and fire management policies have increased the frequency of large wildfires several-fold. Mitigation of potential fire behaviour and fire severity has increasingly been attempted through pre-fire alteration of wildland fuels using mechanical treatments and prescribed fires. Despite annual treatment of more than a million hectares of land, quantitative assessments of the effectiveness of existing fuel treatments at reducing the size of actual wildfires, or of how they might alter the risk of burning across landscapes, are currently lacking. Here, we present a method for estimating spatial probabilities of burning as a function of extant fuels treatments for any wildland fire-affected landscape. We examined the landscape effects of more than 72 000 ha of wildland fuel treatments involved in 14 large wildfires that burned 314 000 ha of forests in nine US states between 2002 and 2010. Fuels treatments altered the probability of fire occurrence both positively and negatively across landscapes, effectively redistributing fire risk by changing surface fire spread rates and reducing the likelihood of crowning behaviour. Trade-offs are created between the formation of large areas with low probabilities of increased burning and smaller, well-defined regions with reduced fire risk.
International Nuclear Information System (INIS)
Si-Young Chang; Jeong-Ho Lee; Chung-Woo Ha
1993-01-01
Lifetime excess lung cancer risk due to indoor ²²²Rn daughter exposure in Korea was quantitatively estimated using a modified relative risk projection model proposed by the U.S. National Academy of Sciences and recent Korean life table data. The lifetime excess risk of lung cancer death attributable to annual constant exposure to Korean indoor radon daughters was estimated to be about 230/10⁶ per WLM, which is near the median of the range of 150-450/10⁶ per WLM reported by UNSCEAR in 1988. (1 fig., 2 tabs.)
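The headline coefficient converts directly into expected excess deaths for a given annual exposure level. The risk coefficient is the abstract's value; the exposure level chosen below is illustrative, not a Korean average.

```python
risk_per_wlm = 230e-6      # lifetime excess lung-cancer risk per WLM of
                           # annual constant exposure (from the abstract)
annual_exposure = 0.2      # WLM per year, an illustrative indoor level

excess_lifetime_risk = risk_per_wlm * annual_exposure
per_million = excess_lifetime_risk * 1e6   # expected excess deaths per million
```

At this assumed exposure the model projects on the order of 46 excess lung cancer deaths per million people, the kind of population-level figure such coefficients are used to produce.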
Implications on 1 + 1 D Tsunami Runup Modeling due to Time Features of the Earthquake Source
Fuentes, M.; Riquelme, S.; Ruiz, J.; Campos, J.
2018-04-01
The time characteristics of the seismic source are usually neglected in tsunami modeling due to the difference in the time scales of the two processes. Nonetheless, only a few analytical studies have attempted to explain separately the roles of the rise time and the rupture velocity. In this work, we extend an analytical 1 + 1 D solution for the shoreline motion time series from the static case to the kinematic case by including both rise time and rupture velocity. Our results show that the static case corresponds to a limit of null rise time and infinite rupture velocity. Both parameters contribute to shifting the arrival time, but maximum runup may be affected by very slow ruptures and long rise times. Parametric analysis reveals that runup is strictly decreasing with the rise time, while it is highly amplified in a certain range of slow rupture velocities. For even lower rupture velocities the tsunami excitation vanishes, and for higher ones it quickly approaches the instantaneous case.
Estimation of effective dose from the atmospheric nuclear tests due to the intake of marine products
International Nuclear Information System (INIS)
Nakano, Masanao
2008-01-01
Worldwide environmental protection is demanded by the public. Long-term environmental assessment of releases from nuclear fuel cycle facilities to the aquatic environment is also becoming more important for understanding the long-term risk of nuclear energy, and evaluation of this risk not only in Japan but also in neighboring countries is considered necessary for developing a sustainable nuclear power industry. The author successfully simulated the distribution of radionuclides in seawater and seabed sediment produced by atmospheric nuclear tests using LAMER (Long-term Assessment ModEl of Radionuclides in the oceans). Part of LAMER calculates the advection-diffusion-scavenging processes for radionuclides in the oceans and the Japan Sea in cooperation with an Oceanic General Circulation Model (OGCM), and has been validated. In the other part of LAMER, the author calculated the probabilistic effective dose, as suggested by the ICRP, from intake of marine products due to atmospheric nuclear tests using the Monte Carlo method. Depending on the deviation of each parameter, the 95th percentile of the probabilistic effective dose was from one third to two thirds of the 95th percentile of the deterministic effective dose in a pro forma calculation. This means that probabilistic assessment can contribute to the design and optimisation of a nuclear fuel cycle facility. (author)
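The probabilistic dose assessment can be sketched as a Monte Carlo propagation of uncertain intake and concentration through a fixed dose coefficient, with the 95th percentile read off the sampled distribution. The distributions and all parameter values below are invented for illustration and are not LAMER's data.

```python
import random
import statistics

random.seed(42)

DOSE_COEFF = 1.3e-5   # mSv per Bq ingested (illustrative fixed value)

def dose_sample():
    """One Monte Carlo realization of annual effective dose (mSv)."""
    intake = random.lognormvariate(3.0, 0.3)   # kg/y of marine products eaten
    conc = random.lognormvariate(-2.0, 0.5)    # Bq/kg in the catch
    return intake * conc * DOSE_COEFF

doses = sorted(dose_sample() for _ in range(50000))
median_dose = statistics.median(doses)
p95_dose = doses[int(0.95 * len(doses))]
```

Because the inputs are skewed, the 95th percentile sits well above the median; comparing such a percentile with a deterministic point estimate is exactly the exercise the abstract describes.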
Real-time yield estimation based on deep learning
Rahnemoonfar, Maryam; Sheppard, Clay
2017-05-01
Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation, based on manual counting of fruits, is a very time-consuming and expensive process, and it is not practical for large fields. Robotic systems, including Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields, but efficient analysis of those data is still a challenging task. Computer vision approaches currently face significant challenges in automatic counting of fruits or flowers, including occlusion caused by leaves, branches or other fruits, variance in natural illumination, and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination and scale. Experimental results, in comparison to the state-of-the-art, show the effectiveness of our algorithm.
International Nuclear Information System (INIS)
Nagy, D.
2007-01-01
Previous conceptual studies made clear that the ITER blanket concept and segmentation are not suitable for the environment of a potential fusion power plant (DEMO). One promising alternative is the so-called Multi-Module-Segment (MMS) concept. Each MMS consists of a number of blankets arranged on a strong back plate, thus forming "banana"-shaped in-board (IB) and out-board (OB) segments. With respect to port size, weight, and other limiting aspects, the IB and OB MMS are segmented in the toroidal direction. The number of segments to be replaced would be below 100. For this segmentation concept a new maintenance scenario had to be worked out. The aim of this paper is to present a promising MMS maintenance scenario, a flexible scheme for time estimation under varying boundary conditions, and preliminary time estimates. According to the proposed scenario, two upper, vertically arranged maintenance ports have to be opened for blanket maintenance on opposite sides of the tokamak. Both ports are central to a 180 degree sector, and the MMS are removed and inserted through both ports. In-vessel machines transport the elements in the toroidal direction and also insert and attach the MMS to the shield. Outside the vessel, the elements have to be transported between the tokamak and the hot cell for refurbishment. Calculating the maintenance time for such a scenario is rather challenging due to the numerous parallel processes involved. For this reason a flexible, multi-level calculation scheme has been developed in which the operations are organized into three levels: at the lowest level, the basic maintenance steps are determined. These are organized into maintenance sequences that take into account parallelisms in the system. Several maintenance sequences constitute the maintenance phases, which correspond to a certain logistics scenario. By adding the required times of the maintenance phases, the total maintenance time is obtained. The paper presents
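The three-level roll-up described above reduces to a simple rule: steps sum serially into sequences, parallel sequences within a phase finish when the longest one does, and phases sum into the total. The sketch below is a generic illustration of that structure with made-up durations, not the authors' actual calculation tool.

```python
def sequence_time(steps):
    """A maintenance sequence is a serial chain of basic steps (hours)."""
    return sum(steps)

def phase_time(parallel_sequences):
    """Sequences within a phase run in parallel; the slowest sets the pace."""
    return max(sequence_time(seq) for seq in parallel_sequences)

def total_maintenance_time(phases):
    """Phases (logistics scenarios) run one after another."""
    return sum(phase_time(phase) for phase in phases)

# Hypothetical example: phase 1 has two parallel sequences, phase 2 has one
phases = [
    [[2, 3, 4], [6, 2]],   # phase 1: max(9, 8) = 9
    [[1, 1, 1]],           # phase 2: 3
]
total = total_maintenance_time(phases)
```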
Estimating the effect of a rare time-dependent treatment on the recurrent event rate.
Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E
2018-05-30
In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.
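The matching step at the heart of sequential stratification can be illustrated with a minimal prognostic-score match: each treated patient is paired with the not-yet-treated control whose score is nearest at the treatment time. The scores and data below are hypothetical; the actual method first fits a proportional rates model to obtain the scores.

```python
def match_controls(treated, controls, caliper=0.5):
    """Greedy 1:1 nearest-neighbor matching on prognostic score.

    treated/controls: lists of (patient_id, prognostic_score) pairs.
    Returns matched (treated_id, control_id) pairs within the caliper.
    """
    available = dict(controls)
    matched = []
    for pid, score in treated:
        if not available:
            break
        # nearest available control by absolute score distance
        best = min(available, key=lambda c: abs(available[c] - score))
        if abs(available[best] - score) <= caliper:
            matched.append((pid, best))
            del available[best]  # each control is used at most once
    return matched

# Hypothetical patients
treated = [("T1", 1.2), ("T2", 0.4)]
controls = [("C1", 1.1), ("C2", 0.5), ("C3", 3.0)]
pairs = match_controls(treated, controls)
```

The final stratified model then compares post-treatment event rates within each matched set; C3, whose score is far from both treated patients, stays unmatched.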
Shiri, Rahman; Kausto, Johanna; Martimo, Kari-Pekka; Kaila-Kangas, Leena; Takala, Esa-Pekka; Viikari-Juntura, Eira
2013-01-01
Previously we reported that early part-time sick leave enhances return to work (RTW) among employees with musculoskeletal disorders (MSD). This paper assesses the health-related effects of this intervention. Patients aged 18-60 years who were unable to perform their regular work due to MSD were randomized to part- or full-time sick leave groups. In the former, workload was reduced by halving working time. Using validated questionnaires, we assessed pain intensity and interference with work and sleep, region-specific disability due to MSD, self-rated general health, health-related quality of life (measured via EuroQol), productivity loss, depression, and sleep disturbance at baseline and at 1, 3, 8, 12, and 52 weeks. We analyzed the repeated measures data (171-356 observations) with the generalized estimating equation approach. The intervention (part-time sick leave) and control (full-time sick leave) groups did not differ with regard to pain intensity, pain interference with work and sleep, region-specific disability, productivity loss, depression, or sleep disturbance. The intervention group reported better self-rated general health (adjusted P=0.07) and health-related quality of life (adjusted P=0.02) than the control group. In subgroup analyses, the intervention was more effective among the patients whose current problem began occurring … Part-time sick leave did not exacerbate pain-related symptoms and functional disability, but improved self-rated general health and health-related quality of life in the early stage of work disability due to MSD.
MODELING TIME DISPERSION DUE TO OPTICAL PATH LENGTH DIFFERENCES IN SCINTILLATION DETECTORS*
Moses, W.W.; Choong, W.-S.; Derenzo, S.E.
2015-01-01
We characterize the nature of the time dispersion in scintillation detectors caused by path length differences of the scintillation photons as they travel from their generation point to the photodetector. Using Monte Carlo simulation, we find that the initial portion of the distribution (which is the only portion that affects the timing resolution) can usually be modeled by an exponential decay. The peak amplitude and decay time depend on the geometry of the crystal, the position within the crystal at which the scintillation light originates, and the surface finish. In a rectangular parallelepiped LSO crystal with 3 mm × 3 mm cross section and polished surfaces, the decay time ranges from 10 ps (for interactions 1 mm from the photodetector) up to 80 ps (for interactions 50 mm from the photodetector). Over that same range of distances, the peak amplitude ranges from 100% (defined as the peak amplitude for interactions 1 mm from the photodetector) down to 4% for interactions 50 mm from the photodetector. Higher values for the decay time are obtained for rough surfaces, but the exact value depends on the simulation details. Estimates of the decay time and peak amplitude can be made for other cross section sizes via simple scaling arguments. PMID:25729464
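The reported endpoints suggest a compact parameterization of the dispersion as an exponential with depth-dependent amplitude and decay time. The linear interpolations between the quoted endpoints are an assumption made here for illustration; the paper derives the actual values from Monte Carlo simulation.

```python
import math

def decay_time_ps(d_mm):
    """Decay time of the exponential dispersion model, interpolated linearly
    between the reported 10 ps at 1 mm and 80 ps at 50 mm (polished LSO)."""
    return 10.0 + (80.0 - 10.0) * (d_mm - 1.0) / (50.0 - 1.0)

def peak_amplitude(d_mm):
    """Relative peak amplitude, interpolated between 100% at 1 mm and 4% at 50 mm."""
    return 1.0 + (0.04 - 1.0) * (d_mm - 1.0) / (50.0 - 1.0)

def dispersion(t_ps, d_mm):
    """Early-time optical path-length dispersion: A(d) * exp(-t / tau(d))."""
    return peak_amplitude(d_mm) * math.exp(-t_ps / decay_time_ps(d_mm))
```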
Razafindrakoto, H. N. T.
2014-03-25
One way to improve the accuracy and reliability of kinematic earthquake source imaging is to investigate the origins of uncertainty and to minimize their effects. The difficulties in kinematic source inversion arise from the nonlinearity of the problem, nonunique choices in the parameterization, and observational errors. In particular, we analyze the uncertainty related to the choice of the source time function (STF) and the variability in Earth structure. We consider a synthetic data set generated from a spontaneous dynamic rupture calculation. Using Bayesian inference, we map the solution space of peak slip rate, rupture time, and rise time to characterize the kinematic rupture in terms of posterior density functions. Our test of the effect of the choice of STF reveals that all three tested STFs (isosceles triangle, regularized Yoffe with acceleration times of 0.1 and 0.3 s) retrieve the patch of high slip and slip rate around the hypocenter. However, the use of an isosceles triangle as STF artificially accelerates the rupture to propagate faster than the target solution. It additionally generates an artificial linear correlation between rupture onset time and rise time. These appear to compensate for the dynamic source effects that are not included in the symmetric triangular STF. The exact rise time for the tested STFs is difficult to resolve due to the small amount of seismic moment radiated in the tail of the STF. To highlight the effect of Earth structure variability, we perform inversions including uncertainty in the wavespeed only, and variability in both wavespeed and layer depth. We find that little difference is noticeable between the resulting rupture model uncertainties from these two parameterizations. Both significantly broaden the posterior densities and cause faster rupture propagation, particularly near the hypocenter, due to the major velocity change at the depth where the fault is located.
Recent salmon declines: a result of lost feeding opportunities due to bad timing?
Directory of Open Access Journals (Sweden)
Cedar M Chittenden
As the timing of spring productivity blooms in near-shore areas advances due to warming trends in global climate, the selection pressures on out-migrating salmon smolts are shifting. Species and stocks that leave natal streams earlier may be favoured over later-migrating fish. The low post-release survival of hatchery fish during recent years may be in part due to static release times that do not take the timing of plankton blooms into account. This study examined the effects of release time on the migratory behaviour and survival of wild and hatchery-reared coho salmon (Oncorhynchus kisutch) using acoustic and coded-wire telemetry. Plankton monitoring and near-shore seining were also conducted to determine which habitat and food sources were favoured. Acoustic tags (n = 140) and coded-wire tags (n = 266,692) were implanted into coho salmon smolts at the Seymour and Quinsam Rivers, in British Columbia, Canada. Differences between wild and hatchery fish, and between early and late releases, were examined over the entire lifecycle. Physiological sampling was also carried out on 30 fish from each release group. The smolt-to-adult survival of coho salmon released during periods of high marine productivity was 1.5- to 3-fold greater than that of fish released either before or after, and the fish's degree of smoltification affected their downstream migration time and duration of stay in the estuary. Therefore, hatchery managers should consider having smolts fully developed and ready for release during the peak of the near-shore plankton blooms. Monitoring chlorophyll a levels and water temperature early in the spring could provide a forecast of the timing of these blooms, giving hatcheries time to adjust their release schedule.
Directory of Open Access Journals (Sweden)
Sri Legowo
2009-11-01
Sedimentation is a crucial issue to be noted once accumulated sediment begins to fill the reservoir dead storage, as this will then influence long-term reservoir operation. The accumulated sediment requires serious attention, for it may affect the storage capacity and other reservoir management activities. The continuous inflow of sediment to the reservoir will decrease the reservoir storage capacity, the reservoir's value in use, and the useful age of the reservoir. Because of that, the sediment rate needs to be delayed as much as possible. In this research, the delay of the sediment rate is considered based on the rate of landslide flow on the reservoir slopes. The rate of flow of the sliding slope can be minimized through each reservoir's autonomous efforts. This can be done by regulating the fluctuation of the reservoir surface level so that it does not cause sudden drawdown or upraising. The research model is compiled using the searching technique of Non Linear Programming (NLP). The rate of bank erosion for the reservoir varies from 0.0009 to 0.0048 MCM/year, which is not a significant value threatening the lifetime of the reservoir. Meanwhile, the rate of watershed sediment has a significant value, i.e. 3.02 MCM/year for Saguling, which would fill the storage capacity in the next 40 years (from year 2008).
Estimation of the Past and Future Infrastructure Damage Due to the Permafrost Evolution Processes
Sergeev, D. O.; Chesnokova, I. V.; Morozova, A. V.
2015-12-01
Geocryological processes such as thermokarst, frost heaving and fracturing, icing, and thermal erosion are sources of immediate danger for structures. Economic losses during construction in permafrost areas are also linked with other geological processes that take a specific character in cold regions: swamping, desertification, deflation, flooding, mudflows, and landslides. Linear transport structures are the most vulnerable component of regional and national economies. Because of their great length, transport structures have to cross landscapes with different permafrost conditions, which react differently to climate change. Climate warming favors thermokarst, whereas frost heaving is linked with climate cooling. As a result, a structure can fall into circumstances not foreseen in the construction project. Local engineering problems of structure exploitation lead to global risks for the sustainable development of regions. The authors developed a database of geocryological damage cases over the last twelve years in Russian territory. The spatial data have an attribute table filled with published information from various permafrost conference proceedings. A preliminary GIS analysis of the gathered data showed a widespread territorial distribution of cases with negative consequences of geocryological process activity. The information about maximum effects of geocryological processes was validated by detailed field investigation along the railways in the Yamal and Transbaicalia Regions. The authors expect to expand the database with similar data from other sectors of the Arctic. This is important for analyzing the regional, temporal, and industrial tendencies of geocryological risk evolution. The obtained information could be used in insurance procedures and in decision-support information systems at different management levels. The investigation was completed with financial support by Russian
Energy Technology Data Exchange (ETDEWEB)
Waterman, J., E-mail: jay.waterman@pg.canterbury.ac.nz [Department of Mechanical Engineering, University of Canterbury, Christchurch (New Zealand); Pietak, A. [Department of Anatomy and Structural Biology, University of Otago, Dunedin (New Zealand); Birbilis, N. [Department of Materials Engineering, Monash University (Australia); Woodfield, T. [Department of Mechanical Engineering, University of Canterbury, Christchurch (New Zealand); Department of Orthopaedic Surgery, University of Otago, Christchurch (New Zealand); Dias, G. [Department of Anatomy and Structural Biology, University of Otago, Dunedin (New Zealand); Staiger, M.P., E-mail: mark.staiger@canterbury.ac.nz [Department of Mechanical Engineering, University of Canterbury, Christchurch (New Zealand)
2011-12-15
Calcium phosphate coatings were prepared on magnesium substrates via a biomimetic coating process. The effects of a magnesium hydroxide pretreatment on the formation and the ultimate corrosion protection of the coatings were studied. The pretreatment layer was found to affect the number of defects present in the coatings. Corrosion resistance of the coatings was studied in vitro using two simulated body fluids, 0.8% NaCl and Hanks' solution. In NaCl, the corrosion resistance of all samples decreased with time as corrosion proceeded through cracks and other defects in the coatings. Samples with no pretreatment displayed the highest corrosion resistance, as these samples had the fewest defects in the coating. However, in Hanks' solution, corrosion resistance increased with time due to additional nucleation of calcium phosphate from the fluid onto the substrate. In this solution, additional pretreatment time was beneficial to the overall corrosion resistance.
Estimating negative binomial parameters from occurrence data with detection times.
Hwang, Wen-Han; Huggins, Richard; Stoklosa, Jakub
2016-11-01
The negative binomial distribution is a common model for the analysis of count data in biology and ecology. In many applications, we may not observe the complete frequency count in a quadrat but only that a species occurred in the quadrat. If only occurrence data are available, then the two parameters of the negative binomial distribution, the aggregation index and the mean, are not identifiable. This can be overcome by data augmentation or through modeling the dependence between quadrat occupancies. Here, we propose to record the (first) detection time while collecting occurrence data in a quadrat. We show that under what we call proportionate sampling, where the time to survey a region is proportional to the area of the region, both negative binomial parameters are estimable. When the mean parameter is larger than two, our proposed approach is more efficient than the data augmentation method developed by Solow and Smith (Am. Nat. 176, 96-98), and in general is cheaper to conduct. We also investigate the effect of misidentification when collecting negative binomially distributed data, and conclude that, in general, the effect can be simply adjusted for, provided that the mean and variance of the misidentification probabilities are known. The results are demonstrated in a simulation study and illustrated in several real examples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
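The non-identifiability from occurrence-only data can be shown directly: the occupancy probability of a negative binomial count with mean mu and aggregation index k is 1 - (1 + mu/k)^(-k), and distinct (mu, k) pairs yield the same value. A quick numerical check:

```python
def occupancy_prob(mu, k):
    """P(count > 0) for a negative binomial with mean mu and aggregation index k."""
    return 1.0 - (1.0 + mu / k) ** (-k)

# Two different parameter pairs, identical occurrence probability:
p1 = occupancy_prob(2.0, 1.0)   # 1 - 3**(-1)   = 2/3
p2 = occupancy_prob(4.0, 0.5)   # 1 - 9**(-0.5) = 2/3
```

Since any occurrence-only dataset constrains only this one probability, extra information such as the first detection time is needed to pin down mu and k separately.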
Directory of Open Access Journals (Sweden)
Deborah Carvalho Malta
ABSTRACT CONTEXT AND OBJECTIVE: Noncommunicable diseases (NCDs) are the leading health problem globally and generate high numbers of premature deaths and loss of quality of life. The aim here was to describe the major groups of causes of death due to NCDs and the ranking of the leading causes of premature death between 1990 and 2015, according to the Global Burden of Disease (GBD) 2015 study estimates for Brazil. DESIGN AND SETTING: Cross-sectional study covering Brazil and its 27 federal states. METHODS: This was a descriptive study on rates of mortality due to NCDs, with corrections for garbage codes and underreporting of deaths. RESULTS: This study shows the epidemiological transition in Brazil between 1990 and 2015, with increasing proportional mortality due to NCDs, followed by violence, and decreasing mortality due to communicable, maternal and neonatal causes within the global burden of diseases. NCDs had the highest mortality rates over the whole period, but with reductions in cardiovascular diseases, chronic respiratory diseases and cancer. Diabetes increased over this period. NCDs were the leading causes of premature death (30 to 69 years): ischemic heart diseases and cerebrovascular diseases, followed by interpersonal violence, traffic injuries and HIV/AIDS. CONCLUSION: The decline in mortality due to NCDs confirms that improvements in disease control have been achieved in Brazil. Nonetheless, the high mortality due to violence is a warning sign. By maintaining the current decline in NCDs, Brazil should meet the target of a 25% reduction proposed by the World Health Organization by 2025.
Directory of Open Access Journals (Sweden)
J. Ryan. Zimmerling
2013-12-01
We estimated impacts on birds from the development and operation of wind turbines in Canada, considering both mortality due to collisions and loss of nesting habitat. We estimated collision mortality using data from carcass searches at 43 wind farms, incorporating correction factors for scavenger removal, searcher efficiency, and carcasses that fell beyond the area searched. On average, 8.2 ± 1.4 birds (95% C.I.) were killed per turbine per year at these sites, although the numbers at individual wind farms varied from 0 to 26.9 birds per turbine per year. Based on 2955 installed turbines (the number installed in Canada by December 2011), an estimated 23,300 birds (95% C.I. 20,000 - 28,300) would be killed by collisions with turbines each year. We estimated direct habitat loss based on data from 32 wind farms in Canada. On average, total habitat loss per turbine was 1.23 ha, which corresponds to an estimated total habitat loss due to wind farms nationwide of 3635 ha. Based on published estimates of nest density, this could represent habitat for ~5700 nests of all species. Assuming nearby habitats are saturated, and 2 adults are displaced per nest site, the effects of direct habitat loss are less than those of direct mortality. Installed wind capacity is growing rapidly and is predicted to increase more than 10-fold over the next 10-15 years, which could lead to direct mortality of approximately 233,000 birds/year and displacement of 57,000 pairs. Despite concerns about the impacts of biased correction factors on the accuracy of mortality estimates, these values are likely much lower than those from collisions with some other anthropogenic sources such as windows, vehicles, or towers, or habitat loss due to many other forms of development. Species composition data suggest that < 0.2% of the population of any species is currently affected by mortality or displacement from wind turbine development. Therefore, population level impacts are unlikely
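The correction chain described above can be sketched as a product of detection probabilities scaling the raw carcass count upward. All rates below are hypothetical; the study estimated them per site.

```python
def corrected_mortality_per_turbine(carcasses_found, searcher_eff,
                                    carcass_persistence, frac_area_searched):
    """Scale raw carcass counts up for imperfect searcher detection,
    scavenger removal, and the fraction of the fall zone searched."""
    return carcasses_found / (searcher_eff * carcass_persistence * frac_area_searched)

def national_mortality(per_turbine, n_turbines):
    """Extrapolate a per-turbine rate to the installed fleet."""
    return per_turbine * n_turbines

# Hypothetical site: 3 carcasses found, 70% searcher efficiency,
# 60% of carcasses persisting until the search, 80% of the fall zone searched
per_turbine = corrected_mortality_per_turbine(3, 0.7, 0.6, 0.8)
total = national_mortality(per_turbine, 2955)
```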
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Duarte, Fabiola; Lemus, Luis
2017-01-01
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
The Hubble constant estimation using 18 gravitational lensing time delays
Jaelani, Anton T.; Premadi, Premana W.
2014-03-01
The gravitational lens time delay method has been used to estimate the rate of cosmological expansion, the Hubble constant H0, independently of the standard candle method. This method requires good knowledge of the lens mass distribution, reconstructed from the lens image properties. The observed positions of the images, and the redshifts of the lens and the images, serve as strong constraints on the lens equations, which are then solved as a set of simultaneous linear equations. Here we make use of a non-parametric technique to reconstruct the lens mass distribution, implemented in a linear-equation solver named PixeLens. Input for the calculation is chosen based on known parameters obtained from analyses of the lens observations, including time delays, position angles of the images and the lens, and their redshifts. In this project, 18 fairly well studied lens cases are grouped according to a number of common properties to examine how each property affects the character of the data, and therefore the calculation of H0. The properties considered are lens morphology, number of images, completeness of time delays, and symmetry of the lens mass distribution. Analysis of the simulations shows that paucity of constraints on the mass distribution of a lens yields a wide range of H0 values, which reflects the uniqueness of each lens system. Nonetheless, the gravitational lens method still yields H0 within an acceptable range of values when compared with those determined by many other methods. Grouping the cases in the above manner allowed us to assess the robustness of PixeLens and thereby use it selectively. In addition, we use glafic, a parametric mass reconstruction solver, to refine the mass distribution of one lens case as a comparison.
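The core scaling behind the method is that, for a fixed dimensionless lens model, predicted time delays scale as 1/H0, so an observed delay rescales a fiducial model into an H0 estimate. A minimal sketch of that scaling (the actual inversion in PixeLens reconstructs the pixelated mass map nonparametrically):

```python
def h0_from_time_delay(dt_observed_days, dt_model_days, h0_fiducial=70.0):
    """Rescale a fiducial Hubble constant by the ratio of the delay predicted
    at that fiducial value to the observed delay (delays scale as 1/H0)."""
    return h0_fiducial * dt_model_days / dt_observed_days

# If the model at H0 = 70 predicts a 30-day delay but 35 days are observed,
# the inferred expansion rate is lower than the fiducial value.
h0_est = h0_from_time_delay(dt_observed_days=35.0, dt_model_days=30.0)
```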
Between-Trial Forgetting Due to Interference and Time in Motor Adaptation.
Directory of Open Access Journals (Sweden)
Sungshin Kim
Learning a motor task with temporally spaced presentations, or with other tasks intermixed between presentations, reduces performance during training but can enhance retention after training. These two effects are known as the spacing and contextual interference effects, respectively. Here, we aimed to test a unifying hypothesis of the spacing and contextual interference effects in visuomotor adaptation, according to which either spaced presentations or interference by another task will promote between-trial forgetting, which will depress performance during acquisition but promote retention. We first performed an experiment with three visuomotor adaptation conditions: a short inter-trial-interval (ITI) condition (SHORT-ITI); a long ITI condition (LONG-ITI); and an alternating condition with two alternated opposite tasks (ALT), with the same single-task ITI as in LONG-ITI. In the SHORT-ITI condition, there was the fastest increase in performance during training and the largest immediate forgetting in the retention tests. In contrast, in the ALT condition, there was the slowest increase in performance during training and little immediate forgetting in the retention tests. Compared to these two conditions, in LONG-ITI we found an intermediate increase in performance during training and intermediate immediate forgetting. To account for these results, we fitted to the data six possible adaptation models with one or two time scales, and with interference in the fast time scale, the slow time scale, or both. Model comparison confirmed that two time scales and some degree of interference in either time scale are needed to account for our experimental results. In summary, our results suggest that retention following adaptation is modulated by the degree of between-trial forgetting, which is due to time-based decay in a single adaptation task and to interference in multiple adaptation tasks.
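The two-time-scale models referred to above follow the standard fast/slow state-space form, in which both states learn from the error but the fast state also forgets quickly between trials. A sketch with commonly used illustrative parameter values (the paper fits its own, and adds interference terms):

```python
def simulate(n_trials, f=1.0, a_fast=0.59, b_fast=0.21, a_slow=0.992, b_slow=0.02):
    """Two-state adaptation model: net output is the sum of a fast and a slow process.
    Each trial, both states learn from the residual error; retention factors a_* < 1
    make the fast state forget quickly and the slow state forget slowly."""
    x_fast = x_slow = 0.0
    for _ in range(n_trials):
        error = f - (x_fast + x_slow)
        x_fast = a_fast * x_fast + b_fast * error   # learns fast, decays fast
        x_slow = a_slow * x_slow + b_slow * error   # learns slowly, decays slowly
    return x_fast, x_slow

x_fast, x_slow = simulate(200)
trained = x_fast + x_slow
# After a long delay the fast state has decayed away; the slow state carries retention
retained = x_slow
```

Spacing or interleaving depresses the fast state during training (lower acquisition performance) while the slow state keeps accumulating, which is the mechanism the paper's model comparison supports.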
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2015-12-01
Many people living in low- and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data, including many household sample surveys, are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor, spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought, since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time, and survey, and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).
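The design-based building block for such estimates is a weighted ratio: deaths and exposure are both weighted by the survey weights before the rate is formed. A minimal sketch with hypothetical data (the paper's variance estimator additionally accounts for the complex survey design):

```python
def weighted_mortality_rate(deaths, exposure_years, weights):
    """Hajek-style ratio estimator: weighted deaths over weighted exposure."""
    num = sum(w * d for w, d in zip(weights, deaths))
    den = sum(w * e for w, e in zip(weights, exposure_years))
    return num / den

# Three sampled clusters with unequal survey weights (hypothetical)
deaths = [2, 1, 4]
exposure = [100.0, 80.0, 150.0]   # child-years at risk
weights = [1.5, 1.0, 2.0]
rate = weighted_mortality_rate(deaths, exposure, weights)
```

Ignoring the weights here would bias the rate toward the over-sampled clusters, which is exactly the non-random sampling concern raised above.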
Worldwide incidence of malaria in 2009: estimates, time trends, and a critique of methods.
Directory of Open Access Journals (Sweden)
Richard E Cibulskis
2011-12-01
BACKGROUND: Measuring progress towards Millennium Development Goal 6, including estimates of, and time trends in, the number of malaria cases, has relied on risk maps constructed from surveys of parasite prevalence, and on routine case reports compiled by health ministries. Here we present a critique of both methods, illustrated with national incidence estimates for 2009. METHODS AND FINDINGS: We compiled information on the number of cases reported by National Malaria Control Programs in 99 countries with ongoing malaria transmission. For 71 countries we estimated the total incidence of Plasmodium falciparum and P. vivax by adjusting the number of reported cases using data on reporting completeness, the proportion of suspects that are parasite-positive, the proportion of confirmed cases due to each Plasmodium species, and the extent to which patients use public sector health facilities. All four factors varied markedly among countries and regions. For 28 African countries with less reliable routine surveillance data, we estimated the number of cases from model-based methods that link measures of malaria transmission with case incidence. In 2009, 98% of cases were due to P. falciparum in Africa and 65% in other regions. There were an estimated 225 million malaria cases (5th-95th centiles, 146-316 million) worldwide, 176 (110-248) million in the African region, and 49 (36-68) million elsewhere. Our estimates are lower than other published figures, especially survey-based estimates for non-African countries. CONCLUSIONS: Estimates of malaria incidence derived from routine surveillance data were typically lower than those derived from surveys of parasite prevalence. Carefully interpreted surveillance data can be used to monitor malaria trends in response to control efforts, and to highlight areas where malaria programs and health information systems need to be strengthened. As malaria incidence declines around the world, evaluation of control efforts
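The adjustment of reported cases described above amounts to a chain of multiplicative corrections. The sketch below is a simplified illustration with made-up factor values, not the actual estimation procedure:

```python
def estimated_incidence(confirmed, suspected_untested, positivity,
                        reporting_completeness, public_sector_share):
    """Scale confirmed cases, plus expected positives among untested suspects,
    up for incomplete reporting and for care sought outside the public sector."""
    cases_at_reporting_facilities = confirmed + suspected_untested * positivity
    return cases_at_reporting_facilities / (reporting_completeness * public_sector_share)

# Hypothetical country: 100k confirmed cases, 50k untested suspects,
# 40% test positivity, 80% reporting completeness, 50% public-sector use
est = estimated_incidence(100_000, 50_000, 0.4, 0.8, 0.5)
```

Because the factors multiply, modest uncertainty in each (e.g. the public-sector share) compounds into wide uncertainty in the national total, which is the core of the critique.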
Methods of approaching decoherence in the flavor sector due to space-time foam
Mavromatos, N. E.; Sarkar, Sarben
2006-08-01
In the first part of this work we discuss possible effects of stochastic space-time foam configurations of quantum gravity on the propagation of “flavored” (Klein-Gordon and Dirac) neutral particles, such as neutral mesons and neutrinos. The formalism is not the usually assumed Lindblad one, but it is based on random averages of quantum fluctuations of space-time metrics over which the propagation of the matter particles is considered. We arrive at expressions for the respective oscillation probabilities between flavors which are quite distinct from the ones pertaining to Lindblad-type decoherence, including in addition to the (expected) Gaussian decay with time, a modification to oscillation behavior, as well as a power-law cutoff of the time-profile of the respective probability. In the second part we consider space-time foam configurations of quantum-fluctuating charged-black holes as a way of generating (parts of) neutrino mass differences, mimicking appropriately the celebrated Mikheyev-Smirnov-Wolfenstein (MSW) effects of neutrinos in stochastically fluctuating random media. We pay particular attention to disentangling genuine quantum-gravity effects from ordinary effects due to the propagation of a neutrino through ordinary matter. Our results are of interest to precision tests of quantum-gravity models using neutrinos as probes.
Wang, Tianyang; Jerrett, Michael; Sinsheimer, Peter; Zhu, Yifang
2016-11-01
The Volkswagen Group of America (VW) was found by the US Environmental Protection Agency (EPA) and the California Air Resources Board (CARB) to have installed "defeat devices" and emitted more oxides of nitrogen (NOx) than permitted under current EPA standards. In this paper, we quantify the hidden NOx emissions from this so-called VW scandal and the resulting public health impacts in California. The NOx emissions are calculated based on VW road test data and the CARB Emission Factors (EMFAC) model. Cumulative hidden NOx emissions from 2009 to 2015 were estimated to be over 3500 tons. Changes in adult mortality were estimated based on the ambient fine particulate matter (PM2.5) change due to secondary nitrate formation and the related concentration-response functions. We estimated that hidden NOx emissions from 2009 to 2015 resulted in a total of 12 additional PM2.5-associated adult deaths in California. Most of the mortality increase occurred in metropolitan areas, due to their high population and vehicle density.
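Health-impact assessments of this kind typically convert a PM2.5 concentration change into attributable deaths through a log-linear concentration-response function. A minimal sketch of that calculation; the coefficient, concentration change, and baseline death count below are hypothetical illustrations, not the paper's inputs:

```python
import math

def excess_deaths(beta, delta_c, baseline_deaths):
    """Deaths attributable to a PM2.5 increase of delta_c (ug/m^3),
    using the log-linear concentration-response form RR = exp(beta * delta_c)."""
    # Attributable fraction: 1 - 1/RR = 1 - exp(-beta * delta_c)
    attributable_fraction = 1.0 - math.exp(-beta * delta_c)
    return attributable_fraction * baseline_deaths

# Hypothetical inputs: beta ~ 0.0058 per ug/m^3 (roughly a 6% mortality
# increase per 10 ug/m^3), a 0.02 ug/m^3 average nitrate PM2.5 increase,
# and 100,000 baseline adult deaths in the exposed population.
print(round(excess_deaths(0.0058, 0.02, 100000), 1))
```

For small concentration changes the result is nearly linear in both beta and delta_c, which is why such estimates scale almost proportionally with the assumed emissions.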
International Nuclear Information System (INIS)
Yoshioka, Katsuhiro
1994-01-01
An empirical equation was deduced from studies of the time variations of terrestrial gamma exposure rate and soil moisture content with depth distribution in the surface layer. It was strongly suggested that the variation of terrestrial gamma exposure rate is most strongly influenced by the change of soil moisture content at 5 cm depth. A seasonal variation with a relative maximum in early autumn and a relative minimum in early spring was clearly obtained from long-term measurements of terrestrial gamma exposure rate and degree of soil dryness. The diurnal change and phase difference due to the effect of depth were also obtained in the dynamic characteristics of soil moisture content at 3 different depths. From the comparison between the measured terrestrial gamma exposure rate and that evaluated from soil moisture content using the empirical equation, the two seasonal variations agreed fairly well as a whole. (author)
Real-time moving horizon estimation for a vibrating active cantilever
Abdollahpouri, Mohammad; Takács, Gergely; Rohaľ-Ilkiv, Boris
2017-03-01
Vibrating structures may be subject to changes throughout their operating lifetime due to a range of environmental and technical factors. These variations can be considered as parameter changes in the dynamic model of the structure, while their online estimates can be utilized in adaptive control strategies, or in structural health monitoring. This paper implements the moving horizon estimation (MHE) algorithm on a low-cost embedded computing device that jointly observes the dynamic states and parameter variations of an active cantilever beam in real time. The practical behavior of this algorithm has been investigated in various experimental scenarios. It has been found that, for the given field of application, moving horizon estimation converges faster than the extended Kalman filter; moreover, it reliably handles atypical measurement noise, sensor errors and other extreme changes. Despite its improved performance, the experiments demonstrate that the disadvantage of solving the nonlinear optimization problem in MHE is that it naturally leads to an increase in computational effort.
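The core of MHE is a windowed least-squares problem solved at each step over the recent measurements, with the unknown parameter estimated jointly with the states. A toy sketch under assumed dynamics (a scalar first-order model with made-up noise weights, not the paper's beam model):

```python
import numpy as np
from scipy.optimize import minimize

def mhe_step(y_win, a_guess, q=1.0, r=0.1):
    """One moving-horizon step for x[k+1] = a*x[k] + w, y[k] = x[k] + v:
    jointly fit the window's states and the parameter a by weighted
    least squares over the horizon."""
    def cost(z):
        a, x = z[0], z[1:]
        meas = np.sum((y_win - x) ** 2) / r           # measurement residuals
        proc = np.sum((x[1:] - a * x[:-1]) ** 2) / q  # process residuals
        return meas + proc

    z0 = np.concatenate(([a_guess], y_win))   # warm start: states from data
    res = minimize(cost, z0, method="BFGS")
    return res.x[0], res.x[1:]                # estimated parameter, states

# Synthetic first-order decay data with a_true = 0.95
rng = np.random.default_rng(0)
a_true, x, ys = 0.95, 1.0, []
for _ in range(20):
    ys.append(x + rng.normal(0, 0.05))
    x = a_true * x + rng.normal(0, 0.02)
a_hat, _ = mhe_step(np.array(ys), a_guess=0.5)
print(abs(a_hat - a_true) < 0.2)
```

A real-time implementation would slide the window forward each sample and add an arrival cost summarizing discarded data; the per-step nonlinear solve is exactly the computational burden the abstract mentions.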
Arriga, Nicola; Fratini, Gerardo; Forgione, Antonio; Tomassucci, Michele; Papale, Dario
2010-05-01
Eddy covariance is a well established and widely used methodology for the measurement of turbulent fluxes of mass and energy in the atmospheric boundary layer, in particular to estimate CO2/H2O and heat exchange above ecologically relevant surfaces (Aubinet 2000, Baldocchi 2003). Despite its long-term application and theoretical studies, many issues are still open about the effect of different experimental set-ups on final flux estimates. Open issues are the evaluation of the performance of different kinds of sensors (e.g. open-path vs closed-path infra-red gas analysers, vertical vs horizontal mounting of ultrasonic anemometers), the quantification of the impact of the corresponding physical corrections to be applied to get robust flux estimates taking into account all processes concurring to the measurement (e.g. the so-called WPL term, signal attenuation due to the air sampling system for closed-path analysers, relative position of analyser and anemometer) and the differences between the several data transmission protocols used (analogue, digital RS-232, SDM). A field experiment was designed to study these issues using several instruments among those most used within the Fluxnet community and to compare their performances under conditions supposed to be critical: rainy and cold weather conditions for open-path analysers (Burba 2008), water transport and absorption at high air relative humidity for closed-path systems (Ibrom, 2007), frequency sampling limits and recorded data robustness due to different transmission protocols (RS232, SDM, USB, Ethernet) and finally the effect of the displacement between anemometer and analyser using at least two identical analysers placed at different horizontal and vertical distances from the anemometer. The aim of this experiment is to quantify the effect of several technical solutions on the final estimates of fluxes measured at a point in space and whether they represent a significant source of uncertainty for mass and energy cycle
Flicker Noise in GNSS Station Position Time Series: How much is due to Crustal Loading Deformations?
Rebischung, P.; Chanard, K.; Metivier, L.; Altamimi, Z.
2017-12-01
The presence of colored noise in GNSS station position time series was detected 20 years ago. It has been shown since then that the background spectrum of non-linear GNSS station position residuals closely follows a power-law process (known as flicker noise, 1/f noise or pink noise), with some white noise taking over at the highest frequencies. However, the origin of the flicker noise present in GNSS station position time series is still unclear. Flicker noise is often described as intrinsic to the GNSS system, i.e. due to errors in the GNSS observations or in their modeling, but no such error source has been identified so far that could explain the level of observed flicker noise, nor its spatial correlation. We investigate another possible contributor to the observed flicker noise, namely real crustal displacements driven by surface mass transports, i.e. non-tidal loading deformations. This study is motivated by the presence of power-law noise in the time series of low-degree (≤ 40) and low-order (≤ 12) Stokes coefficients observed by GRACE - power-law noise might also exist at higher degrees and orders, but be obscured by GRACE observational noise. By comparing GNSS station position time series with loading deformation time series derived from GRACE gravity fields, both with their periodic components removed, we therefore assess whether GNSS and GRACE both plausibly observe the same flicker behavior of surface mass transports / loading deformations. Taking into account GRACE observability limitations, we also quantify the amount of flicker noise in GNSS station position time series that could be explained by such flicker loading deformations.
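The flicker (1/f) behavior discussed here is easy to reproduce numerically: shape the Fourier amplitudes of white noise to a power-law spectrum, then recover the spectral index from a log-log fit to the periodogram. A self-contained sketch on synthetic data (not GNSS residuals):

```python
import numpy as np

def powerlaw_noise(n, alpha, rng):
    """Noise with power spectrum S(f) ~ 1/f^alpha, built by shaping the
    Fourier amplitudes of complex white noise (alpha = 1: flicker noise)."""
    f = np.fft.rfftfreq(n, d=1.0)
    shape = np.zeros_like(f)
    shape[1:] = f[1:] ** (-alpha / 2.0)   # amplitude ~ f^(-alpha/2), zero DC
    spec = shape * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))
    return np.fft.irfft(spec, n)

def spectral_slope(x):
    """Least-squares slope of the log-log periodogram."""
    f = np.fft.rfftfreq(len(x), d=1.0)[1:]
    p = np.abs(np.fft.rfft(x))[1:] ** 2
    return np.polyfit(np.log(f), np.log(p), 1)[0]

rng = np.random.default_rng(1)
x = powerlaw_noise(2 ** 14, alpha=1.0, rng=rng)
# For flicker noise the fitted slope should be close to -1
print(0.7 < -spectral_slope(x) < 1.3)
```

In practice GNSS noise analyses use maximum-likelihood estimation with full covariance models rather than a raw periodogram fit, but the diagnostic idea, a straight line of slope -1 in log-log power, is the same.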
International Nuclear Information System (INIS)
Souto, F.J.; Heger, A.S.
2001-01-01
To investigate the effects of radiolytic gas bubbles and thermal expansion on the steady-state operation of solution reactors at the power level required for the production of medical isotopes, a calculational model has been developed. To validate this model, including its principal hypotheses, specific experiments at the Los Alamos National Laboratory SHEBA uranyl fluoride solution reactor were conducted. The following sections describe radiolytic gas generation in solution reactors, the equations to estimate the fuel solution volume change due to radiolytic gas bubbles and thermal expansion, the experiments conducted at SHEBA, and the comparison of experimental results and model calculations. (author)
do Carmo, Eduardo; Goncalves Hönnicke, Marcelo
2018-05-01
There are different ways to introduce and illustrate energy concepts to basic physics students. The explosive seed dispersal mechanism found in a variety of trees could be one of them. Sibipiruna trees bear fruits (pods) that exhibit such an explosive mechanism. During the explosion, the pods throw seeds several meters away. In this manuscript we show simple methodologies to estimate the amount of energy stored in a Sibipiruna tree due to such a process. Two different physics approaches were used to carry out this study: monitoring the explosive seed dispersal mechanism indoors and in situ, and measuring the elastic constant of the pod shell. An energy of the order of kJ was found to be stored in a single tree due to this explosive mechanism.
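Both approaches reduce to textbook energy formulas: elastic energy from the measured spring constant of the shell, and kinetic energy from the seed's launch speed. A sketch with entirely hypothetical numbers (the paper's measured values are not given in this abstract):

```python
def elastic_energy(k, x):
    """Elastic energy stored in a pod shell modeled as a spring: E = k*x^2/2."""
    return 0.5 * k * x ** 2

def kinetic_energy(m, v):
    """Kinetic energy of a seed of mass m launched at speed v: E = m*v^2/2."""
    return 0.5 * m * v ** 2

# Hypothetical values: shell stiffness 200 N/m, 2 cm deflection at release
e_pod = elastic_energy(200.0, 0.02)
# A hypothetical 0.2 g seed leaving at 20 m/s carries a comparable energy
e_seed = kinetic_energy(0.0002, 20.0)
print(e_pod, e_seed)
```

At a few hundredths of a joule per pod, tens of thousands of pods on a mature tree would indeed give a total of the order of kJ, consistent with the abstract's estimate.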
Smith, Stephen A; Brown, Joseph W; Walker, Joseph F
2018-01-01
lognormal (UCLN) models. However, this overlap was often due to imprecise estimates from the UCLN model. We find that "gene shopping" can be an efficient approach to divergence-time inference for phylogenomic datasets that may otherwise be characterized by extensive gene tree heterogeneity.
Stability over Time of Different Methods of Estimating School Performance
Dumay, Xavier; Coe, Rob; Anumendem, Dickson Nkafu
2014-01-01
This paper aims to investigate how stability varies with the approach used in estimating school performance in a large sample of English primary schools. The results show that (a) raw performance is considerably more stable than adjusted performance, which in turn is slightly more stable than growth model estimates; (b) schools' performance…
Detection probabilities for time-domain velocity estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
1991-01-01
programs, it is demonstrated that the probability of correct estimation depends on the signal-to-noise ratio, transducer bandwidth, number of A-lines and number of samples used in the correlation estimate. The influence of applying a stationary echo-canceler is explained. The echo canceling can be modeled...
Multi-processor system for real-time deconvolution and flow estimation in medical ultrasound
DEFF Research Database (Denmark)
Jensen, Jesper Lomborg; Jensen, Jørgen Arendt; Stetson, Paul F.
1996-01-01
of the algorithms. Many of the algorithms can only be properly evaluated in a clinical setting with real-time processing, which generally cannot be done with conventional equipment. This paper therefore presents a multi-processor system capable of performing 1.2 billion floating point operations per second on RF...... filter is used with a second time-reversed recursive estimation step. Here it is necessary to perform about 70 arithmetic operations per RF sample or about 1 billion operations per second for real-time deconvolution. Furthermore, these have to be floating point operations due to the adaptive nature...... interfaced to our previously-developed real-time sampling system that can acquire RF data at a rate of 20 MHz and simultaneously transmit the data at 20 MHz to the processing system via several parallel channels. These two systems can, thus, perform real-time processing of ultrasound data. The advantage...
Freni, G; La Loggia, G; Notaro, V
2010-01-01
Due to the increased occurrence of flooding events in urban areas, many procedures for flood damage quantification have been defined in recent decades. The lack of large databases in most cases is overcome by combining the output of urban drainage models and damage curves linking flooding to expected damage. The application of advanced hydraulic models as diagnostic, design and decision-making support tools has become a standard practice in hydraulic research and application. Flooding damage functions are usually evaluated by a priori estimation of potential damage (based on the value of exposed goods) or by interpolating real damage data (recorded during historical flooding events). Hydraulic models have undergone continuous advancements, pushed forward by increasing computer capacity. The details of the flooding propagation process on the surface and the details of the interconnections between underground and surface drainage systems have been studied extensively in recent years, resulting in progressively more reliable models. The same level of advancement has not been reached with regard to damage curves, for which improvements are highly connected to data availability; this remains the main bottleneck in expected flooding damage estimation. Such functions are usually affected by significant uncertainty intrinsically related to the collected data and to the simplified structure of the adopted functional relationships. The present paper aimed to evaluate this uncertainty by comparing the intrinsic uncertainty connected to the construction of the damage-depth function to the hydraulic model uncertainty. In this way, the paper sought to evaluate the role of hydraulic model detail level in the wider context of flood damage estimation. This paper demonstrated that the use of detailed hydraulic models might not be justified because of the higher computational cost and the significant uncertainty in damage estimation curves. This uncertainty occurs mainly
Musolino, S V; Greenhouse, N A; Hull, A P
1997-10-01
Estimates of the thyroid absorbed doses due to fallout originating from the 1 March 1954 BRAVO thermonuclear test on Bikini Atoll have been made for several inhabited locations in the Northern Marshall Islands. Rongelap, Utirik, Rongerik and Ailinginae Atolls were also inhabited on 1 March 1954, where retrospective thyroid absorbed doses have previously been reconstructed. The current estimates are based primarily on external exposure data, which were recorded shortly after each nuclear test in the Castle Series, and secondarily on soil concentrations of 137Cs in samples collected in 1978 and 1988, along with aerial monitoring done in 1978. The external exposures and 137Cs soil concentrations were representative of the atmospheric transport and deposition patterns of the entire Castle Series tests and show that the BRAVO test was the major contributor to fallout exposure during the Castle series and other test series which were carried out in the Marshall Islands. These data have been used as surrogates for fission product radioiodines and telluriums in order to estimate the range of thyroid absorbed doses that may have occurred throughout the Marshall Islands. Dosimetry based on these two sets of estimates agreed within a factor of 4 at the locations where BRAVO was the dominant contributor to the total exposure and deposition. Both methods indicate that thyroid absorbed doses in the range of 1 Gy (100 rad) may have been incurred in some of the northern locations, whereas the doses at southern locations did not significantly exceed levels comparable to those from worldwide fallout. The results of these estimates indicate that a systematic medical survey for thyroid disease should be conducted, and that a more definitive dose reconstruction should be made for all the populated atolls and islands in the Northern Marshall Islands beyond Rongelap, Utirik, Rongerik and Ailinginae, which were significantly contaminated by BRAVO fallout.
International Nuclear Information System (INIS)
Musolino, S.V.; Hull, A.P.; Greenhouse, N.A.
1997-01-01
Estimates of the thyroid absorbed doses due to fallout originating from the 1 March 1954 BRAVO thermonuclear test on Bikini Atoll have been made for several inhabited locations in the Northern Marshall Islands. Rongelap, Utirik, Rongerik and Ailinginae Atolls were also inhabited on 1 March 1954, where retrospective thyroid absorbed doses have previously been reconstructed. Current estimates are based primarily on external exposure data, which were recorded shortly after each nuclear test in the Castle Series, and secondarily on soil concentrations of 137Cs in samples collected in 1978 and 1988, along with aerial monitoring done in 1978. External exposures and 137Cs soil concentrations were representative of the atmospheric transport and deposition patterns of the entire Castle Series tests and show that the BRAVO test was the major contributor to fallout exposure during the Castle series and other test series which were carried out in the Marshall Islands. These data have been used as surrogates for fission product radioiodines and telluriums in order to estimate the range of thyroid absorbed doses that may have occurred throughout the Marshall Islands. Dosimetry based on these two sets of estimates agreed within a factor of 4 at the locations where BRAVO was the dominant contributor to the total exposure and deposition. Both methods indicate that thyroid absorbed doses in the range of 1 Gy (100 rad) may have been incurred in some of the northern locations, whereas the doses at southern locations did not significantly exceed levels comparable to those from worldwide fallout. The results of these estimates indicate that a systematic medical survey for thyroid disease should be conducted, and that a more definitive dose reconstruction should be made for all the populated atolls and islands in the Northern Marshall Islands beyond Rongelap, Utirik, Rongerik and Ailinginae, which were significantly contaminated by BRAVO fallout. 30 refs., 2 figs., 10 tabs
Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment
Directory of Open Access Journals (Sweden)
Qi Liu
2016-08-01
Full Text Available Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of the core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data on each task have drawn interest, and a detailed analysis report has been made. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
Only through perturbation can relaxation times be estimated
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Lansky, Petr
2012-01-01
Estimation of model parameters is as important as model building, but is often neglected in model studies. Here we show that despite the existence of well known results on parameter estimation in a simple homogenous Ornstein-Uhlenbeck process, in most practical situations the methods suffer greatly...... on computer experiments based on applications in neuroscience and pharmacokinetics, which show a striking improvement of the quality of estimation. The results are important for judicious designs of experiments to obtain maximal information from each data point, especially when samples are expensive...
Time Domain Frequency Stability Estimation Based On FFT Measurements
National Research Council Canada - National Science Library
Chang, P
2004-01-01
.... In this paper, the biases of the Fast Fourier transform (FFT) spectral estimate with a Hanning window are checked, and the resulting unbiased spectral density is used to calculate the Allan variance...
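For reference, the time-domain definition of the (non-overlapping) Allan variance that such FFT-based estimates target is straightforward; for white frequency noise it should fall roughly as 1/tau, as the sketch below checks on synthetic data:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y
    at averaging factor m (tau = m * sample interval)."""
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)   # tau-averages
    return 0.5 * np.mean(np.diff(means) ** 2)       # half mean squared diff

# White frequency noise: AVAR(tau) ~ 1/tau, so the m=1 to m=10 ratio is ~10
rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, 100000)
a1, a10 = allan_variance(y, 1), allan_variance(y, 10)
print(8.0 < a1 / a10 < 12.0)
```

The slope of log AVAR versus log tau identifies the dominant noise type (white FM: -1, flicker FM: 0, random-walk FM: +1), which is why the Allan variance is the standard time-domain frequency-stability measure.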
A model for beta skin dose estimation due to the use of a necklace with depleted uranium bullets
International Nuclear Information System (INIS)
Lavalle Heibron, P.H.; Pérez Guerrero, J.S.; Oliveira, J.F. de
2015-01-01
Depleted uranium (DU) bullets were used as munitions during the Kuwait-Iraq war, and the International Atomic Energy Agency's sampling expert team found fragments in the environment when the war was over. Consequently, there is a possibility that members of the public, especially children, collect DU fragments and use them, for example, to make a necklace. This paper estimates the beta skin dose to a child who uses a necklace made with a depleted uranium bullet. The theoretical model for dose estimation is based on Loevinger's equation with a correction factor adjusted for the maximum beta energy in the range between 0.1 and 2.5 MeV, calculated taking into account the International Atomic Energy Agency's expected dose rates in air at one meter distance from a point source of 37 GBq, as a function of the maximum beta energy. The dose rate estimated by this work due to a child's use of a necklace with one depleted uranium bullet of 300 g was in good agreement with other results found in the literature. (authors)
International Nuclear Information System (INIS)
Hunt, J. G.; Nosske, D.; Dos Santos, D. S.
2005-01-01
) due to the incorporation of 1 Bq of a radionuclide by the mother. This information may be used to provide external dose estimates to the infant in the case of a known or suspected radionuclide incorporation by the mother due to, for example, a nuclear medicine procedure. (authors)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a general formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
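The composite-versus-optimal comparison can be illustrated with an assumed exponential signal autocovariance: the optimal (minimum mean-squared-error, Gauss-Markov) weights solve a linear system built from the covariances, and their expected error is never worse than that of equal composite weights. The covariance model and parameters below are assumptions for illustration, not the CZCS statistics:

```python
import numpy as np

def expected_mse(w, C, c, var_mean):
    """Expected squared error of the linear estimate w @ y of the true
    time average, given obs covariance C and obs-average covariance c."""
    return var_mean - 2.0 * w @ c + w @ C @ w

rng = np.random.default_rng(3)
t_obs = np.sort(rng.uniform(0, 10, 15))      # irregular sampling times
t_grid = np.linspace(0, 10, 400)             # grid defining the time average
corr_len, noise_var = 2.0, 0.25              # assumed signal/noise statistics

# Unit-variance signal with exponential autocovariance, plus white noise
d = np.abs(t_obs[:, None] - t_obs[None, :])
C = np.exp(-d / corr_len) + noise_var * np.eye(len(t_obs))
c = np.mean(np.exp(-np.abs(t_obs[:, None] - t_grid[None, :]) / corr_len), axis=1)
var_mean = np.mean(np.exp(-np.abs(t_grid[:, None] - t_grid[None, :]) / corr_len))

w_opt = np.linalg.solve(C, c)                         # optimal weights
w_comp = np.full(len(t_obs), 1.0 / len(t_obs))        # composite weights
print(expected_mse(w_opt, C, c, var_mean) < expected_mse(w_comp, C, c, var_mean))
```

Because the optimal weights minimize the quadratic error functional exactly, the inequality holds by construction; the paper's point is that mildly mis-specified ("suboptimal") covariances still leave most of this advantage intact.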
Xie, Xiang-Peng; Yue, Dong; Park, Ju H
2018-02-01
The paper provides relaxed designs of fault estimation observers for nonlinear dynamical plants in the Takagi-Sugeno form. Compared with previous theoretical achievements, a modified version of the fuzzy fault estimation observer is implemented with the aid of the so-called maximum-priority-based switching law. Given each activated switching status, an appropriate group of designed matrices can be provided so as to exploit certain key properties of the considered plants by means of introducing a set of matrix-valued variables. Because more abundant information about the considered plants is updated in due course and effectively exploited at each time instant, the result is less conservative than previous theoretical achievements, and thus the main defect of those existing methods can be overcome to some extent in practice. Finally, comparative simulation studies on the classical nonlinear truck-trailer model are given to certify the benefits of the theoretical achievement obtained in our study.
Time-dependent 2-D modeling of edge plasma transport with high intermittency due to blobs
International Nuclear Information System (INIS)
Pigarov, A. Yu.; Krasheninnikov, S. I.; Rognlien, T. D.
2012-01-01
The results on time-dependent 2-D fluid modeling of edge plasmas with non-diffusive intermittent transport across the magnetic field (termed cross-field) based on the novel macro-blob approach are presented. The capability of this approach to simulate the long temporal evolution (∼0.1 s) of the background plasma and simultaneously the fast spatiotemporal dynamics of blobs (∼10⁻⁴ s) is demonstrated. An analysis of a periodic sequence of many macro-blobs (PSMB) is given, showing that the resulting plasma attains a dynamic equilibrium. Plasma properties in the dynamic equilibrium are discussed. In PSMB modeling, the effect of macro-blob generation frequency on edge plasma parameters is studied. A comparison between PSMB modeling and experimental profile data is given. The calculations are performed for the same plasma discharge using two different models for anomalous cross-field transport: time-average convection and PSMB. Parametric analysis of edge plasma variation with transport coefficients in these models is presented. The capability of the models to accurately simulate enhanced transport due to blobs is compared. Impurity dynamics in edge plasma with macro-blobs is also studied, showing a strong impact of macro-blobs on profiles of impurity charge states caused by enhanced outward transport of high-charge states and simultaneous inward transport of low-charge states towards the core. Macro-blobs cause enhancement of sputtering rates, increase radiation and impurity concentration in plasma, and change erosion/deposition patterns.
DEFF Research Database (Denmark)
Tabatabaeipour, Seyed Mojtaba; Bak, Thomas
2012-01-01
In this paper we consider the problem of fault estimation and accommodation for discrete time piecewise linear systems. A robust fault estimator is designed to estimate the fault such that the estimation error converges to zero and H∞ performance of the fault estimation is minimized. Then, the es...
Deepwater Horizon - Estimating surface oil volume distribution in real time
Lehr, B.; Simecek-Beatty, D.; Leifer, I.
2011-12-01
Spill responders to the Deepwater Horizon (DWH) oil spill required both the relative spatial distribution and total oil volume of the surface oil. The former was needed on a daily basis to plan and direct local surface recovery and treatment operations. The latter was needed less frequently to provide information for strategic response planning. Unfortunately, the standard spill observation methods were inadequate for an oil spill this size, and new, experimental methods were not ready to meet the operational demands of near real-time results. Traditional surface oil estimation tools for large spills include satellite-based sensors to define the spatial extent (but not thickness) of the oil, complemented with trained observers in small aircraft, sometimes supplemented by active or passive remote sensing equipment, to determine surface percent coverage of the 'thick' part of the slick, where the vast majority of the surface oil exists. These tools were also applied to DWH in the early days of the spill, but the sheer size of the spill prevented synoptic information on the surface slick through the use of small aircraft. Also, satellite images of the spill, while large in number, varied considerably in image quality, requiring skilled interpretation to identify oil and eliminate false positives. Qualified staff to perform this task were soon in short supply. However, large spills are often events that overcome organizational inertia to the use of new technology. Two prime examples in DWH were the application of hyper-spectral scans from a high-altitude aircraft and more traditional fixed-wing aircraft using multi-spectral scans processed by a neural network to determine, respectively, absolute or relative oil thickness. But, with new technology, come new challenges. The hyper-spectral instrument required special viewing conditions that were not present on a daily basis and analysis infrastructure to process the data that was not available at the command
Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil
2016-06-01
An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, has been presented. We have shown the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt at applying DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and η). The error energy maps generated for some parameter pairs have successfully revealed the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, which is a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained show the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), have been considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated by the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both synthetic and field data examples the algorithm has provided reliable parameter estimations being within the sampling limits of
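The DE/best/1/bin strategy named here mutates around the current best population member with binomial crossover and greedy selection. A minimal sketch on a generic quadratic misfit (a stand-in; the paper's actual objective is the gravity-anomaly misfit, which is not reproduced here):

```python
import numpy as np

def de_best_1_bin(obj, bounds, pop_size=30, F=0.8, CR=0.9, iters=200, seed=4):
    """Minimal DE/best/1/bin: mutation around the current best member,
    binomial crossover, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.uniform(size=(pop_size, dim)) * (hi - lo)
    fit = np.array([obj(p) for p in pop])
    for _ in range(iters):
        best = pop[np.argmin(fit)].copy()
        for i in range(pop_size):
            r1, r2 = rng.choice(pop_size, size=2, replace=False)
            mutant = np.clip(best + F * (pop[r1] - pop[r2]), lo, hi)
            cross = rng.uniform(size=dim) < CR
            cross[rng.integers(dim)] = True       # at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            if f_trial <= fit[i]:                 # greedy selection
                pop[i], fit[i] = trial, f_trial
    i_best = np.argmin(fit)
    return pop[i_best], fit[i_best]

# Toy misfit with a known minimum at (A, zo, xo) = (1.5, 4.0, -2.0)
def misfit(p):
    return float(np.sum((p - np.array([1.5, 4.0, -2.0])) ** 2))

x_best, f_best = de_best_1_bin(misfit, np.array([[-10.0, 10.0]] * 3))
print(f_best < 1e-3)
```

The "best/1" mutation makes convergence fast on unimodal misfits at some risk of premature convergence on multimodal ones, which is why the paper pairs the point estimates with M-H sampling to characterize uncertainty.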
On the validity of time-dependent AUC estimators.
Schmid, Matthias; Kestler, Hans A; Potapov, Sergej
2015-01-01
Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied.
Real-time Loudspeaker Distance Estimation with Stereo Audio
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Gaubitch, Nikolay; Heusdens, Richard
2015-01-01
Knowledge on how a number of loudspeakers are positioned relative to a listening position can be used to enhance the listening experience. Usually, these loudspeaker positions are estimated using calibration signals, either audible or psycho-acoustically hidden inside the desired audio signal...
International Nuclear Information System (INIS)
Sathyapriya, R.S.; Suma Nair; Prabhath, R.K.; Madhu Nair; Rao, D.D.
2012-01-01
A study was conducted to estimate the thorium concentration in locally grown vegetables in a high background radiation area (HBRA) of the southern coastal regions of India. Locally grown vegetables were collected from the HBRA, and their thorium concentration was quantified using instrumental neutron activation analysis. The samples were irradiated at the CIRUS reactor and counted using a 40% relative efficiency HPGe detector coupled to an MCA. The annual intake of thorium was evaluated using the consumption data provided by the National Nutrition Monitoring Board. The daily intake of ²³²Th from the four food categories (green leafy vegetables, other vegetables, roots and tubers, and fruits) ranged between 0.27 and 5.352 mBq d⁻¹. The annual internal dose due to ingestion of thorium from these food categories was 46.8 × 10⁻⁸ Sv y⁻¹ for females and 58.6 × 10⁻⁸ Sv y⁻¹ for males. (author)
Hallez, Hans; Staelens, Steven; Lemahieu, Ignace
2009-10-01
EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.
Directory of Open Access Journals (Sweden)
Xiuli Wu
2018-03-01
Full Text Available Renewable energy is an alternative to non-renewable energy for reducing the carbon footprint of manufacturing systems. Determining how to construct an energy-efficient scheduling solution when both renewable and non-renewable energy drive production is of great importance. In this paper, a multi-objective flexible flow shop scheduling problem that considers variable processing time due to renewable energy (MFFSP-VPTRE) is studied. First, the optimization model of the MFFSP-VPTRE is formulated, considering the periodicity of renewable energy and the limitations of energy storage capacity. Then, a hybrid non-dominated sorting genetic algorithm with variable local search (HNSGA-II) is proposed to solve the MFFSP-VPTRE. An operation- and machine-based encoding method is employed. A low-carbon scheduling algorithm is presented. Besides the crossover and mutation, a variable local search is used to improve the offspring's Pareto set. The offspring and the parents are combined, and those that dominate more are selected to continue evolving. Finally, two groups of experiments are carried out. The results show that the low-carbon scheduling algorithm can effectively reduce the carbon footprint under the premise of makespan optimization, and that the HNSGA-II outperforms the traditional NSGA-II and can solve the MFFSP-VPTRE effectively and efficiently.
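The non-dominated sorting at the heart of NSGA-II-style algorithms such as the HNSGA-II above can be sketched as follows. This is the generic textbook procedure for minimization, not the authors' implementation; the example objective pairs are illustrative:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Group solution indices into Pareto fronts (front 0 = non-dominated set)."""
    n = len(objs)
    dominated = [[] for _ in range(n)]   # solutions that i dominates
    counts = [0] * n                     # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated[i]:
                counts[j] -= 1
                if counts[j] == 0:       # only dominated by earlier fronts
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# e.g. hypothetical (makespan, carbon footprint) pairs for five candidate schedules
fronts = non_dominated_sort([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

In NSGA-II the combined parent+offspring population is filled front by front from this sort, which matches the "those that dominate more are selected to continue evolving" step above.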
International Nuclear Information System (INIS)
Albers, D.J.; Hripcsak, George
2012-01-01
Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF-estimator-independent, time-delayed mutual information bias estimate. ► Quantification of data-set-size limits of the time-delayed mutual information calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus, intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
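The paper's key idea, taking the time-delayed mutual information (TDMI) at a lag far beyond the correlation time as an empirical estimate of the bias, can be sketched with a plain histogram estimator. The bin count, series length, and AR(1) test signal are illustrative choices, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_info(x, y, bins=10):
    """Plug-in histogram estimate of the mutual information (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def tdmi(x, lag, bins=10):
    """Mutual information between the series and itself shifted by `lag`."""
    return mutual_info(x[:-lag], x[lag:], bins)

# AR(1) test series: strong dependence at short lags, essentially none at long lags
n = 20000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.95 * x[t - 1] + rng.normal()

short = tdmi(x, 1)      # genuine dependence plus estimation bias
bias = tdmi(x, 5000)    # lag far beyond the correlation time: almost pure bias
```

Because the plug-in estimator is biased upward on finite data, `bias` is small but positive even where the true TDMI is zero, which is exactly the quantity the paper proposes to estimate and subtract.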
International Nuclear Information System (INIS)
Kumaresan, M.; Chaubey, Ajay; Kantharia, Surita; Karira, V.; Kumar, Rajesh; Biju, K.; Rao, B.S.
2006-01-01
Organ doses and the risks of carcinogenesis and genetic effects due to abdominal region radiography in Indian adults were estimated with the Monte Carlo MCNP code, by measuring the entrance skin dose with LiF:Mg,Cu,P TL phosphor and applying the risk coefficients provided by ICRP 60. The entrance skin dose for abdominal region radiography ranged from 2.75 mSv to 18.88 mSv, while the average entrance skin dose was 8.3 mSv. The bladder, testes and ovaries are the important organs receiving the higher doses. The maximum doses for the testes, ovaries and bladder were 5.37 mSv, 1.45 mSv and 4.74 mSv, respectively. The frequency of occurrence of fatal cancers and serious genetic disorders as a consequence of abdominal region radiography ranges from 0.1 to 38.8 risk/10⁶ of fatal cancer. Although the estimated risks are small, they cannot be neglected. It is important to avoid unnecessary repetitions and to carry out proper quality assurance tests on the equipment; in the long run this will help reduce the risks and maximize the benefits of radiodiagnosis. These studies may lead to the setting up of national reference levels for diagnostic procedures in India. (author)
Egleston, Brian L.; Scharfstein, Daniel O.; MacKenzie, Ellen
2008-01-01
We focus on estimation of the causal effect of treatment on the functional status of individuals at a fixed point in time t* after they have experienced a catastrophic event, from observational data with the following features: (1) treatment is imposed shortly after the event and is non-randomized, (2) individuals who survive to t* are scheduled to be interviewed, (3) there is interview non-response, (4) individuals who die prior to t* are missing information on pre-event confounders, (5) medical records are abstracted on all individuals to obtain information on post-event, pre-treatment confounding factors. To address the issue of survivor bias, we seek to estimate the survivor average causal effect (SACE), the effect of treatment on functional status among the cohort of individuals who would survive to t* regardless of whether or not assigned to treatment. To estimate this effect from observational data, we need to impose untestable assumptions, which depend on the collection of all confounding factors. Since pre-event information is missing on those who die prior to t*, it is unlikely that these data are missing at random (MAR). We introduce a sensitivity analysis methodology to evaluate the robustness of SACE inferences to deviations from the MAR assumption. We apply our methodology to the evaluation of the effect of trauma center care on vitality outcomes using data from the National Study on Costs and Outcomes of Trauma Care. PMID:18759833
Anthropogenic CO2 in the oceans estimated using transit time distributions
International Nuclear Information System (INIS)
Waugh, D.W.; McNeil, B.I.
2006-01-01
The distribution of anthropogenic carbon (Cant) in the oceans is estimated using the transit time distribution (TTD) method applied to global measurements of chlorofluorocarbon-12 (CFC12). Unlike most other inference methods, the TTD method does not assume a single ventilation time and avoids the large uncertainty incurred by attempts to correct for the large natural carbon background in dissolved inorganic carbon measurements. The highest concentrations and deepest penetration of anthropogenic carbon are found in the North Atlantic and Southern Oceans. The estimated total inventory in 1994 is 134 Pg-C. To evaluate uncertainties, the TTD method is applied to output from an ocean general circulation model (OGCM) and the results are compared to the directly simulated Cant. Outside of the Southern Ocean the predicted Cant closely matches the directly simulated distribution, but in the Southern Ocean the TTD concentrations are biased high due to the assumption of 'constant disequilibrium'. The net result is a TTD overestimate of the global inventory by about 20%. Accounting for this bias and other centred uncertainties, an inventory range of 94-121 Pg-C is obtained. This agrees with the inventory of Sabine et al., who applied the DeltaC* method to the same data. There are, however, significant differences in the spatial distributions: the TTD estimates are smaller than DeltaC* in the upper ocean and larger at depth, consistent with biases expected in DeltaC* given its assumption of a single parcel ventilation time.
Effect of parameter calculation in direct estimation of the Lyapunov exponent in short time series
Directory of Open Access Journals (Sweden)
A. M. López Jiménez
2002-01-01
Full Text Available The literature on non-linear dynamics offers a few recommendations, which sometimes are divergent, about the criteria to be used in order to select the optimal calculation parameters in the estimation of Lyapunov exponents by direct methods. These few recommendations are circumscribed to the analysis of chaotic systems. We have found no recommendation for the estimation of λ starting from the time series of classic systems. The reason for this is the interest in distinguishing variability due to chaotic behavior of deterministic dynamic systems from variability caused by white noise or linear stochastic processes, and the lesser interest in the identification of non-linear terms from the analysis of time series. In this study we have centered on the dependence of the Lyapunov exponent, obtained by means of direct estimation, on the initial distance and the time evolution. We have used generated series of chaotic systems and generated series of classic systems with varying complexity. To generate the series we have used the logistic map.
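A direct (trajectory-divergence) estimate of λ for the logistic map mentioned above can be sketched as follows. The initial distance d0, the number of steps, and the renormalize-every-step scheme are illustrative choices of exactly the calculus parameters whose influence the paper studies; for r = 4 the exact exponent is ln 2 ≈ 0.693:

```python
import math

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def lyapunov_direct(x0, d0=1e-8, steps=20000, transient=500):
    """Direct estimate: follow a reference orbit and a neighbor started d0 away,
    accumulate log(divergence / d0) per step, and renormalize the neighbor back
    to distance d0 after every step (Benettin-style renormalization)."""
    x = x0
    for _ in range(transient):            # discard the transient of the reference orbit
        x = logistic(x)
    y = x + d0
    total = 0.0
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        d = abs(y - x)
        if d == 0.0:
            d = d0                        # degenerate step; contributes log(1) = 0
        total += math.log(d / d0)
        y = x + (d0 if y >= x else -d0)   # renormalize to the initial separation
    return total / steps

lam = lyapunov_direct(0.3)                # expected: close to ln 2 for r = 4
```

Varying `d0` and `steps` here reproduces the kind of parameter-dependence experiment the abstract describes: too large a `d0` leaves the linear regime, while too few steps gives a noisy average.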
Estimation of time-varying growth, uptake and excretion rates from dynamic metabolomics data.
Cinquemani, Eugenio; Laroute, Valérie; Cocaign-Bousquet, Muriel; de Jong, Hidde; Ropers, Delphine
2017-07-15
Technological advances in metabolomics have made it possible to monitor the concentration of extracellular metabolites over time. From these data, it is possible to compute the rates of uptake and excretion of the metabolites by a growing cell population, providing precious information on the functioning of intracellular metabolism. The computation of the rate of these exchange reactions, however, is difficult to achieve in practice for a number of reasons, notably noisy measurements, correlations between the concentration profiles of the different extracellular metabolites, and discontinuities in the profiles due to sudden changes in metabolic regime. We present a method for precisely estimating time-varying uptake and excretion rates from time-series measurements of extracellular metabolite concentrations, specifically addressing all of the above issues. The estimation problem is formulated in a regularized Bayesian framework and solved by a combination of extended Kalman filtering and smoothing. The method is shown to improve upon methods based on spline smoothing of the data. Moreover, when applied to two actual datasets, the method recovers known features of overflow metabolism in Escherichia coli and Lactococcus lactis, and provides evidence for acetate uptake by L. lactis after glucose exhaustion. The results raise interesting perspectives for further work on rate estimation from measurements of intracellular metabolites. The Matlab code for the estimation method is available for download at https://team.inria.fr/ibis/rate-estimation-software/, together with the datasets. eugenio.cinquemani@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Estimation of time-dependent input from neuronal membrane potential
Czech Academy of Sciences Publication Activity Database
Kobayashi, R.; Shinomoto, S.; Lánský, Petr
2011-01-01
Roč. 23, č. 12 (2011), s. 3070-3093 ISSN 0899-7667 R&D Projects: GA MŠk(CZ) LC554; GA ČR(CZ) GAP103/11/0282 Institutional research plan: CEZ:AV0Z50110509 Keywords : neuronal coding * statistical estimation * Bayes method Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.884, year: 2011
International Nuclear Information System (INIS)
Kovacs, Tibor; Somlai, Janos; Nagy, Katalin; Szeiler, Gabor
2007-01-01
It is known that tobacco leaves may contain ²¹⁰Pb and ²¹⁰Po in significant concentrations. The cumulative alpha-radiation dose due to the radioactive content of inhaled cigarette smoke and the increasing number of lung cancer cases explain the importance of the investigation. The present study investigated the activity concentrations of these two radionuclides in 29 Hungarian cigarette samples. The relation between the ²¹⁰Po/²¹⁰Pb activity and the nicotine/tar content of these cigarettes was also examined. ²¹⁰Po was determined by alpha spectrometry using a PIPS detector after chemical leaching and spontaneous deposition of ²¹⁰Po on a high nickel-content (25%) stainless steel disk. The ²¹⁰Pb activity was calculated from the ²¹⁰Po originating from the decay of ²¹⁰Pb after a waiting period of eight months. The ²¹⁰Po activity concentrations of the measured types of cigarettes ranged from 10.0 to 33.5 mBq/cigarette, and the activity of ²¹⁰Pb varied from 9.6 to 32.5 mBq/cigarette. The average annual committed effective dose due to cigarette smoking (20 cigarettes/day) is estimated to be 185.6 ± 70.6 μSv/y for ²¹⁰Po and 58.7 ± 22.7 μSv/y for ²¹⁰Pb.
Directory of Open Access Journals (Sweden)
Yong-Sheng Zhang
2015-01-01
Full Text Available The walking, waiting, transfer, and delayed in-vehicle travel times mainly contribute to a route's travel time reliability in the metro system. The automatic fare collection (AFC) system provides huge amounts of smart card records which can be used to estimate the distributions of all these times. A new estimation model based on a Bayesian inference formulation is proposed in this paper by integrating the probability measurement of the OD pair with only one effective route, in which all kinds of times follow truncated normal distributions. Then, a Markov Chain Monte Carlo method is designed to estimate all parameters endogenously. Finally, based on AFC data from the Guangzhou Metro, the estimations show that all parameters can be estimated endogenously and identifiably. Meanwhile, the truncated property of the travel time is significant, and the threshold tested by the surveyed data is reliable. Furthermore, the superiority of the proposed model over the existing model in estimation and forecasting accuracy is also demonstrated.
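A minimal sketch of the Markov Chain Monte Carlo machinery used above: a random-walk Metropolis-Hastings sampler drawing from a truncated normal. The parameters below are assumed, illustrative values; the paper's model infers such parameters from AFC records rather than fixing them:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_trunc_normal(x, mu=3.0, sigma=1.5, lo=0.0, hi=6.0):
    """Unnormalized log-density of N(mu, sigma^2) truncated to [lo, hi]."""
    if x < lo or x > hi:
        return -np.inf
    return -0.5 * ((x - mu) / sigma) ** 2

def metropolis_hastings(logp, x0, n, step=1.5):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    x = x0
    out = np.empty(n)
    for i in range(n):
        prop = x + step * rng.normal()
        # accept with probability min(1, p(prop)/p(x)); out-of-bounds proposals
        # have log-density -inf and are always rejected
        if np.log(rng.random()) < logp(prop) - logp(x):
            x = prop
        out[i] = x
    return out

samples = metropolis_hastings(log_trunc_normal, x0=3.0, n=50000)
```

Because the normalizing constant of the truncated density cancels in the acceptance ratio, the sampler never needs it, which is what makes M-H convenient for truncated-normal travel-time models.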
EEG phase reset due to auditory attention: an inverse time-scale approach
International Nuclear Information System (INIS)
Low, Yin Fen; Strauss, Daniel J
2009-01-01
We propose a novel tool to evaluate the electroencephalograph (EEG) phase reset due to auditory attention by utilizing an inverse analysis of the instantaneous phase for the first time. EEGs were acquired through auditory attention experiments with a maximum entropy stimulation paradigm. We examined single sweeps of the auditory late response (ALR) with the complex continuous wavelet transform. The phase in the frequency band that is associated with auditory attention (6–10 Hz, termed the theta–alpha border) was reset to the mean phase of the averaged EEGs. The inverse transform was applied to reconstruct the phase-modified signal. We found significant enhancement of the N100 wave in the reconstructed signal. Analysis of the phase noise shows the effects of phase jittering on the generation of the N100 wave, implying that a preferred phase is necessary to generate the event-related potential (ERP). Power spectrum analysis shows a remarkable increase of evoked power but little change of total power after stabilizing the phase of the EEGs. Furthermore, resetting the phase only at the theta–alpha border of the no-attention data to the mean phase of the attention data yields a result that resembles the attention data. These results show strong connections between EEGs and ERPs; in particular, we suggest that the presentation of an auditory stimulus triggers the phase reset process at the theta–alpha border, which leads to the emergence of the N100 wave. It is concluded that our study reinforces other studies on the importance of the EEG in ERP genesis.
Estimating time-dependent connectivity in marine systems
Defne, Zafer; Ganju, Neil K.; Aretxabaleta, Alfredo
2016-01-01
Hydrodynamic connectivity describes the sources and destinations of water parcels within a domain over a given time. When combined with biological models, it can be a powerful concept to explain the patterns of constituent dispersal within marine ecosystems. However, providing connectivity metrics for a given domain is a three-dimensional problem: two dimensions in space to define the sources and destinations and a time dimension to evaluate connectivity at varying temporal scales. If the time scale of interest is not predefined, then a general approach is required to describe connectivity over different time scales. For this purpose, we have introduced the concept of a “retention clock” that highlights the change in connectivity through time. Using the example of connectivity between protected areas within Barnegat Bay, New Jersey, we show that a retention clock matrix is an informative tool for multitemporal analysis of connectivity.
Overcoming equifinality: Leveraging long time series for stream metabolism estimation
Appling, Alison; Hall, Robert O.; Yackulic, Charles B.; Arroita, Maite
2018-01-01
The foundational ecosystem processes of gross primary production (GPP) and ecosystem respiration (ER) cannot be measured directly but can be modeled in aquatic ecosystems from subdaily patterns of oxygen (O2) concentrations. Because rivers and streams constantly exchange O2 with the atmosphere, models must either use empirical estimates of the gas exchange rate coefficient (K600) or solve for all three parameters (GPP, ER, and K600) simultaneously. Empirical measurements of K600 require substantial field work and can still be inaccurate. Three-parameter models have suffered from equifinality, where good fits to O2 data are achieved by many different parameter values, some unrealistic. We developed a new three-parameter, multiday model that ensures similar values for K600 among days with similar physical conditions (e.g., discharge). Our new model overcomes the equifinality problem by (1) flexibly relating K600 to discharge while permitting moderate daily deviations and (2) avoiding the oft-violated assumption that residuals in O2 predictions are uncorrelated. We implemented this hierarchical state-space model and several competitor models in an open-source R package, streamMetabolizer. We then tested the models against both simulated and field data. Our new model reduces error by as much as 70% in daily estimates of K600, GPP, and ER. Further, accuracy benefits of multiday data sets require as few as 3 days of data. This approach facilitates more accurate metabolism estimates for more streams and days, enabling researchers to better quantify carbon fluxes, compare streams by their metabolic regimes, and investigate controls on aquatic activity.
International Nuclear Information System (INIS)
Rabitsch, H.; Kahr, G.
1991-07-01
During the first months following the fallout we measured the activities of I-131 in some human thyroids. To study the long-term variation of radiocesium with time, we observed the activity levels of Cs-134 and Cs-137 in human muscle tissues over a period of 4 years. Simultaneously, we determined the activities of the naturally occurring potassium-40 in all samples, which were taken at forensic autopsies of persons deceased in the area of Graz. Comparisons of the iodine and radiocesium activities measured in the samples with data obtained by other studies after nuclear weapon tests are given. The average individual thyroid dose was calculated to be 556.1 μSv. The main part of this thyroid dose is caused by the inhalation pathway. Effective individual dose equivalents originating from the radiocesium body content were calculated by means of time-integrated activities and the method of absorbed fractions. Dose estimates were based on data for the Standard Man, and a distribution factor of 0.7 was assumed with regard to the amount of radiocesium and K-40 in muscle mass. From the measurements, we estimated a mean individual effective dose equivalent of 252.2 μSv due to internal exposure to radiocesium during the 4 years following the fallout. Estimated dose values are compared with predictions and with the exposure caused by K-40. (Authors, shortened by Quittner)
International Nuclear Information System (INIS)
Endo, S.; Kimura, S.; Takatsuji, T.; Nanasawa, K.; Imanaka, T.; Shizuma, K.
2012-01-01
Soil sampling was carried out at an early stage of the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident. Samples were taken from areas around FDNPP, at four locations northwest of FDNPP, at four schools and in four cities, including Fukushima City. Radioactive contaminants in soil samples were identified and measured by using a Ge detector and included ¹²⁹ᵐTe, ¹²⁹Te, ¹³¹I, ¹³²Te, ¹³²I, ¹³⁴Cs, ¹³⁶Cs, ¹³⁷Cs, ¹⁴⁰Ba and ¹⁴⁰La. The highest soil depositions were measured to the northwest of FDNPP. From this soil deposition data, variations in dose rates over time and the cumulative external doses at the locations for 3 months and 1 y after deposition were estimated. At locations northwest of FDNPP, the external dose rate at 3 months after deposition was 4.8–98 μSv/h and the cumulative dose for 1 y was 51 to 1.0 × 10³ mSv; the highest values were at Futaba Yamada. At the four schools, which were used as evacuation shelters, and in the four urban cities, the external dose rate at 3 months after deposition ranged from 0.03 to 3.8 μSv/h and the cumulative doses for 1 y ranged from 3 to 40 mSv. The cumulative dose at Fukushima Niihama Park was estimated as the highest in the four cities. The estimated external dose rates and cumulative doses show that careful countermeasures and remediation will be needed as a result of the accident, and detailed measurements of radionuclide deposition densities in soil will be important input data to conduct these activities.
Estimation of sojourn time in chronic disease screening without data on interval cases.
Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W
2000-03-01
Estimation of the sojourn time in the preclinical detectable period in disease screening, or of the transition rates for the natural history of chronic disease, usually relies on interval cases (diagnosed between screens). However, ascertaining such cases might be difficult in developing countries due to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumor size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on the three-state Markov model give a mean sojourn time (MST) of 1.90 (95% CI: 1.18-4.86) years for this high-risk group. Validation of these models on the basis of data on breast cancer screening in the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows that the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov models, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate the relevant parameters. A good fit in internal and external validation demonstrates the feasibility of using these models to estimate parameters that have previously required interval cancers. This method can be applied to other screening data in which there are no data on interval cases.
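The progressive three-state structure above (disease-free → preclinical → clinical) has a closed-form state-occupancy probability when sojourn times are exponential, with the MST equal to the reciprocal of the preclinical exit rate. The sketch below checks that formula against simulation using illustrative rates, not the paper's estimates:

```python
import math
import random

random.seed(3)

l1, l2 = 0.05, 0.5   # illustrative onset and progression rates per year; MST = 1/l2 = 2 y

def p_preclinical(t):
    """P(in preclinical state at time t | disease-free at 0) for the progressive
    chain with exponential holding times (rates l1 then l2, l1 != l2)."""
    return l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

def simulate(t, n=200000):
    """Monte Carlo check: exponential onset time followed by exponential sojourn;
    count trajectories that are in the preclinical state at time t."""
    hits = 0
    for _ in range(n):
        onset = random.expovariate(l1)
        if onset < t and onset + random.expovariate(l2) > t:
            hits += 1
    return hits / n

analytic = p_preclinical(5.0)
mc = simulate(5.0)
```

This preclinical-prevalence formula is the kind of quantity that links screen-detected prevalence to the MST without requiring interval cases.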
Visser, H.; Molenaar, J.
1995-05-01
The detection of trends in climatological data has become central to the discussion on climate change due to the enhanced greenhouse effect. To prove detection, a method is needed (i) to make inferences on significant rises or declines in trends, (ii) to take into account natural variability in climate series, and (iii) to compare output from GCMs with the trends in observed climate data. To meet these requirements, flexible mathematical tools are needed. A structural time series model is proposed with which a stochastic trend, a deterministic trend, and regression coefficients can be estimated simultaneously. The stochastic trend component is described using the class of ARIMA models. The regression component is assumed to be linear; however, the regression coefficients corresponding with the explanatory variables may be allowed to be time dependent in order to validate this assumption. The mathematical technique used to estimate this trend-regression model is the Kalman filter. The main features of the filter are discussed. Examples of trend estimation are given using annual mean temperatures at a single station in the Netherlands (1706-1990) and annual mean temperatures at Northern Hemisphere land stations (1851-1990). The inclusion of explanatory variables is shown by regressing the latter temperature series on four variables: Southern Oscillation index (SOI), volcanic dust index (VDI), sunspot numbers (SSN), and a simulated temperature signal, induced by increasing greenhouse gases (GHG). In all analyses, the influence of SSN on global temperatures is found to be negligible. The correlations between temperatures and SOI and VDI appear to be negative. For SOI, this correlation is significant, but for VDI it is not, probably because of a lack of volcanic eruptions during the sample period. The relation between temperatures and GHG is positive, which is in agreement with the hypothesis of a warming climate because of increasing levels of greenhouse gases. The prediction performance of
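A minimal Kalman-filter sketch for a structural model of the kind described above, reduced to a local linear trend (state = [level, slope]) with a linear observation. The noise variances and the simulated temperature series are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Local linear trend model:
#   state x_t = [level, slope];  level_{t+1} = level_t + slope_t, slope a random walk
#   observation y_t = level_t + measurement noise
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-6])       # small process noise: nearly deterministic trend
R = np.array([[4.0]])           # measurement noise variance

def kalman_filter(ys):
    x = np.zeros(2)
    P = np.eye(2) * 10.0        # diffuse initial state covariance
    for y in ys:
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ (np.array([y]) - H @ x)         # update
        P = (np.eye(2) - K @ H) @ P
    return x, P

true_slope = 0.02               # assumed warming trend per time step, for the demo
t = np.arange(200)
ys = true_slope * t + rng.normal(0.0, 2.0, size=t.size)
x_est, P_est = kalman_filter(ys)
```

The filtered slope recovers the imposed trend despite noise whose standard deviation is large relative to the per-step trend, which is the point of using a state-space formulation for trend detection.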
Real-Time Tropospheric Delay Estimation using IGS Products
Stürze, Andrea; Liu, Sha; Söhne, Wolfgang
2014-05-01
The Federal Agency for Cartography and Geodesy (BKG) has routinely provided zenith tropospheric delay (ZTD) parameters for assimilation in numerical weather models for more than 10 years. Up to now, the results flowing into the EUREF Permanent Network (EPN) or E-GVAP (EUMETNET EIG GNSS water vapour programme) analysis have been based on batch processing of GPS+GLONASS observations in differential network mode. For the recently started COST Action ES1206 on "Advanced Global Navigation Satellite Systems tropospheric products for monitoring severe weather events and climate" (GNSS4SWEC), however, rapid updates in the analysis of the atmospheric state for nowcasting applications require changing the processing strategy towards real-time. In the RTCM SC104 (Radio Technical Commission for Maritime Services, Special Committee 104), a format combining the advantages of Precise Point Positioning (PPP) and Real-Time Kinematic (RTK) is under development. The so-called State Space Representation approach defines corrections which are transferred in real-time to the user, e.g. via NTRIP (Network Transport of RTCM via Internet Protocol). Meanwhile, messages for precise orbits, satellite clocks and code biases compatible with the basic PPP mode using IGS products are defined. Consequently, the IGS Real-Time Service (RTS) was launched in 2013 in order to extend the well-known precise orbit and clock products by a real-time component. Further messages, e.g. with respect to ionosphere or phase biases, are foreseen. Depending on the level of refinement, different accuracies up to the RTK level shall be reachable. In co-operation between BKG and the Technical University of Darmstadt, the real-time software GEMon (GREF EUREF Monitoring) is under development. GEMon is able to process GPS and GLONASS observation and RTS product data streams in PPP mode. Furthermore, several state-of-the-art troposphere models, for example based on numerical weather prediction data, are implemented. Hence, it
Ying Ouyang; Theodor D. Leininger; Jeff Hatten
2013-01-01
Elevated phosphorus (P) in surface waters can cause eutrophication of aquatic ecosystems and can impair water for drinking, industry, agriculture, and recreation. Currently, no effort has been devoted to estimating real-time variation and load of total P (TP) in surface waters due to the lack of suitable and/or cost-effective wireless sensors. However, when considering...
Beyond Newton's Law of Cooling--Estimation of Time since Death
Leinbach, Carl
2011-01-01
The estimate of the time since death and, thus, the time of death is strictly that: an estimate. However, the time of death can be an important piece of information in some coroners' cases, especially those that involve criminal or insurance investigations. It has been known almost from the beginning of time that bodies cool after the internal…
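The cooling model behind such estimates is Newton's law, T(t) = T_a + (T0 − T_a)·e^(−kt); inverting it for t gives a first-cut time-since-death estimate. A minimal sketch, in which the cooling constant k, the ambient temperature and the 37 °C starting point are illustrative assumptions rather than values from the article:

```python
import math

def time_since_death(T_body, T_ambient, k, T_normal=37.0):
    """Invert Newton's law of cooling, T(t) = T_a + (T0 - T_a)*exp(-k*t),
    for the elapsed time t (in hours) given a measured body temperature."""
    return -math.log((T_body - T_ambient) / (T_normal - T_ambient)) / k

# body measured at 30 C in a 20 C room, assumed k = 0.1 per hour
t_hours = time_since_death(30.0, 20.0, k=0.1)
```

In practice k depends on body mass, clothing and air movement, which is why the abstract stresses that the result remains an estimate.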
Time of Arrival 3-D Position Estimation Using Minimum ADS-B Receiver ...
African Journals Online (AJOL)
HOD
The location from which a signal is transmitted can be estimated using the time it takes for the signal to be detected at a receiver. The difference between the transmission time and the detection time is known as the time of arrival (TOA). In this work, an algorithm for 3-dimensional (3-D) position estimation (PE) of an emitter using the minimum ...
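A standard way to solve the TOA positioning problem (not necessarily the exact algorithm of this paper) is linearized least squares: squaring each range equation and subtracting the first turns the nonlinear problem into a linear system. A sketch with made-up receiver coordinates:

```python
import numpy as np

def toa_position(receivers, ranges):
    """Linearized least-squares multilateration: from |x - p_i| = r_i,
    subtracting the first squared equation from the others gives
    2*(p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2."""
    receivers = np.asarray(receivers, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = receivers[0], ranges[0]
    A = 2.0 * (receivers[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(receivers[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# four hypothetical receivers and an emitter at (2, 3, 4) km
rx = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]]
emitter = np.array([2.0, 3.0, 4.0])
r = [np.linalg.norm(emitter - np.array(p)) for p in rx]
est = toa_position(rx, r)
```

With TOA (rather than TDOA), the ranges r_i = c·(t_detect − t_transmit) require a known transmission time; four receivers give the three independent equations needed for a 3-D fix.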
The generalized correlation method for estimation of time delay in power plants
International Nuclear Information System (INIS)
Kostic, Lj.
1981-01-01
A generalized correlation estimator is developed for determining the time delay between signals received at two spatially separated sensors in the presence of uncorrelated noise in a power plant. This estimator can be realized as a pair of receiver prefilters followed by a cross-correlator. The time argument at which the correlator achieves its maximum is the delay estimate. (author)
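The prefilter-plus-cross-correlator structure can be sketched as generalized cross-correlation in the frequency domain; the PHAT weighting below is one common prefilter choice (the abstract does not specify which weighting the author used):

```python
import numpy as np

def gcc_delay(x, y, fs, phat=True):
    """Generalized cross-correlation: weight the cross-spectrum (the
    'prefilter' pair), inverse-transform, and return the lag of the peak
    (delay of y relative to x, in seconds)."""
    n = len(x) + len(y)
    R = np.conj(np.fft.rfft(x, n)) * np.fft.rfft(y, n)
    if phat:                      # phase transform (PHAT) weighting
        R /= np.abs(R) + 1e-12
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# white-noise signal observed at a second sensor 5 samples later
rng = np.random.default_rng(0)
s = rng.standard_normal(256)
delay = gcc_delay(s, np.roll(s, 5), fs=1000.0)
```

The peak of the weighted correlator is the delay estimate, exactly as the abstract describes; different weightings trade peak sharpness against robustness to uncorrelated noise.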
Estimation of functional preparedness of young handballers in setup time
Directory of Open Access Journals (Sweden)
Favoritov V.N.
2012-11-01
Full Text Available The dynamics of the level of functional preparedness of young handball players during the setup period is shown. Alterations were made to the educational-training process with the purpose of optimizing their functional preparedness. Eleven youths with a calendar age of 14-15 years were included in the research. The computer program "SVSM" was applied to determine their level of functional preparedness. At the beginning of the setup period, the functional preparedness of 18.18% of all respondents was characterized as "middle" level, 27.27% as "below average", and 54.54% as "above average". At the end of the setup period, sportsmen with a functional preparedness level "above average" (63.63%) and "high" (27.27%) prevailed, and no sportsmen with a level below average were observed. The efficiency of the proposed system of training sessions for optimizing the functional preparedness of young handball players is demonstrated.
Time and space variability of spectral estimates of atmospheric pressure
Canavero, Flavio G.; Einaudi, Franco
1987-01-01
The temporal and spatial behaviors of atmospheric pressure spectra over northern Italy and the Alpine massif were analyzed using surface pressure measurements carried out at two microbarograph stations in the Po Valley, one 50 km south of the Alps, the other in the foothills of the Dolomites. The first 15 days of the study overlapped with the ALPEX Intensive Observation Period. The pressure records were found to be intrinsically nonstationary and to display substantial time variability, implying that the statistical moments depend on time. The shape and energy content of the spectra depended on the time segment considered. In addition, important differences existed between the spectra obtained at the two stations, indicating a substantial effect of topography, particularly for periods of less than 40 min.
Correcting bias in commercial CPUE time series due to response of mixed fisheries to management
Quirijns, F.J.; Rijnsdorp, A.D.
2005-01-01
Catch per Unit Effort (CPUE) is an important source of information on the development of fish stocks. To get an unbiased estimate of CPUE one of the issues that need to be investigated is the effect of the response of a fleet to management measures. This paper deals with the effect of the response
Real-time Wind Profile Estimation using Airborne Sensors
In 't Veld, A.C.; De Jong, P.M.A.; Van Paassen, M.M.; Mulder, M.
2011-01-01
Wind is one of the major contributors to uncertainty in continuous descent approach operations, especially when aircraft flying low- or idle-thrust approaches are issued a required time of arrival over the runway threshold, as is foreseen in some of the future ATC scenarios. The on-board
Estimating epidemic arrival times using linear spreading theory
Chen, Lawrence M.; Holzer, Matt; Shapiro, Anne
2018-01-01
We study the dynamics of a spatially structured model of worldwide epidemics and formulate predictions for arrival times of the disease at any city in the network. The model is composed of a system of ordinary differential equations describing a meta-population susceptible-infected-recovered compartmental model defined on a network where each node represents a city and the edges represent the flight paths connecting cities. Making use of the linear determinacy of the system, we consider spreading speeds and arrival times in the system linearized about the unstable disease free state and compare these to arrival times in the nonlinear system. Two predictions are presented. The first is based upon expansion of the heat kernel for the linearized system. The second assumes that the dominant transmission pathway between any two cities can be approximated by a one dimensional lattice or a homogeneous tree and gives a uniform prediction for arrival times independent of the specific network features. We test these predictions on a real network describing worldwide airline traffic.
A simple data fusion method for instantaneous travel time estimation
Do, Michael; Pueboobpaphan, R.; Miska, Marc; Kuwahara, Masao; van Arem, Bart; Viegas, J.M.; Macario, R.
2010-01-01
Travel time is one of the most understandable parameters to describe traffic condition and an important input to many intelligent transportation systems applications. Direct measurement from Electronic Toll Collection (ETC) system is promising but the data arrives too late, only after the vehicles
Rezende, Daniela; Melo, José W S; Oliveira, José E M; Gondim, Manoel G C
2016-07-01
Reducing the losses caused by Aceria guerreronis Keifer has been an arduous task for farmers. However, there are no detailed studies on losses that simultaneously analyse correlated parameters, and very few studies that address the economic viability of chemical control, the main strategy for managing this pest. In this study the objectives were (1) to estimate the crop loss due to coconut mite and (2) to perform a financial analysis of acaricide application to control the pest. For this, the following parameters were evaluated: number and weight of fruits, liquid albumen volume, and market destination of plants with and without monthly abamectin spraying (three harvests). The costs involved in the chemical control of A. guerreronis were also quantified. Higher A. guerreronis incidence on plants resulted in a 60 % decrease in the mean number of fruits harvested per bunch and a 28 % decrease in liquid albumen volume. Mean fruit weight remained unaffected. The market destination of the harvested fruit was also affected by higher A. guerreronis incidence. Untreated plants, with higher A. guerreronis infestation intensity, produced a lower proportion of fruit intended for fresh market and higher proportions of non-marketable fruit and fruit intended for industrial processing. Despite the costs involved in controlling A. guerreronis, the difference between the profit from the treated site and the untreated site was 18,123.50 Brazilian Real; this value represents 69.1 % higher profit at the treated site.
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys conducted over time with a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate quarterly unemployment rates at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in the context of estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates, better than direct estimation, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage was reduced over time.
Estimating retention potential of headwater catchment using Tritium time series
Hofmann, Harald; Cartwright, Ian; Morgenstern, Uwe
2018-06-01
Headwater catchments provide substantial streamflow to rivers even during long periods of drought. Documenting the mean transit times (MTT) of stream water in headwater catchments, and therefore the retention capacities of these catchments, is crucial for water management. This study uses time series of 3H activities in combination with major ion concentrations, stable isotope ratios and radon (222Rn) activities in the Lyrebird Creek catchment in Victoria, Australia to provide a unique insight into the mean transit time distributions and flow systems of this small temperate headwater catchment. At all streamflows, the stream has low 3H activities, implying that water in the stream is derived from stores with long transit times. If the water in the catchment can be represented by a single store with a continuum of ages, mean transit times of the stream water range from ∼6 up to 40 years, which indicates the large retention potential of this catchment. Alternatively, the variations of 3H activities, stable isotopes and major ions can be explained by mixing between young recent recharge and older water stored in the catchment. While surface runoff is negligible, the variation in stable isotope ratios, major ion concentrations and radon activities during most of the year is minimal (±12%) and only occurs during major storm events. This suggests that different subsurface water stores are activated during storm events and that these cease to provide water to the stream within a few days or weeks after the events. The stores comprise micro- and macropore flow in the soils and saprolite as well as the boundary between the saprolite and the fractured bedrock. Hydrograph separations for three major storm events using tritium, electrical conductivity and selected major ions as well as δ18O suggest a minimum of 50% baseflow at most flow conditions. We demonstrate that headwater catchments can have a significant storage capacity and that the relationship between long-water stores and
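Under a lumped-parameter model with an exponential transit-time distribution, the stream's tritium activity relates to that of rainfall as c_stream = c_rain / (1 + λτ), which can be inverted for the mean transit time τ. A sketch; the 1.8 TU stream and 3 TU rainfall inputs are illustrative values, not the catchment's measurements:

```python
import math

T_HALF = 12.32                       # tritium half-life in years
LAM = math.log(2) / T_HALF           # decay constant, ~0.056 per year

def mtt_exponential(c_stream, c_rain):
    """Mean transit time (years) assuming an exponential transit-time
    distribution: c_stream = c_rain / (1 + LAM * tau), solved for tau."""
    return (c_rain / c_stream - 1.0) / LAM

# illustrative: stream at 1.8 TU against modern rainfall at ~3 TU
tau_years = mtt_exponential(1.8, 3.0)
```

Other assumed transit-time distributions (piston flow, dispersion) give different τ for the same activities, which is why time series of 3H, rather than single samples, constrain the MTT.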
Estimating Time To Complete for ATLAS data transfers
Bogado Garcia, Joaquin Ignacio; The ATLAS collaboration; Monticelli, Fernando
2018-01-01
Transfer Time To Complete (T³C) is a new extension for the data management system Rucio that allows predictions to be made about the duration of a file transfer. The extension has a modular architecture which allows predictions based on models ranging from simple to sophisticated, depending on available data and computation power. The ability to predict file transfer times with reasonable accuracy provides a tool for better transfer scheduling and thus reduces both the load on storage systems and the associated networks. The accuracy of the model requires fine tuning of its parameters on a per-link basis. As the underlying infrastructure varies depending on the source and destination of the transfer, the parameters modelling the network between these sites will also be studied.
Estimation of Curve Tracing Time in Supercapacitor based PV Characterization
Basu Pal, Sudipta; Das Bhattacharya, Konika; Mukherjee, Dipankar; Paul, Debkalyan
2017-08-01
Smooth and noise-free characterisation of photovoltaic (PV) generators has been revisited with renewed interest in view of large PV arrays making inroads into the urban sector of major developing countries. Such practice has recently been confronted by the need for a suitable data acquisition system and the lack of a supporting theoretical analysis to justify the accuracy of curve tracing. However, the use of a properly selected bank of supercapacitors can mitigate these problems to a large extent. Assuming a piecewise-linear analysis of the V-I characteristics of a PV generator, an accurate analysis of curve plotting time is possible. The analysis has been extended to consider the effect of the equivalent series resistance of the supercapacitor, leading to increased accuracy (90-95%) of the curve plotting times.
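The curve-plotting-time analysis can be sketched numerically: with a capacitor as the sweep load, dV/dt = I(V)/C, so the trace time is t = ∫ C/I(V) dV over the swept voltage range. The capacitance, Voc, Isc and the piecewise-linear I-V shape below are assumptions for illustration, not the paper's measured values:

```python
import numpy as np

C = 10.0               # supercapacitor bank capacitance, farads (assumed)
VOC, ISC = 40.0, 8.0   # PV open-circuit voltage, short-circuit current (assumed)

def pv_current(v):
    """Piecewise-linear PV I-V curve: constant current up to 0.8*Voc,
    then a linear drop to zero at Voc."""
    return np.where(v < 0.8 * VOC, ISC, ISC * (VOC - v) / (0.2 * VOC))

def trace_time(v_stop, n=100_000):
    """Charging time from 0 V to v_stop: t = integral of C/I(V) dV."""
    v = np.linspace(0.0, v_stop, n)
    return np.trapz(C / np.maximum(pv_current(v), 1e-9), v)

t_cc = trace_time(0.8 * VOC)   # constant-current region: C*0.8*Voc/Isc
```

Near Voc the current collapses and the integrand blows up, which is why real traces stop short of open circuit; the supercapacitor's equivalent series resistance adds a further voltage offset that the paper's extended analysis accounts for.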
Estimation of Hurst Exponent for the Financial Time Series
Kumar, J.; Manchanda, P.
2009-07-01
Until recently, statistical methods and Fourier analysis were employed to study fluctuations in stock markets in general and the Indian stock market in particular. The current trend, however, is to apply the concepts of wavelet methodology and the Hurst exponent; see for example the work of Manchanda, J. Kumar and Siddiqi, Journal of the Franklin Institute 144 (2007), 613-636, and the papers of Cajueiro and Tabak. Cajueiro and Tabak, Physica A, 2003, checked the efficiency of emerging markets by computing the Hurst exponent over a time window of 4 years of data. Our goal in the present paper is to understand the dynamics of the Indian stock market. We look for persistency in the stock market through the Hurst exponent and the fractal dimension of time series data of BSE 100 and NIFTY 50.
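A common way to estimate the Hurst exponent (not necessarily the authors' exact procedure) is rescaled-range (R/S) analysis: for windows of increasing size, average the range of cumulative deviations divided by the standard deviation, then fit the log-log slope. H ≈ 0.5 indicates uncorrelated increments; H > 0.5 indicates persistence:

```python
import numpy as np

def hurst_rs(ts, min_chunk=8):
    """Rescaled-range estimate of the Hurst exponent: the slope of
    log(R/S) versus log(window size) over dyadic window sizes."""
    ts = np.asarray(ts, dtype=float)
    n = len(ts)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            seg = ts[start:start + size]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations
            s = seg.std()
            if s > 0:
                vals.append((dev.max() - dev.min()) / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(42)
h_noise = hurst_rs(rng.standard_normal(4096))   # near 0.5 for white noise
```

The classical R/S statistic is upward-biased for short series, so refinements (e.g. Anis-Lloyd corrections or wavelet-based estimators) are often preferred for market data.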
Real-time estimation of wildfire perimeters from curated crowdsourcing
Zhong, Xu; Duckham, Matt; Chong, Derek; Tolhurst, Kevin
2016-04-01
Real-time information about the spatial extents of evolving natural disasters, such as wildfire or flood perimeters, can assist both emergency responders and the general public during an emergency. However, authoritative information sources can suffer from bottlenecks and delays, while user-generated social media data usually lacks the necessary structure and trustworthiness for reliable automated processing. This paper describes and evaluates an automated technique for real-time tracking of wildfire perimeters based on publicly available “curated” crowdsourced data about telephone calls to the emergency services. Our technique is based on established data mining tools, and can be adjusted using a small number of intuitive parameters. Experiments using data from the devastating Black Saturday wildfires (2009) in Victoria, Australia, demonstrate the potential for the technique to detect and track wildfire perimeters automatically, in real time, and with moderate accuracy. Accuracy can be further increased through combination with other authoritative demographic and environmental information, such as population density and dynamic wind fields. These results are also independently validated against data from the more recent 2014 Mickleham-Dalrymple wildfires.
Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.
2017-12-01
Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use an eight-day MODIS composite to create a Normalized Difference Vegetation Index (NDVI) time series of ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The nonlinear estimation based on the PF is shown to improve parameter estimation for different land cover types compared to existing techniques that use the Extended Kalman Filter (EKF), which linearizes the harmonic model. As these parameters are representative of a given land cover, their applicability to near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare-class problem. This approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated against high-spatial-resolution change maps of the given regions.
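A bootstrap particle filter for a harmonic model can be sketched as follows: the state is the parameter vector itself, given slow artificial dynamics (jitter) so the filter can track it. All constants (46 composites per year, noise levels, coefficients) are illustrative, not MODIS-calibrated values:

```python
import numpy as np

rng = np.random.default_rng(1)
OMEGA = 2 * np.pi / 46        # annual cycle: 46 eight-day composites/year
SIGMA_OBS = 0.02              # NDVI observation noise (assumed)

# synthetic NDVI-like series from a single-harmonic model
true = np.array([0.5, 0.2, -0.1])     # mean, sin and cos coefficients
t = np.arange(230)                    # five years of composites
obs = (true[0] + true[1] * np.sin(OMEGA * t) + true[2] * np.cos(OMEGA * t)
       + rng.normal(0.0, SIGMA_OBS, t.size))

# bootstrap particle filter over the (static) harmonic parameters
N = 2000
particles = rng.normal(0.0, 0.5, (N, 3))
weights = np.full(N, 1.0 / N)
for ti, y in zip(t, obs):
    particles += rng.normal(0.0, 0.005, particles.shape)  # artificial dynamics
    pred = (particles[:, 0] + particles[:, 1] * np.sin(OMEGA * ti)
            + particles[:, 2] * np.cos(OMEGA * ti))
    logw = np.log(weights + 1e-300) - 0.5 * ((y - pred) / SIGMA_OBS) ** 2
    logw -= logw.max()                 # avoid exp underflow
    weights = np.exp(logw)
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < N / 2:   # resample on low ESS
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

estimate = weights @ particles         # posterior-mean harmonic parameters
```

Because the likelihood uses the nonlinear harmonic observation directly, no linearization is needed, which is the advantage over the EKF that the abstract highlights; a change metric can then track how far the filtered parameters drift from a land cover's nominal values.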
Maxima estimate of non-Gaussian process from observation of time history samples
International Nuclear Information System (INIS)
Borsoi, L.
1987-01-01
The problem constitutes a formidable task but is essential for industrial applications: extreme value design, fatigue analysis, etc. Even in the linear Gaussian case, the process ergodicity does not prevent the observation duration from having to be long enough to make reliable estimates. As is well known, this duration is closely related to the process autocorrelation. A subterfuge, which distorts the problem a little, consists in considering a periodic random process and adjusting the observation duration to a complete period. In the nonlinear case, the stated problem is all the more important since time history simulation is presently the only practicable way of analysing structures. Thus it is always interesting to fit a tractable model to rough time history observations. In some cases this can be done with a Gumbel-Poisson model. The difficulty is then to make reliable estimates of the parameters involved in the model. Unfortunately, it seems that even the use of sophisticated Bayesian methods does not permit the necessary observation duration to be reduced as much as desired. One of the difficulties lies in process ergodicity, which is often assumed on physical grounds but is not always rigorously established. Another difficulty is the confusion between hidden information, which can be extracted, and missing information, which cannot. Finally, it must be recalled that the obligation to consider sufficiently long time histories is not always an obstacle, given current reductions in computing costs. (orig./HP)
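Fitting the Gumbel part of such a model to observed block maxima can be sketched with a simple method-of-moments fit (the paper's Bayesian treatment is more elaborate); the data here are synthetic, standing in for maxima extracted from simulated time histories:

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def fit_gumbel_moments(x):
    """Method-of-moments fit of a Gumbel (type-I extreme value) model:
    var = pi^2 * scale^2 / 6 and mean = loc + gamma * scale."""
    x = np.asarray(x, dtype=float)
    scale = x.std() * np.sqrt(6.0) / np.pi
    loc = x.mean() - EULER_GAMMA * scale
    return loc, scale

rng = np.random.default_rng(7)
# synthetic block maxima, e.g. the largest response per simulated record
maxima = rng.gumbel(3.0, 0.5, size=5000)
loc_hat, scale_hat = fit_gumbel_moments(maxima)
```

The estimator's spread shrinks only as 1/√n in the number of observed maxima, which illustrates the abstract's point: reliable extreme-value parameters demand long observation durations, however the fitting is done.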
Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data
DEFF Research Database (Denmark)
Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh
2013-01-01
triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimation by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events… in the context of travel time estimation by BT has been considered by various researchers. However, treatment of this issue has remained simplistic so far. Most previous studies simply used the first detection event (Enter-Enter) as the best estimate. No systematic analysis for exploring the most accurate method… of estimating travel time using multiple detection events has been conducted. In this study different aspects of the BT detection zone, including its size and its impact on the accuracy of travel time estimation, are discussed. Moreover, four alternative methods are applied; namely, Enter-Enter, Leave-Leave, Peak…
Uncertainty of long-term CO2 flux estimates due to the choice of the spectral correction method
Ibrom, Andreas; Geißler, Simon; Pilegaard, Kim
2010-05-01
The eddy covariance system at the Danish beech forest long-term flux observation site at Sorø has been intensively examined. Here we investigate which systematic and non-systematic effects the choice of the spectral correction method has on long-term net CO2 flux estimates and their components. Ibrom et al. (2007) gave an overview of different ways to correct for low-pass filtering of the atmospheric turbulent signal by a closed-path eddy covariance system. They used degraded temperature time series for the spectral correction of low-pass filtered signals. In this new study, correction for high-pass filtering was also included, which made it necessary to use model co-spectra. We compared different ways of adapting different kinds of model co-spectra to the wealth of 14 years of high-frequency raw data. As the trees grew, the distance between the sonic anemometer and the displacement height decreased over time. The study enabled us to compare the two approaches and different variants of them and to give recommendations on their use. The analysis showed that model spectra should not be derived from co-spectra between the vertical wind speed (w) and the scalars measured with the closed-path system, i.e. CO2 and H2O concentrations, but instead from sonic temperature (wT) co-spectra, to avoid low-pass filtering effects on the estimation of the co-spectral peak frequency (fx). This concern was already expressed in the above-mentioned study, but here we show the quantitative effects. The wT co-spectra did not show any height effect on fx, as was suggested in generally used parameterizations. A possible reason for this difference is that the measurements, as at all forest flux sites, took place in the roughness sub-layer and not in the inertial sub-layer. At the same time, the shape of the relationship between fx and the stability parameter ζ differed considerably from that of often-used parameterizations (e.g. from Horst, 1997). The shift of fx towards higher frequencies at
Liu, Kai; Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo
2016-01-01
On urban arterials, travel time estimation is challenging, especially from various data sources. Typically, fusing loop detector data and probe vehicle data to estimate travel time is troublesome because the data can be uncertain, imprecise and even conflicting. In this paper, we propose an improved data fusion methodology for link travel time estimation. Link travel times are first estimated separately from loop detector data and from probe vehicle data, after which Bayesian fusion is applied to fuse the estimates. Next, iterative Bayesian estimation is proposed to improve the Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially-designed convergence conditions, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method and single Bayesian fusion, reducing the mean absolute percentage error to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is higher than in other periods.
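The core fusion step, if each sensor's travel time estimate is treated as Gaussian, is precision weighting. The numbers below are illustrative, not from the paper's data:

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Precision-weighted (Bayesian) fusion of two independent Gaussian
    estimates, e.g. one from loop detectors and one from probe vehicles."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# loop detectors: 62 s with variance (10 s)^2; probes: 55 s with (5 s)^2
mu_fused, var_fused = fuse_gaussian(62.0, 100.0, 55.0, 25.0)
```

The fused mean lands closer to the lower-variance sensor, and the fused variance is smaller than either input; the paper's iterative scheme then feeds the fused value back in place of the less accurate sensor estimate until convergence conditions are met.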
General practice cooperatives: long waiting times for home visits due to long distances?
Giesen, Paul; van Lin, Nieke; Mokkink, Henk; van den Bosch, Wil; Grol, Richard
2007-02-12
The introduction of large-scale out-of-hours GP cooperatives has led to questions about increased distances between the GP cooperatives and the homes of patients and increasing waiting times for home visits in urgent cases. We studied the relationship between the patient's waiting time for a home visit and the distance to the GP cooperative. Further, we investigated whether other factors (traffic intensity, home visit intensity, time of day, and degree of urgency) influenced waiting times. Cross-sectional study at four GP cooperatives. We used variance analysis to calculate waiting times for various categories of traffic intensity, home visit intensity, time of day, and degree of urgency. We used multiple logistic regression analysis to calculate to what degree these factors affected the ability to meet targets in urgent cases. The average waiting time for 5827 consultations was 30.5 min. Traffic intensity, home visit intensity, time of day and urgency of the complaint all appeared to affect waiting times significantly. A total of 88.7% of all patients were seen within 1 hour. In the case of life-threatening complaints (U1), 68.8% of the patients were seen within 15 min, and 95.6% of those with acute complaints (U2) were seen within 1 hour. For patients with life-threatening complaints (U1), the percentage of visits that met the time target of 15 minutes decreased from 86.5% (less than 2.5 km) to 16.7% (20 km or more). Although home visit waiting times increase with increasing distance from the GP cooperative, it appears that traffic intensity, home visit intensity, and urgency also influence waiting times. For patients with life-threatening complaints, waiting times increase sharply with distance.
Hoeksema, F.W.; Srinivasan, R.; Schiphorst, Roelof; Slump, Cornelis H.
2004-01-01
In joint timing and carrier offset estimation algorithms for Time Division Duplexing (TDD) OFDM systems, different timing metrics are proposed to determine the beginning of a burst or symbol. In this contribution we investigated the different timing metrics in order to establish their impact on the
CSIR Research Space (South Africa)
Bachoo, AK
2011-04-01
Full Text Available This work aims to evaluate the improvement in the performance of tracking small maritime targets due to real-time enhancement of the video streams from high zoom cameras on pan-tilt pedestal. Due to atmospheric conditions these images can frequently...
[Phylogeny and divergence time estimation of Schizothoracinae fishes in Xinjiang].
Ayelhan, Haysa; Guo, Yan; Meng, Wei; Yang, Tianyan; Ma, Yanwu
2014-10-01
Based on combined data of mitochondrial COI, ND4 and 16S rRNA genes, the molecular phylogeny of 4 genera and 10 species or subspecies of Schizothoracinae fishes distributed in Xinjiang was analyzed. The molecular clock was calibrated by the divergence time of Cyprininae and the geological segregation event between the upper Yellow River and Qinghai Lake. The divergence times of Schizothoracinae fishes were calculated, and their relationship with the major geological events and climate changes in the areas surrounding the Tarim Basin was discussed. The results showed that the genus Aspiorhynchus did not form an independent clade, but clustered with Schizothorax biddulphi and S. irregularis. The Kimura 2-parameter model was used to calculate the genetic distance of the COI gene; the genetic distance between the genera Aspiorhynchus and Schizothorax did not reach genus level, and Aspiorhynchus laticeps might be a specialized species of the genus Schizothorax. Cluster analysis gave a result different from the morphological classification, and it did not support the subgenus division of Schizothorax fishes. The divergence of two groups of primitive Schizothoracinae (8.18 Ma) and the divergence of Gymnodiptychus dybowskii and Diptychus maculatus (7.67 Ma) occurred in the late Miocene, which might be related to the separation of the Kunlun Mountain and northern Tianshan Mountain river systems caused by the uplift of the Qinghai-Tibet Plateau and the Tianshan Mountains, and the aridification of the Tarim Basin. The terrain of the Tarim Basin, shaped by the Quaternary Himalayan movement, was high in the west and low in the east; as a result, Lop Nor became the center of the surrounding mountain rivers in the Tarim Basin, which shaped the distribution pattern of the genus Schizothorax.
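Once the clock is calibrated, a strict molecular clock reduces divergence dating to t = d/(2r): the pairwise distance d accumulates along both lineages at per-lineage rate r. The rate below is a generic illustration, not the calibration used in the study:

```python
def divergence_time(genetic_distance, rate_per_lineage):
    """Strict molecular clock: two lineages that split t ago accumulate
    pairwise distance d = 2 * r * t, so t = d / (2 * r)."""
    return genetic_distance / (2.0 * rate_per_lineage)

# e.g. a K2P distance of 0.016 with an assumed rate of 1% per Myr per lineage
t_myr = divergence_time(0.016, 0.01)   # time in million years
```

Calibrating r against dated events (here, the Cyprininae split and the Yellow River-Qinghai Lake separation) is what turns relative branch lengths into absolute ages.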
Estimating time to pregnancy from current durations in a cross-sectional sample
DEFF Research Database (Denmark)
Keiding, Niels; Kvist, Kajsa; Hartvig, Helle
2002-01-01
A new design for estimating the distribution of time to pregnancy is proposed and investigated. The design is based on recording current durations in a cross-sectional sample of women, leading to statistical problems similar to estimating renewal time distributions from backward recurrence times....
Directory of Open Access Journals (Sweden)
Igor Stubelj
2014-03-01
Full Text Available The paper deals with the estimation of the weighted average cost of capital (WACC) for regulated industries in developing financial markets from the perspective of the current financial-economic crisis. In the current financial market situation some evident changes have occurred: risk-free rates in solid and developed financial markets (e.g. USA, Germany) have fallen, but due to increased market volatility, risk premiums have increased. The latter is especially evident in transition economies, where the amplitude of market volatility is extremely high. In such circumstances, the question arises of how to calculate WACC properly. WACC is an important measure in financial management decisions and, in our case, business regulation. We argue in the paper that the most accurate method for calculating WACC is the estimation of a long-term WACC, which takes into consideration a long-term stable yield of capital rather than current market conditions. Following this, we propose some solutions that could be used for calculating WACC for regulated industries in developing financial markets in times of market uncertainty. As an example, we present an estimation of the cost of capital for a selected Slovenian company operating in the regulated electricity distribution industry.
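The computation itself is standard; the paper's long-term argument amounts to feeding stable long-run inputs, rather than crisis-period spot rates, into it. The numbers below are illustrative only:

```python
def capm_cost_of_equity(rf, beta, erp):
    """CAPM: required return on equity = risk-free rate + beta * equity
    risk premium."""
    return rf + beta * erp

def wacc(E, D, re, rd, tax):
    """Weighted average cost of capital with a tax shield on debt:
    WACC = E/V * re + D/V * rd * (1 - tax), where V = E + D."""
    V = E + D
    return (E / V) * re + (D / V) * rd * (1.0 - tax)

# illustrative long-term inputs rather than crisis-period spot rates
re = capm_cost_of_equity(rf=0.04, beta=1.1, erp=0.06)
w = wacc(E=60.0, D=40.0, re=re, rd=0.07, tax=0.20)
```

In a crisis, a fallen rf and an inflated ERP can pull the CAPM result in opposite directions, which is exactly the instability the long-term estimation is meant to avoid in regulatory rate-setting.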
A soft-computing methodology for noninvasive time-spatial temperature estimation.
Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A
2008-02-01
The safe and effective application of thermal therapies is restricted by the lack of reliable noninvasive temperature estimators. In this paper, the temporal echo-shifts of backscattered ultrasound signals, collected from a gel-based phantom, were tracked and, together with past temperature values, used as input information for radial basis function neural networks. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were used to estimate the temperature at different intensities and at points arranged along the therapeutic transducer's radial line (60 mm from the transducer face). Model inputs, as well as the number of neurons, were selected using a multiobjective genetic algorithm (MOGA). The best attained models present, on average, a maximum absolute error of less than 0.5 degrees C, which is regarded as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. To test the spatial generalization capacity, the best models were evaluated at spatial points not previously assessed, and some of them presented a maximum absolute error below 0.5 degrees C, being "elected" as the best models. It should also be stressed that these best models have low implementation complexity, as desired for real-time applications.
Dávila-Batista, V; Carriedo, D; Díez, F; Pueyo Bastida, A; Martínez Durán, B; Martin, V
2018-03-01
The obesity pandemic, together with an influenza pandemic, could lead to a significant burden of disease. The body mass index (BMI) does not discriminate obesity appropriately. The CUN-BAE has recently been used as an estimate of body fatness for Caucasians, incorporating BMI, gender, and age. The aim of this study is to assess the population attributable fraction of hospital admissions due to influenza that is attributable to body fatness, measured with the BMI and with the CUN-BAE. A multicentre matched case-control study was conducted. Cases were hospital admissions with influenza confirmed by RT-PCR between 2009 and 2011. The risk of hospital admission and the population attributable fraction were calculated using the BMI or the CUN-BAE for each adiposity category in a conditional logistic regression analysis adjusted for confounding variables. The analyses were carried out for the total sample, for unvaccinated people, and for those under 65 years of age. A total of 472 hospitalised cases and 493 controls were included in the study. Compared to normal weight, the aOR of influenza hospital admission increases with each level of BMI (aOR = 1.26, 2.06 and 11.64) and CUN-BAE (aOR = 2.78, 4.29, 5.43 and 15.18). The population attributable fraction of influenza admissions using the CUN-BAE is 3 times higher than that estimated with the BMI (0.72 vs. 0.27), with the differences being similar for the non-vaccinated and the under-65s. The BMI could be underestimating the burden of disease attributable to obesity in individuals hospitalised with influenza. An appropriate assessment of the impact of obesity and of vaccine recommendation criteria is needed. Copyright © 2017 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved.
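The attributable-fraction arithmetic behind a comparison like 0.72 vs. 0.27 is commonly Miettinen's case-based formula summed over exposure categories. The case proportions below are hypothetical; only the odds ratios echo the abstract's BMI figures:

```python
def paf(case_props, odds_ratios):
    """Miettinen's population attributable fraction for case-control data:
    PAF = sum_i p_ci * (OR_i - 1) / OR_i, where p_ci is the proportion of
    cases in exposure category i and OR_i its adjusted odds ratio."""
    return sum(p * (o - 1.0) / o for p, o in zip(case_props, odds_ratios))

# hypothetical distribution of cases over three elevated-adiposity categories
paf_bmi = paf([0.30, 0.20, 0.05], [1.26, 2.06, 11.64])
```

Because the CUN-BAE reclassifies many normal-BMI individuals into higher adiposity categories, both the case proportions and the ORs shift, which is how the same cases can yield a much larger attributable fraction.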
Wigner-Eisenbud-Smith photoionization time delay due to autoionization resonances
Deshmukh, P. C.; Kumar, A.; Varma, H. R.; Banerjee, S.; Manson, Steven T.; Dolmatov, V. K.; Kheifets, A. S.
2018-03-01
An empirical ansatz for the complex photoionization amplitude and the Wigner-Eisenbud-Smith time delay in the vicinity of a Fano autoionization resonance is proposed to evaluate and interpret the time delay in the resonant region. The utility of this ansatz is evaluated by comparison with accurate numerical calculations employing the ab initio relativistic random phase approximation and relativistic multichannel quantum defect theory. The good qualitative (and semiquantitative) agreement between the results of the proposed model and those produced by the ab initio theories demonstrates the usability of the model. In addition, the phenomenology of the time delay in the vicinity of multichannel autoionizing resonances is detailed.
Real-Time Vehicle Routing for Repairing Damaged Infrastructures Due to Natural Disasters
Directory of Open Access Journals (Sweden)
Huey-Kuo Chen
2011-01-01
Full Text Available We address the task of repairing damaged infrastructures as a series of multidepot vehicle-routing problems with time windows in a time-rolling frame. The network size of the tackled problems changes from time to time, as new disaster nodes are added to, and serviced disaster nodes are deleted from, the current network. In addition, an inaccessible disaster node becomes accessible when one of its adjacent disaster nodes has been repaired. With the “take-and-conquer” strategy, the repair sequence of the disaster nodes in the affected area can be suitably scheduled. Thirteen instances were tested with our proposed heuristic, that is, Chen et al.'s approach. For comparison, Hsueh et al.'s approach (2008), with necessary modifications, was also tested. The results show that Chen et al.'s approach performs slightly better for larger networks in terms of objective value.
Analysis of Modal Travel Time Variability Due to Mesoscale Ocean Structure
National Research Council Canada - National Science Library
Smith, Amy
1997-01-01
.... First, for an open ocean environment away from strong boundary currents, the effects of randomly phased linear baroclinic Rossby waves on acoustic travel time are shown to produce a variable overall...
Energy Technology Data Exchange (ETDEWEB)
Takahashi, Ryuichi [Faculty of Science and Technology, Hirosaki University, 3 Bunkyo-cho, Hirosaki, Aomori 036-8561 (Japan)
2017-01-20
In this study we demonstrate that general relativity predicts arrival time differences between gravitational wave (GW) and electromagnetic (EM) signals caused by the wave effects in gravitational lensing. The GW signals can arrive earlier than the EM signals in some cases if the GW/EM signals have passed through a lens, even if both signals were emitted simultaneously by a source. GW wavelengths are much larger than EM wavelengths; therefore, the propagation of the GWs does not follow the laws of geometrical optics, including the Shapiro time delay, if the lens mass is less than approximately 10{sup 5} M {sub ⊙}( f /Hz){sup −1}, where f is the GW frequency. The arrival time difference can reach ∼0.1 s ( f /Hz){sup −1} if the signals have passed by a lens of mass ∼8000 M {sub ⊙}( f /Hz){sup −1} with the impact parameter smaller than the Einstein radius; therefore, it is more prominent for lower GW frequencies. For example, when a distant supermassive black hole binary (SMBHB) in a galactic center is lensed by an intervening galaxy, the time lag becomes of the order of 10 days. Future pulsar timing arrays including the Square Kilometre Array and X-ray detectors may detect several time lags by measuring the orbital phase differences between the GW/EM signals in the SMBHBs. Gravitational lensing imprints a characteristic modulation on a chirp waveform; therefore, we can deduce whether a measured arrival time lag arises from intrinsic source properties or gravitational lensing. Determination of arrival time differences would be extremely useful in multimessenger observations and tests of general relativity.
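The frequency scaling quoted above can be turned into a quick order-of-magnitude check: with an arrival-time difference of ~0.1 s × (f/Hz)⁻¹ near the geometrical-optics breakdown, a pulsar-timing-array-band source yields a lag of order 10 days. The frequency value below is an illustrative choice, not a number from the paper.

```python
# Order-of-magnitude check of the arrival-time lag scaling:
# delta_t ~ 0.1 s * (f / Hz)**-1, most prominent at low GW frequencies.
def lag_seconds(f_hz):
    return 0.1 / f_hz

f = 1.2e-7                            # Hz; illustrative PTA-band frequency
lag_days = lag_seconds(f) / 86400.0   # of the order of 10 days
```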
Leisure Time of Young Due to Some Socio-Demographic Characteristics
Ðuranovic, Marina; Opic, Siniša
2014-01-01
The aim of this paper is to explore the prevalence of activities in the leisure time of the young. A survey was conducted on 1062 students in 8 primary (n=505; 47.6%) and high schools (n=557; 52.4%) in Sisak-Moslavina County in the Republic of Croatia. The questionnaire on spending leisure time consisted of 30 variables on a five-degree scale…
Level crossings and excess times due to a superposition of uncorrelated exponential pulses
Theodorsen, A.; Garcia, O. E.
2018-01-01
A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
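The excess-time statistics described above can be checked numerically in a special case: for one-sided exponential pulses with Poisson arrivals and exponentially distributed amplitudes, the stationary amplitude distribution is a Gamma distribution with shape equal to the intermittency parameter, so the fraction of time spent above a threshold equals the Gamma survival function. The sketch below uses intermittency parameter 1 (an exponential stationary distribution); all parameter values are illustrative, and only the time-above-threshold fraction, not the crossing rate, is compared.

```python
import numpy as np

# Shot noise: superposition of one-sided exponential pulses (duration tau_d)
# with Poisson arrivals (mean waiting time tau_w) and exponentially
# distributed amplitudes. The stationary amplitude distribution is Gamma
# with shape gamma = tau_d / tau_w; gamma = 1 gives a pure exponential,
# so the fraction of time above threshold a should approach exp(-a).
rng = np.random.default_rng(1)
tau_d, tau_w, dt, T = 1.0, 1.0, 0.01, 1.0e4
n = int(T / dt)

counts = rng.poisson(dt / tau_w, size=n)    # pulse arrivals per time step
amps = rng.gamma(counts.clip(min=1), 1.0)   # sum of Exp(1) amplitudes per step
forcing = np.where(counts > 0, amps, 0.0)

decay = np.exp(-dt / tau_d)
x = np.empty(n)
acc = 0.0
for i in range(n):                          # exact exponential decay per step
    acc = acc * decay + forcing[i]
    x[i] = acc

threshold = 2.0
frac_above = (x[n // 10:] > threshold).mean()   # discard the initial transient
theory = np.exp(-threshold)                     # Gamma(1, 1) survival function
```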
A theory of timing in scintillation counters based on maximum likelihood estimation
International Nuclear Information System (INIS)
Tomitani, Takehiro
1982-01-01
A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)
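The paper's optimum filter accounts for the scintillation decay and the photomultiplier transit-time spread. As a much simpler illustration of maximum likelihood timing, the sketch below treats the idealized case of a pure exponential decay with no transit-time spread, where the ML estimate of the start time reduces to the first photoelectron's arrival and both its bias and spread scale as tau/N; all numbers are illustrative, not from the paper.

```python
import numpy as np

# Idealized timing: N photoelectrons arrive at t0 plus Exp(tau) delays.
# For a pure exponential decay the ML estimate of t0 is min(t_i), whose
# bias and standard deviation are both tau / N.
rng = np.random.default_rng(2)
tau, t0, N, trials = 30.0, 100.0, 50, 4000   # ns decay, true start, photons

arrivals = t0 + rng.exponential(tau, size=(trials, N))
t0_hat = arrivals.min(axis=1)                # first-photoelectron estimator
bias = t0_hat.mean() - t0                    # expected: tau / N
spread = t0_hat.std()                        # expected: tau / N
```

This also shows why timing resolution improves with light yield: both error terms shrink like 1/N, before transit-time spread (ignored here) sets a floor.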
Kropivnitskaya, Yelena; Tiampo, Kristy F.; Qin, Jinhui; Bauer, Michael A.
2017-06-01
Earthquake intensity is one of the key components of the decision-making process for disaster response and emergency services. Accurate and rapid intensity calculations can help to reduce total loss and the number of casualties after an earthquake. Modern intensity assessment procedures handle a variety of information sources, which can be divided into two main categories. The first type of data is that derived from physical sensors, such as seismographs and accelerometers, while the second type consists of data obtained from social sensors, such as witness observations of the consequences of the earthquake itself. Estimation approaches using additional data sources or that combine sources from both data types tend to increase intensity uncertainty due to human factors and inadequate procedures for temporal and spatial estimation, resulting in precision errors in both time and space. Here we present a processing approach for the real-time analysis of streams of data from both source types. The physical sensor data is acquired from the U.S. Geological Survey (USGS) seismic network in California and the social sensor data is based on Twitter user observations. First, empirical relationships between tweet rate and observed Modified Mercalli Intensity (MMI) are developed using data from the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. Second, the streams of both data types are analyzed together in simulated real-time to produce one intensity map. The second implementation is based on IBM InfoSphere Streams, a cloud platform for real-time analytics of big data. To handle large processing workloads for data from various sources, it is deployed and run on a cloud-based cluster of virtual machines. We compare the quality and evolution of intensity maps from different data sources over 10-min time intervals immediately following the earthquake. Results from the joint analysis show that it provides more complete coverage, with better accuracy and higher
Real time bayesian estimation of the epidemic potential of emerging infectious diseases.
Directory of Open Access Journals (Sweden)
Luís M A Bettencourt
Full Text Available BACKGROUND: Fast changes in human demographics worldwide, coupled with increased mobility, and modified land uses make the threat of emerging infectious diseases increasingly important. Currently there is worldwide alert for H5N1 avian influenza becoming as transmissible in humans as seasonal influenza, and potentially causing a pandemic of unprecedented proportions. Here we show how epidemiological surveillance data for emerging infectious diseases can be interpreted in real time to assess changes in transmissibility with quantified uncertainty, and to perform running time predictions of new cases and guide logistics allocations. METHODOLOGY/PRINCIPAL FINDINGS: We develop an extension of standard epidemiological models, appropriate for emerging infectious diseases, that describes the probabilistic progression of case numbers due to the concurrent effects of (incipient human transmission and multiple introductions from a reservoir. The model is cast in terms of surveillance observables and immediately suggests a simple graphical estimation procedure for the effective reproductive number R (mean number of cases generated by an infectious individual of standard epidemics. For emerging infectious diseases, which typically show large relative case number fluctuations over time, we develop a bayesian scheme for real time estimation of the probability distribution of the effective reproduction number and show how to use such inferences to formulate significance tests on future epidemiological observations. CONCLUSIONS/SIGNIFICANCE: Violations of these significance tests define statistical anomalies that may signal changes in the epidemiology of emerging diseases and should trigger further field investigation. We apply the methodology to case data from World Health Organization reports to place bounds on the current transmissibility of H5N1 influenza in humans and establish a statistical basis for monitoring its evolution in real time.
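A minimal sketch of the kind of real-time Bayesian updating described above, assuming a deliberately simplified model in which new cases in each reporting interval are Poisson distributed with mean R times the previous interval's cases, so that a Gamma prior on the effective reproductive number R is conjugate. The case counts and prior parameters below are invented for illustration and are not from the WHO data analyzed in the paper.

```python
# Conjugate Gamma-Poisson updating for the effective reproductive number R:
# c_t ~ Poisson(R * c_{t-1}) per reporting interval. With a Gamma(a, b)
# prior (shape a, rate b), each observation updates a += c_t, b += c_{t-1}.
def update_R(cases, a0=1.0, b0=0.2):
    a, b = a0, b0
    posterior_means = []
    for prev, new in zip(cases[:-1], cases[1:]):
        a += new
        b += prev
        posterior_means.append(a / b)   # running posterior mean of R
    return posterior_means

# Hypothetical surveillance counts showing sustained growth.
cases = [2, 3, 4, 7, 10, 14, 21, 30]
r_means = update_R(cases)
```

The full posterior (not just its mean) supports the significance tests the paper describes: a new count far in the tail of the posterior predictive distribution flags a statistical anomaly worth field investigation.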
Directory of Open Access Journals (Sweden)
Tanaka Ken-ichi
2017-01-01
Full Text Available Reliable radioactivity inventory information resulting from radiological characterization is important for decommissioning planning and is also crucial for promoting decommissioning effectiveness and safety. The information is referred to in planning the decommissioning strategy and in applications to the regulator. Reliable radioactivity inventory information can also be used to optimize the decommissioning processes. In order to perform the radiological characterization reliably, we improved a procedure for the evaluation of neutron-activated materials in a Boiling Water Reactor (BWR). Neutron-activated materials are calculated with computational codes, and their validity should be verified with measurements. The evaluation of neutron-activated materials can be divided into two processes: a calculation of the neutron-flux distribution, and an activation calculation of the materials. The neutron-flux distribution is calculated with neutron transport codes and an appropriate cross-section library to simulate neutron transport phenomena well. Using the neutron-flux distribution, we calculate the distribution of radioactivity concentration, and we also estimate time-dependent distributions of the radioactivity classification and the radioactive-waste classification. The information obtained from the evaluation is utilized by other preparatory tasks to make the decommissioning plan and its activities safe and rational.
Tanaka, Ken-ichi; Ueno, Jun
2017-09-01
Reliable radioactivity inventory information resulting from radiological characterization is important for decommissioning planning and is also crucial for promoting decommissioning effectiveness and safety. The information is referred to in planning the decommissioning strategy and in applications to the regulator. Reliable radioactivity inventory information can also be used to optimize the decommissioning processes. In order to perform the radiological characterization reliably, we improved a procedure for the evaluation of neutron-activated materials in a Boiling Water Reactor (BWR). Neutron-activated materials are calculated with computational codes, and their validity should be verified with measurements. The evaluation of neutron-activated materials can be divided into two processes: a calculation of the neutron-flux distribution, and an activation calculation of the materials. The neutron-flux distribution is calculated with neutron transport codes and an appropriate cross-section library to simulate neutron transport phenomena well. Using the neutron-flux distribution, we calculate the distribution of radioactivity concentration, and we also estimate time-dependent distributions of the radioactivity classification and the radioactive-waste classification. The information obtained from the evaluation is utilized by other preparatory tasks to make the decommissioning plan and its activities safe and rational.
Daylight saving time transitions and hospital treatments due to accidents or manic episodes
Directory of Open Access Journals (Sweden)
Lönnqvist Jouko
2008-02-01
Full Text Available Abstract Background Daylight saving time affects millions of people annually but its impacts are still widely unknown. Sleep deprivation and the change of circadian rhythm can trigger mental illness and cause higher accident rates. Transitions into and out of daylight saving time change the circadian rhythm and may cause sleep deprivation. Thus it seems plausible that the prevalence of accidents and/or manic episodes may be higher after transitions into and out of daylight saving time. The aim of this study was to explore the effects of transitions into and out of daylight saving time on the incidence of accidents and manic episodes in the Finnish population during the years 1987 to 2003. Methods The nationwide data were derived from the Finnish Hospital Discharge Register. From the register we obtained information about hospital-treated accidents and manic episodes during the two weeks before and the two weeks after the transitions in 1987–2003. Results The results were negative, as the transitions into or out of daylight saving time had no significant effect on the incidence of accidents or manic episodes. Conclusion One-hour transitions do not increase the incidence of manic episodes or accidents which require hospital treatment.
Daylight saving time transitions and hospital treatments due to accidents or manic episodes
Lahti, Tuuli A; Haukka, Jari; Lönnqvist, Jouko; Partonen, Timo
2008-01-01
Background Daylight saving time affects millions of people annually but its impacts are still widely unknown. Sleep deprivation and the change of circadian rhythm can trigger mental illness and cause higher accident rates. Transitions into and out of daylight saving time change the circadian rhythm and may cause sleep deprivation. Thus it seems plausible that the prevalence of accidents and/or manic episodes may be higher after transitions into and out of daylight saving time. The aim of this study was to explore the effects of transitions into and out of daylight saving time on the incidence of accidents and manic episodes in the Finnish population during the years 1987 to 2003. Methods The nationwide data were derived from the Finnish Hospital Discharge Register. From the register we obtained information about hospital-treated accidents and manic episodes during the two weeks before and the two weeks after the transitions in 1987–2003. Results The results were negative, as the transitions into or out of daylight saving time had no significant effect on the incidence of accidents or manic episodes. Conclusion One-hour transitions do not increase the incidence of manic episodes or accidents which require hospital treatment. PMID:18302734
Real time monitoring automation of dose rate absorbed in air due to environmental gamma radiation
International Nuclear Information System (INIS)
Dominguez Ley, Orlando; Capote Ferrera, Eduardo; Carrazana Gonzalez, Jorge A.; Manzano de Armas, Jose F.; Alonso Abad, Dolores; Prendes Alonso, Miguel; Tomas Zerquera, Juan; Caveda Ramos, Celia A.; Kalber, Olof; Fabelo Bonet, Orlando; Montalvan Estrada, Adelmo; Cartas Aguila, Hector; Leyva Fernandez, Julio C.
2005-01-01
The Center of Radiation Protection and Hygiene (CPHR), as the head institution of the National Radiological Environmental Surveillance Network (RNVRA), has strengthened its detection and response capacity for a radiological emergency situation. The measurements of gamma dose rate at the main points of the RNVRA are obtained in real time, and the CPHR receives the data coming from those points within a short time. To achieve the operability of the RNVRA, it was necessary to complete the existing monitoring facilities with 4 automatic gamma probes, thereby implementing a real-time measurement system. The software packages GenitronProbe (for obtaining the data automatically from the probe), Data Mail (for sending the data via e-mail), and Gamma Red (for receiving and processing the data at the head institution) were developed.
Estimating and Analyzing Savannah Phenology with a Lagged Time Series Model
DEFF Research Database (Denmark)
Boke-Olen, Niklas; Lehsten, Veiko; Ardo, Jonas
2016-01-01
cycle due to their areal coverage and can have an effect on the food security in regions that depend on subsistence farming. In this study we investigate how soil moisture, mean annual precipitation, and day length control savannah phenology by developing a lagged time series model. The model uses...... climate data for 15 flux tower sites across four continents, and normalized difference vegetation index from satellite to optimize a statistical phenological model. We show that all three variables can be used to estimate savannah phenology on a global scale. However, it was not possible to create...... a simplified savannah model that works equally well for all sites on the global scale without inclusion of more site specific parameters. The simplified model showed no bias towards tree cover or between continents and resulted in a cross-validated r2 of 0.6 and root mean squared error of 0.1. We therefore...
Prediction of the pressure-time history due to fuel-sodium interaction in a subassembly
International Nuclear Information System (INIS)
Jacobs, H.
1975-01-01
A local cooling disturbance may lead to complete voiding of a subassembly and melt down of the fuel pins. Thus molten fuel may be accumulated and mixed with liquid sodium returning accidentally into the subassembly. The resulting fuel-sodium interaction (FSI) produces a pressure load on the surrounding core structures. It is necessary to prove that the corresponding core deformation neither initiates a nuclear excursion nor renders the shut down system inoperable. This requires the knowledge of the initiating FSI pressure time history. In this paper a theoretical pressure time history is presented which differs completely from all calculations known so far. (Auth.)
Mazonakis, Michalis; Tzedakis, Antonis; Lyraraki, Efrossyni; Damilakis, John
2016-09-01
Pigmented villonodular synovitis (PVNS) is a benign disease affecting synovial membranes of young and middle-aged adults. The aggressive treatment of this disorder often involves external-beam irradiation. This study was motivated by the lack of data relating to the radiation exposure of healthy tissues and radiotherapy-induced cancer risk. Monte Carlo methodology was employed to simulate a patient’s irradiation for PVNS in the knee and hip joints with a 6 MV photon beam. The average radiation dose received by twenty-two out-of-field critical organs of the human body was calculated. These calculations were combined with the appropriate organ-, age- and gender-specific risk coefficients of the BEIR-VII model to estimate the lifetime probability of cancer development. The risk for carcinogenesis to colon, which was partly included in the treatment fields used for hip irradiation, was determined with a non-linear mechanistic model and differential dose-volume histograms obtained by CT-based 3D radiotherapy planning. Risk assessments were compared with the nominal lifetime intrinsic risk (LIR) values. Knee irradiation to 36 Gy resulted in out-of-field organ doses of 0.2-24.6 mGy. The corresponding range from hip radiotherapy was 1.2-455.1 mGy whereas the organ equivalent dose for the colon was up to 654.9 mGy. The organ-specific cancer risks from knee irradiation for PVNS were found to be inconsequential since they were at least 161.5 times lower than the LIRs irrespective of the patient’s age and gender. The bladder and colon cancer risk from radiotherapy in the hip joint was up to 3.2 and 6.6 times smaller than the LIR, respectively. These cancer risks may slightly elevate the nominal incidence rates and they should not be ignored during the patient’s treatment planning and follow-up. The probabilities for developing any other solid tumor were more than 20 times lower than the LIRs and, therefore, they may be considered as small.
Loss of labor time due to malfunctioning ICTs and ICT skill insufficiencies
van Deursen, Alexander Johannes Aloysius Maria; van Dijk, Johannes A.G.M.
2014-01-01
Purpose – The purpose of this paper is to explore the unexplored area of information and communication technology (ICT) use in organizations in relation to the productivity gains assumed to follow from the use of ICTs. On the one hand, the paper focuses on the losses of labor time that are caused by malfunctioning hardware or
Paunonen, Matti
1993-01-01
A method for compensating for the effect of the varying travel time of a transmitted laser pulse to a satellite is described. The 'observed minus predicted' range differences then appear to be linear, which makes data screening or use in range gating more effective.
Directory of Open Access Journals (Sweden)
E. Delogu
2012-08-01
Full Text Available Evapotranspiration estimates can be derived from remote sensing data and ancillary, mostly meteorological, information. For this purpose, two types of methods are classically used: the first type estimates a potential evapotranspiration rate from vegetation indices, and adjusts this rate according to water availability derived from either a surface temperature index or a first guess obtained from a rough estimate of the water budget, while the second family of methods relies on the link between the surface temperature and the latent heat flux through the surface energy budget. The latter provides an instantaneous estimate at the time of satellite overpass. In order to compute daily evapotranspiration, one needs an extrapolation algorithm. Since no image is acquired during cloudy conditions, these methods can only be applied during clear-sky days. In order to derive seasonal evapotranspiration, one needs an interpolation method. Two combined interpolation/extrapolation methods, based on the self-preservation of the evaporative fraction and of the stress factor, are compared to reconstruct seasonal evapotranspiration from instantaneous measurements acquired in clear-sky conditions. Those measurements are instantaneous latent heat fluxes from 11 datasets in Southern France and Morocco. Results show that both methods have comparable performances, with a clear advantage for the evaporative fraction for datasets with several water stress events. Both interpolation algorithms tend to underestimate evapotranspiration due to the energy-limiting conditions that prevail during cloudy days. Taking into account the diurnal variations of the evaporative fraction according to an empirical relationship derived from a previous study improved the performance of the extrapolation algorithm and therefore the retrieval of the seasonal evapotranspiration for all but one of the datasets.
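The evaporative-fraction extrapolation referred to above can be sketched in a few lines: assuming self-preservation of EF = LE/(Rn − G), the EF observed at satellite overpass is applied to the daily available energy to obtain a daily evapotranspiration total. The numbers below are illustrative, not values from the 11 datasets.

```python
# Self-preservation of the evaporative fraction EF = LE / (Rn - G):
# the EF at overpass time is assumed constant over the day and applied
# to the daily available energy to extrapolate daily evapotranspiration.
LAMBDA_V = 2.45  # MJ/kg, latent heat of vaporization

def daily_et_mm(ef_overpass, available_energy_mj_m2):
    """Daily ET (mm) from overpass EF and daily available energy (MJ/m^2);
    1 kg/m^2 of evaporated water corresponds to 1 mm."""
    return ef_overpass * available_energy_mj_m2 / LAMBDA_V

et = daily_et_mm(0.6, 12.0)  # illustrative clear-sky day, mm/day
```

Since cloudy days tend to have lower available energy and a diurnally varying EF, holding EF fixed is exactly where the underestimation discussed in the abstract can enter.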
International Nuclear Information System (INIS)
Cho, S H
2005-01-01
A recent study in mice demonstrated that gold nanoparticles could be safely administered and used to enhance the tumour dose during radiation therapy. The use of gold nanoparticles seems more promising than earlier methods because of the high atomic number of gold and because nanoparticles can more easily penetrate the tumour vasculature. However, to date, possible dose enhancement due to the use of gold nanoparticles has not been well quantified, especially for common radiation treatment situations. Therefore, the current preliminary study estimated this dose enhancement by Monte Carlo calculations for several phantom test cases representing radiation treatments with the following modalities: 140 kVp x-rays, 4 and 6 MV photon beams, and 192 Ir gamma rays. The current study considered three levels of gold concentration within the tumour, two of which are based on the aforementioned mouse study, and assumed either no gold or a single gold concentration level outside the tumour. The dose enhancement over the tumour volume considered for the 140 kVp x-ray case can be at least a factor of 2 at an achievable gold concentration of 7 mg Au/g tumour, assuming no gold outside the tumour. The tumour dose enhancement for the cases involving the 4 and 6 MV photon beams based on the same assumption ranged from about 1% to 7%, depending on the amount of gold within the tumour and photon beam qualities. For the 192 Ir cases, the dose enhancement within the tumour region ranged from 5% to 31%, depending on radial distance and gold concentration level within the tumour. For the 7 mg Au/g tumour cases, the loading of gold into surrounding normal tissue at 2 mg Au/g resulted in an increase in the normal tissue dose of up to 30%, negligible, and about 2% for the 140 kVp x-rays, 6 MV photon beam, and 192 Ir gamma rays, respectively, while the magnitude of dose enhancement within the tumour was essentially unchanged. (note)
Face to phase: pitfalls in time delay estimation from coherency phase
Campfens, S.F.; van der Kooij, Herman; Schouten, Alfred Christiaan
2014-01-01
Coherency phase is often interpreted as a time delay reflecting a transmission delay between spatially separated neural populations. However, time delays estimated from corticomuscular coherency are conflicting and often shorter than expected physiologically. Recent work suggests that
Freeway travel time estimation using existing fixed traffic sensors : phase 2.
2015-03-01
Travel time, one of the most important freeway performance metrics, can be easily estimated using the data collected from fixed traffic sensors, avoiding the need to install additional travel time data collectors. This project is aimed at fully u...
DOTD support for UTC project : travel time estimation using bluetooth, [research project capsule].
2013-10-01
Travel time estimates are useful tools for measuring congestion in an urban area. Current practice involves using probe vehicles or video cameras to measure travel time, but this is a labor-intensive and expensive means of obtaining the information....
2012-12-01
Estimates of value of time (VOT) and value of travel time savings (VTTS) are critical elements in benefit-cost analyses of transportation projects and in developing congestion pricing policies. In addition, differences in VTTS among various modes ...
Methodology for Time-Domain Estimation of Storm-Time Electric Fields Using the 3D Earth Impedance
Kelbert, A.; Balch, C. C.; Pulkkinen, A. A.; Egbert, G. D.; Love, J. J.; Rigler, E. J.; Fujii, I.
2016-12-01
Magnetic storms can induce geoelectric fields in the Earth's electrically conducting interior, interfering with the operations of electric-power grid industry. The ability to estimate these electric fields at Earth's surface in close to real-time and to provide local short-term predictions would improve the ability of the industry to protect their operations. At any given time, the electric field at the Earth's surface is a function of the time-variant magnetic activity (driven by the solar wind), and the local electrical conductivity structure of the Earth's crust and mantle. For this reason, implementation of an operational electric field estimation service requires an interdisciplinary, collaborative effort between space science, real-time space weather operations, and solid Earth geophysics. We highlight in this talk an ongoing collaboration between USGS, NOAA, NASA, Oregon State University, and the Japan Meteorological Agency, to develop algorithms that can be used for scenario analyses and which might be implemented in a real-time, operational setting. We discuss the development of a time domain algorithm that employs discrete time domain representation of the impedance tensor for a realistic 3D Earth, known as the discrete time impulse response (DTIR), convolved with the local magnetic field time series, to estimate the local electric field disturbances. The algorithm is validated against measured storm-time electric field data collected in the United States and Japan. We also discuss our plans for operational real-time electric field estimation using 3D Earth impedances.
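The core of the time-domain algorithm described above is a discrete convolution of an impedance impulse response with the local magnetic-field time series. The sketch below uses a synthetic exponential filter and a white-noise magnetic series purely for illustration; a real discrete time impulse response (DTIR) would be derived from a 3D Earth impedance tensor and applied per field component.

```python
import numpy as np

# Estimate the surface electric field as a causal convolution of a
# discrete time impulse response (DTIR) with the magnetic-field series.
# Both the filter and the "storm-time" series are synthetic stand-ins.
rng = np.random.default_rng(3)

dtir = np.exp(-np.arange(64) / 8.0)      # synthetic impulse response taps
dtir /= dtir.sum()                       # normalize for a unit DC gain

b_field = rng.normal(0.0, 50.0, 4096)    # nT, synthetic 1-min magnetic series
e_field = np.convolve(b_field, dtir)[:b_field.size]  # causal estimate
```

Because the filter is causal and finite, the same convolution can be evaluated sample by sample as new magnetometer data arrive, which is what makes the approach suitable for a real-time operational setting.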
Horner, Andrew B; Beauchamp, James W; So, Richard H Y
2009-01-01
Gradated spectral interpolations between musical instrument tone pairs were used to investigate discrimination as a function of time-averaged spectral difference. All possible nonidentical pairs taken from a collection of eight musical instrument sounds consisting of bassoon, clarinet, flute, horn, oboe, saxophone, trumpet, and violin were tested. For each pair, several tones were generated with different balances between the primary and secondary instruments, where the balance was fixed across the duration of each tone. Among primary instruments it was found that changes to horn and bassoon [corrected] were most easily discriminable, while changes to saxophone and trumpet timbres were least discriminable. Among secondary instruments, the clarinet had the strongest effect on discrimination, whereas the bassoon had the least effect. For primary instruments, strong negative correlations were found between discrimination and their spectral incoherences, suggesting that the presence of dynamic spectral variations tends to increase the difficulty of detecting time-varying alterations such as spectral interpolation.
On the impact of topography and building mask on time varying gravity due to local hydrology
Deville, S.; Jacob, T.; Chéry, J.; Champollion, C.
2013-01-01
We use 3 yr of surface absolute gravity measurements at three sites on the Larzac plateau (France) to quantify the changes induced by topography and buildings on gravity time-series, with respect to an idealized infinite slab approximation. Indeed, local topography and the buildings housing ground-based gravity measurements have an effect on the distribution of water storage changes, therefore affecting the associated gravity signal. We first calculate the effects of surrounding topography and building dimensions on the gravitational attraction of a uniform layer of water. We show that a gravimetric interpretation of water storage change using an infinite slab, the so-called Bouguer approximation, is generally not suitable. We propose to split the time-varying gravity signal into two parts: (1) a surface component including topographic and building effects, and (2) a deep component associated with underground water transfer. A reservoir modelling scheme is presented to remove the local site effects and to invert for the effective hydrological properties of the unsaturated zone. We show that effective time constants associated with water transfer vary greatly from site to site. We propose that our modelling scheme can be used to correct for local site effects on gravity at any site presenting a departure from a flat topography. Depending on the site, the corrected signal can exceed measured values by 5-15 μGal, corresponding to 120-380 mm of water using the Bouguer slab formula. Our approach only requires knowledge of daily precipitation corrected for evapotranspiration. Therefore, it can be a useful tool for correcting any kind of gravimetric time-series data.
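The Bouguer slab conversion quoted above is easy to reproduce: an infinite slab of water of thickness h contributes Δg = 2πGρh, about 0.042 μGal per mm of water, which maps the 5-15 μGal corrections to roughly the 120-380 mm water equivalents given in the abstract.

```python
import math

# Bouguer slab approximation: gravity effect of an infinite slab of water,
# delta_g = 2 * pi * G * rho * h  (roughly 0.042 microGal per mm of water).
G = 6.674e-11          # m^3 kg^-1 s^-2, gravitational constant
RHO_WATER = 1000.0     # kg/m^3

def slab_microgal(h_mm):
    """Gravity effect (microGal) of a water slab of thickness h_mm (mm)."""
    h = h_mm / 1000.0
    return 2.0 * math.pi * G * RHO_WATER * h * 1e8  # 1 m/s^2 = 1e8 microGal

# The water equivalents quoted in the abstract:
low, high = slab_microgal(120.0), slab_microgal(380.0)  # ~5 and ~16 microGal
```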
SiPM optical crosstalk amplification due to scintillator crystal: effects on timing performance
International Nuclear Information System (INIS)
Gola, Alberto; Ferri, Alessandro; Tarolli, Alessandro; Zorzi, Nicola; Piemonte, Claudio
2014-01-01
For a given photon detection efficiency (PDE), the primary, Poisson distributed, dark count rate of the detector (DCR 0 ) is one of the most limiting factors affecting the timing resolution of a silicon photomultiplier (SiPM) in the scintillation light readout. If the effects of DCR 0 are removed through a suitable baseline compensation algorithm or by cooling, it is possible to clearly observe another phenomenon that limits the PDE, and thus the timing resolution of the detector. It is caused by the optical crosstalk of the SiPM, which is significantly increased by the presence of the scintillator. In this paper, we describe this phenomenon, which is also easily observed from the reverse I–V curve of the device, and we relate it to the measured coincidence resolving time in 511 keV γ-ray measurements. We discuss its consequences on the SiPM design and, in particular, we observe that there is an optimal cell size, dependent on both SiPM and crystal parameters, that maximizes the PDE in presence of optical crosstalk. Finally, we report on a crosstalk simulator developed to study the phenomenon and we compare the simulation results obtained for different SiPM technologies, featuring different approaches to the reduction of the crosstalk. (paper)
International Nuclear Information System (INIS)
Lewis, C; Jiang, R; Chow, J
2015-01-01
Purpose: We developed a method to predict the change of the DVH for the PTV due to interfraction organ motion in prostate VMAT without repeating the CT scan and treatment planning. The method is based on a pre-calculated patient database with DVH curves of the PTV modelled by the Gaussian error function (GEF). Methods: For a group of 30 patients with different prostate sizes, their VMAT plans were recalculated by shifting their PTVs 1 cm in 10 increments in the anterior-posterior, left-right and superior-inferior directions. The DVH curve of the PTV in each replan was then fitted by the GEF to determine parameters describing the shape of the curve. Information on how these parameters vary with the DVH change due to prostate motion for different prostate sizes was analyzed and stored in a database by a program written in MATLAB. Results: To predict a new DVH for the PTV due to prostate interfraction motion, the prostate size and shift distance with direction were input to the program. Parameters modelling the DVH for the PTV were determined from the pre-calculated patient dataset. From the new parameters, DVH curves of PTVs with and without the prostate motion were plotted for comparison. The program was verified with different prostate cases involving interfraction prostate shifts and replans. Conclusion: Variation of the DVH for the PTV in prostate VMAT can be predicted using a pre-calculated patient database with DVH curve fitting. The computation is fast because a CT rescan and replan are not required. This quick DVH estimation can help radiation staff determine whether the changed PTV coverage due to a prostate shift is tolerable in the treatment. However, it should be noted that the program can only consider prostate interfraction motions along three axes, and is restricted to prostate VMAT plans using the same plan script in the treatment planning system.
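The curve-fitting step can be sketched as follows. The exact GEF parameterization used in the study is not given in the abstract, so a common two-parameter error-function form for a cumulative DVH is assumed here, fitted by a coarse grid search rather than whatever optimizer the authors used:

```python
import math

def gef_dvh(dose, d50, sigma):
    """Cumulative DVH modelled with the Gaussian error function:
    fractional volume receiving at least `dose` Gy (assumed form)."""
    return 0.5 * (1.0 - math.erf((dose - d50) / (sigma * math.sqrt(2))))

def fit_gef(doses, volumes):
    """Brute-force least-squares fit of (d50, sigma); a coarse grid
    stands in for the (unspecified) optimizer used by the authors."""
    best = None
    for d50 in [d / 10 for d in range(700, 821)]:      # 70-82 Gy
        for sigma in [s / 10 for s in range(5, 61)]:   # 0.5-6 Gy
            sse = sum((gef_dvh(d, d50, sigma) - v) ** 2
                      for d, v in zip(doses, volumes))
            if best is None or sse < best[0]:
                best = (sse, d50, sigma)
    return best[1], best[2]

# Synthetic DVH with d50 = 76 Gy, sigma = 2.5 Gy (illustrative numbers)
doses = [d / 2 for d in range(120, 180)]   # 60-89.5 Gy
volumes = [gef_dvh(d, 76.0, 2.5) for d in doses]
print(fit_gef(doses, volumes))  # -> (76.0, 2.5)
```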
A test of alternative estimators for volume at time 1 from remeasured point samples
Francis A. Roesch; Edwin J. Green; Charles T. Scott
1993-01-01
Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293), takes advantage of additional sample...
Ab initio quantum-enhanced optical phase estimation using real-time feedback control
DEFF Research Database (Denmark)
Berni, Adriano; Gehring, Tobias; Nielsen, Bo Melholt
2015-01-01
of a quantum-enhanced and fully deterministic ab initio phase estimation protocol based on real-time feedback control. Using robust squeezed states of light combined with a real-time Bayesian adaptive estimation algorithm, we demonstrate deterministic phase estimation with a precision beyond the quantum shot...... noise limit. The demonstrated protocol opens up new opportunities for quantum microscopy, quantum metrology and quantum information processing....
Time Estimation in Alzheimer's Disease and the Role of the Central Executive
Papagno, Costanza; Allegra, Adele; Cardaci, Maurizio
2004-01-01
The aim of this study was to evaluate the role of short-term memory and attention in time estimation. For this purpose we studied prospective time verbal estimation in 21 patients with Alzheimer's disease (AD), and compared their performance with that of 21 matched normal controls in two different conditions: during a digit span task and during an…
2010-03-01
The objectives of this project were to (a) produce historic estimates of travel times on Twin-Cities arterials for 1995 and 2005, and (b) develop an initial architecture and database that could, in the future, produce timely estimates of arterial...
Directory of Open Access Journals (Sweden)
Dario Cazzoli
Systematic differences in circadian rhythmicity are thought to be a substantial factor determining inter-individual differences in fatigue and cognitive performance. The synchronicity effect (when the time of testing coincides with the respective circadian peak period) seems to play an important role. Eye movements have been shown to be a reliable indicator of fatigue due to sleep deprivation or time spent on cognitive tasks. However, eye movements have not been used so far to investigate the circadian synchronicity effect and the resulting differences in fatigue. The aim of the present study was to assess how different oculomotor parameters in a free visual exploration task are influenced by: (a) fatigue due to chronotypical factors (being a 'morning type' or an 'evening type'); (b) fatigue due to the time spent on task. Eighteen healthy participants performed a free visual exploration task of naturalistic pictures while their eye movements were recorded. The task was performed twice, once at their optimal and once at their non-optimal time of the day. Moreover, participants rated their subjective fatigue. The non-optimal time of the day triggered a significant and stable increase in the mean visual fixation duration during the free visual exploration task for both chronotypes. The increase in the mean visual fixation duration correlated with the difference in subjectively perceived fatigue at optimal and non-optimal times of the day. Conversely, the mean saccadic speed significantly and progressively decreased throughout the duration of the task, but was not influenced by the optimal or non-optimal time of the day for either chronotype. The results suggest that different oculomotor parameters are discriminative for fatigue due to different sources. A decrease in saccadic speed seems to reflect fatigue due to time spent on task, whereas an increase in mean fixation duration seems to reflect a lack of synchronicity between chronotype and time of the day.
Dephasing times in quantum dots due to elastic LO phonon-carrier collisions
DEFF Research Database (Denmark)
Uskov, A. V.; Jauho, Antti-Pekka; Tromborg, Bjarne
2000-01-01
Interpretation of experiments on quantum dot (QD) lasers presents a challenge: the phonon bottleneck, which should strongly suppress relaxation and dephasing of the discrete energy states, often seems to be inoperative. We suggest and develop a theory for an intrinsic mechanism for dephasing in Q......: second-order elastic interaction between quantum dot charge carriers and LO phonons. The calculated dephasing times are of the order of 200 fs at room temperature, consistent with experiments. The phonon bottleneck thus does not prevent significant room temperature dephasing....
Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods
International Nuclear Information System (INIS)
Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris
2016-01-01
Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case study of a 2nd-order equivalent circuit model shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.
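As a toy illustration of direct continuous-time estimation (a first-order relaxation only, not the paper's 2nd-order equivalent circuit model or its instrumental-variable refinement), the time constant of an assumed RC branch can be recovered by least squares on the continuous-time model dv/dt = -v/tau:

```python
import math

def estimate_tau(t, v):
    """Least-squares estimate of tau in the continuous-time model
    dv/dt = -v/tau, using central-difference derivative estimates:
    tau = -sum(v^2) / sum(v * dv/dt)."""
    num = den = 0.0
    for i in range(1, len(t) - 1):
        dv = (v[i + 1] - v[i - 1]) / (t[i + 1] - t[i - 1])
        num += dv * v[i]
        den += v[i] ** 2
    return -den / num

# Simulated relaxation of a hypothetical RC polarization branch,
# tau = R*C = 50 s, sampled once per second:
t = list(range(201))
v = [math.exp(-ti / 50.0) for ti in t]
print(round(estimate_tau(t, v), 1))  # -> 50.0
```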
Osborne, Nicholas J.; Alcock, Ian; Wheeler, Benedict W.; Hajat, Shakoor; Sarran, Christophe; Clewlow, Yolanda; McInnes, Rachel N.; Hemming, Deborah; White, Mathew; Vardoulakis, Sotiris; Fleming, Lora E.
2017-10-01
Exposure to pollen can contribute to increased hospital admissions for asthma exacerbation. This study applied an ecological time series analysis to examine associations between atmospheric concentrations of different pollen types and the risk of hospitalization for asthma in London from 2005 to 2011. The analysis examined short-term associations between daily pollen counts and hospital admissions in the presence of seasonal and long-term patterns, and allowed for time lags between exposure and admission. Models were adjusted for temperature, precipitation, humidity, day of week, and air pollutants. Analyses revealed an association between daily counts (continuous) of grass pollen and adult hospital admissions for asthma in London, with a 4-5-day lag. When grass pollen concentrations were categorized into Met Office pollen 'alert' levels, 'very high' days (vs. 'low') were associated with increased admissions 2-5 days later, peaking at an incidence rate ratio of 1.46 (95% CI 1.20-1.78) at 3 days. Increased admissions were also associated with 'high' versus 'low' pollen days at a 3-day lag. Results from tree pollen models were inconclusive and likely to have been affected by the shorter pollen seasons and consequent limited number of observation days with higher tree pollen concentrations. Future reductions in asthma hospitalizations may be achieved by better understanding of environmental risks, informing improved alert systems and supporting patients to take preventive measures.
Time Dependent Frictional Changes in Ice due to Contact Area Changes
Sevostianov, V.; Lipovsky, B. P.; Rubinstein, S.; Dillavou, S.
2017-12-01
Sliding processes along the ice-bed interface of Earth's great ice sheets are the largest contributor to our uncertainty in future sea level rise. Laboratory experiments that have probed sliding processes have ubiquitously shown that ice-rock interfaces strengthen while in stationary contact (Schulson and Fortt, 2013; Zoet et al., 2013; McCarthy et al., 2017). This so-called frictional ageing effect may have profound consequences for ice sheet dynamics because it introduces the possibility of basal strength hysteresis. Furthermore, this effect is quite strong in ice-rock interfaces (more than an order of magnitude more pronounced than in rock-rock sliding), where frictional strength can double in a matter of minutes, much faster than most frictional ageing (Dieterich, 1972; Baumberger and Caroli, 2006). Despite this importance, the underlying physics of frictional ageing of ice remains poorly understood. Here we conduct laboratory experiments to image the microscopic points of contact along an ice-glass interface. We optically measure changes in the real area of contact over time using measurements of reflected optical light intensity. We show that contact area increases with time of stationary contact. This result suggests that thermally enhanced creep of microscopic icy contacts is responsible for the much larger frictional ageing observed in ice-rock versus rock-rock interfaces. Furthermore, this supports a more physically detailed description of the thermal dependence of basal sliding than that used in the current generation of large-scale ice sheet models.
Estimating the level of dynamical noise in time series by using fractal dimensions
International Nuclear Information System (INIS)
Sase, Takumi; Ramírez, Jonatán Peña; Kitajo, Keiichi; Aihara, Kazuyuki; Hirata, Yoshito
2016-01-01
We present a method for estimating the dynamical noise level of a ‘short’ time series even if the dynamical system is unknown. The proposed method estimates the level of dynamical noise by calculating the fractal dimensions of the time series. Additionally, the method is applied to EEG data to demonstrate its possible effectiveness as an indicator of temporal changes in the level of dynamical noise. - Highlights: • A dynamical noise level estimator for time series is proposed. • The estimator does not need any information about the dynamics generating the time series. • The estimator is based on a novel definition of time series dimension (TSD). • It is demonstrated that there exists a monotonic relationship between the TSD and the level of dynamical noise. • We apply the proposed method to human electroencephalographic data.
International Nuclear Information System (INIS)
Paganetti, Harald
2005-01-01
Purpose: Dynamic radiation therapy, such as intensity-modulated radiation therapy, delivers more complex treatment fields than conventional techniques. The increased complexity causes longer dose delivery times for each fraction. The cellular damage after a full treatment may depend on the dose rate, because sublethal radiation damage can be repaired more efficiently during prolonged dose delivery. The goal of this study was to investigate the significance of this effect in fractionated radiation therapy. Methods and Materials: The lethal/potentially lethal model was used to calculate lesion induction rates for repairable and nonrepairable lesions. Dose rate effects were analyzed for 9 different cell lines (8 human tumor xenografts and a C3H10T1/2 cell line). The effects of single-fraction as well as fractionated irradiation for different dose rates were studied. Results: Significant differences can be seen for dose rates lower than about 0.1 Gy/min for all cell lines considered. For 60 Gy delivered in 30 fractions, the equivalent dose is reduced by between 1.3% and 12% comparing 2 Gy delivery over 30 min per fraction with 2 Gy delivery over 1 min per fraction. The effect is higher for higher doses per fraction. Furthermore, the results show that dose rate effects do not show a simple correlation with the α/β ratio for ratios between 3 Gy and 31 Gy. Conclusions: If the total dose delivery time for a treatment fraction in radiation therapy increases to about 20 min, a correction for dose rate effects may have to be considered in treatment planning. Adjustments in effective dose may be necessary when comparing intensity-modulated radiation therapy with conventional treatment plans
International Nuclear Information System (INIS)
Zeng, G.L.; Gullberg, G.T.
1995-01-01
It is common practice to estimate kinetic parameters from dynamically acquired tomographic data by first reconstructing a dynamic sequence of three-dimensional reconstructions and then fitting the parameters to time activity curves generated from the time-varying reconstructed images. However, in SPECT, the pharmaceutical distribution can change during the acquisition of a complete tomographic data set, which can bias the estimated kinetic parameters. It is hypothesized that more accurate estimates of the kinetic parameters can be obtained by fitting to the projection measurements instead of the reconstructed time sequence. Estimation from projections requires knowledge of the relationship between the tissue regions of interest (or voxels) with particular kinetic parameters and the projection measurements, which results in a complicated nonlinear estimation problem with a series of exponential factors with multiplicative coefficients. A technique is presented in this paper where the exponential decay parameters are estimated separately using linear time-invariant system theory. Once the exponential factors are known, the coefficients of the exponentials can be estimated using linear estimation techniques. Computer simulations demonstrate that estimation of the kinetic parameters directly from the projections is more accurate than estimation from the reconstructed images.
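The separable structure described (estimate the exponential decay rates first, then recover the multiplicative coefficients by linear methods) can be sketched for the simplest single-exponential, noise-free case; the log-linear fit below is a simplified stand-in for the linear time-invariant system techniques the paper uses:

```python
import math

def fit_exponential(t, y):
    """Estimate A and b in y = A*exp(-b*t) by ordinary least squares
    on log(y), a linear-methods stand-in for the separable estimation
    described above (single exponential, noise-free data assumed)."""
    n = len(t)
    ly = [math.log(v) for v in y]
    tbar = sum(t) / n
    lbar = sum(ly) / n
    slope = (sum((ti - tbar) * (li - lbar) for ti, li in zip(t, ly))
             / sum((ti - tbar) ** 2 for ti in t))
    return math.exp(lbar - slope * tbar), -slope

# Synthetic time activity curve (illustrative numbers only)
t = [i * 0.5 for i in range(20)]
y = [3.0 * math.exp(-0.4 * ti) for ti in t]
A, b = fit_exponential(t, y)
print(round(A, 3), round(b, 3))  # -> 3.0 0.4
```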
Brenner, Darren R
2014-09-01
This analysis aimed to estimate the number of incident cases of various cancers attributable to excess body weight (overweight, obesity) and leisure-time physical inactivity annually in Canada. The number of attributable cancers was estimated using the population attributable fraction (PAF), risk estimates from recent meta-analyses and population exposure prevalence estimates obtained from the Canadian Community Health Survey (2000). Age-sex-site-specific cancer incidence was obtained from Statistics Canada tables for the most up-to-date year with full national data, 2007. Where the evidence for association has been deemed sufficient, we estimated the number of incident cases of the following cancers attributable to obesity: colon, breast, endometrium, esophagus (adenocarcinomas), gallbladder, pancreas and kidney; and to physical inactivity: colon, breast, endometrium, prostate, lung and/or bronchus, and ovarian. Overall, estimates of all cancer incidence in 2007 suggest that at least 3.5% (n=5771) and 7.9% (n=12,885) are attributable to excess body weight and physical inactivity, respectively. For both risk factors the burden of disease was greater among women than among men. Thousands of incident cases of cancer could be prevented annually in Canada, as good evidence exists for effective interventions to reduce these risk factors in the population. Copyright © 2014. Published by Elsevier Inc.
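The PAF computation itself is a one-liner (Levin's formula for a dichotomous exposure); the prevalence and relative-risk numbers below are purely illustrative, not the study's inputs:

```python
def paf(prevalence, rr):
    """Levin's population attributable fraction for a single
    dichotomous exposure: PAF = p(RR-1) / (1 + p(RR-1))."""
    x = prevalence * (rr - 1.0)
    return x / (1.0 + x)

# Hypothetical inputs, not the study's: if 33% of the population is
# exposed and the exposure carries RR = 1.3, then out of 10,000
# incident cases the attributable count is PAF * cases:
fraction = paf(0.33, 1.3)
print(round(fraction, 3))        # -> 0.09
print(round(fraction * 10000))   # -> 901
```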
Estimating time-varying RSA to examine psychophysiological linkage of marital dyads.
Gates, Kathleen M; Gatzke-Kopp, Lisa M; Sandsten, Maria; Blandon, Alysia Y
2015-08-01
One of the primary tenets of polyvagal theory dictates that parasympathetic influence on heart rate, often estimated by respiratory sinus arrhythmia (RSA), shifts rapidly in response to changing environmental demands. The current standard analytic approach of aggregating RSA estimates across time to arrive at one value fails to capture this dynamic property within individuals. By utilizing recent methodological developments that enable precise RSA estimates at smaller time intervals, we demonstrate the utility of computing time-varying RSA for assessing psychophysiological linkage (or synchrony) in husband-wife dyads using time-locked data collected in a naturalistic setting. © 2015 Society for Psychophysiological Research.
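A schematic of time-varying RSA estimation: sliding-window high-frequency (0.12-0.40 Hz) spectral power of an interbeat-interval series. This is a coarse stand-in for the more precise time-frequency estimator the authors employ; window length, band limits, and the synthetic data are all assumptions:

```python
import math, cmath

def sliding_rsa(ibi, fs, win_s=30.0, band=(0.12, 0.40)):
    """Log high-frequency spectral power of an evenly resampled
    interbeat-interval series, in 50%-overlapping sliding windows."""
    n = int(win_s * fs)
    out = []
    for start in range(0, len(ibi) - n + 1, n // 2):
        seg = ibi[start:start + n]
        mean = sum(seg) / n
        seg = [s - mean for s in seg]
        power = 0.0
        for k in range(1, n // 2):
            f = k * fs / n
            if band[0] <= f <= band[1]:
                X = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                        for i, s in enumerate(seg))
                power += abs(X) ** 2 / n
        out.append(math.log(power))
    return out

# Synthetic IBI series at 4 Hz: a 0.25 Hz respiratory oscillation whose
# amplitude doubles halfway through (illustrative data only)
fs = 4.0
ibi = [0.8 + (0.05 if i < 240 else 0.10)
       * math.sin(2 * math.pi * 0.25 * i / fs) for i in range(480)]
rsa = sliding_rsa(ibi, fs)
print(rsa[-1] > rsa[0])  # -> True (RSA rises with deeper oscillation)
```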
Bayesian switching factor analysis for estimating time-varying functional connectivity in fMRI.
Taghia, Jalil; Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Cai, Weidong; Menon, Vinod
2017-07-15
There is growing interest in understanding the dynamical properties of functional interactions between distributed brain regions. However, robust estimation of temporal dynamics from functional magnetic resonance imaging (fMRI) data remains challenging due to limitations in extant multivariate methods for modeling time-varying functional interactions between multiple brain areas. Here, we develop a Bayesian generative model for fMRI time-series within the framework of hidden Markov models (HMMs). The model is a dynamic variant of the static factor analysis model (Ghahramani and Beal, 2000). We refer to this model as Bayesian switching factor analysis (BSFA) as it integrates factor analysis into a generative HMM in a unified Bayesian framework. In BSFA, brain dynamic functional networks are represented by latent states which are learnt from the data. Crucially, BSFA is a generative model which estimates the temporal evolution of brain states and transition probabilities between states as a function of time. An attractive feature of BSFA is the automatic determination of the number of latent states via Bayesian model selection arising from penalization of excessively complex models. Key features of BSFA are validated using extensive simulations on carefully designed synthetic data. We further validate BSFA using fingerprint analysis of multisession resting-state fMRI data from the Human Connectome Project (HCP). Our results show that modeling temporal dependencies in the generative model of BSFA results in improved fingerprinting of individual participants. Finally, we apply BSFA to elucidate the dynamic functional organization of the salience, central-executive, and default mode networks-three core neurocognitive systems with central role in cognitive and affective information processing (Menon, 2011). Across two HCP sessions, we demonstrate a high level of dynamic interactions between these networks and determine that the salience network has the highest temporal
Directory of Open Access Journals (Sweden)
Stacie B Dusetzina
estimates, possibly due to unmeasured confounding. Although calendar time-specific propensity scores appear to improve covariate balance, the impact on comparative effectiveness results is limited in this setting.
Wireless data collection system for real-time arterial travel time estimates.
2011-03-01
This project pursued several objectives conducive to the implementation and testing of a Bluetooth (BT) based system to collect travel time data, including the deployment of a BT-based travel time data collection system to perform comprehensive testi...
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratios of low to high energy (R values) are a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
Life time estimation of SSCs for decommissioning safety of nuclear facilities
International Nuclear Information System (INIS)
Jeong, Kwan-Seong; Lee, Kune-Woo; Moon, Jei-Kwon; Jeong, Seong-Young; Lee, Jung-Jun; Kim, Geun-Ho; Choi, Byung-Seon
2012-01-01
Highlights: ► This paper suggests an estimation algorithm for the life time of SSCs for decommissioning safety of nuclear facilities. ► The life time of SSCs can be estimated by using fuzzy theory. ► The estimated results depend on the membership functions and performance characteristic functions. - Abstract: This paper suggests an estimation algorithm for the life time of structures, systems and components (SSCs) for the decommissioning safety of nuclear facilities, using performance data expressed in linguistic terms and fuzzy theory. The fuzzy estimation algorithm for life time is easily applicable, but the estimated results depend on the relevant membership functions and performance characteristic functions. This method is expected to be very useful as a safety assessment tool for maintenance and decommissioning of nuclear facilities' SSCs.
The current duration design for estimating the time to pregnancy distribution
DEFF Research Database (Denmark)
Gasbarra, Dario; Arjas, Elja; Vehtari, Aki
2015-01-01
This paper was inspired by the studies of Niels Keiding and co-authors on estimating the waiting time-to-pregnancy (TTP) distribution, and in particular on using the current duration design in that context. In this design, a cross-sectional sample of women is collected from those who are currently...... attempting to become pregnant, and then by recording from each the time she has been attempting. Our aim here is to study the identifiability and the estimation of the waiting time distribution on the basis of current duration data. The main difficulty in this stems from the fact that very short waiting...... times are only rarely selected into the sample of current durations, and this renders their estimation unstable. We introduce here a Bayesian method for this estimation problem, prove its asymptotic consistency, and compare the method to some variants of the non-parametric maximum likelihood estimators...
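The selection effect described above (very short waiting times are only rarely selected into the current-duration sample) is easy to reproduce in simulation; the exponential TTP model, 6-month mean, and 60-month cap below are illustrative assumptions, not the paper's Bayesian model:

```python
import random

random.seed(7)

def simulate_current_durations(n, mean_ttp=6.0, cap=60.0):
    """Current-duration design: a woman whose total waiting time is T
    is found 'currently attempting' at a random survey date with
    probability proportional to T (length-biased selection); the
    recorded current duration is then uniform on [0, T]."""
    selected_t, current = [], []
    while len(selected_t) < n:
        t = min(random.expovariate(1.0 / mean_ttp), cap)
        if random.random() < t / cap:  # selection probability ~ T
            selected_t.append(t)
            current.append(random.uniform(0.0, t))
    return selected_t, current

totals, durations = simulate_current_durations(5000)
# Length bias: the average *total* waiting time among sampled women is
# close to twice the 6-month population mean, i.e. short waits are
# rarely selected into the sample.
print(sum(totals) / len(totals) > 9.0)  # -> True
```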
Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano
2018-03-01
Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct. The locations in which equations are used should have comparable characteristics to the locations from which such equations have been derived. To overcome this barrier, in this work, we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data from the location without the need of adapting or using empirical formulas from other locations. The proposal only uses one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream on the same river). The recorded data from each location generates two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using data from timing parameters, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than empirical
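The DDTW alignment step pairs plain dynamic-programming DTW with a local derivative estimate; a minimal sketch (Keogh-style derivative, toy river-level numbers, no PIP step):

```python
def derivative(series):
    """Keogh-style derivative estimate used by derivative DTW (DDTW)."""
    return [((series[i] - series[i - 1])
             + (series[i + 1] - series[i - 1]) / 2) / 2
            for i in range(1, len(series) - 1)]

def dtw(a, b):
    """Plain dynamic-programming DTW; returns the optimal alignment cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def ddtw(a, b):
    """DTW on derivative estimates: aligns on shape, not level."""
    return dtw(derivative(a), derivative(b))

# Toy data: the downstream record is roughly the upstream flood wave,
# delayed one step and offset in level; DDTW ignores the level offset:
up   = [0, 0, 1, 4, 8, 9, 8, 5, 2, 1, 0, 0]
down = [2, 2, 2, 3, 6, 10, 11, 10, 7, 4, 3, 2]
print(ddtw(up, down) < dtw(up, down))  # -> True
```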
Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling
Ballal, Tarig
2014-09-01
In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
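The co-prime restriction can be checked directly: the slow ADC visits every sub-sample phase exactly when the slowdown factor P is co-prime with the number of desired-rate sampling periods per inter-pulse interval (called N here for illustration; the abstract does not name it):

```python
from math import gcd

def covered_phases(P, N):
    """Sub-sample phases visited by an ADC running P times slower than
    the desired rate, across pulses spaced N desired-rate periods
    apart: the residues {m*P mod N}."""
    return sorted({(m * P) % N for m in range(N)})

print(covered_phases(3, 8))  # gcd(3,8)=1 -> [0, 1, 2, 3, 4, 5, 6, 7]
print(covered_phases(4, 8))  # gcd(4,8)=4 -> [0, 4], phases are lost
```

When clock drift shifts the pulse positions, the effective residues change, which is why the paper needs the BDU-based estimator rather than relying on this ideal coverage.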
Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang
2018-01-01
The large number of ambiguities in the un-differenced (UD) model lowers computational efficiency, which is not appropriate for high-frequency real-time GNSS clock estimation, such as at 1 Hz. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model for realizing multi-GNSS real-time high-frequency clock updating, and a rigorous comparison and analysis under the same conditions are performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock-bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive measure of orbit and clock accuracy that affects users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis of the real-time augmentation message SISRE is about 4-7 cm for GPS, while about 10 cm for BeiDou IGSO/MEO and Galileo and about 30 cm
Environmental Kuznets revisited. Time-series versus panel estimation. The CO2-case
International Nuclear Information System (INIS)
Dijkgraaf, E.; Vollebergh, H.R.J.
1998-01-01
According to the Environmental Kuznets Curve (EKC) hypothesis, economic growth and improving environmental quality are compatible. An inverted U-shaped relationship would exist between economic performance and environmental quality, suggesting that after some threshold a growing economy would generate less pollution. Usually the EKC hypothesis is tested for pooled panel data of some (sub)set of environmental indicators and GDP. The essential assumption behind pooling the observations of different countries in one panel is that the outcome of the economic process would be the same for all countries with respect to emissions. That is, the curvature of the Income-Emission Relation (IER) is the same for the pooled countries as far as they have the same GDP range. In our study we show this methodology to be misleading for at least the special case of carbon dioxide (CO2) emissions and the GDP level. Using OECD-wide data from 1960 to 1990, we find that the pooled estimation results show positive auto-correlation. Other studies correct for this problem. However, as we show, it seems more likely that the estimated regression model is not correct due to the pooling of the country observations. Testing the model per country reveals that the IER differs greatly between countries. While some countries show growing carbon dioxide emissions per capita with increasing income, others show a stabilizing pattern or even an EKC. This indicates that estimations based on pooling techniques can bias the conclusion about the true IER, leading to unjustified inferences on the existence of the EKC. Extending the basic model may result in the justification of pooling. However, our estimations including country-specific variables, like population density, openness of the economy and the availability of own fuel sources (endowment effects), do not make us optimistic. The autocorrelation problem remains. If a more general model cannot be found, the only remaining conclusion is that testing the EKC
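A minimal synthetic sketch of the pooling problem described above (country data and coefficients are invented for illustration): fitting a quadratic IER per country recovers two very different curvatures, while the pooled fit reports a single curvature that describes neither country.

```python
import numpy as np

# Hypothetical two-country example: country A has an inverted-U
# income-emission relation (EKC-like), country B a monotonic one.
income = np.linspace(1.0, 10.0, 30)
em_A = -0.5 * (income - 6.0) ** 2 + 20.0    # inverted U, curvature -0.5
em_B = 1.2 * income + 3.0                    # linear growth, curvature 0

# Per-country quadratic fits recover the different curvatures ...
a_A = np.polyfit(income, em_A, 2)[0]
a_B = np.polyfit(income, em_B, 2)[0]

# ... while pooling the observations yields one averaged curvature,
# falsely suggesting an EKC holds for both countries.
pooled = np.polyfit(np.r_[income, income], np.r_[em_A, em_B], 2)[0]
```

Here `pooled` comes out near -0.25: negative, so a pooled test would "find" an EKC even though country B's emissions rise monotonically with income.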
The fossilized birth–death process for coherent calibration of divergence-time estimates
Heath, Tracy A.; Huelsenbeck, John P.; Stadler, Tanja
2014-01-01
Time-calibrated species phylogenies are critical for addressing a wide range of questions in evolutionary biology, such as those that elucidate historical biogeography or uncover patterns of coevolution and diversification. Because molecular sequence data are not informative on absolute time, external data—most commonly, fossil age estimates—are required to calibrate estimates of species divergence dates. For Bayesian divergence time methods, the common practice for calibration using fossil information involves placing arbitrarily chosen parametric distributions on internal nodes, often disregarding most of the information in the fossil record. We introduce the “fossilized birth–death” (FBD) process—a model for calibrating divergence time estimates in a Bayesian framework, explicitly acknowledging that extant species and fossils are part of the same macroevolutionary process. Under this model, absolute node age estimates are calibrated by a single diversification model and arbitrary calibration densities are not necessary. Moreover, the FBD model allows for inclusion of all available fossils. We performed analyses of simulated data and show that node age estimation under the FBD model results in robust and accurate estimates of species divergence times with realistic measures of statistical uncertainty, overcoming major limitations of standard divergence time estimation methods. We used this model to estimate the speciation times for a dataset composed of all living bears, indicating that the genus Ursus diversified in the Late Miocene to Middle Pliocene. PMID:25009181
Kelbert, Anna; Balch, Christopher C.; Pulkkinen, Antti; Egbert, Gary D.; Love, Jeffrey J.; Rigler, E. Joshua; Fujii, Ikuko
2017-07-01
Geoelectric fields at the Earth's surface caused by magnetic storms constitute a hazard to the operation of electric power grids and related infrastructure. The ability to estimate these geoelectric fields in close to real time and provide local predictions would better equip the industry to mitigate negative impacts on their operations. Here we report progress toward this goal: development of robust algorithms that convolve a magnetic storm time series with a frequency domain impedance for a realistic three-dimensional (3-D) Earth, to estimate the local, storm time geoelectric field. Both frequency domain and time domain approaches are presented and validated against storm time geoelectric field data measured in Japan. The methods are then compared in the context of a real-time application.
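The frequency-domain approach can be sketched in a few lines: transform the magnetic time series, multiply by the impedance, and transform back. Here a synthetic magnetic signal and a scalar half-space-like impedance stand in for real storm data and the realistic 3-D impedance tensor described above.

```python
import numpy as np

# Toy frequency-domain convolution: E(f) = Z(f) * B(f).
fs = 1.0                                    # assumed sampling rate (samples/min)
t = np.arange(1024) / fs
b = np.sin(2 * np.pi * t / 60.0)            # synthetic storm-time B variation

B = np.fft.rfft(b)
f = np.fft.rfftfreq(b.size, d=1.0 / fs)
# Scalar stand-in with the sqrt(i*omega) shape of a uniform half-space;
# a realistic 3-D Earth requires a frequency-dependent impedance tensor.
Z = np.sqrt(1j * 2 * np.pi * np.maximum(f, 1e-9))
E = np.fft.irfft(Z * B, n=b.size)           # estimated E field (arbitrary units)
```

The time-domain alternative mentioned in the abstract corresponds to convolving `b` with the inverse transform of `Z`, which is better suited to streaming, near-real-time operation.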
On estimation of time-dependent attributable fraction from population-based case-control studies.
Zhao, Wei; Chen, Ying Qing; Hsu, Li
2017-09-01
Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data that are collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidences. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer. © 2017, The International Biometric Society.
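The paper's time-varying estimator is more involved, but the underlying case-based idea can be sketched with the classic Bruzzi-type formula, in which PAF is the mean over cases of (1 - 1/OR), with OR the fitted odds ratio at each case's exposure level (the per-case odds ratios below are hypothetical):

```python
import numpy as np

# Hypothetical fitted odds ratios for five cases' exposure levels
# (OR = 1 means the case's exposure carries no excess risk).
odds_ratios = np.array([1.0, 2.0, 2.0, 4.0, 1.0])

# Bruzzi-type PAF: the fraction of cases attributable to the exposure.
paf = np.mean(1.0 - 1.0 / odds_ratios)      # = 0.35 for these values
```

The time-varying extension replaces this single average with an average over cases conditional on failure time, which is where the kernel density estimate of the exposure distribution in cases enters.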
Directory of Open Access Journals (Sweden)
Akanda Md. Abdus Salam
2017-03-01
Individual heterogeneity in capture probabilities and time dependence are fundamentally important for estimating the closed animal population parameters in capture-recapture studies. A generalized estimating equations (GEE) approach accounts for linear correlation among capture-recapture occasions, and individual heterogeneity in capture probabilities in a closed population capture-recapture individual heterogeneity and time variation model. The estimated capture probabilities are used to estimate animal population parameters. Two real data sets are used for illustrative purposes. A simulation study is carried out to assess the performance of the GEE estimator. A Quasi-Likelihood Information Criterion (QIC) is applied for the selection of the best fitting model. This approach performs well when the estimated population parameters depend on the individual heterogeneity and the nature of linear correlation among capture-recapture occasions.
International Nuclear Information System (INIS)
Stidley, C.A.; Samet, J.M.
1992-01-01
Studies of underground miners indicate that indoor radon is an important cause of lung cancer. This finding has raised concern that exposure to radon also causes lung cancer in the general population. Epidemiological studies, including both case-control and ecological approaches, have directly addressed the risks of indoor residential radon; many more case-control studies are in progress. Ecological studies that associate lung-cancer rates with typical indoor radon levels in various geographic areas have not consistently shown positive associations. The results of purportedly negative ecological studies have been used as a basis for questioning the hazards of indoor radon exposure. Because of potentially serious methodologic flaws in testing hypotheses, we examined the ecological method as a tool for assessing lung-cancer risk from indoor radon exposure. We developed a simulation approach that utilizes the Environmental Protection Agency (EPA) radon survey data to assign exposures to individuals within counties. Using the computer-generated data, we compared risk estimates obtained by ecological regression methods with those obtained from other regression methods and with the "true" risks used to generate the data. For many of these simulations, the ecological models, while fitting the summary data well, gave risk estimates that differed considerably from the true risks. For some models, the risk estimates were negatively correlated with exposure, although the assumed relationship was positive. Attempts to improve the ecological models by adding smoking variables, including interaction terms, did not always improve the estimates of risk, which are easily affected by model misspecification. Because the exposure situations used in the simulations are realistic, our results show that ecological methods may not accurately estimate the lung-cancer risk associated with indoor radon exposure
Directory of Open Access Journals (Sweden)
Qingwu Gao
2012-01-01
We discuss the uniformly asymptotic estimate of the finite-time ruin probability for all times in a generalized compound renewal risk model, where the interarrival times of successive accidents and all the claim sizes caused by an accident are two sequences of random variables following a wide dependence structure. This wide dependence structure allows random variables to be either negatively dependent or positively dependent.
On the background estimation by time slides in a network of gravitational wave detectors
International Nuclear Information System (INIS)
Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis
2010-01-01
Time shifting the outputs of gravitational wave detectors operating in coincidence is a convenient way to estimate the background in a search for short-duration signals. However, this procedure is limited as increasing indefinitely the number of time shifts does not provide better estimates. We show that the false alarm rate estimation error saturates with the number of time shifts. In particular, for detectors with very different trigger rates, this error saturates at a large value. Explicit computations are done for two detectors, and for three detectors where the detection statistic relies on the logical 'OR' of the coincidences of the three couples in the network.
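A toy version of the time-slide procedure (trigger times, window, and shifts are all invented): shifting one detector's trigger stream by non-physical offsets destroys true coincidences, so the surviving matches per shift estimate the accidental-coincidence background.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10_000.0                                # observation time (s)
win = 0.01                                  # coincidence window (s)
t1 = np.sort(rng.uniform(0, T, 500))        # detector-1 trigger times
t2 = np.sort(rng.uniform(0, T, 500))        # detector-2 trigger times

def n_coinc(a, b, w):
    """Count elements of a with at least one element of sorted b within w."""
    idx = np.searchsorted(b, a)
    lo = np.clip(idx - 1, 0, b.size - 1)
    hi = np.clip(idx, 0, b.size - 1)
    return int(np.sum(np.minimum(np.abs(a - b[lo]), np.abs(a - b[hi])) <= w))

# 100 circular time slides of 5 s each (much longer than the window).
shifts = np.arange(1, 101) * 5.0
bg = [n_coinc(t1, np.sort((t2 + s) % T), win) for s in shifts]
rate = np.mean(bg) / T                      # estimated false alarm rate (Hz)
```

The saturation effect analyzed in the paper means that adding ever more shifts to `shifts` stops improving `rate`: the estimation error levels off rather than shrinking indefinitely.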
Rosch, E.
1975-01-01
The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates is associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.
DEFF Research Database (Denmark)
Nielsen, Torben Dahl; Kudahl, Anne Braad; Østergaard, S.
2013-01-01
Salmonella Dublin affects production and animal health in cattle herds. The objective of this study was to quantify the gross margin (GM) losses following introduction and spread of S. Dublin within dairy herds. The GM losses were estimated using an age-structured stochastic, mechanistic and dynamic simulation model. The model incorporated six age groups (neonatal, pre-weaned calves, weaned calves, growing heifers, breeding heifers and cows) and five infection stages (susceptible, acutely infected, carrier, super shedder and resistant). The effects of introducing one S. Dublin infectious ... with poorer management and herd size, e.g. average annual GM losses were estimated at 49 euros per stall for the first year after infection, and at 8 euros per stall annually averaged over the 10 years after herd infection, for a 200 cow stall herd with very good management. In contrast, a 200 cow stall herd
Stationary echo canceling in velocity estimation by time-domain cross-correlation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
1993-01-01
The application of stationary echo canceling to ultrasonic estimation of blood velocities using time-domain cross-correlation is investigated. Expressions are derived that show the influence from the echo canceler on the signals that enter the cross-correlation estimator. It is demonstrated...
Tightness of M-estimators for multiple linear regression in time series
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Bent
We show tightness of a general M-estimator for multiple linear regression in time series. The positive criterion function for the M-estimator is assumed lower semi-continuous and sufficiently large for large argument: Particular cases are the Huber-skip and quantile regression. Tightness requires...
Roche, J. W.; Goulden, M.; Bales, R. C.
2017-12-01
Increased forest evapotranspiration (ET) coupled with snowpack decreases in a warming climate is likely to decrease runoff and increase forest drought stress. Field experiments and modeling suggest that forest thinning can reduce ET and thus increase potential runoff relative to untreated areas. We investigated the potential magnitude and duration of ET decreases resulting from forest-thinning treatments and fire using a robust empirical relation between Landsat-derived mean-annual normalized difference vegetation index (NDVI) and annual ET measured at flux towers. Among forest treatments, the minimum observed NDVI change required to produce a significant departure from control plots with NDVI of about 0.70 was -0.07 units, corresponding to a basal-area reduction of 3.1 m2 ha-1, and equivalent to an estimated ET reduction of -102 mm yr-1. Intensive thinning in highly productive forests that approached pre-fire-exclusion densities reduced basal area by 40-50%, generating estimated ET reductions of 152-216 mm yr-1 over five years following treatment. Between 1990 and 2008, fires in the American River basin generated more than twice the ET reduction per unit area than those in the Kings River basin, corresponding to greater water and energy limitations in the latter and greater fire severity in the former. A rough extrapolation of these results to the entire American River watershed, much of which would have burned naturally during this 19-year period, could result in ET reductions that approach 10% of full natural flows for drought years and 5% averaged over all years. This work demonstrates the potential utility to estimate forest ET change at the patch scale, which in turn may allow managers to estimate thinning benefits in areas lacking detailed hydrologic measurements.
Energy Technology Data Exchange (ETDEWEB)
Laurent, J.
2002-12-15
Today, power and energy consumption have become, like time and area, important constraints in system design. Modern applications use ever more processing and memory resources, leading to a significant increase in consumption. Furthermore, the impact of embedded software is preponderant in real-time systems, so code optimisation has a great influence on the consumption constraint. Several research teams have already developed consumption estimation methodologies for processors, but most operate at the instruction level (ILPA). With this kind of method, the consumption of each instruction in the instruction set must be measured, along with the inter-instruction consumption overhead. For complex architectures this methodology is not suitable, because the number of consumption measurements required is prohibitive: the characterisation time becomes too long, and it is moreover very difficult to take the external environment into account. For current architectures another method is needed that reduces the characterisation time while preserving accuracy. This reduction has to be achieved by raising the abstraction level. We therefore propose a new approach based on a functional and architectural analysis of the target from a consumption point of view (FLPA). Our methodology has two steps: the first is a modeling step and the second an estimation step. (author)
Estimation of traffic recovery time for different flow regimes on freeways.
2008-06-01
This study attempts to estimate post-incident traffic recovery time along a freeway using Monte Carlo simulation techniques. It has been found that there is a linear relationship between post-incident traffic recovery time, and incident time and traf...
Zheng, Fangfang; van Zuylen, H.J.; Liu, Xiaobo
2017-01-01
Urban travel times are rather variable as a result of many stochastic factors in traffic flows, signals, and other conditions on the infrastructure. However, the most common approach, both in the literature and in practice, is to estimate or predict only expected travel times, not travel time
DYNAMIC STRAIN MAPPING AND REAL-TIME DAMAGE STATE ESTIMATION UNDER BIAXIAL RANDOM FATIGUE LOADING
National Aeronautics and Space Administration — Subhasish Mohanty*, Aditi Chattopadhyay, John N. Rajadas, and Clyde...
Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt; Sørensen, Michael
Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...
Estimating time-based instantaneous total mortality rate based on the age-structured abundance index
Wang, Yingbin; Jiao, Yan
2015-05-01
The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecasting, and fisheries management. A catch-curve-based method for estimating time-based Z and its trend of change from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not require the assumption of constant Z over the whole time series; instead, the Z values in n continuous years are assumed constant, and the Z values in different n-year windows are then estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations in both Z and recruitment can affect the estimated Z value and the trend of Z. The most appropriate value of n can differ depending on these factors; therefore, the appropriate value of n for a given fishery should be determined through a simulation analysis as demonstrated in this study. Further analyses suggested that selectivity and age estimation are two additional factors that can affect the estimated Z values if either contains error, but the estimated change rates of Z remain close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
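The constant-Z building block underlying the catch-curve approach can be sketched as follows (synthetic, noise-free CPUE at age; the paper's moving n-year window is not reproduced): under constant total mortality, log CPUE declines linearly with age, and the regression slope is -Z.

```python
import numpy as np

Z_true = 0.4
ages = np.arange(2, 10)                     # fully selected ages (assumed)
cpue = 1000.0 * np.exp(-Z_true * ages)      # synthetic CPUE-at-age

# Catch curve: slope of log CPUE against age estimates -Z.
slope = np.polyfit(ages, np.log(cpue), 1)[0]
Z_hat = -slope
```

The time-based extension repeats this fit within each window of n consecutive years, so a trend in Z shows up as a trend in the windowed estimates.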
Optimal replacement time estimation for machines and equipment based on cost function
J. Šebo; J. Buša; P. Demeč; J. Svetlík
2013-01-01
The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines considered, for which the optimization method is applicable, belong to metallurgical and engineering production. Different models of the cost function are considered (with both one and two variables). The parameters of the models were calculated by the least squares method. Model testing shows that all fit well enough, so for estimation of the optimal replacement time is ...
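As an illustration, a simple one-variable cost model (an assumed form, not necessarily the article's) balances the purchase price amortized over the holding period against linearly rising maintenance, giving a closed-form optimum that a numeric search reproduces:

```python
import numpy as np

# Hypothetical average annual cost: C(t) = a/t + b*t, where a/t spreads
# the purchase price over t years and b*t models rising maintenance.
a, b = 9000.0, 100.0                        # invented parameters (e.g., euros)
t = np.linspace(1.0, 20.0, 1000)            # candidate replacement times (years)
cost = a / t + b * t

t_star = t[np.argmin(cost)]                 # numeric minimum on the grid
t_analytic = np.sqrt(a / b)                 # dC/dt = -a/t^2 + b = 0 -> t* = sqrt(a/b)
```

In practice `a` and `b` would come from a least-squares fit to observed cost data, as in the article, rather than being assumed.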
Directory of Open Access Journals (Sweden)
Yueyang Li
2014-01-01
This paper investigates the H∞ fixed-lag fault estimator design for linear discrete time-varying (LDTV) systems with intermittent measurements, which is described by a Bernoulli distributed random variable. Through constructing a novel partially equivalent dynamic system, the fault estimator design is converted into a deterministic quadratic minimization problem. By applying the innovation reorganization technique and the projection formula in Krein space, a necessary and sufficient condition is obtained for the existence of the estimator. The parameter matrices of the estimator are derived by recursively solving two standard Riccati equations. An illustrative example is provided to show the effectiveness and applicability of the proposed algorithm.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Directory of Open Access Journals (Sweden)
Chaoyang Shi
2017-12-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
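A simplified sketch of combining link-level distributions into a path-level distribution (assumed Gaussian links with an invented spatial-correlation matrix; the Dempster-Shafer fusion step itself is not reproduced): the path mean sums the link means, while the path variance must include the off-diagonal covariance terms that spatial correlation introduces.

```python
import numpy as np

mu = np.array([60.0, 90.0, 45.0])           # link mean travel times (s), invented
sd = np.array([10.0, 20.0, 8.0])            # link standard deviations (s), invented
corr = np.array([[1.0, 0.4, 0.1],
                 [0.4, 1.0, 0.3],
                 [0.1, 0.3, 1.0]])          # assumed spatial correlations
cov = corr * np.outer(sd, sd)               # link travel-time covariance matrix

path_mean = mu.sum()                        # 195 s
path_var = cov.sum()                        # diagonal variances + covariances
```

Ignoring the correlations (summing only `sd**2`) would understate the path variance here by roughly a third, which is exactly the kind of error that makes on-time-arrival probabilities unreliable.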
International Nuclear Information System (INIS)
Sigaud, G.M.
1979-01-01
A dosimetric model for calculating the annual dose equivalent for an individual and the annual collective dose equivalent from 226Ra is developed. This model is applied to the measured concentrations of 226Ra in waters of the hydrographic basins of the Pocos de Caldas plateau, using the pathways of drinking water and ingestion of food grown in irrigated fields. A linear model for simulating potential 226Ra contamination of the waters of the region is also applied, and the doses from these contaminations are estimated using the dosimetric model developed. (author)
Near-real-time and scenario earthquake loss estimates for Mexico
Wyss, M.; Zuñiga, R.
2017-12-01
The large earthquakes of 8 September 2017, M8.1, and 19 September 2017, M7.1 have focused attention on the dangers of Mexican seismicity. The near-real-time alerts by QLARM estimated 10 to 300 fatalities and 0 to 200 fatalities, respectively. At the time of this submission the reported death tolls are 96 and 226, respectively. These alerts were issued within 96 and 57 minutes of the occurrence times. For the M8.1 earthquake the losses could be calculated using a line source model. The line, with length L = 110 km, extended from the initial epicenter to the NE, where the USGS had reported aftershocks. On September 19, no aftershocks were available in near-real-time, so a point source had to be used for the quick calculation of likely casualties. In both cases, the casualties were at least an order of magnitude smaller than they could have been, because on 8 September the source was relatively far offshore and on 19 September the hypocenter was relatively deep. The largest historic earthquake in Mexico occurred on 28 March 1787 and likely had a rupture length of 450 km and M8.6. Based on this event, and after verifying our tool for Mexico, we estimated the order of magnitude of a disaster, given the current population, in a maximum credible earthquake along the Pacific coast. In the countryside along the coast we expect approximately 27,000 fatalities and 480,000 injured. In the special case of Mexico City, the casualties in a worst possible earthquake along the Pacific plate boundary would likely be counted in five-digit numbers. The large agglomerate of the capital with its lake-bed soil attracts most attention. Nevertheless, one should pay attention to the fact that the poor, rural segment of society, living in buildings with weak resistance to shaking, is likely to sustain a mortality rate about 20% larger than that of the population in cities on average soil.
Real-time measurements and their effects on state estimation of distribution power system
DEFF Research Database (Denmark)
Han, Xue; You, Shi; Thordarson, Fannar
2013-01-01
This paper aims at analyzing the potential value of using different real-time metering and measuring instruments applied in low voltage distribution networks for state estimation. An algorithm is presented to evaluate different combinations of metering data using a tailored state estimator. It is followed by a case study based on the proposed algorithm. A real distribution grid feeder with different types of meters installed either in the cabinets or at the customer side is selected for simulation and analysis. Standard load templates are used to initiate the state estimation. The deviations between the estimated values (voltage and injected power) and the measurements are applied to evaluate the accuracy of the estimated grid states. Eventually, some suggestions are provided for the distribution grid operators on placing the real-time meters in the distribution grid.
Power System Real-Time Monitoring by Using PMU-Based Robust State Estimation Method
DEFF Research Database (Denmark)
Zhao, Junbo; Zhang, Gexiang; Das, Kaushik
2016-01-01
Accurate real-time states provided by the state estimator are critical for power system reliable operation and control. This paper proposes a novel phasor measurement unit (PMU)-based robust state estimation method (PRSEM) to monitor a power system in real time under different operation conditions. A ...-based bad data (BD) detection method, which can handle the smearing effect and critical measurement errors, is presented. We evaluate PRSEM by using IEEE benchmark test systems and a realistic utility system. The numerical results indicate that, in short computation time, PRSEM can effectively track the system real-time states with good robustness and can address several kinds of BD.
Directory of Open Access Journals (Sweden)
P. J. Ni Made
2016-06-01
A large-scale earthquake and tsunami affect thousands of people and cause serious damage worldwide every year. Quick observation of disaster damage is extremely important for planning effective rescue operations. In the past, acquiring damage information was limited to field surveys or aerial photographs. In the last decade, space-borne images have been used in many disaster studies, such as tsunami damage detection. In this study, SAR data from ALOS/PALSAR satellite images were used to estimate tsunami damage in the form of inundation areas in Talcahuano, the area near the epicentre of the 2010 Chile earthquake. The image processing consisted of three stages, i.e. pre-processing, analysis processing, and post-processing. It was conducted using multi-temporal images before and after the disaster. In the analysis processing, inundation areas were extracted through masking: water masking using a high-resolution optical image from ALOS/AVNIR-2, and elevation masking built upon the inundation height using a DEM image from ASTER-GDEM. The resulting area was 8.77 km2. It showed a good result and corresponded well to the inundation map of Talcahuano. A future study in another area is needed in order to strengthen the estimation method.
Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W
2016-11-15
In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing.
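Of the three methods compared, root-to-tip regression is simple enough to sketch directly: the substitution rate is the slope of an ordinary least-squares fit of root-to-tip genetic distance against sampling time, and -intercept/slope dates the root. The toy data below are illustrative, not drawn from the 81 compiled virus data sets.

```python
# Hedged sketch: root-to-tip regression. The substitution rate is the slope
# of an ordinary least-squares fit of root-to-tip distance vs sampling time.
# Toy data, not the compiled virus data sets.

def root_to_tip_rate(times, distances):
    n = len(times)
    mt, md = sum(times) / n, sum(distances) / n
    cov = sum((t - mt) * (d - md) for t, d in zip(times, distances))
    var = sum((t - mt) ** 2 for t in times)
    slope = cov / var               # substitutions/site/year
    intercept = md - slope * mt     # -intercept/slope dates the root
    return slope, intercept

times = [2000.0, 2005.0, 2010.0, 2015.0]   # tip sampling years
dists = [0.010, 0.020, 0.030, 0.040]       # root-to-tip distances
rate, b = root_to_tip_rate(times, dists)
print(round(rate, 6))  # -> 0.002 substitutions/site/year
```

Because the regression ignores phylogenetic correlation among tips, it is best treated as an exploratory check against the least-squares dating and Bayesian estimates.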
Time-scales for runoff and erosion estimates, with implications for spatial scaling
Kirkby, M. J.; Irvine, B. J.; Dalen, E. N.
2009-04-01
Using rainfall data at high temporal resolution, runoff may be estimated for every bucket-tip, or for aggregated hourly or daily periods. Although there is no doubt that finer resolution gives substantially better estimates, many models make use of coarser time steps because these data are more widely available. This paper makes comparisons between runoff estimates based on infiltration measurements used with high resolution rainfall data for SE Spain and theoretical work on improving the time resolution in the PESERA model from daily to hourly values, for areas where these are available. For a small plot at fine temporal scale, runoff responds to bursts of intense rainfall which, for the Guadalentin catchment, typically last for about 30 minutes. However, when a larger area is considered, the large and unstructured variability in infiltration capacity produces an aggregate runoff that differs substantially from estimates using average infiltration parameters (in the Green-Ampt equation). When these estimates are compared with estimates based on rainfall for aggregated hourly or daily periods, using a simpler infiltration model, it can be seen that there is substantial scatter, as expected, but that suitable parameterisation can provide reasonable average estimates. Similar conclusions may be drawn for erosion estimates, assuming that sediment transport is proportional to a power of runoff discharge. The spatial implications of these estimates can be made explicit with fine time resolution, showing that, with observed low overland flow velocities, only a small fraction of the hillside is generally able to deliver runoff to the nearest channel before rainfall intensity drops and runoff re-infiltrates. For coarser time resolutions, this has to be parameterised as a delivery ratio, and we show how this ratio can be rationally estimated from rainfall characteristics.
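The fine-resolution infiltration-excess calculation discussed above can be sketched with the Green-Ampt capacity equation f = K(1 + psi*dtheta/F). The storm and soil parameters below are illustrative assumptions, not the calibrated Guadalentin values.

```python
# Hedged sketch of infiltration-excess runoff with the Green-Ampt capacity
# f = K * (1 + psi_dtheta / F), stepped at 1-minute resolution. The storm,
# K, and psi*dtheta values are illustrative, not calibrated parameters.

def green_ampt_runoff(rain, dt, K, psi_dtheta, F0=1e-6):
    """Total runoff depth (mm); `rain` lists intensities (mm/h), dt in h."""
    F = F0                                   # cumulative infiltration (mm)
    runoff = 0.0
    for i in rain:
        f_cap = K * (1.0 + psi_dtheta / F)   # infiltration capacity (mm/h)
        f = min(i, f_cap)                    # actual infiltration rate
        F += f * dt
        runoff += (i - f) * dt               # infiltration excess
    return runoff

storm = [60.0] * 30                          # 30-min burst at 60 mm/h
q = green_ampt_runoff(storm, dt=1/60, K=10.0, psi_dtheta=50.0)
print(round(q, 2))                           # runoff depth in mm (< 30 mm)
```

Re-running the same storm aggregated to its hourly mean (30 mm/h over one hour) shows how coarser time steps smear the burst and change the runoff estimate.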
Cara B. Fedick; Shelley Pacholok; Anne H. Gauthier
2005-01-01
Extensive small scale studies have documented that when people assume the role of assisting a person with impairments or an older person, care activities account for a significant portion of their daily routines. Nevertheless, little research has investigated the problem of measuring the time that carers spend in care-related activities. This paper contrasts two different measures of care time – an estimated average weekly hours question in the 1998 Australian Survey of Disability, Ageing and...
Liu, Peng; Wang, Xiaoli
2017-01-01
A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...
International Nuclear Information System (INIS)
Tyagi, K.; Jain, S.C.; Jain, P.C.
2001-01-01
ICRP Publications 53, 62 and 80 give organ dose coefficients and effective doses to the ICRP Reference Man and Child from established nuclear medicine procedures. However, an average Indian adult differs significantly from the ICRP Reference Man as regards anatomical, physiological and metabolic characteristics, and is also considered to have different tissue weighting factors (called here risk factors). The masses of the total body and most organs are significantly lower for the Indian adult than for his ICRP counterpart (e.g. body mass 52 and 70 kg respectively). Similarly, the risk factors are lower by 20-30% for 8 out of the 13 organs and 30-60% higher for 3 organs. In the present study, available anatomical data of Indians and their risk factors have been utilised to estimate the radiation doses from administration of commonly used 99mTc-labelled radiopharmaceuticals under normal and certain pathological conditions. The following pathological conditions have been considered: for phosphates/phosphonates - high bone uptake and severely impaired kidney function; IDA - parenchymal liver disease, occlusion of cystic duct, and occlusion of bile duct; DTPA - abnormal renal function; large colloids - early to intermediate diffuse parenchymal liver disease, intermediate to advanced parenchymal liver disease; small colloids - early to intermediate parenchymal liver disease, intermediate to advanced parenchymal liver disease; and MAG3 - abnormal renal function, acute unilateral renal blockage. The estimated 'effective doses' to Indian adults are 14-21% greater than the ICRP value from administration of the same activity of radiopharmaceutical under normal physiological conditions based on anatomical considerations alone, because of the smaller organ masses for the Indian; for some pathological conditions the effective doses are 11-22% more. When tissue risk factors are considered in addition to anatomical considerations, the estimated effective doses are still found to be
Generalized synchronization-based multiparameter estimation in modulated time-delayed systems
Ghosh, Dibakar; Bhattacharyya, Bidyut K.
2011-09-01
We propose a nonlinear active observer based generalized synchronization scheme for multiparameter estimation in time-delayed systems with periodic time delay. A sufficient condition for parameter estimation is derived using Krasovskii-Lyapunov theory. The suggested tool proves to be globally and asymptotically stable by means of the Krasovskii-Lyapunov method. With this effective method, parameter identification and generalized synchronization of modulated time-delayed systems, with all the system parameters unknown, can be achieved simultaneously. We restrict our study to multiparameter estimation in modulated time-delayed systems with a single state variable only. Theoretical proof and numerical simulation demonstrate the effectiveness and feasibility of the proposed technique. The block diagram of an electronic circuit for the multiple time delay system shows that the method is easily applicable to practical communication problems.
Estimating urban vegetation fraction across 25 cities in pan-Pacific using Landsat time series data
Lu, Yuhao; Coops, Nicholas C.; Hermosilla, Txomin
2017-04-01
Urbanization globally is consistently reshaping the natural landscape to accommodate the growing human population. Urban vegetation plays a key role in moderating environmental impacts caused by urbanization and is critically important for local economic, social and cultural development. The differing patterns of human population growth and the varying urban structures and development stages result in highly varied spatial and temporal vegetation patterns, particularly in the pan-Pacific region, which has some of the fastest urbanization rates globally. Yet spatially explicit temporal information on the amount and change of urban vegetation is rarely documented, particularly in less developed nations. Remote sensing offers an exceptional data source and a unique perspective to map urban vegetation and change due to its consistency and ubiquitous nature. In this research, we assess the vegetation fractions of 25 cities across 12 pan-Pacific countries using annual gap-free Landsat surface reflectance products acquired from 1984 to 2012, using sub-pixel, spectral unmixing approaches. Vegetation change trends were then analyzed using Mann-Kendall statistics and Theil-Sen slope estimators. Unmixing results successfully mapped urban vegetation for pixels located in urban parks, forested mountainous regions, as well as agricultural land (correlation coefficient ranging from 0.66 to 0.77). The greatest vegetation loss from 1984 to 2012 was found in Shanghai, Tianjin, and Dalian in China. In contrast, cities including Vancouver (Canada) and Seattle (USA) showed stable vegetation trends through time. Using temporal trend analysis, our results suggest that it is possible to reduce noise and outliers caused by phenological changes, particularly in cropland, using dense new Landsat time series approaches. We conclude that simple yet effective approaches for unmixing Landsat time series data to assess spatial and temporal changes of urban vegetation at regional scales can provide
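The per-pixel trend step can be sketched with pure-Python versions of the Theil-Sen slope and the Mann-Kendall S statistic. The annual vegetation-fraction series below is a toy example of a declining city, not actual Landsat unmixing output.

```python
# Hedged sketch: the Theil-Sen slope and Mann-Kendall S statistic used for
# per-pixel vegetation trends. The annual fraction series below is a toy
# example of a declining city, not actual Landsat unmixing output.
from itertools import combinations

def theil_sen_slope(years, values):
    """Median of all pairwise slopes (robust to outliers)."""
    slopes = sorted((v2 - v1) / (y2 - y1)
                    for (y1, v1), (y2, v2) in combinations(zip(years, values), 2))
    n, mid = len(slopes), len(slopes) // 2
    return slopes[mid] if n % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])

def mann_kendall_s(values):
    """S > 0: increasing trend; S < 0: decreasing trend."""
    return sum((v2 > v1) - (v2 < v1) for v1, v2 in combinations(values, 2))

years = [1984, 1991, 1998, 2005, 2012]
veg = [0.52, 0.48, 0.45, 0.41, 0.38]     # declining vegetation fraction
slope, s = theil_sen_slope(years, veg), mann_kendall_s(veg)
print(round(slope, 4), s)  # -> -0.005 -10
```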
The estimation of turnover time in the Japan Sea bottom water by 129I
International Nuclear Information System (INIS)
Suzuki, Takashi; Togawa, Orihiko; Minakawa, Masayuki
2010-01-01
It is well known that the Japan Sea is sensitive to environmental change such as global warming. To understand the oceanic circulation in the Japan Sea, we estimated the turnover time and the potential formation rate of the Japan Sea bottom water (JSBW) using the oceanographic tracer 129I. The turnover time of JSBW was calculated based on the increase in concentration during the nuclear era and was estimated to be 180-210 years. The potential formation rate of JSBW was calculated based on the presence of anthropogenic 129I in the JSBW and was estimated to be (3.6-4.1) × 10^12 m^3/y, which is consistent with another estimation and is about a quarter of that of the upper Japan Sea proper water. (author)
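The arithmetic behind the two estimates can be checked with a one-box budget: turnover time is reservoir volume over formation rate. The JSBW volume used below is back-computed from the text's own numbers and is an illustrative assumption, not a value reported in the study.

```python
# Hedged sketch of the box-model arithmetic implied above: with bottom-water
# volume V and formation rate Q, the turnover time is tau = V / Q. The JSBW
# volume below is back-computed from the text's numbers for illustration
# only; it is an assumption, not a reported value.

def turnover_time(volume_m3, formation_rate_m3_per_y):
    """Turnover time (years) of a well-mixed reservoir."""
    return volume_m3 / formation_rate_m3_per_y

V = 7.5e14                                  # m^3, assumed JSBW volume
t_slow = turnover_time(V, 3.6e12)           # lower formation-rate bound
t_fast = turnover_time(V, 4.1e12)           # upper formation-rate bound
print(round(t_slow), round(t_fast))  # -> 208 183
```

Both values fall inside the 180-210 year range quoted above, confirming the two estimates are mutually consistent.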
Parameter Estimation of a Closed Loop Coupled Tank Time Varying System using Recursive Methods
International Nuclear Information System (INIS)
Basir, Siti Nora; Yussof, Hanafiah; Shamsuddin, Syamimi; Selamat, Hazlina; Zahari, Nur Ismarrubie
2013-01-01
This project investigates the direct identification of a closed loop plant using a discrete-time approach. The use of Recursive Least Squares (RLS), Recursive Instrumental Variable (RIV) and Recursive Instrumental Variable with Centre-Of-Triangle (RIV + COT) in the parameter estimation of a closed loop time varying system has been considered. The algorithms were applied to a coupled tank system that employs a covariance resetting technique, where the times at which parameter changes occur are unknown. The performances of all the parameter estimation methods, RLS, RIV and RIV + COT, were compared. The estimation of the system whose output was corrupted with white and coloured noises was investigated. The covariance resetting technique executed successfully when the parameters changed. RIV + COT gives better estimates than RLS and RIV in terms of convergence and maximum overshoot.
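A scalar sketch of RLS with covariance resetting shows the mechanism: when the prediction error jumps, the covariance is reset so the estimator re-adapts quickly to the changed parameter. The plant model, threshold, and data below are illustrative assumptions, not the coupled-tank system.

```python
# Hedged sketch: scalar recursive least squares (RLS) with covariance
# resetting. When the prediction error jumps, the covariance P is reset so
# the estimator re-adapts quickly to the changed parameter. The plant
# y = theta * u, the threshold, and the data are illustrative assumptions.

def rls_with_resetting(us, ys, p0=1000.0, reset_threshold=1.0):
    theta, P = 0.0, p0
    estimates = []
    for u, y in zip(us, ys):
        err = y - theta * u                  # prediction error
        if abs(err) > reset_threshold:       # suspected parameter change:
            P = p0                           # reset covariance to re-adapt
        K = P * u / (1.0 + u * P * u)        # gain
        theta += K * err
        P = (1.0 - K * u) * P                # covariance update
        estimates.append(theta)
    return estimates

us = [1.0] * 40
ys = [2.0] * 20 + [5.0] * 20                 # parameter jumps from 2 to 5
est = rls_with_resetting(us, ys)
print(round(est[19], 3), round(est[-1], 3))  # -> 2.0 5.0
```

Without the reset, P has already shrunk by the time of the jump and the estimator tracks the new value only sluggishly.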
Vadivel, P.; Sakthivel, R.; Mathiyalagan, K.; Arunkumar, A.
2013-09-01
This paper addresses the issue of robust state estimation for a class of fuzzy bidirectional associative memory (BAM) neural networks with time-varying delays and parameter uncertainties. By constructing the Lyapunov-Krasovskii functional, which contains the triple-integral term and using the free-weighting matrix technique, a set of sufficient conditions are derived in terms of linear matrix inequalities (LMIs) to estimate the neuron states through available output measurements such that the dynamics of the estimation error system is robustly asymptotically stable. In particular, we consider a generalized activation function in which the traditional assumptions on the boundedness, monotony and differentiability of the activation functions are removed. More precisely, the design of the state estimator for such BAM neural networks can be obtained by solving some LMIs, which are dependent on the size of the time derivative of the time-varying delays. Finally, a numerical example with simulation result is given to illustrate the obtained theoretical results.
Nonlinear estimation of ring-down time for a Fabry-Perot optical cavity.
Kallapur, Abhijit G; Boyson, Toby K; Petersen, Ian R; Harb, Charles C
2011-03-28
This paper discusses the application of a discrete-time extended Kalman filter (EKF) to the problem of estimating the decay time constant for a Fabry-Perot optical cavity for cavity ring-down spectroscopy (CRDS). The data for the estimation process are obtained from a CRDS experimental setup in terms of the light intensity at the output of the cavity. The cavity is held in lock with the input laser frequency by controlling the distance between the mirrors within the cavity by means of a proportional-integral (PI) controller. The cavity is purged with nitrogen and placed under vacuum before chopping the incident light at 25 kHz and recording the light intensity at its output. In spite of beginning the EKF estimation process with uncertainties in the initial value for the decay time constant, its estimates converge well within a small neighborhood of the expected value for the decay time constant of the cavity within a few ring-down cycles. The EKF estimation results for the decay time constant are also compared to those obtained using the Levenberg-Marquardt estimation scheme.
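For a noiseless ring-down, the decay time constant can be recovered with a simple log-linear least-squares fit, used here as a stand-in for the EKF and Levenberg-Marquardt schemes discussed above; the sampling grid and true tau are illustrative.

```python
# Hedged sketch: recovering a ring-down decay constant tau from
# I(t) = I0 * exp(-t / tau) with a log-linear least-squares fit, a simple
# stand-in for the EKF / Levenberg-Marquardt schemes discussed above.
# The sampling grid and true tau are illustrative, and the data noiseless.
import math

def fit_decay_time(ts, intensities):
    ys = [math.log(v) for v in intensities]   # log I = log I0 - t / tau
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return -1.0 / slope                       # tau

tau_true = 10e-6                              # 10 microseconds
ts = [k * 1e-6 for k in range(20)]            # 1 MHz sampling
data = [math.exp(-t / tau_true) for t in ts]
tau_est = fit_decay_time(ts, data)
print(tau_est)  # recovers ~1e-05 on noiseless data
```

With detector noise, the log transform distorts the error distribution at low intensities, which is one reason nonlinear schemes such as the EKF or Levenberg-Marquardt are preferred in practice.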
International Nuclear Information System (INIS)
Villalba, L.; Montero-Cabrera, M. E.; Manjon-Collado, G.; Colmenero-Sujo, L.; Renteria-Villalobos, M.; Cano-Jimenez, A.; Rodriguez-Pineda, A.; Davila-Rangel, I.; Quirino-Torres, L.; Herrera-Peraza, E. F.
2006-01-01
The activity concentration of 222Rn, 226Ra and total uranium in groundwater samples collected from wells distributed throughout the state of Chihuahua has been measured. The values obtained of total uranium activity concentration in groundwater throughout the state run from -1 . Generally, radium activity concentration was -1 , with some exceptions; in spring water of San Diego de Alcala, in contrast, the value reached ∼5.3 Bq l^-1. Radon activity concentration obtained throughout the state ranged from 1.0 to 39.8 Bq l^-1. A linear correlation between uranium and radon dissolved in the groundwater of individual wells was observed near Chihuahua City. Committed effective dose estimates for reference individuals were performed, with results as high as 134 μSv for infants in Aldama city. In Aldama and Chihuahua cities the average and many individual wells showed activity concentration values of uranium exceeding the Mexican norm for drinking water quality. (authors)
The INCOTUR model : estimation of losses in the tourism sector in Alcudia due to a hydrocarbon spill
International Nuclear Information System (INIS)
Bergueiro, J.R.; Moreno, S.; Guijarro, S.; Santos, A.; Serr, F.
2006-01-01
This paper presented a computer model that calculates the economic losses incurred by a hydrocarbon spill on a coastal area. In particular, it focused on the Balearic Islands in the Bay of Alcudia where the economy depends mainly on tourism. A large number of oil tankers carrying crude oil and petroleum products pass through the Balearic Sea. Any pollution resulting from a fuel spill can have a significant economic impact on both the tourism sector and the Balearic society in general. This study focused on the simulation of 18 spills of Jet A1 fuel oil, unleaded gasoline and Bunker C fuel oil. Simulations of the study area were produced with OILMAP, MIKE21, GNOME and ADIOS models which estimated the trajectories of various spills and the amount of oil washed ashore. The change in physical and chemical properties of the spilled hydrocarbons was also determined. The simulation models considered the trajectory followed by spills according to the type and amount of spill, weather conditions prevailing during the spill and the period immediately following the spill. The INCOTUR model was then used to calculate the economic losses resulting from an oil spill by considering the number of tonnes of oil washed ashore; number of days needed to organize cleanup; the percentage of tourism that will be maintained despite the effects of the spill; number of hotel beds; percentage of hotel occupancy by month; cost of package holidays; petty cash expenses; and, cost of advertising campaign for the affected area. With this data, the model can determine the number of days needed to clean and restore the coastline; monthly rate of recovery in tourism levels; and, losses in tourism sector. According to the INCOTUR model, the total losses incurred by a spill of 40,000 tonnes of Bunker C fuel, was estimated at 472 million Euros. 9 refs., 2 tabs., 12 figs
Energy Technology Data Exchange (ETDEWEB)
Sung, Jiwon; Baek, Taeseong; Yoon, Myonggeun [Korea University, Seoul (Korea, Republic of); Kim, Dongwook; Kim, Donghyun [Kyung Hee University Hospital at Gangdong, Seoul (Korea, Republic of)
2014-09-15
This study evaluated the effect of a simple shielding method using a thin lead sheet on the imaging dose caused by cone-beam computed tomography (CBCT) in image-guided radiation therapy (IGRT). Reduction of secondary doses from CBCT was measured using a radio-photoluminescence glass dosimeter (RPLGD) placed inside an anthropomorphic phantom. The entire body, except for the region scanned by using CBCT, was shielded by wrapping it with a 2-mm lead sheet. Changes in secondary cancer risk due to shielding were calculated using BEIR VII models. Doses to out-of-field organs for head-and-neck, chest, and pelvis scans were decreased by 15-100%, 23-90%, and 23-98%, respectively, and the average reductions in lifetime secondary cancer risk due to the 2-mm lead shielding were 1.6, 11.5, and 12.7 persons per 100,000, respectively. These findings suggest that a simple, thin-lead-sheet-based shielding method can effectively decrease secondary doses to out-of-field regions for CBCT, which reduces the lifetime cancer risk on average by 9 per 100,000 patients.
ESTIMATING RELIABILITY OF DISTURBANCES IN SATELLITE TIME SERIES DATA BASED ON STATISTICAL ANALYSIS
Directory of Open Access Journals (Sweden)
Z.-G. Zhou
2016-06-01
Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover (caused by events such as forest fire, flood, deforestation, and plant diseases) occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results with "change/no change", while few methods focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling of historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on the Confidence Interval (CI) and Confidence Level (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite imagery with an estimation error of less than 5% and an overall accuracy of up to 90%.
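Step (3) can be sketched as follows: express the reliability of a detected disturbance as the two-sided normal confidence level at which the new observation leaves the forecast's prediction interval. The forecast value and residual spread below are illustrative assumptions, not BFAST output.

```python
# Hedged sketch of step (3): express the reliability of a detected
# disturbance as the two-sided normal confidence level reached by the
# deviation of the new observation from its forecast. The forecast value
# and residual spread below are illustrative, not BFAST output.
import math

def disturbance_reliability(observed, forecast, sigma):
    """Two-sided confidence level at which `observed` leaves the interval."""
    z = abs(observed - forecast) / sigma
    return math.erf(z / math.sqrt(2.0))

# forecast vegetation index 0.62, historical residual sigma 0.03, observed 0.47
r = disturbance_reliability(0.47, 0.62, 0.03)
print(round(r, 4))  # -> 1.0 (a 5-sigma drop is essentially certain)
```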
Energy and round time estimation method for mobile wireless sensor networks
International Nuclear Information System (INIS)
Ismat, M.; Qureshi, R.; Imam, M.U.
2018-01-01
Clustered WSN (Wireless Sensor Networks) is a hierarchical network structure that conserves energy by distributing the tasks of sensing and data transfer to the destination among the non-CH (non-Cluster-Head) and CH (Cluster Head) nodes in a cluster. In a clustered MWSN (Mobile Wireless Sensor Network), cluster maintenance that sustains reception at the destination during communication is difficult due to the movement of CH and non-CH nodes in and out of the cluster. To conserve energy and increase data transfer to the destination, it is necessary to find the duration after which a sensor node's role should be changed from CH to non-CH and vice versa. In this paper, we have proposed an energy-independent round time scheme to identify the duration after which the re-clustering procedure should be invoked to change the roles of sensor nodes as CHs and associated nodes, in order to conserve energy and increase data delivery. This duration depends on the dissemination interval of the sensor nodes rather than on the sensor nodes' energy. We have also provided a complete analytical estimate of network energy consumption, including the energy consumed in every phase of a round. (author)
Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.
Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong
2018-02-13
Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameter estimation based on the adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which are widespread in the PPS field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by the S-transform (ST), which can preserve information on the signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by principal component analysis (PCA), which is robust to noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate the signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.
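The basic operation underlying the estimator, instantaneous-frequency estimation by peak-picking a short-time Fourier transform frame, can be sketched with a fixed window; the adaptive, IFG-driven window choice is the paper's contribution and is omitted here. The tone and sampling rate are illustrative.

```python
# Hedged sketch: instantaneous-frequency estimation by peak-picking one
# short-time Fourier transform frame, the basic operation underlying the
# far more elaborate PPS-ASTFT estimator. The window here is fixed; the
# adaptive, IFG-driven window choice is omitted. Data are illustrative.
import cmath, math

def stft_peak_freq(signal, start, win, fs):
    """Return the peak frequency (Hz) of one STFT frame, bin resolution."""
    seg = signal[start:start + win]
    mags = []
    for k in range(win // 2):               # non-negative frequency bins
        X = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                for n in range(win))
        mags.append(abs(X))
    return mags.index(max(mags)) * fs / win

fs, f0 = 1000.0, 125.0                      # 125 Hz tone at 1 kHz sampling
x = [math.sin(2 * math.pi * f0 * n / fs) for n in range(256)]
f_est = stft_peak_freq(x, 0, 64, fs)
print(f_est)  # -> 125.0 (bin 8 of a 64-point window)
```

Sliding the frame along a chirp and repeating the peak search yields a bin-resolution IF track, which finer methods such as the S-transform then refine.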
Belinato, Walmir; Santos, William S.; Perini, Ana P.; Neves, Lucio P.; Caldas, Linda V. E.; Souza, Divanizia N.
2017-11-01
Positron emission tomography (PET) has revolutionized the diagnosis of cancer since its conception. When combined with computed tomography (CT), PET/CT performed in children produces highly accurate diagnoses from images of regions affected by malignant tumors. Considering the high risk to children when exposed to ionizing radiation, a dosimetric study for PET/CT procedures is necessary. Specific absorbed fractions (SAF) were determined for monoenergetic photons and positrons, as well as the S-values for six positron-emitting radionuclides (11C, 13N, 18F, 68Ga, 82Rb, 15O) and 22 source organs. The study was performed for six pediatric anthropomorphic hybrid models, including the newborn and 1-year-old hermaphrodite phantoms and the 5- and 10-year-old male and female phantoms, using the Monte Carlo N-Particle eXtended code (MCNPX, version 2.7.0). The SAF results in source organs and the S-values for all organs were found to be inversely related to the age of the phantoms, which includes the variation of body weight. The results also showed that radionuclides with a higher peak emission energy produce larger self-absorbed S-values due to local dose deposition by positron decay. The S-values for the source organs are considerably larger due to the interaction of tissue with non-penetrating particles (electrons and positrons) and present a linear relationship with the phantom body masses. The S-values determined for positron-emitting radionuclides can be used to assess the radiation dose delivered to pediatric patients subjected to PET examination in clinical settings. The novelty of this work is associated with the determination of self-absorbed S-values, in six new pediatric virtual anthropomorphic phantoms, for six positron emitters commonly employed in PET exams.
The effects of resonances on time delay estimation for water leak detection in plastic pipes
Almeida, Fabrício C. L.; Brennan, Michael J.; Joseph, Phillip F.; Gao, Yan; Paschoalini, Amarildo T.
2018-04-01
In the use of acoustic correlation methods for water leak detection, sensors are placed at pipe access points either side of a suspected leak, and the peak in the cross-correlation function of the measured signals gives the time difference (delay) between the arrival times of the leak noise at the sensors. Combining this information with the speed at which the leak noise propagates along the pipe, gives an estimate for the location of the leak with respect to one of the measurement positions. It is possible for the structural dynamics of the pipe system to corrupt the time delay estimate, which results in the leak being incorrectly located. In this paper, data from test-rigs in the United Kingdom and Canada are used to demonstrate this phenomenon, and analytical models of resonators are coupled with a pipe model to replicate the experimental results. The model is then used to investigate which of the two commonly used correlation algorithms, the Basic Cross-Correlation (BCC) function or the Phase Transform (PHAT), is more robust to the undesirable structural dynamics of the pipe system. It is found that time delay estimation is highly sensitive to the frequency bandwidth over which the analysis is conducted. Moreover, it is found that the PHAT is particularly sensitive to the presence of resonances and can give an incorrect time delay estimate, whereas the BCC function is found to be much more robust, giving a consistently accurate time delay estimate for a range of dynamic conditions.
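The BCC approach above can be sketched on a synthetic delayed noise signal. The PHAT variant would additionally whiten the cross-spectrum before locating the peak and is omitted here; the signal length, delay, and noise model are illustrative assumptions.

```python
# Hedged sketch: time delay estimation with the Basic Cross-Correlation
# (BCC) function on a synthetic leak-noise signal. The PHAT variant would
# additionally whiten the cross-spectrum before the peak search; only the
# BCC peak search is shown. Signal, delay, and noise are illustrative.
import random

def bcc_delay(x, y, max_lag):
    """Return the lag (in samples) of y relative to x with the largest BCC."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        r = sum(x[n] * y[n + lag]
                for n in range(len(x)) if 0 <= n + lag < len(y))
        if r > best_val:
            best_lag, best_val = lag, r
    return best_lag

random.seed(0)
leak = [random.gauss(0.0, 1.0) for _ in range(500)]
delay = 30                                      # inter-sensor delay, samples
s1 = leak
s2 = [0.0] * delay + leak[:len(leak) - delay]   # delayed copy at sensor 2
d = bcc_delay(s1, s2, max_lag=50)
print(d)  # -> 30
```

Combining the measured delay with the propagation speed of leak noise along the pipe then locates the leak relative to one sensor, as described above.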
DEFF Research Database (Denmark)
Perez, Angel; Møller, Jakob Glarbo; Jóhannsson, Hjörtur
2014-01-01
This article studies the influence of PMU accuracy on voltage stability assessment, considering the specific case of Thévenin equivalent based methods that include wide-area information in their calculations. The objective was achieved by producing a set of synthesized PMU measurements from a time domain simulation and using the Monte Carlo method to reflect the accuracy of the PMUs. This is given by the maximum value of the Total Vector Error defined in the IEEE standard C37.118. Those measurements allowed estimation of the distribution parameters (mean and standard deviation...
Energy Technology Data Exchange (ETDEWEB)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.; Wang, G.; Sung, C.; Peebles, W. A. [Physics and Astronomy Department, University of California, Los Angeles, California 90095 (United States); Bobrek, M. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6006 (United States)
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems
DEFF Research Database (Denmark)
Georgiadis, Stylianos; Limnios, Nikolaos
2016-01-01
In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...
Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John
2007-01-01
A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval estimation and bisection and impact of secondary…
Impact of time displaced precipitation estimates for on-line updated models
DEFF Research Database (Denmark)
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2012-01-01
When an online runoff model is updated from system measurements, the requirements for the precipitation estimates change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain intensity hits the gauge and the time when the rain hits the actual...
Estimating the Probability of a Rare Event Over a Finite Time Horizon
de Boer, Pieter-Tjerk; L'Ecuyer, Pierre; Rubino, Gerardo; Tuffin, Bruno
2007-01-01
We study an approximation for the zero-variance change of measure to estimate the probability of a rare event in a continuous-time Markov chain. The rare event occurs when the chain reaches a given set of states before some fixed time limit. The jump rates of the chain are expressed as functions of
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
Estimating primary production from oxygen time series: A novel approach in the frequency domain
Cox, T.J.S.; Maris, T.; Soetaert, K.; Kromkamp, J.C.; Meire, P.; Meysman, F.J.R.
2015-01-01
Based on an analysis in the frequency domain of the governing equation of oxygen dynamics in aquatic systems, we derive a new method for estimating gross primary production (GPP) from oxygen time series. The central result of this article is a relation between time averaged GPP and the amplitude of
Nonparametric estimation in an "illness-death" model when all transition times are interval censored
DEFF Research Database (Denmark)
Frydman, Halina; Gerds, Thomas; Grøn, Randi
2013-01-01
We develop nonparametric maximum likelihood estimation for the parameters of an irreversible Markov chain on states {0,1,2} from the observations with interval censored times of 0 → 1, 0 → 2 and 1 → 2 transitions. The distinguishing aspect of the data is that, in addition to all transition times ...
Bueno, Marta; Camacho, Carlos J; Sancho, Javier
2007-09-01
The bioinformatics revolution of the last decade has been instrumental in the development of empirical potentials to quantitatively estimate protein interactions for modeling and design. Although computationally efficient, these potentials hide most of the relevant thermodynamics in 5 to 40 parameters that are fitted against a large experimental database. Here, we revisit this longstanding problem and show that a careful consideration of the change in hydrophobicity, electrostatics, and configurational entropy between the folded and unfolded state of aliphatic point mutations yields 20-30% fewer false positives and more accurate predictions than any published empirical energy function. This significant improvement is achieved with essentially no free parameters, validating past theoretical and experimental efforts to understand the thermodynamics of protein folding. Our first-principles analysis strongly suggests that both the solute-solute van der Waals interactions in the folded state and the electrostatic free energy change of exposed aliphatic mutations are almost completely compensated by similar interactions operating in the unfolded ensemble. Not surprisingly, the problem of properly accounting for the solvent contribution to the free energy of polar and charged group mutations, as well as of mutations that disrupt the protein backbone, remains open.
Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim
2018-06-01
The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.
Ródenas, José
2017-11-01
All materials exposed to some neutron flux can be activated, independently of the kind of neutron source. In this study, a nuclear reactor has been considered as the neutron source. In particular, the activation of control rods in a BWR is studied to obtain the doses produced around the storage pool for irradiated fuel of the plant when control rods are withdrawn from the reactor and installed in this pool. It is very important to calculate these doses because they can affect plant workers in the area. The MCNP code, based on the Monte Carlo method, has been applied to simulate the activation reactions produced in the control rods inserted into the reactor. The obtained activities are introduced as input into another MC model to estimate the doses they produce. The comparison of simulation results with experimental measurements allows the validation of the developed models. The developed MC models have also been applied to simulate the activation of other materials, such as components of a stainless steel sample introduced into a training reactor. These models, once validated, can be applied to other situations and materials where a neutron flux can be found, not only in nuclear reactors: for instance, activation analysis with an Am-Be source, neutrography techniques in both medical applications and non-destructive analysis of materials, civil engineering applications using a Troxler gauge, analysis of materials in the decommissioning of nuclear power plants, etc.
International Nuclear Information System (INIS)
Hurtado, A.; Eguilior, S.; Recreo, F.
2015-01-01
Starting from the consideration of a contemporary society that depends on complex, high-level technology with a high intrinsic level of uncertainty, and its relationship with risk assessment, this analysis, conducted in late 2014, was developed from the one that led the Secretary of State for the Environment to the Resolution of 29 May 2014, by which the Environmental Impact Statement of the Exploratory Drilling Project in the hydrocarbons research permits called 'Canarias 1-9' was set out and published in the Spanish Official State Gazette number 196 on 13 August 2014. The aim of the present study is to analyze how suitably the worst-case associated probability is identified and defined, and its relation to the total risk estimate from a blowout. Its interest stems from the fact that all risk management methodologically rests on two pillars, i.e., on a sound risk analysis and a sound risk evaluation. This determines the selection of management tools in relation to the level of complexity, the project phase, and the potential impacts on the health, safety and environmental contamination dimensions.
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
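The ML estimator described above can be illustrated with a small simulation. The sketch below is an illustration, not the authors' code: it draws photon arrivals from an inhomogeneous Poisson process whose intensity is a Gaussian pulse on a constant background (all pulse parameters and names are hypothetical), then grid-searches the log-likelihood for the arrival time.

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse_intensity(t, tau, amp=80.0, width=0.1, bg=5.0):
    """Photon arrival intensity: Gaussian pulse centered at tau plus background."""
    return bg + amp * np.exp(-0.5 * ((t - tau) / width) ** 2)

def simulate_arrivals(tau, t_max=4.0, lam_max=90.0):
    """Thinning method: sample an inhomogeneous Poisson process on [0, t_max]."""
    n = rng.poisson(lam_max * t_max)
    t = rng.uniform(0.0, t_max, n)
    keep = rng.uniform(0.0, lam_max, n) < pulse_intensity(t, tau)
    return np.sort(t[keep])

def ml_toa(arrivals, grid):
    """Grid-search ML estimate: maximize the sum of log-intensities at the
    arrival times (the integral term is ~constant in tau here, so it is dropped)."""
    loglik = [np.sum(np.log(pulse_intensity(arrivals, tau))) for tau in grid]
    return grid[int(np.argmax(loglik))]

true_tau = 2.0
arrivals = simulate_arrivals(true_tau)
tau_hat = ml_toa(arrivals, np.linspace(0.5, 3.5, 601))
print(f"true {true_tau:.3f}, estimate {tau_hat:.3f}")
```

At low signal power the likelihood surface develops spurious local maxima, which is the threshold behavior the abstract analyzes.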
Mesospheric temperature estimation from meteor decay times of weak and strong meteor trails
Kim, Jeong-Han; Kim, Yong Ha; Jee, Geonhwa; Lee, Changsup
2012-11-01
Neutral temperatures near the mesopause region were estimated from the decay times of the meteor echoes observed by a VHF meteor radar during a period covering 2007 to 2009 at King Sejong Station (62.22°S, 58.78°W), Antarctica. While some previous studies have used all meteor echoes to determine the slope from a height profile of log inverse decay times for temperature estimation, we have divided meteor echoes into weak and strong groups of underdense meteor trails, depending on the strength of estimated relative electron line densities within meteor trails. We found that the slopes from the strong group are inappropriate for temperature estimation because the decay times of strong meteors are considerably scattered, whereas the slopes from the weak group clearly define the variation of decay times with height. We thus utilize the slopes only from the weak group in the altitude region between 86 km and 96 km to estimate mesospheric temperatures. The meteor estimated temperatures show a typical seasonal variation near the mesopause region and the monthly mean temperatures are in good agreement with SABER temperatures within a mean difference of 4.8 K throughout the year. The meteor temperatures, representing typically the region around the altitude of 91 km, are lower on average by 2.1 K than simultaneously measured SATI OH(6-2) rotational temperatures during winter (March-October).
Directory of Open Access Journals (Sweden)
Il Young Song
2015-01-01
This paper focuses on estimation of a nonlinear function of the state vector (NFS) in discrete-time linear systems with time delays and model uncertainties. The NFS represents a multivariate nonlinear function of the state variables, which can carry useful information about a target system for control. The optimal nonlinear estimator of an NFS (in the mean square sense) is a function of the receding horizon estimate and its error covariance. The proposed receding horizon filter is the standard Kalman filter with time delays and special initial horizon conditions described by Lyapunov-like equations. In the general case, we propose using the unscented transformation to calculate an optimal estimator of an NFS. The important class of polynomial NFS is considered in detail; in the case of a polynomial NFS, the optimal estimator has a closed-form computational procedure. The subsequent application of the proposed receding horizon filter and nonlinear estimator to a linear stochastic system with time delays and uncertainties demonstrates their effectiveness.
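The unscented transformation mentioned in the abstract can be sketched in a few lines. The implementation below is the standard sigma-point construction, not the paper's specific filter; the test function and all numbers are illustrative.

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1e-1, beta=2.0, kappa=0.0):
    """Approximate the mean and covariance of f(x) for x ~ N(mean, cov)
    by propagating 2n+1 sigma points through f."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Quadratic function of a 2-D state: f(x) = x0^2 + x1
mean = np.array([1.0, 2.0])
cov = np.diag([0.04, 0.09])
ym, yc = unscented_transform(lambda x: np.array([x[0]**2 + x[1]]), mean, cov)
# Exact mean for a quadratic: E[x0^2] + E[x1] = (1 + 0.04) + 2 = 3.04
print(ym, yc)
```

For quadratic functions the sigma-point mean is exact, which is why the transform suits the polynomial NFS class discussed in the abstract.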
An Embedded Device for Real-Time Noninvasive Intracranial Pressure Estimation.
Matthews, Jonathan M; Fanelli, Andrea; Heldt, Thomas
2018-01-01
The monitoring of intracranial pressure (ICP) is indicated for diagnosing and guiding therapy in many neurological conditions. Current monitoring methods, however, are highly invasive, limiting their use to the most critically ill patients only. Our goal is to develop and test an embedded device that performs all necessary mathematical operations in real-time for noninvasive ICP (nICP) estimation based on a previously developed model-based approach that uses cerebral blood flow velocity (CBFV) and arterial blood pressure (ABP) waveforms. The nICP estimation algorithm along with the required preprocessing steps were implemented on an NXP LPC4337 microcontroller unit (MCU). A prototype device using the MCU was also developed, complete with display, recording functionality, and peripheral interfaces for ABP and CBFV monitoring hardware. The device produces an estimate of mean ICP once per minute and performs the necessary computations in 410 ms, on average. Real-time nICP estimates differed from the original batch-mode MATLAB implementation of the estimation algorithm by 0.63 mmHg (root-mean-square error). We have demonstrated that real-time nICP estimation is possible on a microprocessor platform, which offers the advantages of low cost, small size, and product modularity over a general-purpose computer. These attributes take a step toward the goal of real-time nICP estimation at the patient's bedside in a variety of clinical settings.
International Nuclear Information System (INIS)
Barnett, C.S.
1991-01-01
The Double Contingency Principle (DCP) is widely applied to criticality safety practice in the United States. Most practitioners base their application of the principle on qualitative, intuitive assessments. The recent trend toward probabilistic safety assessments provides a motive to search for a quantitative, probabilistic foundation for the DCP. A Markov model is tractable and leads to relatively simple results. The model yields estimates of mean time to simultaneous collapse of two contingencies as a function of estimates of mean failure times and mean recovery times of two independent contingencies. The model is a tool that can be used to supplement the qualitative methods now used to assess effectiveness of the DCP. (Author)
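The kind of Markov model described can be sketched directly: for two independent contingencies with constant failure and recovery rates, the mean time to simultaneous collapse is the expected absorption time of a four-state chain (three transient states plus the absorbing both-failed state). The rates below are hypothetical; the abstract gives no numbers.

```python
import numpy as np

def mean_time_to_double_failure(lam1, mu1, lam2, mu2):
    """Mean time until both contingencies are failed simultaneously, starting
    with both intact. Transient states: 0=(up,up), 1=(down,up), 2=(up,down);
    (down,down) is absorbing. Q is the generator restricted to transient states."""
    Q = np.array([
        [-(lam1 + lam2), lam1,           lam2          ],
        [mu1,            -(mu1 + lam2),  0.0           ],
        [mu2,            0.0,            -(mu2 + lam1) ],
    ])
    # Expected absorption times satisfy Q t = -1, i.e. t = -Q^{-1} 1
    t = np.linalg.solve(Q, -np.ones(3))
    return t[0]

# Hypothetical example: failures every ~10 years, recoveries in ~0.1 year
mttf = mean_time_to_double_failure(lam1=0.1, mu1=10.0, lam2=0.1, mu2=10.0)
print(f"mean time to simultaneous failure: {mttf:.0f} years")
```

With recovery much faster than failure, the mean time to simultaneous collapse is orders of magnitude longer than either component's mean failure time, which is the quantitative intuition behind the DCP.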
Estimation of Nuclei Cooling Time by Electrons in Superdense Nonequilibrium Plasma
Kostenko, B F
2004-01-01
Estimates of the nuclei cooling time by electrons in the superdense nonequilibrium plasma formed at cavitation bubble collapse in deuterated acetone have been carried out. These computations were motivated by a poorly grounded assumption used in recent theoretical calculations of the nuclear reaction rate in these processes, namely that electron temperatures remain essentially lower than the nuclei temperatures during the thermonuclear synthesis time t_s. The estimates show that the initial electron temperatures at the moment of superdense plasma formation with ρ = 100 g/cm³ turn out to be appreciably lower than the nuclear temperatures, while the nuclei cooling time is of the same order as t_s.
DeepTravel: a Neural Network Based Travel Time Estimation Model with Auxiliary Supervision
Zhang, Hanyuan; Wu, Hao; Sun, Weiwei; Zheng, Baihua
2018-01-01
Estimating the travel time of a path is of great importance to smart urban mobility. Existing approaches either estimate the time cost of each road segment, which cannot capture many complex cross-segment factors, or are designed heuristically in a non-learning-based way that fails to utilize the existing abundant temporal labels of the data, i.e., the time stamp of each trajectory point. In this paper, we leverage recent developments in deep neural networks and propose a no...
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experiment results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
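A heavily simplified stand-in for the autoregressive idea (not the ARLSimpute algorithm itself) can be sketched as follows: fit an AR(1) coefficient from observed consecutive pairs of a single expression profile and fill an interior gap with the average of the forward and backward predictions. The example series is invented.

```python
import numpy as np

def ar1_impute(series):
    """Impute NaNs in a 1-D time series with a fitted AR(1) model.
    The coefficient is estimated by least squares on observed consecutive
    pairs; an interior missing point is filled with the average of the
    forward and backward AR(1) predictions."""
    x = series.astype(float).copy()
    obs = ~np.isnan(x)
    prev, curr = x[:-1], x[1:]                    # consecutive pairs
    mask = ~np.isnan(prev) & ~np.isnan(curr)
    phi = np.sum(prev[mask] * curr[mask]) / np.sum(prev[mask] ** 2)
    for t in np.flatnonzero(~obs):
        preds = []
        if t > 0 and not np.isnan(x[t - 1]):
            preds.append(phi * x[t - 1])          # forward prediction
        if t + 1 < len(x) and not np.isnan(x[t + 1]):
            preds.append(x[t + 1] / phi)          # backward prediction
        x[t] = np.mean(preds) if preds else np.nanmean(x)
    return x

# A decaying expression profile with one missing time point
y = np.array([8.0, 4.0, np.nan, 1.0, 0.5])
filled = ar1_impute(y)
print(filled)
```

ARLSimpute additionally exploits local similarity across genes, which is what lets it handle an entirely missing column; this sketch only shows the temporal half of the idea.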
Influence of hypo- and hyperthermia on death time estimation - A simulation study.
Muggenthaler, H; Hubig, M; Schenkl, S; Mall, G
2017-09-01
Numerous physiological and pathological mechanisms can cause elevated or lowered body core temperatures. Deviations from the physiological level of about 37°C can influence temperature-based death time estimations. However, it has not been investigated by means of thermodynamics to which extent hypo- and hyperthermia bias death time estimates. Using numerical simulation, the present study investigates the errors inherent in temperature-based death time estimation in the case of elevated or lowered body core temperatures before death. The most considerable errors with regard to the normothermic model occur in the first few hours post-mortem. With decreasing body core temperature and increasing post-mortem time, the error diminishes and stagnates at a nearly constant level.
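The bias mechanism can be illustrated with a single-exponential Newtonian cooling model, far simpler than the numerical simulations used in the study; the ambient temperature, cooling constant, and body temperatures below are hypothetical.

```python
import math

T_AMB = 18.0    # ambient temperature (deg C), assumed
K = 0.1         # cooling constant (1/h), hypothetical
T_NORM = 37.2   # normothermic core temperature assumed by the examiner

def cool(t, t0):
    """Single-exponential Newtonian cooling from initial core temperature t0."""
    return T_AMB + (t0 - T_AMB) * math.exp(-K * t)

def estimated_pmi(t_rect):
    """Post-mortem interval inferred under the normothermic assumption."""
    return -math.log((t_rect - T_AMB) / (T_NORM - T_AMB)) / K

# True initial temperature 39.5 deg C (hyperthermic); examiner assumes 37.2.
true_pmi = 4.0
t_meas = cool(true_pmi, 39.5)
est = estimated_pmi(t_meas)
print(f"measured {t_meas:.2f} C -> estimated PMI {est:.2f} h (true {true_pmi} h)")
```

The hyperthermic body reads "warmer than expected", so the normothermic model underestimates the post-mortem interval; the discrepancy is largest early on and shrinks as the body approaches ambient temperature, matching the trend in the abstract.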
A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations
Directory of Open Access Journals (Sweden)
Aranzazu Berbey Alvarez
2015-04-01
Passenger flow modeling and station dwelling time estimation are significant elements for railway mass transit planning, but system operators usually have limited information to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied for the estimation of the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers' preferences to assign a level of congestion in each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.
Omer, Muhammad
2012-07-01
This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
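For contrast with the sparse-reconstruction approach, the classical cross-correlation time delay estimator can be sketched as a baseline (this is not the paper's orthogonal clustering technique); the sampling rate, delay, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def tde_xcorr(x1, x2, fs):
    """Classical cross-correlation time delay estimate between two microphone
    signals: the lag of the correlation peak divided by the sampling rate."""
    c = np.correlate(x2, x1, mode="full")      # lags -(N-1) .. (N-1)
    lag = np.argmax(c) - (len(x1) - 1)
    return lag / fs

fs = 8000
n = 1024
delay_samples = 37
src = rng.normal(size=n)                       # impulsive-like wideband source
x1 = src + 0.05 * rng.normal(size=n)
x2 = np.concatenate([np.zeros(delay_samples), src[:n - delay_samples]])
x2 = x2 + 0.05 * rng.normal(size=n)
est = tde_xcorr(x1, x2, fs)
print(f"estimated delay: {est*1e3:.3f} ms (true {delay_samples/fs*1e3:.3f} ms)")
```

Plain cross-correlation degrades in reverberant rooms because early reflections create competing peaks, which is exactly the weakness the RIR-based method in the abstract is designed to avoid.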
Marković, D.; Koch, M.
2005-09-01
The influence of the periodic signals in time series on the Hurst parameter estimate is investigated with temporal, spectral and time-scale methods. The Hurst parameter estimates of the simulated periodic time series with a white noise background show a high sensitivity on the signal to noise ratio and for some methods, also on the data length used. The analysis is then carried on to the investigation of extreme monthly river flows of the Elbe River (Dresden) and of the Rhine River (Kaub). Effects of removing the periodic components employing different filtering approaches are discussed and it is shown that such procedures are a prerequisite for an unbiased estimation of H. In summary, our results imply that the first step in a time series long-correlation study should be the separation of the deterministic components from the stochastic ones. Otherwise wrong conclusions concerning possible memory effects may be drawn.
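A minimal rescaled-range (R/S) sketch, one of the temporal methods alluded to above, is shown below; the window sizes and data are illustrative, and, as the abstract concludes, any deterministic periodic component should be removed before such an estimate is trusted.

```python
import numpy as np

rng = np.random.default_rng(2)

def hurst_rs(x, window_sizes):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(mean R/S) versus log(window size)."""
    logs_n, logs_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())             # cumulative deviations
            r = z.max() - z.min()                   # range R
            s = w.std()                             # standard deviation S
            if s > 0:
                rs_vals.append(r / s)
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(logs_n, logs_rs, 1)
    return slope

x = rng.normal(size=4000)            # white noise: H should be near 0.5
h = hurst_rs(x, [16, 32, 64, 128, 256, 512])
print(f"estimated H = {h:.2f}")
```

Adding a strong sinusoid to `x` before estimation visibly distorts the log-log scaling, which is the sensitivity to periodic signals that the study quantifies.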
International Nuclear Information System (INIS)
Liao, Pingping; Cai, Maolin; Shi, Yan; Fan, Zichuan
2013-01-01
The conventional ultrasonic method for compressed air leak detection utilizes a directivity-based ultrasonic leak detector (DULD) to locate the leak. The location accuracy of this method is low due to the limit of the nominal frequency and the size of the ultrasonic sensor. In order to overcome this deficiency, a method based on time delay estimation (TDE) is proposed. The method utilizes three ultrasonic sensors arranged in an equilateral triangle to simultaneously receive the ultrasound generated by the leak. The leak can be located according to time delays between every two sensor signals. The theoretical accuracy of the method is analyzed, and it is found that the location error increases linearly with delay estimation error and the distance from the leak to the sensor plane, and the location error decreases with the distance between sensors. The average square difference function delay estimator with parabolic fitting is used and two practical techniques are devised to remove the anomalous delay estimates. Experimental results indicate that the location accuracy using the TDE-based ultrasonic leak detector is 6.5–8.3 times as high as that using the DULD. By adopting the proposed method, the leak can be located more accurately and easily, and then the detection efficiency is improved. (paper)
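The average square difference function (ASDF) delay estimator with parabolic fitting mentioned above can be sketched as follows; the Gaussian test pulses and the 3.4-sample true delay are invented for illustration.

```python
import numpy as np

def asdf(x1, x2, lags):
    """Average square difference function between two signals at integer lags."""
    n = len(x1)
    out = []
    for d in lags:
        a = x1[max(0, -d):n - max(0, d)]
        b = x2[max(0, d):n - max(0, -d)]
        out.append(np.mean((a - b) ** 2))
    return np.array(out)

def subsample_delay(x1, x2, max_lag):
    """Integer-lag ASDF minimum refined by parabolic (3-point) fitting
    to obtain a sub-sample delay estimate."""
    lags = np.arange(-max_lag, max_lag + 1)
    d = asdf(x1, x2, lags)
    i = int(np.argmin(d))
    if 0 < i < len(d) - 1:
        y0, y1, y2 = d[i - 1], d[i], d[i + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # parabola vertex offset
        return lags[i] + frac
    return float(lags[i])

t = np.arange(100.0)
x1 = np.exp(-0.5 * ((t - 50.0) / 6.0) ** 2)
x2 = np.exp(-0.5 * ((t - 53.4) / 6.0) ** 2)   # same pulse, delayed 3.4 samples
est = subsample_delay(x1, x2, max_lag=10)
print(f"estimated delay: {est:.2f} samples (true 3.4)")
```

The parabolic refinement is what allows sub-sample delay resolution, which directly drives the location accuracy analyzed in the paper.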
Directory of Open Access Journals (Sweden)
Farhad Habibi
2018-09-01
Among different factors, correct scheduling is one of the vital elements for project management success. There are several ways to schedule projects, including the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT). Due to problems in estimating durations of activities, these methods cannot accurately and completely model actual projects. The use of fuzzy theory is a basic way to improve scheduling and deal with such problems. Fuzzy theory approximates project scheduling models to reality by taking into account uncertainties in decision parameters as well as expert experience and mental models. This paper provides a step-by-step approach for accurate estimation of the time and cost of projects using PERT and expert views expressed as fuzzy numbers. The proposed method includes several steps. In the first step, the necessary information on project time and cost is estimated using CPM and PERT. The second step considers the durations and costs of the project activities as trapezoidal fuzzy numbers, and then the time and cost of the project are recalculated. The durations and costs of activities are estimated using questionnaires as well as weighting of expert opinions, averaging, and defuzzification based on a step-by-step algorithm. The calculation procedures for evaluating these methods are applied in a real project, and the obtained results are explained.
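The fuzzy arithmetic described (expert weighting, averaging, fuzzy addition, defuzzification) can be sketched for trapezoidal numbers. The activities, expert estimates, and weights below are invented for illustration, and the centroid is used as the defuzzification rule; the paper's exact algorithm may differ.

```python
import numpy as np

def average_experts(estimates, weights):
    """Weighted average of expert trapezoidal numbers (a, b, c, d)."""
    return tuple(np.average(np.asarray(estimates, dtype=float),
                            axis=0, weights=np.asarray(weights, dtype=float)))

def fuzzy_add(p, q):
    """Sum of two trapezoidal fuzzy numbers (component-wise, by the
    extension principle)."""
    return tuple(pi + qi for pi, qi in zip(p, q))

def defuzzify(tr):
    """Centroid of a trapezoidal fuzzy number (a, b, c, d)."""
    a, b, c, d = tr
    return (d**2 + c**2 + c*d - a**2 - b**2 - a*b) / (3.0 * (d + c - a - b))

# Two activities on the critical path, two experts each (durations in days)
act1 = average_experts([(4, 5, 6, 8), (3, 5, 7, 9)], weights=[2, 1])
act2 = average_experts([(2, 3, 3, 4), (2, 2, 4, 5)], weights=[1, 1])
total = fuzzy_add(act1, act2)
crisp = defuzzify(total)
print(f"fuzzy total {total}, crisp duration {crisp:.2f} days")
```

Carrying the four trapezoid parameters through the whole critical path and defuzzifying only at the end preserves the uncertainty information that a single three-point PERT estimate discards.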
Energy Technology Data Exchange (ETDEWEB)
Okada, Yoshikazu; Shima, Takeshi; Nishida, Masahiro; Yamane, Kanji; Okita, Shinji; Hatayama, Takashi; Yoshida, Akira; Naoe, Yasutaka; Shiga, Naoko (Chugoku Rosai Hospital, Hiroshima (Japan))
1994-05-01
Delayed vasospasm due to ruptured aneurysm has basically been evaluated by angiographic changes, in contrast to clinical features such as delayed ischemic neurological deficits (DIND). However, discrepancies between angiographic and clinical findings have been pointed out. In this study, angiographic changes and cerebral circulation time in ruptured aneurysms were simultaneously investigated with IA-DSA. Thirty-two patients, who had ruptured aneurysms at the anterior circle of Willis and neck clipping at the acute stage, were investigated. Carotid angiography was performed with IA-DSA on the 7th-13th day after the attack. Angiographic changes were evaluated by Fischer's classification and circulation time was calculated in the following way. A time-density curve was obtained at two ROIs: the C3-C4 portion and the rolandic vein. Circulation time was defined as the difference between the times of peak optical density at the carotid and venous portions. The control value of this circulation time, obtained from 20 cases with non-ruptured aneurysm or epilepsy, was 3.4 sec on average (mean age 53 years). X-ray CT scan examination was performed at the same time and clinical features were observed every day. Angiographically, 3 cases were free from vasospasm, 18 cases presented slight to moderate vasospasm, and 11 cases showed severe vasospasm. Circulation time in patients with no spasm was 3.6 seconds; in patients with slight to moderate vasospasm it was 4.3 seconds; and in patients with severe vasospasm it was 6.8 seconds. Ten patients showing cerebral infarction on CT scans demonstrated significantly prolonged circulation times, 7.0 seconds on average. All patients having severe vasospasm with a circulation time of more than 6 seconds presented DIND such as hemiparesis. (author).
Eckert, Kristen A; Carter, Marissa J; Lansingh, Van C; Wilson, David A; Furtado, João M; Frick, Kevin D; Resnikoff, Serge
2015-01-01
To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) using simple models (analogous to how a rapid assessment model relates to a comprehensive model) based on minimum wage (MW) and gross national income (GNI) per capita (US$, 2011). Cost of blindness (COB) was calculated for the age group ≥50 years in nine sample countries by assuming the loss of current MW and loss of GNI per capita. It was assumed that all individuals work until 65 years old and that half of visual impairment prevalent in the ≥50 years age group is prevalent in the 50-64 years age group. For cost of MSVI (COMSVI), individual wage and GNI loss of 30% was assumed. Results were compared with the values of the uncorrected refractive error (URE) model of productivity loss. COB (MW method) ranged from $0.1 billion in Honduras to $2.5 billion in the United States, and COMSVI ranged from $0.1 billion in Honduras to $5.3 billion in the US. COB (GNI method) ranged from $0.1 million in Honduras to $7.8 billion in the US, and COMSVI ranged from $0.1 billion in Honduras to $16.5 billion in the US. Most GNI method values were near equivalent to those of the URE model. Although most people with blindness and MSVI live in developing countries, the highest productivity losses are in high income countries. The global economy could improve if eye care were made more accessible and more affordable to all.
Energy Technology Data Exchange (ETDEWEB)
Nwankwo, Levi I. [Department of Physics, University of Ilorin, Ilorin 240003 (Nigeria)
2014-07-01
Natural radioactivity measurements in drinking water have been performed in many parts of the world, mostly for assessment of the doses and risk resulting from consuming water. A study of the radionuclide concentrations in groundwater samples collected from wells distributed within Ilorin, west of central Nigeria, has been carried out. Twenty-eight (28) water samples were analyzed by gamma ray spectroscopy to determine the ²²⁶Ra, ²²⁸Ra, and ⁴⁰K concentrations. The specific activity values ranged from 0.02 to 7.4 Bq/l for ²²⁶Ra, 0.009 to 5.6 Bq/l for ²²⁸Ra, and 0.45 to 30.14 Bq/l for ⁴⁰K. The annual effective doses from ingestion of these radionuclides, using local consumption rates (averaged over the whole population) of 1 liter per day, were subsequently estimated to range from 0 to 0.8 mSv/y with an average of 0.36 mSv/y for ²²⁶Ra, 0 to 1.42 mSv/y with an average of 0.50 mSv/y for ²²⁸Ra, and 0 to 0.01 mSv/y with an average of 0.01 mSv/y for ⁴⁰K. The results show that the mean annual effective dose values received as a result of the combined ingestion of the radionuclides from many individual wells in the study area exceed the drinking water quality norm established by UNSCEAR/WHO. Efforts should therefore be made by policy makers to protect the populace from long-term health consequences. (authors)
International Nuclear Information System (INIS)
Kumar, S.; Singh, S.; Bajwa, B. S.; Singh, B.; Sabharwal, A. D.; Eappen, K. P.
2008-01-01
LR-115 (type II)-based radon-thoron discriminating twin-chamber dosemeters have been used for estimating radon (²²²Rn) and thoron (²²⁰Rn) concentrations in dwellings of south-western Punjab (India). The present study region has shown pronounced cases of cancer incidence in the public [Thakur, Rao, Rajwanshi, Parwana and Kumar (Epidemiological study of high cancer among rural agricultural community of Punjab in Northern India. Int J Environ Res Public Health 2008; 5(5):399-407) and Kumar et al. (Risk assessment for natural uranium in subsurface water of Punjab state (India). Hum Ecol Risk Assess 2011;17:381-93)]. Radon, being a carcinogen, has been monitored in some dwellings selected randomly in the study area. Results show that the values of radon (²²²Rn) varied from 21 to 79 Bq m⁻³, with a geometric mean of 45 Bq m⁻³ [geometric standard deviation (GSD) 1.39], and those of thoron (²²⁰Rn) from the minimum detection level to 58 Bq m⁻³, with a geometric mean of 19 Bq m⁻³ (GSD 1.88). Bare card data are used for computing the progeny concentration by deriving the equilibrium factor (F) using a root finding method [Mayya, Eappen and Nambi (Methodology for mixed field inhalation dosimetry in monazite areas using a twin-cup dosemeter with three track detectors. Radiat Prot Dosim 1998; 77(3): 177-84)]. Inhalation doses have been calculated and compared using UNSCEAR equilibrium factors and the calculated F-values. The results show satisfactory agreement between the values. (authors)
Directory of Open Access Journals (Sweden)
Lindita Hamolli
2015-01-01
Full Text Available In recent years free-floating planets (FFPs) have drawn great interest among astrophysicists. Gravitational microlensing is a unique method for their investigation, which may allow obtaining valuable information about their mass and spatial distribution. The planned Euclid space-based observatory will be able to detect a substantial number of microlensing events caused by FFPs towards the Galactic bulge. Making use of a synthetic population algorithm, we investigate the possibility of detecting finite-source effects in simulated microlensing events due to FFPs. We find a significant efficiency for finite-source effect detection, which turns out to be between 20% and 40% for an FFP power-law mass function index in the range [0.9, 1.6]. For many such events it will also be possible to measure the angular Einstein radius and therefore constrain the lens physical parameters. These kinds of observations will also offer a unique possibility to investigate the photosphere and atmosphere of Galactic bulge stars.
Directory of Open Access Journals (Sweden)
R. Schinke
2012-09-01
Full Text Available The analysis and management of flood risk commonly focus on surface water floods, because these are often associated with high economic losses due to damage to buildings and settlements. Rising groundwater, as a secondary effect of these floods, induces additional damage, particularly in the basements of buildings. These losses mostly remain underestimated, because they are difficult to assess, especially for the entire building stock of flood-prone urban areas. For this purpose an appropriate methodology has been developed, leading to a groundwater damage simulation model named GRUWAD. The overall methodology combines various engineering and geoinformatic methods to calculate the major damage processes caused by high groundwater levels. It considers a classification of buildings by building type, synthetic depth-damage functions for groundwater inundation, and the results of a groundwater-flow model. The modular structure of the procedure allows the level of detail to be adapted; hence, the model supports damage calculations from the local to the regional scale. Among other uses, it can prepare risk maps, support ex-ante analysis of future risks, and simulate the effects of mitigation measures. The model is therefore a versatile tool for assessing urban resilience with respect to high groundwater levels.
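The core of such a model is a synthetic depth-damage function per building type, evaluated at the groundwater level supplied by the flow model. A minimal sketch; the curve breakpoints, building types, and values below are illustrative placeholders, not the GRUWAD calibration:

```python
# Illustrative depth-damage curves: damage ratio as a function of groundwater
# depth (m) above the basement floor, per building type. Breakpoints are
# invented for illustration only.
DAMAGE_CURVES = {
    "residential": [(0.0, 0.00), (0.5, 0.10), (1.0, 0.25), (2.0, 0.40)],
    "commercial":  [(0.0, 0.00), (0.5, 0.15), (1.0, 0.35), (2.0, 0.55)],
}

def damage_ratio(building_type, depth_m):
    """Piecewise-linear interpolation of the depth-damage curve."""
    curve = DAMAGE_CURVES[building_type]
    if depth_m <= curve[0][0]:
        return curve[0][1]
    for (d0, r0), (d1, r1) in zip(curve, curve[1:]):
        if depth_m <= d1:
            return r0 + (r1 - r0) * (depth_m - d0) / (d1 - d0)
    return curve[-1][1]  # beyond the last breakpoint: curve saturates

def building_loss(building_type, depth_m, basement_value):
    return damage_ratio(building_type, depth_m) * basement_value

# Depth would come from the groundwater-flow model in the full methodology.
loss = building_loss("residential", 0.75, 100_000)
```

Summing `building_loss` over a classified building stock yields the local-to-regional damage aggregation the abstract describes.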
Öien, Rut F; Forssell, Henrik; Ragnarson Tennvall, Gunnel
2016-10-01
Resource use and costs for topical treatment of hard-to-heal ulcers based on data from the Swedish Registry of Ulcer Treatment (RUT) were analysed in patients recorded in RUT as having healed between 2009 and 2012, in order to estimate potential cost savings from reductions in frequency of dressing changes and healing times. RUT is used to capture areas of improvement in ulcer care and to enable structured wound management by registering patients with hard-to-heal leg, foot and pressure ulcers. Patients included in the registry are treated in primary care, community care, private care, and inpatient hospital care. Cost calculations were based on resource use data on healing time and frequency of dressing changes in Swedish patients with hard-to-heal ulcers who healed between 2009 and 2012. Per-patient treatment costs decreased from SEK38 223 in 2009 to SEK20 496 in 2012, mainly because of shorter healing times. Frequency of dressing changes was essentially the same during these years, varying from 1·4 to 1·6 per week. The total healing time was reduced by 38%. Treatment costs for the management of hard-to-heal ulcers can be reduced with well-developed treatment strategies resulting in shortened healing times as shown in RUT. © 2015 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
Simultaneous Robust Fault and State Estimation for Linear Discrete-Time Uncertain Systems
Directory of Open Access Journals (Sweden)
Feten Gannouni
2017-01-01
Full Text Available We consider the problem of robust simultaneous fault and state estimation for linear uncertain discrete-time systems with unknown faults which affect both the state and the observation matrices. Using a transformation of the original system, a new robust proportional integral filter (RPIF), having an error variance with an optimized guaranteed upper bound for any allowed uncertainty, is proposed to improve the estimation of unknown time-varying faults and the robustness against uncertainties. In this study, the minimization of the upper bound of the estimation error variance is formulated as a convex optimization problem subject to linear matrix inequalities (LMIs) for all admissible uncertainties. The proportional and integral gains are optimally chosen by solving this convex optimization problem. Simulation results illustrate the performance of the proposed filter, in particular for the problem of joint fault and state estimation.
Own-wage labor supply elasticities: variation across time and estimation methods
Directory of Open Access Journals (Sweden)
Olivier Bargain
2016-10-01
Full Text Available Abstract There is a huge variation in the size of labor supply elasticities in the literature, which hampers policy analysis. While recent studies show that preference heterogeneity across countries explains little of this variation, we focus on two other important features: observation period and estimation method. We start with a thorough survey of existing evidence for both Western Europe and the USA, over a long period and from different empirical approaches. Then, our meta-analysis attempts to disentangle the role of time changes and estimation methods. We highlight the key role of time changes, documenting the marked fall in labor supply elasticities since the 1980s not only in the USA but also in the EU. In contrast, we find no compelling evidence that the choice of estimation method explains variation in elasticity estimates. From our analysis, we derive important guidelines for policy simulations.
H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.
Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
This paper addresses the problem of H∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of Filippov solutions, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and some stochastic techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H∞ performance. An explicit expression of the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce the control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Remaining Useful Life Estimation using Time Trajectory Tracking and Support Vector Machines
International Nuclear Information System (INIS)
Galar, D; Kumar, U; Lee, J; Zhao, W
2012-01-01
In this paper, a novel RUL prediction method inspired by feature maps and SVM classifiers is proposed. The historical instances of a system with lifetime condition data are used to create a classification by SVM hyperplanes. For a test instance of the same system, whose RUL is to be estimated, the degradation speed is evaluated by computing the minimal distance defined on the degradation trajectories, i.e., the approach of the system to the hyperplane that segregates good and bad condition data at different time horizons. The final RUL of a specific component can therefore be estimated, and global RUL information can then be obtained by aggregating the multiple RUL estimations using a density estimation method.
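The distance-to-hyperplane idea can be stripped down to a few lines: track the signed margin of the test trajectory to the separator and extrapolate to the crossing time. In this sketch a fixed linear hyperplane stands in for the trained SVM, and all feature values are illustrative:

```python
import math

# A linear hyperplane w.x + b = 0 separating "good" (positive side) from
# "bad" condition data; here fixed by hand, in practice learned by an SVM.
w = [0.8, -0.6]
b = -0.2

def signed_distance(x):
    """Signed distance of a feature vector to the separating hyperplane."""
    norm = math.hypot(*w)
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

# Degradation trajectory of the test instance (feature vectors over time).
trajectory = [[1.5, 0.2], [1.3, 0.3], [1.1, 0.4], [0.9, 0.5]]
dists = [signed_distance(x) for x in trajectory]

# Degradation speed: average shrinkage of the margin per time step.
speed = (dists[0] - dists[-1]) / (len(dists) - 1)
# RUL estimate: steps until the trajectory crosses the hyperplane.
rul_estimate = dists[-1] / speed if speed > 0 else float("inf")
```

The paper then aggregates many such per-trajectory estimates with a density estimation method; the sketch shows only the single-trajectory margin extrapolation.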
Directory of Open Access Journals (Sweden)
Tarun R. Katapally, Nazeem Muhajarine
2014-06-01
Full Text Available Accelerometers are predominantly used to objectively measure the entire range of activity intensities – sedentary behaviour (SED), light physical activity (LPA) and moderate to vigorous physical activity (MVPA). However, studies consistently report results without accounting for systematic accelerometer wear-time variation (within and between participants), jeopardizing the validity of these results. This study describes the development of a standardization methodology to understand and minimize measurement bias due to wear-time variation. Accelerometry is generally conducted over seven consecutive days, with participants’ data being commonly considered ‘valid’ only if wear-time is at least 10 hours/day. However, even within ‘valid’ data, there could be systematic wear-time variation. To explore this variation, accelerometer data of the Smart Cities, Healthy Kids study (www.smartcitieshealthykids.com) were analyzed descriptively and with repeated measures multivariate analysis of variance (MANOVA). Subsequently, a standardization method was developed, where case-specific observed wear-time is controlled to an analyst-specified time period. Next, case-specific accelerometer data are interpolated to this controlled wear-time to produce standardized variables. To understand discrepancies owing to wear-time variation, all analyses were conducted pre- and post-standardization. Descriptive analyses revealed systematic wear-time variation, both between and within participants. Pre- and post-standardized descriptive analyses of SED, LPA and MVPA revealed a persistent and often significant trend of wear-time’s influence on activity. SED was consistently higher on weekdays before standardization; however, this trend was reversed post-standardization. Even though MVPA was significantly higher on weekdays both pre- and post-standardization, the magnitude of this difference decreased post-standardization. Multivariable analyses with standardized SED, LPA and
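One plausible reading of the standardization step is a linear rescaling of each day's activity minutes to a common controlled wear time. A minimal sketch: the 10 h/day validity threshold is from the text, while the 13 h control period and the linear interpolation are assumptions about the analyst's choices:

```python
CONTROL_WEARTIME_MIN = 13 * 60   # analyst-specified control period (assumed)
VALID_WEARTIME_MIN = 10 * 60     # validity threshold stated in the protocol

def standardize_day(day):
    """Linearly rescale a day's SED/LPA/MVPA minutes to the control wear time."""
    wear = day["weartime_min"]
    if wear < VALID_WEARTIME_MIN:
        return None  # invalid day, excluded from analysis
    scale = CONTROL_WEARTIME_MIN / wear
    return {k: day[k] * scale for k in ("sed_min", "lpa_min", "mvpa_min")}

day = {"weartime_min": 12 * 60, "sed_min": 480, "lpa_min": 200, "mvpa_min": 40}
std = standardize_day(day)  # same behaviour profile, controlled wear time
```

After this rescaling, between-day and between-participant comparisons no longer confound activity level with hours of monitor wear.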
Estimating the time evolution of NMR systems via a quantum-speed-limit-like expression
Villamizar, D. V.; Duzzioni, E. I.; Leal, A. C. S.; Auccaise, R.
2018-05-01
Finding the solutions of the equations that describe the dynamics of a given physical system is crucial in order to obtain important information about its evolution. However, by using estimation theory, it is possible to obtain, under certain limitations, some information on its dynamics. The quantum-speed-limit (QSL) theory was originally used to estimate the shortest time in which a Hamiltonian drives an initial state to a final one for a given fidelity. Using the QSL theory in a slightly different way, we are able to estimate the running time of a given quantum process. For that purpose, we impose the saturation of the Anandan-Aharonov bound in a rotating frame of reference where the state of the system travels slower than in the original frame (laboratory frame). Through this procedure it is possible to estimate the actual evolution time in the laboratory frame of reference with good accuracy when compared to previous methods. Our method is tested successfully to predict the time spent in the evolution of nuclear spins 1/2 and 3/2 in NMR systems. We find that the estimated time according to our method is better than previous approaches by up to four orders of magnitude. One disadvantage of our method is that we need to solve a number of transcendental equations, which increases with the system dimension and parameter discretization used to solve such equations numerically.
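The bound whose saturation is imposed here is the Anandan–Aharonov relation between the distance travelled in projective Hilbert space and the geodesic distance between initial and final states. In the form relevant to such time estimates (written with a time-independent $\Delta E$ for simplicity; the paper works in a rotating frame):

```latex
% Anandan-Aharonov bound: path length versus geodesic distance
\[
  \frac{2}{\hbar}\int_{0}^{\tau}\Delta E(t)\,dt
  \;\geq\;
  2\arccos\bigl(\lvert\langle\psi(0)\vert\psi(\tau)\rangle\rvert\bigr).
\]
% Saturation (geodesic evolution) with constant \Delta E yields the time estimate
\[
  \tau \;=\; \frac{\hbar}{\Delta E}\,
  \arccos\bigl(\lvert\langle\psi(0)\vert\psi(\tau)\rangle\rvert\bigr).
\]
```

Saturation holds when the state evolves along a geodesic, which motivates imposing it in a frame where the state travels as slowly as possible.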
Offline estimation of decay time for an optical cavity with a low pass filter cavity model.
Kallapur, Abhijit G; Boyson, Toby K; Petersen, Ian R; Harb, Charles C
2012-08-01
This Letter presents offline estimation results for the decay-time constant for an experimental Fabry-Perot optical cavity for cavity ring-down spectroscopy (CRDS). The cavity dynamics are modeled in terms of a low pass filter (LPF) with unity DC gain. This model is used by an extended Kalman filter (EKF) along with the recorded light intensity at the output of the cavity in order to estimate the decay-time constant. The estimation results using the LPF cavity model are compared to those obtained using the quadrature model for the cavity presented in previous work by Kallapur et al. The estimation process derived using the LPF model comprises two states as opposed to three states in the quadrature model. When considering the EKF, this means propagating two states and a (2×2) covariance matrix using the LPF model, as opposed to propagating three states and a (3×3) covariance matrix using the quadrature model. This gives the former model a computational advantage over the latter and leads to faster execution times for the corresponding EKF. It is shown in this Letter that the LPF model for the cavity with two filter states is computationally more efficient, converges faster, and is hence a more suitable method than the three-state quadrature model presented in previous work for real-time estimation of the decay-time constant for the cavity.
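A toy version of EKF-based decay-constant estimation on a simulated ring-down illustrates the two-state structure (state = [intensity, decay rate]). This is a generic two-state EKF sketch with made-up noise levels, not the authors' exact LPF formulation:

```python
import math
import random

random.seed(0)
dt, tau_true = 1e-6, 5e-6            # sample period and true decay time (s)
meas = [math.exp(-k * dt / tau_true) + random.gauss(0.0, 0.005)
        for k in range(40)]          # simulated noisy ring-down intensity

# State x = [I, g]: intensity I with I' = -g*I, decay rate g = 1/tau constant.
x = [meas[0], 1.0 / 2e-6]            # deliberately poor initial guess of g
P = [[0.1, 0.0], [0.0, 1e11]]        # state covariance
R, q = 2.5e-5, 1e6                   # measurement noise variance, process noise on g

for z in meas[1:]:
    # Predict: I <- I*exp(-g*dt); (a, c) fill the Jacobian F = [[a, c], [0, 1]].
    e = math.exp(-x[1] * dt)
    a, c = e, -x[0] * dt * e
    x = [x[0] * e, x[1]]
    P = [[a*a*P[0][0] + 2*a*c*P[0][1] + c*c*P[1][1], a*P[0][1] + c*P[1][1]],
         [a*P[1][0] + c*P[1][1], P[1][1] + q]]
    # Update with the intensity measurement z (H = [1, 0]).
    S = P[0][0] + R
    K = [P[0][0] / S, P[1][0] / S]
    innov = z - x[0]
    x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]

tau_est = 1.0 / x[1]                 # estimated decay-time constant
```

The computational point made in the Letter is visible even here: a two-state filter propagates a 2×2 covariance, against 3×3 for the three-state quadrature model.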
Optimal replacement time estimation for machines and equipment based on cost function
Directory of Open Access Journals (Sweden)
J. Šebo
2013-01-01
Full Text Available The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The optimization method considered is applicable to machines used in metallurgical and engineering production. Different models of the cost function are considered (with both one and two variables). The parameters of the models were calculated through the least squares method. Testing shows that all the models fit well enough, so the simpler models are sufficient for estimating the optimal replacement time. In addition to testing the models, we developed a method (tested on a selected simple model) which enables us, in real time and with a limited data set, to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.
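For one common one-variable cost model, with purchase price P and maintenance intensity growing linearly as a·t, the average cost per unit time is C(t) = (P + a·t²/2)/t, minimized at t* = sqrt(2P/a). A numeric sketch with illustrative parameters (this specific model is an assumption; the paper's fitted models may differ):

```python
import math

P = 50_000.0   # acquisition cost of the machine (illustrative)
a = 2_000.0    # maintenance cost intensity grows as a * t per year (illustrative)

def avg_cost_per_year(t):
    """Average total cost per year if the machine is replaced at age t."""
    return (P + a * t * t / 2.0) / t

# Closed form: d/dt [(P + a t^2 / 2) / t] = 0  =>  t* = sqrt(2 P / a)
t_star = math.sqrt(2.0 * P / a)

# Cross-check the closed form with a coarse numeric search on a grid.
t_grid = [0.1 * k for k in range(1, 301)]
t_num = min(t_grid, key=avg_cost_per_year)
```

The grid search mirrors what the "limited data set, real time" indication in the article does in spirit: locate the flat minimum of average cost without needing the closed form.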
Viikari-Juntura, Eira; Kausto, Johanna; Shiri, Rahman; Kaila-Kangas, Leena; Takala, Esa-Pekka; Karppinen, Jaro; Miranda, Helena; Luukkonen, Ritva; Martimo, Kari-Pekka
2012-03-01
The purpose of this study was to assess the effects of early part-time sick leave on return to work (RTW) and sickness absence among patients with musculoskeletal disorders. A randomized controlled trial was conducted in six occupational health units of medium- and large-size enterprises. Patients aged 18-60 years with musculoskeletal disorders (N=63) unable to perform their regular work were randomly allocated to part- or full-time sick leave. In the former group, workload was reduced by restricting work time by about a half. Remaining work tasks were modified when necessary, as specified in a "fit note" from the physician. The main outcomes were time to return to regular work activities and sickness absence during 12-month follow-up. Time to RTW sustained for ≥4 weeks was shorter in the intervention group (median 12 versus 20 days, P=0.10). Hazard ratio of RTW adjusted for age was 1.60 [95% confidence interval (95% CI) 0.98-2.63] and 1.76 (95% CI 1.21-2.56) after further adjustment for pain interference with sleep and previous sickness absence at baseline. Total sickness absence during the 12-month follow-up was about 20% lower in the intervention than the control group. Compliance with the intervention was high with no discontinuations of part-time sick leave due to musculoskeletal reasons. Early part-time sick leave may provide a faster and more sustainable return to regular duties than full-time sick leave among patients with musculoskeletal disorders. This is the first study to show that work participation can be safely increased with early part-time sick leave.
Estimation and Properties of a Time-Varying GQARCH(1,1)-M Model
Directory of Open Access Journals (Sweden)
Sofia Anyfantaki
2011-01-01
analysis of these models computationally infeasible. This paper outlines the issues and suggests employing a Markov chain Monte Carlo algorithm which allows the calculation of a classical estimator via the simulated EM algorithm, or a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying GQARCH(1,1)-M model are derived. We discuss them and apply the suggested Bayesian estimation to three major stock markets.
Estimating DSGE model parameters in a small open economy: Do real-time data matter?
Directory of Open Access Journals (Sweden)
Capek Jan
2015-03-01
Full Text Available This paper investigates the differences between parameters estimated using real-time and those estimated with revised data. The models used are New Keynesian DSGE models of the Czech, Polish, Hungarian, Swiss, and Swedish small open economies in interaction with the euro area. The paper also offers an analysis of data revisions of GDP growth and inflation and trend revisions of interest rates.
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
Optimal State Estimation for Discrete-Time Markov Jump Systems with Missing Observations
Directory of Open Access Journals (Sweden)
Qing Sun
2014-01-01
Full Text Available This paper is concerned with optimal linear estimation for a class of discrete-time Markov jump systems with missing observations. An observer-based approach to fault detection and isolation (FDI) is investigated as a mechanism for detecting faults. For systems with known information, a conditional prediction of observations is applied and faulty observations are replaced and isolated; then, an FDI linear minimum mean square error (LMMSE) estimator can be developed by comprehensively utilizing the correct information offered by the system. A recursive filtering equation based on geometric arguments is obtained. Meanwhile, the stability of the state estimator is guaranteed under appropriate assumptions.
International Nuclear Information System (INIS)
Tonn, B.; Hwang, Ho-Ling; Elliot, S.; Peretz, J.; Bohm, R.; Hendrucko, B.
1994-04-01
This report contains descriptions of methodologies to be used to estimate the one-time generation of hazardous waste associated with five different types of remediation programs: Superfund sites, RCRA Corrective Actions, Federal Facilities, Underground Storage Tanks, and State and Private Programs. Estimates of the amount of hazardous waste generated from these sources to be shipped off-site to commercial hazardous waste treatment and disposal facilities will be made on a state-by-state basis for the years 1993, 1999, and 2013. In most cases, estimates will also be made for the intervening years.
Online Estimation of Time-Varying Volatility Using a Continuous-Discrete LMS Algorithm
Directory of Open Access Journals (Sweden)
Jacques Oksman
2008-09-01
Full Text Available The following paper addresses a problem of inference in financial engineering, namely, online time-varying volatility estimation. The proposed method is based on an adaptive predictor for the stock price, built from an implicit integration formula. An estimate for the current volatility value which minimizes the mean square prediction error is calculated recursively using an LMS algorithm. The method is then validated on several synthetic examples as well as on real data. Throughout the illustration, the proposed method is compared with both UKF and offline volatility estimation.
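The recursion at the heart of such an approach can be sketched as a scalar LMS update driven by the squared-return prediction error. This is a generic LMS-on-σ² sketch with synthetic data, not the authors' implicit-integration price predictor:

```python
import math
import random

random.seed(1)
mu = 0.05                      # LMS step size
true_vol = 0.02                # volatility of the synthetic return series
returns = [random.gauss(0.0, true_vol) for _ in range(5000)]

var_est = 1e-4                 # running estimate of sigma^2 (poor initial guess)
estimates = []
for r in returns:
    err = r * r - var_est      # prediction error on the squared return
    var_est += mu * err        # LMS update: stochastic gradient on the MSE
    estimates.append(math.sqrt(var_est))

vol_est = estimates[-1]        # online volatility estimate at the last step
```

With a step size μ the recursion behaves like an exponentially weighted moving average of squared returns, which is why it can track slowly time-varying volatility online.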
International Nuclear Information System (INIS)
Xu Chaoyang; Liu Junmin; Fan Yanfang; Ji Guohua
2008-01-01
Joint time-frequency analysis constructs a joint density function of time and frequency, which reveals a signal's frequency components and their evolution over time; it is a natural extension of Fourier analysis. In this paper, based on the noise characteristics of seismic signals, an estimation method for the first arrival of a seismic signal, based on the triple correlation of the joint time-frequency spectrum, is introduced, and experimental results and conclusions are presented. (authors)
Time estimate (t_opening + t_closing) of the shutter of an X-ray equipment using a digital chronometer
International Nuclear Information System (INIS)
Quaresma, D.S.; Oliveira, P.H.T.M.; Gallo, V.F.M.; Jordao, B.O.; Carvalho, R.J.; Cardoso, R.S.; Peixoto, J.G.P.
2014-01-01
In this work the time t_opening + t_closing for opening and closing the shutter of a Pantak HF160 X-ray machine was measured. The shutter is the device responsible for allowing or blocking the flow of X-rays, produced by the X-ray tube, through the orifice of a shield. To estimate the running time, a digital chronometer calibrated at the Time Service Division (DSHO) of the National Observatory (ON) was used. (author)
Nonlinear systems time-varying parameter estimation: Application to induction motors
Energy Technology Data Exchange (ETDEWEB)
Kenne, Godpromesse [Laboratoire d' Automatique et d' Informatique Appliquee (LAIA), Departement de Genie Electrique, IUT FOTSO Victor, Universite de Dschang, B.P. 134 Bandjoun (Cameroon); Ahmed-Ali, Tarek [Ecole Nationale Superieure des Ingenieurs des Etudes et Techniques d' Armement (ENSIETA), 2 Rue Francois Verny, 29806 Brest Cedex 9 (France); Lamnabhi-Lagarrigue, F. [Laboratoire des Signaux et Systemes (L2S), C.N.R.S-SUPELEC, Universite Paris XI, 3 Rue Joliot Curie, 91192 Gif-sur-Yvette (France); Arzande, Amir [Departement Energie, Ecole Superieure d' Electricite-SUPELEC, 3 Rue Joliot Curie, 91192 Gif-sur-Yvette (France)
2008-11-15
In this paper, an algorithm for time-varying parameter estimation for a large class of nonlinear systems is presented. The proof of the convergence of the estimates to their true values is achieved using Lyapunov theory and does not require that the classical persistent excitation condition be satisfied by the input signal. Since the induction motor (IM) is widely used in several industrial sectors, the algorithm developed is potentially useful for adjusting the controller parameters of variable speed drives. The method proposed is simple and easily implementable in real time. The application of this approach to on-line estimation of the rotor resistance of an IM shows a rapidly converging estimate in spite of measurement noise, discretization effects, parameter uncertainties (e.g. inaccuracies in motor inductance values) and modeling inaccuracies. The robustness analysis for this IM application also revealed that the proposed scheme is insensitive to stator resistance variations within a wide range. The merits of the proposed algorithm in the case of on-line time-varying rotor resistance estimation are demonstrated via experimental results in various operating conditions of the induction motor. The experimental results obtained demonstrate that applying the proposed algorithm to update the parameters of an adaptive controller on-line (e.g. adaptive control of IMs and synchronous machines) can improve the efficiency of the industrial process. Other interesting features of the proposed method include fault detection/estimation and adaptive control of IMs and synchronous machines. (author)
International Nuclear Information System (INIS)
Kansal, Sandeep; Mehra, Rohit; Singh, N.P.
2012-01-01
Indoor radon measurements in 60 dwellings belonging to 12 villages of the Sirsa, Fatehbad and Hisar districts of western Haryana, India, have been carried out using LR-115 type II cellulose nitrate films in the bare mode. The annual average indoor radon value in the studied area varies from 76.00 to 115.46 Bq m⁻³, which is well within the recommended action level of 200–300 Bq m⁻³. The winter/summer ratio of indoor radon ranges from 0.78 to 2.99 with an average of 1.52. The values of the annual average dose received by the residents and the lifetime fatality risk assessment due to the variation of indoor radon concentration in dwellings of the studied area suggest that there is no significant threat to human beings from the presence of natural radon in the dwellings. - Highlights: ► The radon concentration values in the dwellings are 2–3 times the world average of 40 Bq m⁻³. ► These values are lower than the recommended action level of 200–300 Bq m⁻³. ► The annual effective dose is less than the recommended action level of 3–10 mSv per year. ► The values of lifetime fatality risk determined for the studied area are within safe standards. ► There is no significant threat to human beings due to the presence of natural radon in the dwellings.
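The annual-dose figures in such surveys typically follow the standard UNSCEAR-style conversion from radon concentration. A minimal sketch, assuming the conventional factors (equilibrium factor F = 0.4, indoor occupancy 7000 h/y, dose conversion 9 nSv per Bq h m⁻³); the study's exact factors are not stated:

```python
# Assumed UNSCEAR-style conversion factors for indoor radon dose.
F = 0.4                 # indoor equilibrium factor (assumed)
OCCUPANCY_H = 7000      # hours spent indoors per year (assumed)
DCF_MSV = 9e-6          # mSv per (Bq h m^-3) of equilibrium-equivalent exposure

def annual_dose_msv(radon_bq_m3):
    """Annual effective dose (mSv/y) from an average indoor radon level."""
    return radon_bq_m3 * F * OCCUPANCY_H * DCF_MSV

dose_low = annual_dose_msv(76.00)    # lowest annual average in the survey
dose_high = annual_dose_msv(115.46)  # highest annual average in the survey
```

With these assumed factors the measured range 76.00–115.46 Bq m⁻³ maps to roughly 1.9–2.9 mSv/y, consistent with the abstract's statement that doses stay below the 3–10 mSv per year action level.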
Directory of Open Access Journals (Sweden)
Haiwen Li
2018-01-01
Full Text Available The estimation speed of positioning parameters determines the effectiveness of a positioning system. The time of arrival (TOA) and direction of arrival (DOA) parameters can be estimated by the space-time two-dimensional multiple signal classification (2D-MUSIC) algorithm for an array antenna. However, this algorithm needs much time to complete the two-dimensional pseudo-spectral peak search, which makes it difficult to apply in practice. To solve this problem, a fast estimation method for space-time two-dimensional positioning parameters based on the Hadamard product is proposed for orthogonal frequency division multiplexing (OFDM) systems, and the Cramer-Rao bound (CRB) is also presented. Firstly, according to the channel frequency domain response vector of each array, the channel frequency domain estimation vector is constructed in a Hadamard product form containing the location information. Then, the autocorrelation matrix of the channel response vector for the extended array elements in the frequency domain and the noise subspace are calculated successively. Finally, by combining the closed-form solution and parameter pairing, the fast joint estimation of time delay and arrival direction is accomplished. The theoretical analysis and simulation results show that the proposed algorithm can significantly reduce the computational complexity while achieving estimation accuracy that is not only better than the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm and the 2D matrix pencil (MP) algorithm but also close to that of the 2D-MUSIC algorithm. Moreover, the proposed algorithm has a certain adaptability to multipath environments and effectively improves the ability to rapidly acquire location parameters.
Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels
Directory of Open Access Journals (Sweden)
Petrella Angelo
2010-01-01
Full Text Available The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed-form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed-form solutions, a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for the AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.
Caprio, M.; Lancieri, M.; Cua, G. B.; Zollo, A.; Wiemer, S.
2011-01-01
We present an evolutionary approach to magnitude estimation for earthquake early warning based on real-time inversion of displacement spectra. The Spectrum Inversion (SI) method estimates magnitude and its uncertainty by inferring the shape of the entire displacement spectral curve based on the part of the spectrum constrained by the available data. The method consists of two components: 1) estimating the seismic moment by finding the low-frequency plateau Ω0, the corner frequency fc, and the attenuation factor (Q) that best fit the observed displacement spectra, assuming a Brune ω² model, and 2) estimating magnitude and its uncertainty from the estimate of the seismic moment. A novel characteristic of this method is that it does not rely on empirically derived relationships, but rather involves direct estimation of quantities related to the moment magnitude. SI magnitude and uncertainty estimates are updated each second following the initial P detection. We tested the SI approach on broadband and strong-motion waveform data from 158 Southern California events and 25 Japanese events, for a combined magnitude range of 3 ≤ M ≤ 7. Based on the performance evaluated on this dataset, the SI approach can potentially provide stable estimates of magnitude within 10 seconds of the initial earthquake detection.
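Step 1 amounts to fitting Ω(f) = Ω0 / (1 + (f/fc)²) to the observed displacement spectrum. A minimal grid-search sketch on synthetic data (attenuation is omitted and all values are illustrative; the real method fits Q as well and works on evolving, partially constrained spectra):

```python
import math

# Synthetic "observed" displacement spectrum from a Brune omega-squared model.
omega0_true, fc_true = 1.0e-3, 2.0
freqs = [0.1 * k for k in range(1, 201)]            # 0.1 .. 20 Hz

def brune(f, omega0, fc):
    """Brune omega-squared displacement spectrum (no attenuation term)."""
    return omega0 / (1.0 + (f / fc) ** 2)

observed = [brune(f, omega0_true, fc_true) for f in freqs]

# Grid search for the (Omega0, fc) pair minimizing the log-amplitude misfit.
def misfit(omega0, fc):
    return sum((math.log(o) - math.log(brune(f, omega0, fc))) ** 2
               for f, o in zip(freqs, observed))

grid_o = [omega0_true * s for s in (0.5, 0.8, 1.0, 1.25, 2.0)]
grid_f = [0.5, 1.0, 2.0, 4.0, 8.0]
omega0_est, fc_est = min(((o, f) for o in grid_o for f in grid_f),
                         key=lambda p: misfit(*p))

# Step 2 in the paper: the seismic moment M0 follows from Omega0 (with
# distance and radiation-pattern constants), and Mw = (2/3)*(log10(M0) - 9.1).
```

On noisy, band-limited real-time data the same search is repeated each second as more of the spectrum becomes constrained, which is what makes the estimate "evolutionary".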
Abdulhameed, Mohanad F; Habib, Ihab; Al-Azizz, Suzan A; Robertson, Ian
2018-02-01
Cystic echinococcosis (CE) is a highly endemic parasitic zoonosis in Iraq with substantial impacts on livestock productivity and human health. The objectives of this study were to investigate the abattoir-based occurrence of CE in marketed offal of sheep in Basrah province, Iraq, and to estimate, using a probabilistic modelling approach, the direct economic losses due to hydatid cysts. Based on detailed visual meat inspection, results from an active abattoir survey in this study revealed detection of hydatid cysts in 7.3% (95% CI: 5.4; 9.6) of 631 examined sheep carcasses. Post-mortem lesions of hydatid cyst were concurrently present in livers and lungs of more than half (54.3% (25/46)) of the positive sheep. Direct economic losses due to hydatid cysts in marketed offal were estimated using data from government reports, the one abattoir survey completed in this study, and expert opinions of local veterinarians and butchers. A Monte-Carlo simulation model was developed in a spreadsheet utilizing Latin Hypercube sampling to account for uncertainty in the input parameters. The model estimated that the average annual economic losses associated with hydatid cysts in the liver and lungs of sheep marketed for human consumption in Basrah to be US$72,470 (90% Confidence Interval (CI); ±11,302). The mean proportion of annual losses in meat products value (carcasses and offal) due to hydatid cysts in the liver and lungs of sheep marketed in Basrah province was estimated as 0.42% (90% CI; ±0.21). These estimates suggest that CE is responsible for considerable livestock-associated monetary losses in the south of Iraq. These findings can be used to inform different regional CE control program options in Iraq.
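The spreadsheet model can be sketched as a Monte-Carlo simulation with Latin Hypercube sampling of the uncertain inputs. Only the prevalence range (the survey's 95% CI) comes from the abstract; the throughput, loss-per-positive values, and uniform distributions are illustrative placeholders, not the study's fitted inputs:

```python
import random

random.seed(42)
N = 10_000  # simulation iterations

def latin_hypercube(n):
    """One uniform sample per stratum of [0, 1), returned in shuffled order."""
    u = [(k + random.random()) / n for k in range(n)]
    random.shuffle(u)
    return u

def uniform_from(u, low, high):
    """Map a unit sample to a Uniform(low, high) draw (inverse CDF)."""
    return low + u * (high - low)

u_prev, u_loss = latin_hypercube(N), latin_hypercube(N)
losses = []
for i in range(N):
    prevalence = uniform_from(u_prev[i], 0.054, 0.096)      # survey 95% CI
    sheep_marketed = 150_000                                # assumed throughput
    loss_per_positive = uniform_from(u_loss[i], 4.0, 10.0)  # assumed US$/animal
    losses.append(sheep_marketed * prevalence * loss_per_positive)

mean_loss = sum(losses) / N  # plus percentiles for the uncertainty interval
```

Compared with plain random sampling, the stratified LHS draws cover each input distribution evenly, so the mean and interval estimates stabilize with far fewer iterations.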
International Nuclear Information System (INIS)
Cannon, Bradford E.; Smith, Charles W.; Isenberg, Philip A.; Vasquez, Bernard J.; Joyce, Colin J.; Murphy, Neil; Nuno, Raquel G.
2017-01-01
In two earlier publications we analyzed 502 intervals of magnetic waves excited by newborn interstellar pickup protons that were observed by the Ulysses spacecraft. Due to the considerable effort required in identifying these events, we provide a list of the times for the 502 wave event intervals previously identified. In the process, we provide a brief description of how the waves were found and what their properties are. We also remind the reader of the conditions that permit the waves to reach observable levels and explain why the waves are not seen more often.