DEFF Research Database (Denmark)
Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms
2009-01-01
PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable possibly gets misclassified since the registries do not ca...
Laboratory Grouping Based on Previous Courses.
Doemling, Donald B.; Bowman, Douglas C.
1981-01-01
In a five-year study, second-year human physiology students were grouped for laboratory according to previous physiology and laboratory experience. No significant differences in course or board examination performance were found, though correlations were found between predental grade-point averages and grouping. (MSE)
Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History
Directory of Open Access Journals (Sweden)
Danping Wang
2017-01-01
A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history, memorized in the Binary Space Partitioning fitness tree, can effectively restrain the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to local optima. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.
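The dynamic multispecies step described in the abstract (partitioning the swarm into subspecies with K-means) can be sketched in isolation. This is a minimal illustrative sketch only: the BSP-tree nonrevisit bookkeeping and the PSO velocity update are omitted, and all names are assumptions, not the authors' code.

```python
import random

def kmeans_partition(particles, k, iters=20):
    """Partition particle positions (1-D for brevity) into k subspecies
    using plain K-means; a minimal sketch of the clustering step only."""
    centers = random.sample(particles, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in particles:
            # assign each particle to its nearest subspecies center
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        # recompute centers; keep the old center if a cluster emptied out
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

random.seed(1)
swarm = ([random.uniform(0, 1) for _ in range(10)] +
         [random.uniform(9, 10) for _ in range(10)])
species = kmeans_partition(swarm, k=2)
print(sum(len(s) for s in species))  # → 20 (every particle assigned)
```

In the full algorithm each subspecies then evolves semi-independently and exchanges information with the others, which is what counteracts premature convergence.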
Motivational activities based on previous knowledge of students
García, J. A.; Gómez-Robledo, L.; Huertas, R.; Perales, F. J.
2014-07-01
Academic results depend strongly on the individual circumstances of students: background, motivation and aptitude. We think that academic activities intended to increase motivation must be tuned to the particular situation of the students. The main goal of this work is to analyze the students in the first year of the Degree in Optics and Optometry at the University of Granada and the suitability of an activity designed for those students. Initial data were obtained from a survey inquiring about the reasons for choosing this degree, their knowledge of it, and previous academic backgrounds. Results show that: 1) the group is quite heterogeneous, since students have very different backgrounds; 2) reasons for choosing the Degree in Optics and Optometry are also very different, and in many cases it was selected as a second option; 3) knowledge of and motivation about the Degree are in general quite low. To increase the motivation of the students, we designed an academic activity in which we present different topics studied in the Degree. Results show that students who took part in this activity are the most motivated and most satisfied with their choice of degree.
Acar-Tek, Nilüfer; Ağagündüz, Duygu; Çelik, Bülent; Bozbulut, Rukiye
2017-08-01
Accurate estimation of resting energy expenditure (REE) in children and adolescents is important to establish estimated energy requirements. The aim of the present study was to measure REE in obese children and adolescents by the indirect calorimetry method, compare these values with REE values estimated by equations, and develop the most appropriate equation for this group. One hundred and three obese children and adolescents (57 males, 46 females) between 7 and 17 years (10.6 ± 2.19 years) were recruited for the study. REE measurements of subjects were made with indirect calorimetry (COSMED, FitMatePro, Rome, Italy) and body compositions were analyzed. In females, the percentage of accurate prediction varied from 32.6 (World Health Organization [WHO]) to 43.5 (Molnar and Lazzer). The bias for equations was -0.2% (Kim), 3.7% (Molnar), and 22.6% (Derumeaux-Burel). Kim's, Schmelzle's, and Henry's equations had the lowest root mean square error (RMSE; 266, 267, and 268 kcal/d, respectively). The equation with the highest RMSE among female subjects was the Derumeaux-Burel equation (394 kcal/d). In males, while the Institute of Medicine (IOM) equation had the lowest accurate prediction value (12.3%), the highest values were found using Schmelzle's (42.1%), Henry's (43.9%), and Müller's (fat-free mass, FFM; 45.6%) equations. While Kim's and Müller's equations had the smallest bias (-0.6%, 9.9%), Schmelzle's equation had the smallest RMSE (331 kcal/d). The new specific equation based on FFM was generated as follows: REE = 451.722 + (23.202 * FFM). According to Bland-Altman plots, the new equation's estimates are randomly distributed in both males and females. Previously developed predictive equations mostly provided inaccurate and biased estimates of REE. However, the new predictive equation allows clinicians to estimate REE in obese children and adolescents with sufficient and acceptable accuracy.
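The newly derived FFM-based equation reported in this abstract is simple enough to apply directly; a minimal sketch (the function name is illustrative, not from the paper):

```python
def ree_from_ffm(ffm_kg):
    """Resting energy expenditure (kcal/day) from fat-free mass (kg),
    using the study's new equation: REE = 451.722 + (23.202 * FFM)."""
    return 451.722 + 23.202 * ffm_kg

# e.g. an adolescent with 40 kg of fat-free mass
print(round(ree_from_ffm(40.0)))  # → 1380
```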
Aguirre, A; Hill, A G
1988-01-01
2 trials of the previous child or preceding birth technique in Bamako, Mali, and Lima, Peru, gave very promising results for measurement of infant and early child mortality using data on survivorship of the 2 most recent births. In the Peruvian study, another technique was tested in which each woman was asked about her last 3 births. The preceding birth technique described by Brass and Macrae has rapidly been adopted as a simple means of estimating recent trends in early childhood mortality. The questions formulated and the analysis of results are direct when the mothers are visited at the time of birth or soon after. Several technical aspects of the method believed to introduce unforeseen biases have now been studied and found to be relatively unimportant. But the problems arising when the data come from a nonrepresentative fraction of the total fertile-aged population have not been resolved. The analysis based on data from 5 maternity centers including 1 hospital in Bamako, Mali, indicated some practical problems and the information obtained showed the kinds of subtle biases that can result from the effects of selection. The study in Lima tested 2 abbreviated methods for obtaining recent early childhood mortality estimates in countries with deficient vital registration. The basic idea was that a few simple questions added to household surveys on immunization or diarrheal disease control for example could produce improved child mortality estimates. The mortality estimates in Peru were based on 2 distinct sources of information in the questionnaire. All women were asked their total number of live born children and the number still alive at the time of the interview. The proportion of deaths was converted into a measure of child survival using a life table. Then each woman was asked for a brief history of the 3 most recent live births. Dates of birth and death were noted in month and year of occurrence. The interviews took only slightly longer than the basic survey
Random Decrement Based FRF Estimation
DEFF Research Database (Denmark)
Brincker, Rune; Asmussen, J. C.
The problem of estimating frequency response functions and extracting modal parameters is the topic of this paper. A new method based on the Random Decrement technique combined with Fourier transformation and the traditional pure Fourier transformation based approach is compared with regard...... to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... to these measurements the estimation time of the frequency response functions can be compared. The modal parameters estimated by the methods are compared. It is expected that the Random Decrement technique is faster than the traditional method based on pure Fourier Transformations. This is due to the fact...
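The core of the Random Decrement technique, triggering on a level crossing of the measured response and averaging the segments that follow, can be sketched as below. This is a simplified single-channel illustration under assumed signal parameters, not the authors' implementation; the frequency response functions would then be estimated from the Fourier transform of such signatures.

```python
import numpy as np

def random_decrement(x, trigger_level, segment_len):
    """Random Decrement signature: average the response segments that start
    each time the signal up-crosses the trigger level."""
    segments = []
    for i in range(1, len(x) - segment_len):
        if x[i - 1] < trigger_level <= x[i]:   # up-crossing trigger condition
            segments.append(x[i:i + segment_len])
    return np.mean(segments, axis=0)

# Noisy oscillation as a stand-in for a measured bridge response
t = np.arange(0, 50, 0.01)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 1.0 * t) + 0.3 * rng.normal(size=t.size)
sig = random_decrement(x, trigger_level=0.8, segment_len=200)
print(sig.shape)  # → (200,)
```

Averaging cancels the random part of the response, leaving a free-decay-like signature, which is why the method can be faster than averaging many full Fourier transforms.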
Attribute and topology based change detection in a constellation of previously detected objects
Paglieroni, David W.; Beer, Reginald N.
2016-01-19
A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
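The attribute-comparison step described in the abstract can be illustrated with a toy per-attribute tolerance check. This is a hypothetical sketch: attribute names, values and tolerances are invented for illustration, and the patented system's topology comparison against the constellation database is omitted.

```python
def attribute_change(prev, new, tol):
    """Flag change when a newly detected object's attributes differ from the
    stored object's attributes by more than per-attribute tolerances."""
    return {k: abs(new[k] - prev[k]) > tol[k] for k in tol}

# Stored object from a previous scan vs. a detection on the latest scan
prev_obj = {"x": 10.0, "size": 2.0, "orientation": 45.0}
new_obj = {"x": 10.1, "size": 3.5, "orientation": 46.0}
tols = {"x": 0.5, "size": 1.0, "orientation": 5.0}
print(attribute_change(prev_obj, new_obj, tols))
# → {'x': False, 'size': True, 'orientation': False}
```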
Energy Technology Data Exchange (ETDEWEB)
Emond, C. [NRC, NAS, WA, DC (United States); Michalek, J.E. [Air Force Research Lab., Brooks City-Base, TX (United States); Birnbaum, L.S.; DeVito, M.J. [PKB, ETD, ORD, NHEERL U.S. EPA, RTP, NC (United States)
2004-09-15
Exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) is associated with increased risk for cancer, diabetes and reproductive toxicities in numerous epidemiological studies. Several of these studies base exposure estimates on measurements of blood levels years after the accidental or occupational exposures. Peak exposures have been estimated in these studies assuming a mono- or biphasic elimination rate for TCDD, with half-life estimates ranging from 5 to 12 years. Recent clinical studies suggest that the elimination rate of TCDD is dose dependent. To address this question, a physiologically based pharmacokinetic (PBPK) model can be used to predict the concentration of TCDD with a dose-dependent elimination rate. The aims of this study were to validate a dose-dependent elimination rate using a PBPK model and to adequately predict the concentration of TCDD shortly after exposure.
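The idea of a dose-dependent elimination rate can be illustrated with a one-compartment toy model in which the elimination rate rises with the current body burden, so the apparent half-life is shorter at high concentrations. All parameter values and the rate function below are illustrative assumptions, not the study's PBPK model.

```python
import math

def simulate_burden(c0, years, dt=0.01):
    """One-compartment toy model with concentration-dependent elimination:
    k(C) = k_slow + (k_fast - k_slow) * C / (C + Km). Values are illustrative."""
    k_slow = math.log(2) / 10.0   # ~10-year half-life at low burden
    k_fast = math.log(2) / 1.0    # ~1-year half-life at very high burden
    km = 50.0                     # half-saturation constant (arbitrary units)
    c, t = c0, 0.0
    while t < years:
        k = k_slow + (k_fast - k_slow) * c / (c + km)
        c -= k * c * dt           # forward-Euler elimination step
        t += dt
    return c

# A high initial burden declines faster, in relative terms, than a low one,
# which is the behavior that makes back-extrapolation from late blood
# measurements sensitive to the assumed elimination model.
print(simulate_burden(1000.0, 5) / 1000.0 < simulate_burden(10.0, 5) / 10.0)  # → True
```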
Byrne, Michael E; Cortés, Enric; Vaudo, Jeremy J; Harvey, Guy C McN; Sampson, Mark; Wetherbee, Bradley M; Shivji, Mahmood
2017-08-16
Overfishing is a primary cause of population declines for many shark species of conservation concern. However, means of obtaining information on fishery interactions and mortality, necessary for the development of successful conservation strategies, are often fisheries-dependent and of questionable quality for many species of commercially exploited pelagic sharks. We used satellite telemetry as a fisheries-independent tool to document fisheries interactions, and quantify fishing mortality of the highly migratory shortfin mako shark (Isurus oxyrinchus) in the western North Atlantic Ocean. Forty satellite-tagged shortfin mako sharks tracked over 3 years entered the Exclusive Economic Zones of 19 countries and were harvested in fisheries of five countries, with 30% of tagged sharks harvested. Our tagging-derived estimates of instantaneous fishing mortality rates (F = 0.19-0.56) were 10-fold higher than previous estimates from fisheries-dependent data (approx. 0.015-0.024), suggesting data used in stock assessments may considerably underestimate fishing mortality. Additionally, our estimates of F were greater than those associated with maximum sustainable yield, suggesting a state of overfishing. This information has direct application to evaluations of stock status and for effective management of populations, and thus satellite tagging studies have potential to provide more accurate estimates of fishing mortality and survival than traditional fisheries-dependent methodology. © 2017 The Author(s).
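As a rough illustration of how tag-recovery fractions translate into an instantaneous rate: under a constant-hazard (exponential survival) assumption, a harvested fraction p over time t implies F = -ln(1 - p)/t. This is a textbook simplification, not the authors' estimator, which accounts for each shark's individual time at liberty.

```python
import math

def instantaneous_f(prop_harvested, years_at_liberty):
    """Instantaneous fishing mortality F from the fraction of tags harvested,
    assuming exponential survival: p = 1 - exp(-F * t)  =>  F = -ln(1 - p)/t."""
    return -math.log(1.0 - prop_harvested) / years_at_liberty

# 30% of tagged sharks harvested; shorter mean times at liberty push F
# toward the upper end of the reported 0.19-0.56 range
print(round(instantaneous_f(0.30, 1.0), 2))  # → 0.36
```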
Energy Technology Data Exchange (ETDEWEB)
Casey, Daniel
1984-10-01
This assessment addresses the impacts to wildlife populations and wildlife habitats due to the Hungry Horse Dam project on the South Fork of the Flathead River and previous mitigation of these losses. In order to develop and focus mitigation efforts, it was first necessary to estimate wildlife and wildlife habitat losses attributable to the construction and operation of the project. The purpose of this report was to document the best available information concerning the degree of impacts to target wildlife species. Indirect benefits to wildlife species not listed will be identified during the development of alternative mitigation measures. Wildlife species incurring positive impacts attributable to the project were identified.
Martin, Elizabeth G.; Palmer, Colin
2014-01-01
Air Space Proportion (ASP) is a measure of how much air is present within a bone, which allows for a quantifiable comparison of pneumaticity between specimens and species. Measured from zero to one, higher ASP means more air and less bone. Conventionally, it is estimated from measurements of the internal and external bone diameter, or by analyzing cross-sections. To date, the only pterosaur ASP study has been carried out by visual inspection of sectioned bones within matrix. Here, computed tomography (CT) scans are used to calculate ASP in a small sample of pterosaur wing bones (mainly phalanges) and to assess how the values change throughout the bone. These results show higher ASPs than previous pterosaur pneumaticity studies, and more significantly, higher ASP values in the heads of wing bones than the shaft. This suggests that pneumaticity has been underestimated previously in pterosaurs, birds, and other archosaurs when shaft cross-sections are used to estimate ASP. Furthermore, ASP in pterosaurs is higher than those found in birds and most sauropod dinosaurs, giving them among the highest ASP values of animals studied so far, supporting the view that pterosaurs were some of the most pneumatized animals to have lived. The high degree of pneumaticity found in pterosaurs is proposed to be a response to the wing bone bending stiffness requirements of flight rather than a means to reduce mass, as is often suggested. Mass reduction may be a secondary result of pneumaticity that subsequently aids flight. PMID:24817312
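For the conventional diameter-based estimate mentioned in the abstract, the ASP of an idealized circular cross-section is just the squared ratio of internal to external diameter; a minimal sketch (the example dimensions are illustrative, not measurements from the paper):

```python
def asp_from_diameters(inner_d, outer_d):
    """Air Space Proportion of a circular cross-section: the internal
    (air-filled) area divided by the total area enclosed by the bone wall.
    ASP = (pi/4 * d_in^2) / (pi/4 * d_out^2) = (d_in / d_out)^2."""
    return (inner_d / outer_d) ** 2

# A thin-walled wing-bone shaft (illustrative dimensions in mm)
print(round(asp_from_diameters(9.2, 10.0), 2))  # → 0.85
```

CT-based ASP instead integrates air and bone volumes through the whole element, which is how the study detects the higher values in the bone heads that shaft cross-sections miss.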
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from...... the two estimation methods without noise correction is studied. Second, a noise robust GMM estimator is constructed by approximating integrated volatility by a realized kernel instead of realized variance. The PBEFs are also recalculated in the noise setting, and the two estimation methods' ability......
DEFF Research Database (Denmark)
Rettedal, Elizabeth; Gumpert, Heidi; Sommer, Morten
2014-01-01
show that carefully designed conditions enable cultivation of a representative proportion of human gut bacteria, enabling rapid multiplex phenotypic profiling. We use this approach to determine the phylogenetic distribution of antibiotic tolerance phenotypes for 16 antibiotics in the human gut...... microbiota. Based on the phenotypic mapping, we tailor antibiotic combinations to specifically select for previously uncultivated bacteria. Utilizing this method we cultivate and sequence the genomes of four isolates, one of which apparently belongs to the genus Oscillibacter; uncultivated Oscillibacter...
Energy Technology Data Exchange (ETDEWEB)
Tedeschi, Enrico; Canna, Antonietta; Cocozza, Sirio; Russo, Carmela; Angelini, Valentina; Brunetti, Arturo [University "Federico II", Neuroradiology, Department of Advanced Biomedical Sciences, Naples (Italy)]; Palma, Giuseppe; Quarantelli, Mario [National Research Council, Institute of Biostructure and Bioimaging, Naples (Italy)]; Borrelli, Pasquale; Salvatore, Marco [IRCCS SDN, Naples (Italy)]; Lanzillo, Roberta; Postiglione, Emanuela; Morra, Vincenzo Brescia [University "Federico II", Department of Neurosciences, Reproductive and Odontostomatological Sciences, Naples (Italy)]
2016-12-15
To evaluate changes in T1 and T2* relaxometry of dentate nuclei (DN) with respect to the number of previous administrations of Gadolinium-based contrast agents (GBCA). In 74 relapsing-remitting multiple sclerosis (RR-MS) patients with variable disease duration (9.8±6.8 years) and severity (Expanded Disability Status Scale scores: 3.1±0.9), the DN R1 (1/T1) and R2* (1/T2*) relaxation rates were measured using two unenhanced 3D Dual-Echo spoiled Gradient-Echo sequences with different flip angles. Correlations of the number of previous GBCA administrations with DN R1 and R2* relaxation rates were tested, including gender and age effect, in a multivariate regression analysis. The DN R1 (normalized by brainstem) significantly correlated with the number of GBCA administrations (p<0.001), maintaining the same significance even when including MS-related factors. Instead, the DN R2* values correlated only with age (p=0.003), and not with GBCA administrations (p=0.67). In a subgroup of 35 patients for whom the administered GBCA subtype was known, the effect of GBCA on DN R1 appeared mainly related to linear GBCA. In RR-MS patients, the number of previous GBCA administrations correlates with R1 relaxation rates of DN, while R2* values remain unaffected, suggesting that T1-shortening in these patients is related to the amount of Gadolinium given. (orig.)
Directory of Open Access Journals (Sweden)
Nasrin Saharkhiz
2014-11-01
Background: Embryo transfer (ET) is one of the most important steps in assisted reproductive technology (ART) cycles and is affected by many factors, notably the depth of embryo deposition in the uterus. In this study, the outcomes of intracytoplasmic sperm injection (ICSI) cycles after blind embryo transfer and embryo transfer based on previously measured uterine length using vaginal ultrasound were compared. Materials and Methods: This prospective randomised clinical trial included one hundred and forty non-donor fresh embryo transfers during January 2010 to June 2011. In group I, ET was performed using the conventional (blind) method at 5-6 cm from the external os; in group II, ET was done at a depth of 1-1.5 cm from the uterine fundus based on previously measured uterine length using vaginal sonography. Appropriate statistical analysis was performed using Student's t test and the Chi-square or Fisher's exact test. The software used was PASW Statistics version 18. A p value <0.05 was considered statistically significant. Results: The chemical pregnancy rate was 28.7% in group I and 42.1% in group II; the difference was not statistically significant (p=0.105). Clinical pregnancy, ongoing pregnancy and implantation rates for group I were 21.2%, 17.7%, and 12.8%, while for group II they were 33.9%, 33.9%, and 22.1%, respectively. In groups I and II, abortion rates were 34.7% and 0%, respectively, a statistically significant difference (p<0.005). No ectopic pregnancy occurred in either group. Conclusion: The use of uterine length measurement during the treatment cycle in order to place embryos at a depth of 1-1.5 cm from the fundus significantly increases clinical and ongoing pregnancy and implantation rates, while leading to a decrease in the abortion rate (Registration Number: IRCT2014032512494N1).
Estimating North Dakota's Economic Base
Coon, Randal C.; Leistritz, F. Larry
2009-01-01
North Dakota’s economic base is comprised of those activities producing a product paid for by nonresidents, or products exported from the state. North Dakota’s economic base activities include agriculture, mining, manufacturing, tourism, and federal government payments for construction and to individuals. Development of the North Dakota economic base data is important because it provides the information to quantify the state’s economic growth, and it creates the final demand sectors for the N...
Fuzzy logic based ELF magnetic field estimation in substations
International Nuclear Information System (INIS)
Kosalay, I.
2008-01-01
This paper examines estimation of the extremely low frequency magnetic fields (MF) in the power substation. First, the results of the previous relevant research studies and the MF measurements in a sample power substation are presented. Then, a fuzzy logic model based on the geometric definitions in order to estimate the MF distribution is explained. Visual software, which has a three-dimensional screening unit, based on the fuzzy logic technique, has been developed. (authors)
The impact of previous knee injury on force plate and field-based measures of balance.
Baltich, Jennifer; Whittaker, Jackie; Von Tscharner, Vinzenz; Nettel-Aguirre, Alberto; Nigg, Benno M; Emery, Carolyn
2015-10-01
Individuals with post-traumatic osteoarthritis demonstrate increased sway during quiet stance. The prospective association between balance and disease onset is unknown. Improved understanding of balance in the period between joint injury and disease onset could inform secondary prevention strategies to prevent or delay the disease. This study examines the association between youth sport-related knee injury and balance, 3-10 years post-injury. Participants included 50 individuals (ages 15-26 years) with a sport-related intra-articular knee injury sustained 3-10 years previously and 50 uninjured age-, sex- and sport-matched controls. Force-plate measures during single-limb stance (center-of-pressure 95% ellipse-area, path length, excursion, entropic half-life) and field-based balance scores (triple single-leg hop, star-excursion, unipedal dynamic balance) were collected. Descriptive statistics (mean within-pair difference; 95% confidence intervals) were used to compare groups. Linear regression (adjusted for injury history) was used to assess the relationship between ellipse-area and field-based scores. Injured participants on average demonstrated greater medio-lateral excursion [mean within-pair difference (95% confidence interval); 2.8 mm (1.0, 4.5)], more regular medio-lateral position [10 ms (2, 18)], and shorter triple single-leg hop distances [-30.9% (-8.1, -53.7)] than controls, while no between-group differences existed for the remaining outcomes. After taking into consideration injury history, triple single-leg hop scores demonstrated a linear association with ellipse area (β=0.52, 95% confidence interval 0.01, 1.01). On average the injured participants adjusted their position less frequently and demonstrated a larger magnitude of movement during single-limb stance compared to controls. These findings support the evaluation of balance outcomes in the period between knee injury and post-traumatic osteoarthritis onset. Copyright © 2015 Elsevier Ltd. All rights
Late preterm birth and previous cesarean section: a population-based cohort study.
Yasseen III, Abdool S; Bassil, Kate; Sprague, Ann; Urquia, Marcelo; Maguire, Jonathon L
2018-02-21
Late preterm birth (LPB) is increasingly common and associated with higher morbidity and mortality than term birth. Yet, little is known about the influence of previous cesarean section (PCS) on the occurrence of LPB in subsequent pregnancies. We aim to evaluate this association along with the potential mediation by cesarean sections in the current pregnancy. We use population-based birth registry data (2005-2012) to establish a cohort of live born singleton infants born between 34 and 41 gestational weeks to multiparous mothers. PCS was the primary exposure, LPB (34-36 weeks) was the primary outcome, and an unplanned or emergency cesarean section in the current pregnancy was the potential mediator. Associations were quantified using propensity weighted multivariable Poisson regression, and mediating associations were explored using the Baron-Kenny approach. The cohort included 481,531 births, 21,893 (4.5%) were LPB, and 119,983 (24.9%) were predated by at least one PCS. Among mothers with at least one PCS, 6307 (5.26%) were LPB. There was an increased risk of LPB among women with at least one PCS (adjusted relative risk (aRR) 1.20, 95% CI [1.16, 1.23]). An unplanned or emergency cesarean section in the current pregnancy was identified as a strong mediator of this relationship (mediation ratio = 97%). PCS was associated with a higher risk of LPB in subsequent pregnancies. This may be due to an increased risk of subsequent unplanned or emergency preterm cesarean sections. Efforts to minimize index cesarean sections may reduce the risk of LPB in subsequent pregnancies.
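The reported 97% mediation ratio can be illustrated with the Baron-Kenny idea of comparing total and direct effects. The numbers below are hypothetical, chosen only to reproduce a ratio near 97% given the reported total aRR of 1.20; effects are taken on the log-relative-risk scale.

```python
import math

def mediation_ratio(total_effect, direct_effect):
    """Proportion of the total effect carried by the mediator:
    (total - direct) / total, in the Baron-Kenny decomposition."""
    return (total_effect - direct_effect) / total_effect

total = math.log(1.20)    # total effect of PCS on LPB (reported aRR 1.20)
direct = math.log(1.005)  # hypothetical direct effect once the mediator
                          # (unplanned/emergency cesarean) is adjusted for
print(round(mediation_ratio(total, direct), 2))  # → 0.97
```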
Channel Estimation in DCT-Based OFDM
Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing
2014-01-01
This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparse property of the wireless channel into account. Simulation results have shown that the CS based channel estimation is expected to have better performance than LS. However, MMSE can achieve optimal performance because of prior knowledge of the channel statistics. PMID:24757439
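The pilot-aided LS estimator mentioned above divides each received pilot by the known transmitted pilot symbol; a minimal sketch with a generic per-tone model (illustrative only, not tied to the DCT-OFDM specifics of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pilots = 8
x_pilot = np.ones(n_pilots, dtype=complex)            # known pilot symbols
h_true = rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)
noise = 0.01 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
y_pilot = h_true * x_pilot + noise                    # received pilot tones

h_ls = y_pilot / x_pilot                              # least-squares estimate
print(np.allclose(h_ls, h_true, atol=0.1))            # → True
```

An MMSE estimator would additionally weight this LS solution using the channel covariance and noise variance, which is why it can approach optimal performance when those statistics are known.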
Solar radiation estimation based on the insolation
International Nuclear Information System (INIS)
Assis, F.N. de; Steinmetz, S.; Martins, S.R.; Mendez, M.E.G.
1998-01-01
A series of daily global solar radiation data measured by an Eppley pyranometer was used to test PEREIRA and VILLA NOVA's (1997) model to estimate the potential of radiation based on the instantaneous values measured at solar noon. The model also allows estimating the parameters of PRESCOTT's equation (1940), assuming a = 0.29 cos φ. The results demonstrated the model's validity for the studied conditions. Simultaneously, the hypothesis of generalizing the use of the insolation-based radiation estimation formulas, using K = Ko (0.29 cos φ + 0.50 n/N), was analysed and confirmed
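The generalized Prescott form quoted above, K = Ko (0.29 cos φ + 0.50 n/N), can be sketched directly. Units follow whatever units Ko is given in; the latitude and insolation values below are illustrative assumptions, not data from the study.

```python
import math

def prescott_k(ko, latitude_deg, n_insolation, n_daylength):
    """Global solar radiation K from extraterrestrial radiation Ko, latitude
    phi, and relative insolation n/N: K = Ko * (0.29*cos(phi) + 0.50*n/N)."""
    phi = math.radians(latitude_deg)
    return ko * (0.29 * math.cos(phi) + 0.50 * n_insolation / n_daylength)

# Mostly clear day (n/N = 0.8) at latitude 31.8 degrees, Ko = 35 (same units as K)
print(round(prescott_k(35.0, 31.8, 9.6, 12.0), 1))  # → 22.6
```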
International Nuclear Information System (INIS)
Shaikh, S.; Devrajani, B.R.; Kalhoro, M.
2012-01-01
Objective: To determine the efficacy of peg-interferon-based therapy in patients refractory to previous conventional interferon-based treatment and the factors predicting sustained viral response (SVR). Study Design: Analytical study. Place and Duration of Study: Medical Unit IV, Liaquat University Hospital, Jamshoro, from July 2009 to June 2011. Methodology: This study included consecutive patients with hepatitis C who were previously treated with conventional interferon-based treatment for 6 months but were either non-responders, relapsed or had virologic breakthrough, and who had stage ≥2 fibrosis on liver biopsy. All eligible patients were given peg-interferon at a dosage of 180 μg weekly with ribavirin thrice a day for 6 months. Sustained viral response (SVR) was defined as absence of HCV RNA at twenty four weeks after treatment. All data was processed on SPSS version 16. Results: Out of 450 patients enrolled in the study, 192 were excluded on the basis of minimal fibrosis (stages 0 and 1). Two hundred and fifty eight patients fulfilled the inclusion criteria and 247 completed the course of peg-interferon treatment. One hundred and sixty one (62.4%) were males and 97 (37.6%) were females. The mean age was 39.9 ± 6.1 years, haemoglobin was 11.49 ± 2.45 g/dl, platelet count was 127.2 ± 50.6 × 10³/mm³, and ALT was 99 ± 65 IU/L. SVR was achieved in 84 (32.6%). A strong association was found between SVR and the pattern of response (p = 0.001), degree of fibrosis and early viral response (p = 0.001). Conclusion: Peg-interferon-based treatment is an effective and safe treatment option for patients refractory to conventional interferon-based treatment. (author)
DEFF Research Database (Denmark)
Risgaard, Bjarke; Waagstein, Kristine; Winkel, Bo Gregers
2015-01-01
Introduction: Psychiatric patients have premature mortality compared to the general population. The incidence of sudden cardiac death (SCD) in psychiatric patients is unknown in a nationwide setting. The aim of this study was to compare nationwide SCD incidence rates in young individuals with and without previous psychiatric disease. Method: Nationwide, retrospective cohort study including all deaths in people aged 18–35 years in 2000–2006 in Denmark. The unique Danish death certificates and autopsy reports were used to identify SCD cases. Psychiatric disease was defined as a previous psychiatric......
Analysis of Product Buying Decision on Lazada E-commerce based on Previous Buyers’ Comments
Neil Aldrin
2017-01-01
The aims of the present research are: 1) to establish whether product buying decisions occur, 2) to understand how product buying decisions occur among Lazada e-commerce customers, and 3) to examine how previous buyers' comments can increase product buying decisions on Lazada e-commerce. This research utilizes a qualitative method, one that examines other studies and builds assumptions or discussion from their results so that further analyses can be made in order to widen the idea ...
DEFF Research Database (Denmark)
Yang, Ren-Qiang; Jabbari, Javad; Cheng, Xiao-Shu
2014-01-01
BACKGROUND: Marfan syndrome (MFS) is a rare autosomal dominantly inherited connective tissue disorder with an estimated prevalence of 1:5,000. More than 1000 variants have been previously reported to be associated with MFS. However, the disease-causing effect of these variants may be questionable...
Subspace Based Blind Sparse Channel Estimation
DEFF Research Database (Denmark)
Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki
2012-01-01
The paper proposes a subspace based blind sparse channel estimation method using ℓ1–ℓ2 optimization, replacing the ℓ2-norm minimization in the conventional subspace based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve......
Torgén, M; Winkel, J; Alfredsson, L; Kilbom, A
1999-06-01
The principal aim of the present study was to evaluate questionnaire-based information on past physical work loads (6-year recall). Effects of memory difficulties on reproducibility were evaluated for 82 subjects by comparing previously reported results on current work loads (test-retest procedure) with the same items recalled 6 years later. Validity was assessed by comparing self-reports in 1995, regarding work loads in 1989, with worksite measurements performed in 1989. Six-year reproducibility, calculated as weighted kappa coefficients (κw), varied between 0.36 and 0.86, with the highest values for the proportion of the workday spent sitting and for perceived general exertion, and the lowest values for trunk and neck flexion. The six-year reproducibility results were similar to previously reported test-retest results for these items; this finding indicates that memory difficulties were a minor problem. The validity of the questionnaire responses, expressed as rank correlations (rs) between the questionnaire responses and workplace measurements, varied between -0.16 and 0.78. The highest values were obtained for the items sitting and repetitive work, and the lowest, "unacceptable", values were for head rotation and neck flexion. Misclassification of exposure did not appear to be differential with regard to musculoskeletal symptom status, as judged by the calculated risk estimates. The validity of some of these self-administered questionnaire items appears sufficient for a crude assessment of physical work loads in the past in epidemiologic studies of the general population with predominantly low levels of exposure.
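The weighted kappa used for the reproducibility analysis can be sketched for ordinal ratings with linear disagreement weights. This is a minimal illustration of the statistic itself, not the study's statistical software, and the example ratings are invented.

```python
import numpy as np

def weighted_kappa(r1, r2, n_categories):
    """Weighted kappa for two ratings on an ordinal scale 0..n_categories-1,
    with linear weights that penalize disagreement by its distance."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    observed = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()
    # chance-expected agreement from the marginal distributions
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.indices((n_categories, n_categories))
    w = np.abs(i - j) / (n_categories - 1)   # linear disagreement weights
    return 1.0 - (w * observed).sum() / (w * expected).sum()

# Perfect agreement between the two recall occasions gives kappa = 1
print(weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # → 1.0
```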
Evaluating Expert Estimators Based on Elicited Competences
Directory of Open Access Journals (Sweden)
Hrvoje Karna
2015-07-01
Full Text Available Utilization of the expert effort estimation approach shows promising results when applied to the software development process. It is based on judgment and decision making and, owing to its comparative advantages, is used extensively in situations where classic models cannot be applied. This becomes even more accentuated in today's highly dynamic project environment. Confronted with these facts, companies are placing ever greater focus on their employees, specifically on their competences. Competences are defined as the knowledge, skills and abilities required to perform job assignments. During the effort estimation process, different underlying expert competences influence the outcome, i.e. the judgments experts express. A particular problem here is the elicitation, from an input collection, of those competences that are responsible for accurate estimates. Based on these findings, different measures can be taken to enhance the estimation process. The approach used in the study presented in this paper was targeted at eliciting the expert estimator competences responsible for producing accurate estimates. Based on the individual competence scores resulting from the performed modeling, experts were ranked using a weighted scoring method and their performance was evaluated. Results confirm that experts with higher scores in the competences identified by the applied models in general exhibit higher accuracy during the estimation process. For the purpose of modeling, data mining methods were used, specifically the multilayer perceptron neural network and the classification and regression decision tree algorithms. Among others, the applied methods are suitable for the purpose of elicitation as, in a sense, they mimic the way human brains operate. Data used in the study were collected from real projects in a company specialized in the development of IT solutions in the telecom domain. The proposed model, the applied methodology for elicitation of expert competences and the obtained results give evidence that in...
Analysis of Product Buying Decision on Lazada E-commerce based on Previous Buyers’ Comments
Directory of Open Access Journals (Sweden)
Neil Aldrin
2017-06-01
Full Text Available The aims of the present research are: 1) to establish whether a product buying decision occurs, 2) to understand how the product buying decision occurs among Lazada e-commerce customers, and 3) to understand how previous buyers' comments can increase product buying decisions on Lazada e-commerce. This research utilizes a qualitative method: it reviews other studies and synthesizes their findings and discussions so that further analysis can be made in order to widen ideas and opinions. The results show that a product with many ratings and reviews triggers other buyers to purchase that product. The conclusion is that a product buying decision may occur because several stages precede the decision: problem recognition, identifying needs, collecting information, evaluating alternatives, and post-purchase evaluation. In those stages, the buying decision on Lazada e-commerce is supported by price, promotion, service, and brand.
Unit root tests based on M estimators
Lucas, André
1995-01-01
This paper considers unit root tests based on M estimators. The asymptotic theory for these tests is developed. It is shown how the asymptotic distributions of the tests depend on nuisance parameters and how tests can be constructed that are invariant to these parameters. It is also shown that a
Entropy-based adaptive attitude estimation
Kiani, Maryam; Barzegar, Aylin; Pourtakdoust, Seid H.
2018-03-01
Gaussian approximation filters have increasingly been developed to enhance the accuracy of attitude estimation in space missions. The effective employment of these algorithms demands accurate knowledge of system dynamics and measurement models, as well as their noise characteristics, which are usually unavailable or unreliable. An innovation-based adaptive filtering approach has been adopted as a solution to this problem; however, it exhibits two major challenges, namely appropriate window size selection and guaranteed assurance of positive definiteness for the estimated noise covariance matrices. The current work presents two novel techniques based on relative entropy and confidence level concepts in order to address the abovementioned drawbacks. The proposed adaptation techniques are applied to two nonlinear state estimation algorithms, the extended Kalman filter and the cubature Kalman filter, for attitude estimation of a low Earth orbit satellite equipped with three-axis magnetometers and Sun sensors. The effectiveness of the proposed adaptation scheme is demonstrated by means of comprehensive sensitivity analysis on the system and environmental parameters by using extensive independent Monte Carlo simulations.
ON ESTIMATING FORCE-FREENESS BASED ON OBSERVED MAGNETOGRAMS
Energy Technology Data Exchange (ETDEWEB)
Zhang, X. M.; Zhang, M.; Su, J. T., E-mail: xmzhang@nao.cas.cn [Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Road, Chaoyang District, Beijing 100012 (China)
2017-01-01
It is a common practice in the solar physics community to test whether or not measured photospheric or chromospheric vector magnetograms are force-free, using the Maxwell stress as a measure. Some previous studies have suggested that magnetic fields of active regions in the solar chromosphere are close to being force-free whereas there is no consistency among previous studies on whether magnetic fields of active regions in the solar photosphere are force-free or not. Here we use three kinds of representative magnetic fields (analytical force-free solutions, modeled solar-like force-free fields, and observed non-force-free fields) to discuss how measurement issues such as limited field of view (FOV), instrument sensitivity, and measurement error could affect the estimation of force-freeness based on observed magnetograms. Unlike previous studies that focus on discussing the effect of limited FOV or instrument sensitivity, our calculation shows that just measurement error alone can significantly influence the results of estimates of force-freeness, due to the fact that measurement errors in horizontal magnetic fields are usually ten times larger than those in vertical fields. This property of measurement errors, interacting with the particular form of a formula for estimating force-freeness, would result in wrong judgments of the force-freeness: a truly force-free field may be mistakenly estimated as being non-force-free and a truly non-force-free field may be estimated as being force-free. Our analysis calls for caution when interpreting estimates of force-freeness based on measured magnetograms, and also suggests that the true photospheric magnetic field may be further away from being force-free than it currently appears to be.
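The Maxwell-stress test alluded to above is commonly evaluated through the net Lorentz force integrals over the magnetogram, in the spirit of the criteria popularized by Metcalf et al. (1995). A rough sketch follows; constant prefactors cancel in the ratios, sign conventions vary between papers, and the "force-free" threshold (often 0.1) is a matter of convention:

```python
def force_free_metrics(bx, by, bz):
    """Net Maxwell-stress (Lorentz force) components over a magnetogram,
    normalized by the total magnetic pressure force f0.
    bx, by, bz: flat sequences of field components on a uniform grid;
    the pixel area and 1/(4*pi) factors cancel in the ratios."""
    fx = -2 * sum(x * z for x, z in zip(bx, bz))
    fy = -2 * sum(y * z for y, z in zip(by, bz))
    fz = -sum(x * x + y * y - z * z for x, y, z in zip(bx, by, bz))
    f0 = sum(x * x + y * y + z * z for x, y, z in zip(bx, by, bz))
    return abs(fx) / f0, abs(fy) / f0, abs(fz) / f0
```

Note that a purely vertical flux patch gives a third ratio of 1, i.e. maximally non-force-free by this measure, whereas a field with balanced horizontal and vertical magnetic pressure can drive all three ratios to zero.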
Directory of Open Access Journals (Sweden)
Kristina Dalberg
2006-09-01
Full Text Available Data on birth outcome and offspring health after the appearance of breast cancer are limited. The aim of this study was to assess the risk of adverse birth outcomes in women previously treated for invasive breast cancer compared with the general population of mothers. Of all 2,870,932 singleton births registered in the Swedish Medical Birth Registry during 1973-2002, 331 first births following breast cancer surgery--with a mean time to pregnancy of 37 mo (range 7-163)--were identified using linkage with the Swedish Cancer Registry. Logistic regression analysis was used. The estimates were adjusted for maternal age, parity, and year of delivery. Odds ratios (ORs) and 95% confidence intervals (CIs) were used to estimate infant health and mortality, delivery complications, the risk of preterm birth, and the rates of instrumental delivery and cesarean section. The large majority of births from women previously treated for breast cancer had no adverse events. However, births by women exposed to breast cancer were associated with an increased risk of delivery complications (OR 1.5, 95% CI 1.2-1.9), cesarean section (OR 1.3, 95% CI 1.0-1.7), very preterm birth (<32 wk) (OR 3.2, 95% CI 1.7-6.0), and low birth weight (<1500 g) (OR 2.9, 95% CI 1.4-5.8). A tendency towards an increased risk of malformations among the infants was seen especially in the later time period (1988-2002) (OR 2.1, 95% CI 1.2-3.7). It is reassuring that births overall were without adverse events, but our findings indicate that pregnancies in previously treated breast cancer patients should possibly be regarded as higher-risk pregnancies, with consequences for their surveillance and management.
ESTIMATION OF STATURE BASED ON FOOT LENGTH
Directory of Open Access Journals (Sweden)
Vidyullatha Shetty
2015-01-01
Full Text Available BACKGROUND: Stature is the height of a person in the upright posture. It is an important measure of physical identity. Estimation of body height from its segments or dismembered parts has important considerations for the identification of a living or dead human body, or of remains recovered from disasters or other similar conditions. OBJECTIVE: Stature is an important indicator for identification. There are numerous means to establish stature, and their significance lies in the simplicity of measurement, applicability and accuracy in prediction. The aim of the study was to review the relationship between foot length and body height. METHODS: The present study reviews various prospective studies which were done to estimate stature. All the measurements were taken using standard measuring devices and standard anthropometric techniques. RESULTS: This review shows that there is a correlation between stature and foot dimensions; it is found to be positive and statistically highly significant. Prediction of stature was found to be most accurate by multiple regression analysis. CONCLUSIONS: Stature and gender estimation can be done using foot measurements, and the study will help in medico-legal cases in establishing the identity of an individual; this would be useful for anatomists and anthropologists to calculate stature based on foot length.
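The regression approach singled out by the review can be sketched with an ordinary least-squares fit of stature on foot length. The sample values below are purely illustrative, not figures from any of the reviewed studies:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical sample: foot length (cm) vs stature (cm).
foot = [23.0, 24.5, 25.0, 26.5, 27.0, 28.0]
height = [158.0, 165.0, 167.0, 174.0, 176.0, 181.0]
a, b = fit_line(foot, height)
estimate = a + b * 26.0  # predicted stature for a 26 cm foot
```

A positive slope b reflects the positive correlation the review reports; multiple regression simply adds further foot dimensions as predictors.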
Dictionary-based fiber orientation estimation with improved spatial consistency.
Ye, Chuyang; Prince, Jerry L
2018-02-01
Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that
MMSE based MAP estimation for image denoising
Om, Hari; Biswas, Mantosh
2014-04-01
Denoising of a natural image corrupted by additive white Gaussian noise (AWGN) is a classical problem in image processing. The NeighShrink [17,18], LAWML [19], BiShrink [20,21], IIDMWT [23], IAWDMNC [25], and GIDMNWC [24] denoising algorithms remove the noise from the noisy wavelet coefficients using thresholding, retaining only the large coefficients and setting the remaining ones to zero. Generally the threshold depends mainly on the variance, image size, and image decomposition levels. The performances of these methods are not very effective as they are not spatially adaptive, i.e., the parameters considered are not smoothly varied in the neighborhood window. Our proposed method overcomes this weakness by using minimum mean square error (MMSE) based maximum a posteriori (MAP) estimation. In this paper, we modify the parameters, such as the variance, of the classical MMSE estimator in the neighborhood window of the noisy wavelet coefficients to remove the noise effectively. We demonstrate experimentally that our method outperforms the NeighShrink, LAWML, BiShrink, IIDMWT, IAWDMNC, and GIDMNWC methods in terms of the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). It is more effective particularly for highly corrupted natural images.
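The neighborhood-window MMSE shrinkage idea can be sketched for a one-dimensional band of wavelet coefficients. The window size and the simple local variance estimator below are illustrative choices, not the paper's exact parameterization:

```python
def mmse_shrink(coeffs, noise_var, half_window=2):
    """Shrink each noisy wavelet coefficient by a local MMSE (Wiener-style)
    factor: w_hat = w * s2 / (s2 + noise_var), where the signal variance s2
    is estimated from a sliding neighborhood window."""
    n = len(coeffs)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = coeffs[lo:hi]
        # local signal variance: E[w^2] minus the noise variance, floored at 0
        s2 = max(sum(c * c for c in window) / len(window) - noise_var, 0.0)
        out.append(coeffs[i] * s2 / (s2 + noise_var))
    return out
```

Small coefficients in quiet neighborhoods are driven to zero (as in hard thresholding), while large coefficients in energetic neighborhoods pass almost unchanged; the shrinkage factor varies smoothly across the window rather than switching abruptly at a threshold.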
A Nonlinear Attitude Estimator for Attitude and Heading Reference Systems Based on MEMS Sensors
DEFF Research Database (Denmark)
Wang, Yunlong; Soltani, Mohsen; Hussain, Dil muhammed Akbar
2016-01-01
In this paper, a nonlinear attitude estimator is designed for an Attitude Heading and Reference System (AHRS) based on Micro Electro-Mechanical Systems (MEMS) sensors. The design process of the attitude estimator is stated in detail, and the equilibrium point of the estimator error model...... the problems in previous research works. Moreover, the estimation of MEMS gyroscope bias is also included in this estimator. The designed nonlinear attitude estimator is firstly tested in a simulation environment and then implemented in an AHRS hardware for further experiments. Finally, the attitude estimation...
Access Based Cost Estimation for Beddown Analysis
National Research Council Canada - National Science Library
Pennington, Jasper E
2006-01-01
The purpose of this research is to develop an automated web-enabled beddown estimation application for Air Mobility Command in order to increase the effectiveness and enhance the robustness of beddown estimates...
Estimate-Merge-Technique-based algorithms to track an underwater ...
Indian Academy of Sciences (India)
In this paper, two novel methods based on the Estimate Merge Technique are proposed. The Estimate Merge Technique involves a process of obtaining a final estimate by the fusion of a posteriori estimates given by different nonlinear estimators, which are in turn driven by the towed array bearing-only measurements.
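The merge step, fusing a posteriori estimates from several estimators, can be illustrated with the standard inverse-variance weighting rule for independent unbiased estimates (a common fusion choice, not necessarily the authors' exact formula):

```python
def fuse(estimates, variances):
    """Combine independent unbiased estimates by inverse-variance weighting.
    Returns the fused estimate and its (reduced) variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    return fused, 1.0 / total

# e.g. two bearing estimates (degrees) from different trackers
est, var = fuse([30.0, 34.0], [4.0, 4.0])
```

The fused variance 1/Σ(1/σᵢ²) is never larger than the smallest input variance, which is why merging several imperfect estimators can outperform each one individually.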
Eagle, Shawn R; Connaboy, Chris; Nindl, Bradley C; Allison, Katelyn F
2018-02-01
Musculoskeletal injuries to the extremities are a primary concern for the United States (US) military. One possible injury risk factor in this population is side-to-side strength imbalance. To examine the odds of reporting a previous shoulder injury in US Marine Corps Ground Combat Element Integrated Task Force volunteers based on side-to-side strength differences in isokinetic shoulder strength. Cohort study; Level of evidence, 3. Male (n = 219) and female (n = 91) Marines were included in this analysis. Peak torque values from 5 shoulder internal/external rotation repetitions were averaged and normalized to body weight. The difference in side-to-side strength measurements was calculated as the absolute value of the limb difference divided by the mean peak torque of the dominant limb. Participants were placed into groups based on the magnitude of these differences: <10%, 10%-20%, and >20%. Odds ratios (ORs) and 95% CIs were calculated. When separated by sex, 13.2% of men reported an injury, while 5.5% of women reported an injury. Female Marines with >20% internal rotation side-to-side strength differences demonstrated increased odds of reporting a previous shoulder injury compared with those with lesser magnitude differences. Additionally, female sex appears to drastically affect the increased odds of reporting shoulder injuries (OR, 13.9-15.4) with larger magnitude differences (ie, >20%) compared with those with lesser magnitude differences (ie, <10% and 10%-20%). The retrospective cohort design of this study cannot delineate cause and effect but establishes a relationship between female Marines and greater odds of larger magnitude strength differences after returning from an injury.
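The odds ratios reported above follow from a standard 2×2 table calculation with a log-scale normal approximation for the confidence interval. The counts in the usage example are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 table:
    a = exposed & injured,   b = exposed & uninjured,
    c = unexposed & injured, d = unexposed & uninjured."""
    or_ = (a * d) / (b * c)
    # standard error of ln(OR) via the Woolf formula
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 10/20 injured among >20% imbalance, 5/40 among the rest
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```

An interval that excludes 1 indicates a statistically significant association at the chosen level.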
Heraud, J. A.; Centa, V. A.; Bleier, T.
2017-12-01
During the past four years, magnetometers deployed on the Peruvian coast have been providing evidence that the ULF pulses received are indeed generated at the subduction or Benioff zone and are connected with the occurrence of earthquakes within a few kilometers of the source of such pulses. This evidence was presented at the AGU 2015 Fall Meeting, showing the results of triangulation of pulses from two magnetometers located in the central area of Peru, using data collected during a two-year period. Additional work has been done and the method has now been expanded to provide the instantaneous energy released at the stress areas on the Benioff zone during the precursory stage, before an earthquake occurs. Collected data from several events and from other parts of the country will be shown in a sequential animated form that illustrates the way energy is released in the ULF part of the electromagnetic spectrum. The process has been extended in time and to other geographical areas. Only pulses associated with the occurrence of earthquakes are taken into account, in an area highly associated with subduction-zone seismic events, and several pulse parameters have been used to estimate a function relating the magnitude of the earthquake to the value of a function generated with those parameters. The results shown, including the animated data video, constitute additional work towards the estimation of the magnitude of an earthquake about to occur, based on electromagnetic pulses that originated at the subduction zone. The method is providing clearer evidence that electromagnetic precursors in effect convey physical and useful information prior to the advent of a seismic event.
International Nuclear Information System (INIS)
Lubis, L.I.; Dincer, I.; Rosen, M.A.
2008-01-01
An extension of a previous Life Cycle Assessment (LCA) of nuclear-based hydrogen production using thermochemical water decomposition is reported. The copper-chlorine thermochemical cycle is considered, and the environmental impacts of the nuclear and thermochemical plants are assessed, while future needs are identified. Environmental impacts are investigated using CML 2001 impact categories. The nuclear fuel cycle and construction of the hydrogen plant contribute significantly to total environmental impacts. The environmental impacts of operating the thermochemical hydrogen production plant contribute much less. Changes in the inventory of chemicals needed in the thermochemical plant do not significantly affect the total impacts. Improvement analysis suggests the development of more sustainable processes, particularly in the nuclear plant. Other important and necessary future extensions of the research reported are also provided. (author)
SPECTRAL data-based estimation of soil heat flux
Singh, Ramesh K.; Irmak, A.; Walter-Shea, Elizabeth; Verma, S.B.; Suyker, A.E.
2011-01-01
Numerous existing spectral-based soil heat flux (G) models have shown wide variation in performance for maize and soybean cropping systems in Nebraska, indicating the need for localized calibration and model development. The objectives of this article are to develop a semi-empirical model to estimate G from a normalized difference vegetation index (NDVI) and net radiation (Rn) for maize (Zea mays L.) and soybean (Glycine max L.) fields in the Great Plains, and present the suitability of the developed model to estimate G under similar and different soil and management conditions. Soil heat fluxes measured in both irrigated and rainfed fields in eastern and south-central Nebraska were used for model development and validation. An exponential model that uses NDVI and Rn was found to be the best to estimate G based on r2 values. The effect of geographic location, crop, and water management practices were used to develop semi-empirical models under four case studies. Each case study has the same exponential model structure but a different set of coefficients and exponents to represent the crop, soil, and management practices. Results showed that the semi-empirical models can be used effectively for G estimation for nearby fields with similar soil properties for independent years, regardless of differences in crop type, crop rotation, and irrigation practices, provided that the crop residue from the previous year is more than 4000 kg ha-1. The coefficients calibrated from particular fields can be used at nearby fields in order to capture temporal variation in G. However, there is a need for further investigation of the models to account for the interaction effects of crop rotation and irrigation. Validation at an independent site having different soil and crop management practices showed the limitation of the semi-empirical model in estimating G under different soil and environment conditions.
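A semi-empirical exponential model of the form described, G estimated from net radiation and NDVI, can be sketched as below. The coefficients are placeholders, since the article calibrates a separate set of coefficients and exponents per crop/soil/management case study:

```python
import math

def soil_heat_flux(rn, ndvi, a=0.3, b=-2.1):
    """Exponential soil heat flux model G = Rn * a * exp(b * NDVI).
    rn: net radiation (W m^-2); ndvi: normalized difference vegetation index.
    a and b are illustrative placeholders, not calibrated values."""
    return rn * a * math.exp(b * ndvi)
```

With a negative exponent b, G declines as the canopy develops (NDVI rises) and shades the soil, which is the qualitative behavior such models are built to capture.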
Monte Carlo-Based Tail Exponent Estimator
Czech Academy of Sciences Publication Activity Database
Baruník, Jozef; Vácha, Lukáš
2010-01-01
Roč. 2010, č. 6 (2010), s. 1-26 R&D Projects: GA ČR GA402/09/0965; GA ČR GD402/09/H045; GA ČR GP402/08/P207 Institutional research plan: CEZ:AV0Z10750506 Keywords : Hill estimator * α-stable distributions * tail exponent estimation Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/barunik-0342493.pdf
Energy Technology Data Exchange (ETDEWEB)
Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
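The EnKF analysis step that the SEOD method builds on can be sketched for a scalar parameter using the perturbed-observation update, a minimal textbook form rather than the authors' implementation:

```python
import random

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """One EnKF analysis step for a scalar parameter.
    ensemble: prior parameter samples; obs_op maps a parameter value
    to a predicted measurement; obs_var is the observation error variance."""
    pred = [obs_op(x) for x in ensemble]
    mx = sum(ensemble) / len(ensemble)
    my = sum(pred) / len(pred)
    # sample cross-covariance and predicted-measurement variance
    cov_xy = sum((x - mx) * (y - my)
                 for x, y in zip(ensemble, pred)) / (len(ensemble) - 1)
    var_y = sum((y - my) ** 2 for y in pred) / (len(pred) - 1)
    gain = cov_xy / (var_y + obs_var)
    # perturbed-observation update: each member sees a noisy copy of obs
    return [x + gain * (obs + rng.gauss(0, obs_var ** 0.5) - y)
            for x, y in zip(ensemble, pred)]

rng = random.Random(0)
prior = [rng.gauss(0.0, 1.0) for _ in range(200)]
posterior = enkf_update(prior, obs=2.0, obs_op=lambda x: x,
                        obs_var=0.25, rng=rng)
```

After the update the ensemble mean moves toward the observation and the ensemble spread contracts; a sampling-design method like SEOD then asks which observation, by an information metric such as relative entropy, would contract it the most.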
Tartaglione, Luciana; Gambuti, Angelita; De Cicco, Paola; Ercolano, Giuseppe; Ianaro, Angela; Taglialatela-Scafati, Orazio; Moio, Luigi; Forino, Martino
2018-03-01
Vitis vinifera cv Falanghina is an ancient grape variety of Southern Italy. A thorough phytochemical analysis of the Falanghina leaves was conducted to investigate its specialised metabolite content. Along with already known molecules, such as caftaric acid, quercetin-3-O-β-d-glucopyranoside, quercetin-3-O-β-d-glucuronide, kaempferol-3-O-β-d-glucopyranoside and kaempferol-3-O-β-d-glucuronide, a previously undescribed biflavonoid was identified. For this last compound, a moderate bioactivity against metastatic melanoma cell proliferation was discovered. This finding may be of interest to researchers studying human melanoma. The high content of antioxidant glycosylated flavonoids supports the exploitation of grape vine leaves as an inexpensive source of natural products for the food industry and for both pharmaceutical and nutraceutical companies. Additionally, this study offers important insights into the plant's physiology, thus prompting possible technological research on genetic selection based on vine adaptation to specific pedo-climatic environments. Copyright © 2017 Elsevier B.V. All rights reserved.
Monte Carlo-based tail exponent estimator
Czech Academy of Sciences Publication Activity Database
Baruník, Jozef; Vácha, Lukáš
2010-01-01
Roč. 389, č. 21 (2010), s. 4863-4874 ISSN 0378-4371 R&D Projects: GA ČR GA402/09/0965; GA ČR GD402/09/H045; GA ČR GP402/08/P207 Institutional research plan: CEZ:AV0Z10750506 Keywords : Hill estimator * α-stable distributions * Tail exponent estimation Subject RIV: AH - Economics Impact factor: 1.521, year: 2010 http://library.utia.cas.cz/separaty/2010/E/barunik-0346486.pdf
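The Hill estimator named in this record's keywords computes the tail exponent α from the k largest order statistics of a sample. A minimal sketch, checked on a synthetic Pareto sample (the Monte Carlo refinement studied in the paper addresses the choice of k, which this sketch takes as given):

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimate of the tail exponent alpha from the k largest
    order statistics of a positive sample."""
    xs = sorted(sample, reverse=True)
    threshold = xs[k]  # the (k+1)-th largest observation
    return k / sum(math.log(x / threshold) for x in xs[:k])

# synthetic heavy-tailed sample: Pareto with tail exponent alpha = 3,
# drawn by inverse-CDF sampling
rng = random.Random(1)
sample = [rng.random() ** (-1.0 / 3.0) for _ in range(5000)]
alpha_hat = hill_estimator(sample, k=500)
```

For an exact Pareto tail the estimate concentrates around the true α; in practice the estimate is sensitive to k, which motivates Monte Carlo-based corrections.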
Prediction-based estimating functions: Review and new developments
DEFF Research Database (Denmark)
Sørensen, Michael
2011-01-01
The general theory of prediction-based estimating functions for stochastic process models is reviewed and extended. Particular attention is given to optimal estimation, asymptotic theory and Gaussian processes. Several examples of applications are presented. In particular, partial observation...
Bootstrap-Based Inference for Cube Root Consistent Estimators
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi
This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent...
Web-based Interspecies Correlation Estimation
Web-ICE estimates acute toxicity (LC50/LD50) of a chemical to a species, genus, or family from the known toxicity of the chemical to a surrogate species. Web-ICE has modules to predict acute toxicity to aquatic (fish and invertebrates) and wildlife (birds and mammals) taxa for us...
Rang, C U; Licht, T R; Midtvedt, T; Conway, P L; Chao, L; Krogfelt, K A; Cohen, P S; Molin, S
1999-05-01
The growth physiology of Escherichia coli during colonization of the intestinal tract was studied with four animal models: the streptomycin-treated mouse carrying a reduced microflora, the monoassociated mouse with no other microflora than the introduced strain, the conventionalized streptomycin-treated mouse, and the conventionalized monoassociated mouse harboring a full microflora. A 23S rRNA fluorescent oligonucleotide probe was used for hybridization to whole E. coli cells fixed directly after being taken from the animals, and the respective growth rates of E. coli BJ4 in the four animal models were estimated by correlating the cellular concentrations of ribosomes with the growth rate of the strain. The growth rates thus estimated from the ribosomal content of E. coli BJ4 in vivo did not differ in the streptomycin-treated and the monoassociated mice. After conventionalization there was a slight decrease of the bacterial growth rates in both animal models.
Macroscopic Traffic State Estimation: Understanding Traffic Sensing Data-Based Estimation Errors
Directory of Open Access Journals (Sweden)
Paul B. C. van Erp
2017-01-01
Full Text Available Traffic state estimation is a crucial element in traffic management systems and in providing traffic information to road users. In this article, we evaluate traffic sensing data-based estimation error characteristics in macroscopic traffic state estimation. We consider two types of sensing data, that is, loop-detector data and probe speed data. These data are used to estimate the mean speed in a discrete space-time mesh. We assume that there are no errors in the sensing data. This allows us to study the errors resulting from the differences in characteristics between the sensing data and desired estimate together with the incomplete description of the relation between the two. The aim of the study is to evaluate the dependency of this estimation error on the traffic conditions and sensing data characteristics. For this purpose, we use microscopic traffic simulation, where we compare the estimates with the ground truth using Edie’s definitions. The study exposes a relation between the error distribution characteristics and traffic conditions. Furthermore, we find that it is important to account for the correlation between individual probe data-based estimation errors. Knowledge related to these estimation errors contributes to making better use of the available sensing data in traffic state estimation.
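The ground-truth comparison above uses Edie's generalized definitions, in which the mean speed of a space-time region is the total distance travelled by all vehicles divided by the total time they spend in the region. A minimal sketch:

```python
def edie_mean_speed(trajectories):
    """Edie's generalized mean speed over a space-time region.
    trajectories: list of (distance_m, time_s) contributions of each
    vehicle trajectory within the region."""
    total_dist = sum(d for d, _ in trajectories)
    total_time = sum(t for _, t in trajectories)
    return total_dist / total_time
```

For two vehicles crossing a 100 m region at 10 m/s and 20 m/s, Edie's mean speed is 200/15 ≈ 13.3 m/s, below the 15 m/s arithmetic mean of spot speeds a loop detector would suggest; discrepancies of this kind are one source of the estimation errors the article characterizes.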
DEFF Research Database (Denmark)
Andreasen, Charlotte Hartig; Nielsen, Jonas B; Refsgaard, Lena
2013-01-01
...... with these cardiomyopathies, but the disease-causing effect of reported variants is often dubious. In order to identify possible false-positive variants, we investigated the prevalence of previously reported cardiomyopathy-associated variants in recently published exome data. We searched for reported missense and nonsense variants in the NHLBI-Go Exome Sequencing Project (ESP) containing exome data from 6500 individuals. In ESP, we identified 94 variants out of 687 (14%) variants previously associated with HCM, 58 out of 337 (17%) variants associated with DCM, and 38 variants out of 209 (18%) associated with ARVC...... times higher than expected from the phenotype prevalences in the general population (HCM 1:500, DCM 1:2500, and ARVC 1:5000), and our data suggest that a high number of these variants are not monogenic causes of cardiomyopathy.
A non-stationary cost-benefit based bivariate extreme flood estimation approach
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationarity assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
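The joint exceedance probability at the heart of such bivariate risk calculations follows from the copula via P(U > u, V > v) = 1 - u - v + C(u, v). A minimal Monte Carlo sketch with a Gaussian copula (the copula family and correlation value are illustrative assumptions, not the paper's fitted model; in the non-stationary setting both the margins and the correlation would be functions of time and the probability re-evaluated per period):

```python
import math
import numpy as np

def joint_exceedance(u, v, rho, n=100_000, seed=0):
    """Monte Carlo estimate of P(U > u, V > v) for uniform margins
    coupled by a Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.standard_normal(n)
    # map correlated normals to uniform margins via the normal CDF
    phi = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
    return float(np.mean((phi(z1) > u) & (phi(z2) > v)))

p_indep = joint_exceedance(0.9, 0.9, rho=0.0)  # independence: about 0.01
p_dep = joint_exceedance(0.9, 0.9, rho=0.8)    # dependence raises joint risk
```

The comparison illustrates why ignoring (time-varying) dependence between flood peak and volume misstates the probability that both exceed their design quantiles.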
Baker, Stuart G
2018-02-20
A surrogate endpoint in a randomized clinical trial is an endpoint that occurs after randomization and before the true, clinically meaningful, endpoint that yields conclusions about the effect of treatment on the true endpoint. A surrogate endpoint can accelerate the evaluation of new treatments but at the risk of misleading conclusions. Therefore, criteria are needed for deciding whether to use a surrogate endpoint in a new trial. For the meta-analytic setting of multiple previous trials, each with the same pair of surrogate and true endpoints, this article formulates 5 criteria for using a surrogate endpoint in a new trial to predict the effect of treatment on the true endpoint in the new trial. The first 2 criteria, which are easily computed from a zero-intercept linear random effects model, involve statistical considerations: an acceptable sample size multiplier and an acceptable prediction separation score. The remaining 3 criteria involve clinical and biological considerations: similarity of biological mechanisms of treatments between the new trial and previous trials, similarity of secondary treatments following the surrogate endpoint between the new trial and previous trials, and a negligible risk of harmful side effects arising after the observation of the surrogate endpoint in the new trial. These 5 criteria constitute an appropriately high bar for using a surrogate endpoint to make a definitive treatment recommendation. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Optimal difference-based estimation for partially linear models
Zhou, Yuejin
2017-12-16
Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
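A classic member of the difference-based family this paper builds on is Rice's first-order difference estimator of the residual variance: because the nonparametric component is smooth, it nearly cancels in adjacent differences. A sketch of that textbook estimator (not the paper's improved regression-of-differences estimator):

```python
import numpy as np

def rice_variance(y):
    """First-order difference estimator of the residual variance
    (Rice, 1984): a smooth trend cancels in first differences, so
    E[(y_{i+1} - y_i)^2] is approximately 2 * sigma^2."""
    d = np.diff(y)
    return np.sum(d**2) / (2.0 * (len(y) - 1))

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 2000)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.5, x.size)  # true sigma^2 = 0.25
sigma2_hat = rice_variance(y)
```

On a fine grid the contribution of the smooth component to the differences is negligible, so the estimate is close to the true noise variance of 0.25.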
DEFF Research Database (Denmark)
Nascimento, Marcelle M; Gordan, Valeria V; Qvist, Vibeke
2010-01-01
The authors conducted a study to identify and quantify the reasons used by dentists in The Dental Practice-Based Research Network (DPBRN) for placing restorations on unrestored permanent tooth surfaces and the dental materials they used in doing so....
Directory of Open Access Journals (Sweden)
Veselinovic Nenad
2005-01-01
Full Text Available The equivalent diversity order of a multiuser detector employing multiple receive antennas and minimum mean squared error (MMSE) processing for frequency-selective channels is decreased if it aims at suppressing unknown cochannel interference (UCCI) while detecting multiple users' signals. This is an unavoidable consequence of linear processing at the receiver. In this paper, we propose a new multiuser signal detection scheme with the aim to preserve the detector's diversity order by taking into account the structure of the UCCI. We use the fact that the structure of the UCCI appears in the probability density function (PDF) of the UCCI plus noise, which can be characterized as multimodal Gaussian. A kernel smoothing PDF estimation based receiver is derived. The PDF estimation can be based on training symbols only (noniterative PDF estimation) or on training symbols as well as feedback from the decoder (iterative PDF estimation). It is verified through simulations that the proposed receiver significantly outperforms the conventional covariance estimation in channels with low frequency selectivity. The iterative PDF estimation significantly outperforms the noniterative PDF estimation-based receiver with minor training overhead.
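The receiver's core ingredient, a kernel-smoothed estimate of the multimodal UCCI-plus-noise pdf built from training samples, can be sketched in one real dimension (the bimodal mixture and bandwidth are illustrative assumptions; the actual receiver operates on complex-valued residuals):

```python
import numpy as np

def kde(samples, x, bandwidth):
    """Gaussian-kernel estimate of the pdf of `samples`, evaluated at x."""
    diffs = (x[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
# interference-plus-noise drawn from a bimodal (multimodal Gaussian) law
s = np.concatenate([rng.normal(-2.0, 0.5, 4000), rng.normal(2.0, 0.5, 4000)])
x = np.array([-2.0, 0.0, 2.0])
p = kde(s, x, bandwidth=0.3)
```

The estimated density is high at the two interference modes and near zero between them, which is exactly the structure a single-Gaussian covariance model cannot represent.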
Vision-Based Position Estimation Utilizing an Extended Kalman Filter
2016-12-01
Master's thesis by Joseph B. Testa III, December 2016; thesis advisor: Vladimir Dobrokhodov. Subject terms: UAV, ROS, extended Kalman filter, Matlab. (Surviving abstract fragment: "...spots" and network relay between the boarding team and ship.)
Time of arrival estimation in pulsar-based navigation systems
Kabakchiev, Chr.; Behar, V.; Buist, P.; Garvanov, I.; Kabakchieva, D.; Bentum, Marinus Jan
2015-01-01
This paper focuses on the Time of Arrival (TOA) estimation problem related to new application of pulsar signals for airplane-based navigation. The aim of the paper is to propose and evaluate a possible algorithm for TOA estimation that consists of epoch folding, filtering, CFAR detection,
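The first stage of the TOA pipeline, epoch folding, reduces a long photon arrival-time record to a pulse profile by folding at the assumed pulse period. A sketch (the period and photon counts are illustrative assumptions, not values from the paper):

```python
import numpy as np

def epoch_fold(toas, period, n_bins=32):
    """Fold photon time-of-arrival samples at the assumed pulse period
    and histogram the phases; a sharp peak marks the pulse phase."""
    phases = np.mod(toas, period) / period
    profile, edges = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return profile, edges

rng = np.random.default_rng(0)
period = 0.0337  # assumed pulse period in seconds (illustrative)
# pulsed photons clustered near phase 0.5, on top of a uniform background
pulsed = (rng.integers(0, 1000, 3000) + 0.5
          + 0.02 * rng.standard_normal(3000)) * period
background = rng.uniform(0.0, 1000 * period, 3000)
profile, _ = epoch_fold(np.concatenate([pulsed, background]), period)
peak_phase = (np.argmax(profile) + 0.5) / 32
```

The folded profile then feeds the filtering and CFAR detection stages; the location of the profile peak carries the TOA information.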
METAPHOR: Probability density estimation for machine learning based photometric redshifts
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2016-01-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as internal engine to
Estimating security betas using prior information based on firm fundamentals
Cosemans, M.; Frehen, R.; Schotman, P.C.; Bauer, R.
2010-01-01
This paper proposes a novel approach for estimating time-varying betas of individual stocks that incorporates prior information based on fundamentals. We shrink the rolling window estimate of beta towards a firm-specific prior that is motivated by asset pricing theory. The prior captures structural
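The shrinkage idea can be sketched as a convex combination of the rolling-window OLS beta and the firm-specific prior (the fixed weight here is an assumption; in the paper the weighting reflects the relative precision of sample and prior information):

```python
import numpy as np

def shrunk_beta(returns_stock, returns_market, beta_prior, w):
    """Rolling-window OLS beta shrunk towards a firm-specific prior:
    beta_hat = w * beta_OLS + (1 - w) * beta_prior, with 0 <= w <= 1."""
    cov = np.cov(returns_stock, returns_market)
    beta_ols = cov[0, 1] / cov[1, 1]
    return w * beta_ols + (1.0 - w) * beta_prior

rng = np.random.default_rng(0)
rm = rng.normal(0.0, 0.01, 60)                # 60 periods of market returns
rs = 1.2 * rm + rng.normal(0.0, 0.02, 60)     # true beta = 1.2, noisy stock
b = shrunk_beta(rs, rm, beta_prior=1.0, w=0.5)
```

With a noisy 60-observation window, the raw OLS beta is highly variable; pulling it halfway towards the fundamentals-based prior stabilizes the estimate.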
Particle filter based MAP state estimation: A comparison
Saha, S.; Boers, Y.; Driessen, J.N.; Mandal, Pranab K.; Bagchi, Arunabha
2009-01-01
MAP estimation is a good alternative to MMSE for certain applications involving nonlinear non Gaussian systems. Recently a new particle filter based MAP estimator has been derived. This new method extracts the MAP directly from the output of a running particle filter. In the recent past, a Viterbi
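The idea of reading a MAP estimate off a running particle filter can be sketched for a scalar model: approximate the prediction density by a kernel estimate over the propagated particles and pick the particle maximizing prediction density times measurement likelihood. This is a simplified illustration of the concept, not the derivation in the cited work; the linear observation model and all constants are assumptions:

```python
import numpy as np

def pf_map_step(particles, weights, y, f, q_std, r_std, rng):
    """One bootstrap particle filter step that also returns a crude MAP
    estimate from the filter's own output."""
    pred = f(particles) + rng.normal(0.0, q_std, particles.size)  # propagate
    lik = np.exp(-0.5 * ((y - pred) / r_std) ** 2)  # y = x + noise assumed
    # kernel estimate of the prediction density at each particle
    h = 1.06 * pred.std() * pred.size ** (-0.2)     # Silverman bandwidth
    dens = (np.exp(-0.5 * ((pred[:, None] - pred[None, :]) / h) ** 2)
            .mean(axis=1) / (h * np.sqrt(2.0 * np.pi)))
    x_map = pred[np.argmax(dens * lik)]             # posterior-mode proxy
    w = weights * lik                                # weight and resample
    w /= w.sum()
    idx = rng.choice(pred.size, pred.size, p=w)
    return pred[idx], np.full(pred.size, 1.0 / pred.size), x_map

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
particles, weights, x_map = pf_map_step(
    particles, weights, y=1.0, f=lambda x: 0.9 * x,
    q_std=0.3, r_std=0.5, rng=rng)
```

For this near-Gaussian step the posterior mode sits between the prior mean (0) and the measurement (1), and the extracted MAP lands close to it.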
Response-Based Estimation of Sea State Parameters
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2007-01-01
Reliable estimation of the on-site sea state parameters is essential to decision support systems for safe navigation of ships. The sea state parameters can be estimated by Bayesian Modelling which uses complex-valued frequency response functions (FRF) to estimate the wave spectrum on the basis...... of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerically generated time series, and the study shows that filtering has an influence......
Estimating High-Frequency Based (Co-) Variances: A Unified Approach
DEFF Research Database (Denmark)
Voev, Valeri; Nolte, Ingmar
We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...
Precision and shortcomings of yaw error estimation using spinner-based light detection and ranging
DEFF Research Database (Denmark)
Kragh, Knud Abildgaard; Hansen, Morten Hartvig; Mikkelsen, Torben
2013-01-01
When extracting energy from the wind using horizontal axis wind turbines, the ability to align the rotor axis with the mean wind direction is crucial. In previous work, a method for estimating the yaw error based on measurements from a spinner mounted light detection and ranging (LIDAR) device wa...
Green's function based density estimation
Energy Technology Data Exchange (ETDEWEB)
Kovesarki, Peter; Brock, Ian C.; Nuncio Quiroz, Adriana Elizabeth [Physikalisches Institut, Universitaet Bonn (Germany)
2012-07-01
A method was developed based on Green's function identities to estimate probability densities. This can be used for likelihood estimation and for binary classification. It offers several advantages over neural networks, boosted decision trees and other regression-based classifiers. For example, it is less prone to overtraining, and it is much easier to combine several samples. Some capabilities are demonstrated using ATLAS data.
Head pose estimation algorithm based on deep learning
Cao, Yuanming; Liu, Yijun
2017-05-01
Head pose estimation is widely used in artificial intelligence, pattern recognition, and intelligent human-computer interaction. A good head pose estimation algorithm should be robust to illumination, noise, identity, occlusion, and other factors, but improving the accuracy and robustness of pose estimation remains a major challenge in computer vision. A method based on deep learning for pose estimation is presented. Deep learning, with its strong learning ability, extracts high-level features of the input image through a series of non-linear operations and then classifies the input image using the extracted features. Such features differ strongly across poses while remaining robust to illumination, identity, occlusion, and other factors. The proposed head pose estimation method is evaluated on the CAS-PEAL data set. Experimental results show that the method effectively improves the accuracy of pose estimation.
Fast LCMV-based Methods for Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll
2013-01-01
peaks and require matrix inversions for each point in the search grid. In this paper, we therefore consider fast implementations of LCMV-based fundamental frequency estimators, exploiting the estimators' inherently low displacement rank of the used Toeplitz-like data covariance matrices, using...... with several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can...... as such either the classic time domain averaging covariance matrix estimator, or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity...
Moddemeijer, R
In the case of two signals with independent pairs of observations (x(n),y(n)) a statistic to estimate the variance of the histogram based mutual information estimator has been derived earlier. We present such a statistic for dependent pairs. To derive this statistic it is necessary to avail of a
Process-based Cost Estimation for Ramjet/Scramjet Engines
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically, the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full-trade study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk, and rather than depending on weight as an input, it estimates weight along with cost and schedule.
Chen, Tingting; Hedman, Lea; Mattila, Petri S.; Jartti, Laura; Jartti, Tuomas; Ruuskanen, Olli; Söderlund-Venermo, Maria; Hedman, Klaus
2012-01-01
Biotin is an essential vitamin that binds streptavidin or avidin with high affinity and specificity. As biotin is a small molecule that can be linked to proteins without affecting their biological activity, biotinylation is applied widely in biochemical assays. In our laboratory, IgM enzyme immunoassays (EIAs) of µ-capture format have been set up against many viruses, using as antigen biotinylated virus like particles (VLPs) detected by horseradish peroxidase-conjugated streptavidin. We recently encountered one serum sample reacting with the biotinylated VLP but not with the unbiotinylated one, suggesting the occurrence of biotin-reactive antibodies in human sera. In the present study, we search the general population (612 serum samples from adults and 678 from children) for IgM antibodies reactive with biotin and develop an indirect EIA for quantification of their levels and assessment of their seroprevalence. These IgM antibodies were present in 3% of adults regardless of age, but were rarely found in children. The adverse effects of the biotin IgM on biotinylation-based immunoassays were assessed, including four in-house and one commercial virus IgM EIAs, showing that biotin IgM antibodies do cause false-positive results. Biotin cannot bind IgM and streptavidin or avidin simultaneously, suggesting that these biotin-interactive compounds compete for the common binding site. In competitive inhibition assays, the affinities of biotin IgM antibodies ranged from 2.1×10−3 to 1.7×10−4 mol/L. This is the first report on biotin antibodies found in humans, providing new information on biotinylation-based immunoassays as well as new insights into the biomedical effects of vitamins. PMID:22879954
Keipert, Peter E
2017-01-01
Historically, hemoglobin-based oxygen carriers (HBOCs) were being developed as "blood substitutes," despite their transient circulatory half-life (~ 24 h) vs. transfused red blood cells (RBCs). More recently, HBOC commercial development focused on "oxygen therapeutic" indications to provide a temporary oxygenation bridge until medical or surgical interventions (including RBC transfusion, if required) can be initiated. This included the early trauma trials with HemAssist ® (BAXTER), Hemopure ® (BIOPURE) and PolyHeme ® (NORTHFIELD) for resuscitating hypotensive shock. These trials all failed due to safety concerns (e.g., cardiac events, mortality) and certain protocol design limitations. In 2008 the Food and Drug Administration (FDA) put all HBOC trials in the US on clinical hold due to the unfavorable benefit:risk profile demonstrated by various HBOCs in different clinical studies in a meta-analysis published by Natanson et al. (2008). During standard resuscitation in trauma, organ dysfunction and failure can occur due to ischemia in critical tissues, which can be detected by the degree of lactic acidosis. SANGART'S Phase 2 trauma program with MP4OX therefore added lactate >5 mmol/L as an inclusion criterion to enroll patients who had lost sufficient blood to cause a tissue oxygen debt. This was key to the successful conduct of their Phase 2 program (ex-US, from 2009 to 2012) to evaluate MP4OX as an adjunct to standard fluid resuscitation and transfusion of RBCs. In 2013, SANGART shared their Phase 2b results with the FDA, and succeeded in getting the FDA to agree that a planned Phase 2c higher dose comparison study of MP4OX in trauma could include clinical sites in the US. Unfortunately, SANGART failed to secure new funding and was forced to terminate development and operations in Dec 2013, even though a regulatory path forward with FDA approval to proceed in trauma had been achieved.
Scatterer Number Density Considerations in Reference Phantom Based Attenuation Estimation
Rubert, Nicholas; Varghese, Tomy
2014-01-01
Attenuation estimation and imaging has the potential to be a valuable tool for tissue characterization, particularly for indicating the extent of thermal ablation therapy in the liver. Often the performance of attenuation estimation algorithms is characterized with numerical simulations or tissue mimicking phantoms containing a high scatterer number density (SND). This ensures an ultrasound signal with a Rayleigh distributed envelope and an SNR approaching 1.91. However, biological tissue often fails to exhibit Rayleigh scattering statistics. For example, across 1,647 ROI's in 5 ex vivo bovine livers we find an envelope SNR of 1.10 ± 0.12 when imaged with the VFX 9L4 linear array transducer at a center frequency of 6.0 MHz on a Siemens S2000 scanner. In this article we examine attenuation estimation in numerical phantoms, TM phantoms with variable SND's, and ex vivo bovine liver prior to and following thermal coagulation. We find that reference phantom based attenuation estimation is robust to small deviations from Rayleigh statistics. However, in tissue with low SND, large deviations in envelope SNR from 1.91 lead to subsequently large increases in attenuation estimation variance. At the same time, low SND is not found to be a significant source of bias in the attenuation estimate. For example, we find the standard deviation of attenuation slope estimates increases from 0.07 dB/cm MHz to 0.25 dB/cm MHz as the envelope SNR decreases from 1.78 to 1.01 when estimating attenuation slope in TM phantoms with a large estimation kernel size (16 mm axially by 15 mm laterally). Meanwhile, the bias in the attenuation slope estimates is found to be negligible (phantom based attenuation estimates in ex vivo bovine liver and thermally coagulated bovine liver. PMID:24726800
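The envelope SNR figure used throughout this abstract is simply the mean-to-standard-deviation ratio of the echo envelope; for fully developed (Rayleigh) speckle it equals sqrt(pi / (4 - pi)), about 1.91, while low scatterer number densities give smaller values such as the 1.10 reported for liver. A quick numerical check:

```python
import numpy as np

def envelope_snr(env):
    """Mean-to-standard-deviation ratio of the echo envelope."""
    return env.mean() / env.std()

rng = np.random.default_rng(0)
# Rayleigh envelope: magnitude of complex Gaussian backscatter (fully
# developed speckle, i.e. high scatterer number density)
iq = rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)
snr = envelope_snr(np.abs(iq))  # close to sqrt(pi / (4 - pi)) = 1.91
```

Departures of this statistic below 1.91 are the diagnostic for sub-Rayleigh scattering conditions under which the attenuation estimation variance grows.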
Naive Probability: Model-based Estimates of Unique Events
2014-05-04
JPD with a coarse scale than with a fine scale. Monte Carlo simulations bear out this phenomenon, and a previous study corroborated this prediction... between their estimates of P(A) and P(B) on at least 50% of trials (Binomial test, p < .0005). They also bear out the restricted nature of system 1...
Vehicle Sideslip Angle Estimation Based on General Regression Neural Network
Directory of Open Access Journals (Sweden)
Wang Wei
2016-01-01
Full Text Available Aiming at accurate estimation of the vehicle's mass-centre sideslip angle, an estimation method based on a general regression neural network (GRNN) and the driver-vehicle closed-loop system is proposed: the vehicle's sideslip angle is treated as a time-series mapping of yaw rate and lateral acceleration; a uniform design method is used to optimize the training samples; the mapping relationship among sideslip angle, yaw rate, and lateral acceleration is built; and the vehicle's sideslip angle is measured experimentally to verify the validity of the method. The estimation results of the neural network and the real-vehicle experiment show the same trend, with a mean error within 10% of the test signal's amplitude. The results show that GRNN can estimate the vehicle's sideslip angle correctly, offering a reference for the state-estimation component of vehicle stability control systems.
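A GRNN is, in essence, Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of training targets governed by a single smoothing parameter. A minimal sketch (the toy mapping standing in for the yaw-rate/lateral-acceleration-to-sideslip relation is an assumption, not the paper's data):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """General regression neural network (Nadaraya-Watson form):
    Gaussian-kernel weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

# toy stand-in for the (yaw rate, lateral accel) -> sideslip mapping
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (400, 2))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1]            # hypothetical smooth mapping
Xq = np.array([[0.2, -0.4]])
pred = grnn_predict(X, y, Xq, sigma=0.15)    # true value here is 0.22
```

Because the GRNN has no iterative training, only the choice of sigma and the training set, it is well suited to the sample-optimization scheme described in the abstract.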
Response-Based Estimation of Sea State Parameters
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2007-01-01
of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerically generated time series, and the study shows that filtering has an influence...... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements it is seen that the wave estimations based on closed-form expressions exhibit a reasonable energy content, but the distribution of energy...
Estimating genetic correlations based on phenotypic data: a ...
Indian Academy of Sciences (India)
The method assumes, among other conditions, that shared environmental effects are absent. [Zintzaras E. 2011 Estimating genetic correlations based on phenotypic data: a simulation-based method. J. Genet. 90, 51–58]
Accurate position estimation methods based on electrical impedance tomography measurements
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivate-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
Vehicle lateral state estimation based on measured tyre forces.
Tuononen, Ari J
2009-01-01
Future active safety systems need more accurate information about the state of vehicles. This article proposes a method to evaluate the lateral state of a vehicle based on measured tyre forces. The tyre forces of two tyres are estimated from optically measured tyre carcass deflections and transmitted wirelessly to the vehicle body. The two remaining tyres are so-called virtual tyre sensors, the forces of which are calculated from the real tyre sensor estimates. The Kalman filter estimator for lateral vehicle state based on measured tyre forces is presented, together with a simple method to define adaptive measurement error covariance depending on the driving condition of the vehicle. The estimated yaw rate and lateral velocity are compared with the validation sensor measurements.
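The estimator described is a standard linear Kalman filter whose measurement error covariance R is re-set each cycle according to the driving condition. A minimal predict-update sketch (the one-state toy model and the noise values are illustrative assumptions, not the paper's vehicle model):

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict-update cycle of a linear Kalman filter. R can be
    supplied fresh on every call, e.g. inflated when tyre-force
    measurements are less reliable (adaptive measurement covariance)."""
    x = F @ x                      # predict state
    P = F @ P @ F.T + Q            # predict covariance
    S = H @ P @ H.T + R            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)        # update state with measurement z
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy 1-state example: lateral velocity from a noisy derived measurement
x = np.array([0.0])
P = np.eye(1)
F = np.eye(1)
Q = 0.01 * np.eye(1)
H = np.eye(1)
for z in [0.5, 0.52, 0.49, 0.51]:
    R = 0.1 * np.eye(1)  # would be grown in harsh manoeuvres (assumption)
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
```

After a few consistent measurements the state estimate settles near the measured level and the covariance P shrinks, which is the behaviour the validation against yaw-rate and lateral-velocity sensors checks.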
Estimation of Compaction Parameters Based on Soil Classification
Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.
2018-02-01
Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and the availability of funds. These problems raise the question of how to estimate soil density with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and the optimum water content (wopt), based on soil classification. Each of 30 samples was tested for its index properties and compaction behaviour. All the data from the laboratory test results were used to estimate the compaction parameter values by linear regression and by the Goswami model. From the results, the soil types were A-4, A-6, and A-7 according to AASHTO and SC, SC-SM, and CL based on USCS. By linear regression, the estimates are γdmax* = 1.862 - 0.005*FINES - 0.003*LL for the maximum dry unit weight and wopt* = -0.607 + 0.362*FINES + 0.161*LL for the optimum water content. By the Goswami model (with equation Y = m*log G + k), the maximum dry unit weight γdmax* is estimated with m = -0.376 and k = 2.482, and the optimum water content wopt* with m = 21.265 and k = -32.421. For both of these equations a 95% confidence interval was obtained.
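The fitted relations above can be applied directly once the fines content and liquid limit are known. A sketch (decimal commas in the abstract are read as decimal points; the input values below are illustrative, not from the paper):

```python
import math

# Fitted linear-regression relations from the abstract
# (FINES = percent fines, LL = liquid limit):
def gamma_dmax_linear(fines, ll):
    """Estimated maximum dry unit weight, 1.862 - 0.005*FINES - 0.003*LL."""
    return 1.862 - 0.005 * fines - 0.003 * ll

def wopt_linear(fines, ll):
    """Estimated optimum water content, -0.607 + 0.362*FINES + 0.161*LL."""
    return -0.607 + 0.362 * fines + 0.161 * ll

def goswami(G, m, k):
    """Goswami model: Y = m * log10(G) + k."""
    return m * math.log10(G) + k

# illustrative inputs (hypothetical soil, not a sample from the study)
g = gamma_dmax_linear(fines=40.0, ll=30.0)        # 1.862 - 0.2 - 0.09
w = wopt_linear(fines=40.0, ll=30.0)              # optimum water content
gd_goswami = goswami(G=100.0, m=-0.376, k=2.482)  # Goswami-model estimate
```

Both routes give a compaction estimate from classification-level data alone, which is the fast, economical screening the study is after.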
Estimating evaporative vapor generation from automobiles based on parking activities
International Nuclear Information System (INIS)
Dong, Xinyi; Tschantz, Michael; Fu, Joshua S.
2015-01-01
A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this new approach to reduce the uncertainties: first, evaporative vapor generation from diurnal parking events is usually calculated based on an estimated average parking duration for the whole fleet, while in this study the vapor generation rate is calculated based on the parking activity distribution. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive the hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then adopted with Wade–Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates can better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5–8% less than that calculated without considering parking activity. - Highlights: • We applied real parking distribution data to estimate evaporative vapor generation. • We applied real hourly temperature data to estimate hourly incremental vapor generation rates. • Evaporative emission for Florence is estimated based on parking distribution and hourly rates.
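The weighted-average calculation described above amounts to weighting the cumulative vapor generated over each parking duration by the frequency of that duration; all numbers below are illustrative, not the paper's Florence data:

```python
# Hourly incremental vapor generation rates (g/h), e.g. derived from hourly
# temperature observations -- illustrative numbers only.
hourly_rate = [0.2, 0.5, 0.8, 0.6, 0.3, 0.1]

# Fraction of parking events lasting 1, 2, ..., 6 hours (sums to 1).
duration_frac = [0.35, 0.25, 0.15, 0.10, 0.10, 0.05]

# Vapor generated by an event of n hours = sum of the first n hourly increments.
cumulative = [sum(hourly_rate[:n + 1]) for n in range(len(hourly_rate))]

# Fleet-weighted average generation per parking event (g).
weighted = sum(f * c for f, c in zip(duration_frac, cumulative))
print(round(weighted, 3))
```

Replacing `duration_frac` with a single fleet-average duration is exactly the simplification the abstract argues against.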
A Kalman-based Fundamental Frequency Estimation Algorithm
DEFF Research Database (Denmark)
Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom
2017-01-01
Fundamental frequency estimation is an important task in speech and audio analysis. Harmonic model-based methods typically have superior estimation accuracy. However, such methods usually as- sume that the fundamental frequency and amplitudes are station- ary over a short time frame. In this paper...... model and formulated as a compact nonlinear matrix form, which is further used to derive an extended Kalman filter. Detailed and continuous fundamental frequency and ampli- tude estimates for speech, the sustained vowel /a/ and solo musical tones with vibrato are demonstrated....
Time of arrival based location estimation for cooperative relay networks
Çelebi, Hasari Burak
2010-09-01
In this paper, we investigate the performance of a cooperative relay network performing location estimation through time of arrival (TOA). We derive the Cramer-Rao lower bound (CRLB) for the location estimates using the relay network. The analysis is extended to obtain the average CRLB considering the signal fluctuations in both relay and direct links. The effects of the channel fading of the relay and direct links, the amplification factor, and the location of the relay node on the average CRLB are investigated. Simulation results confirm that all of these factors affect the accuracy of TOA-based location estimation. ©2010 IEEE.
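The classic single-link CRLB for TOA ranging, which analyses of this kind build on, bounds the timing variance by 1/(8π²β²·SNR), where β is the effective signal bandwidth; the bandwidth and SNR values below are illustrative:

```python
import math

def toa_crlb_range_std(snr_db, beta_hz, c=3e8):
    """Lower bound on the range-estimate standard deviation (m) for TOA:
    var(tau) >= 1 / (8 * pi^2 * beta^2 * SNR), beta = effective bandwidth."""
    snr = 10.0 ** (snr_db / 10.0)
    var_tau = 1.0 / (8.0 * math.pi ** 2 * beta_hz ** 2 * snr)
    return c * math.sqrt(var_tau)

# A 10 MHz effective bandwidth at 20 dB SNR
std_m = toa_crlb_range_std(snr_db=20.0, beta_hz=10e6)
print(round(std_m, 3))
```

Fading and relay amplification enter through the effective SNR, which is why the paper averages the CRLB over the link statistics.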
A Dynamic Travel Time Estimation Model Based on Connected Vehicles
Directory of Open Access Journals (Sweden)
Daxin Tian
2015-01-01
Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models are gradually becoming indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper, a dynamic travel time estimation model is presented that can collect and distribute traffic data based on connected vehicles. To estimate real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results prove the effectiveness of the travel time estimation method.
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm, and Miner's rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time-dependent accumulated damage is assumed linearly proportional to the time-dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull......
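The damage-accumulation step (Rainflow-counted cycles fed into Miner's rule) can be sketched as follows; the S-N curve constants and the cycle histogram are illustrative stand-ins, not the paper's solder joint model:

```python
def cycles_to_failure(stress_range, C=1e12, m=3.0):
    """Basquin/S-N style life curve N = C * S^(-m) (illustrative constants)."""
    return C * stress_range ** (-m)

def miners_damage(cycle_counts):
    """Miner's rule: damage accumulates linearly as sum(n_i / N_i);
    failure is predicted when the sum reaches unity."""
    return sum(n / cycles_to_failure(s) for s, n in cycle_counts)

# (stress range, cycle count) pairs, e.g. binned output of a Rainflow count
histogram = [(50.0, 2.0e5), (80.0, 5.0e4), (120.0, 1.0e4)]
damage = miners_damage(histogram)
print(round(damage, 4))
```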
Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles
Directory of Open Access Journals (Sweden)
Camilo Cortés
2016-01-01
Full Text Available In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, method accuracy is adequate for RAR.
Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles.
Cortés, Camilo; Unzueta, Luis; de Los Reyes-Guzmán, Ana; Ruiz, Oscar E; Flórez, Julián
2016-01-01
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error--the blur-SURE (Stein's unbiased risk estimate)--as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs, involving a scaling factor that controls the blur size. A typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality that is very similar to the one obtained with the exact PSF, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results obtained outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation
Directory of Open Access Journals (Sweden)
Sekhar S Chandra
2004-01-01
Full Text Available We address the problem of estimating the instantaneous frequency (IF) of a real-valued constant-amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum-MSE IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed-window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR) levels.
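The core zero-crossing idea (a sinusoid crosses zero twice per period, so counting crossings in a window yields an average-frequency estimate) can be sketched as follows; this fixed-window sketch does not reproduce the paper's adaptive window selection:

```python
import numpy as np

def zc_frequency(x, fs):
    """Estimate the average frequency in a window from the zero-crossing count:
    a sinusoid crosses zero twice per period, so f ~ crossings / (2 * T)."""
    crossings = np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))
    return crossings * fs / (2.0 * len(x))

fs = 8000.0
t = np.arange(0, 0.1, 1.0 / fs)
x = np.cos(2 * np.pi * 440.0 * t)   # constant-IF test signal
print(round(zc_frequency(x, fs), 1))
```

For a time-varying IF, the paper's method applies a local polynomial fit over an adaptively chosen window instead of a single global count.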
Estimation of Thermal Sensation Based on Wrist Skin Temperatures
Sim, Soo Young; Koh, Myung Jun; Joo, Kwang Min; Noh, Seungwoo; Park, Sangyun; Kim, Youn Ho; Park, Kwang Suk
2016-01-01
Thermal comfort is an essential environmental factor related to quality of life and work effectiveness. We assessed the feasibility of wrist skin temperature monitoring for estimating subjective thermal sensation. We invented a wrist band that simultaneously monitors skin temperatures from the wrist (i.e., the radial artery and ulnar artery regions, and the upper wrist) and the fingertip. Skin temperatures from eight healthy subjects were acquired while thermal sensation varied. To develop a thermal sensation estimation model, the mean skin temperature, temperature gradient, time differential of the temperatures, and average power of the frequency band were calculated. A thermal sensation estimation model using temperatures of the fingertip and wrist showed the highest accuracy (mean root mean square error [RMSE]: 1.26 ± 0.31). An estimation model based on the three wrist skin temperatures showed a slightly better result than the model that used a single fingertip skin temperature (mean RMSE: 1.39 ± 0.18). When a personalized thermal sensation estimation model based on the three wrist skin temperatures was used, the mean RMSE was 1.06 ± 0.29, and the correlation coefficient was 0.89. Thermal sensation estimation technology based on wrist skin temperatures, combined with wearable devices, may facilitate intelligent control of one's thermal environment. PMID:27023538
A method for estimating fetal weight based on body composition.
Bo, Chen; Jie, Yu; Xiu-E, Gao; Gui-Chuan, Fan; Wen-Long, Zhang
2018-04-02
Fetal weight is an important factor in determining the delivery mode of pregnant women, and regular health monitoring shows that fetal weight changes significantly over pregnancy. Conventional methods of fetal weight estimation, namely those based on B-ultrasound, are complicated and costly. In this paper, we propose a new method based on body composition. An abdominal four-segment impedance model of the pregnant woman is first established, together with its calculation method. A body-composition-based method is then given to estimate the fetal weight, with the solution given explicitly. Analyses of clinical data reveal a small error between the estimated and actual values; the error between B-ultrasound and the present method is less than 15%.
A Channelization-Based DOA Estimation Method for Wideband Signals
Directory of Open Access Journals (Sweden)
Rui Guo
2016-07-01
Full Text Available In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods to each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and noise subspaces of the output sub-channels to estimate the DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Moreover, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR demonstrate the performance. The results verify the effectiveness of the proposed method.
International Nuclear Information System (INIS)
Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun
2017-01-01
Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage: unlike operational reliability, there may be initial failures, which are normally neglected in storage reliability models. In this paper, a new integrated technique is proposed to estimate and predict the storage reliability of products with possible initial failures: a non-parametric measure based on the E-Bayesian estimates of current failure probabilities is combined with a parametric measure based on the exponential reliability function. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that the reliability test data of storage products, including items unexamined before and during the storage process, are available for providing more accurate estimates of both the initial failure probability and the storage failure probability. When storage reliability prediction, the main concern in this field, is to be made, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. In the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method, and a detailed comparison between the proposed and traditional methods, examining the rationality of the assessment and prediction of storage reliability, is investigated. The results should be useful for planning a storage environment, making decisions concerning the maximum length of storage, and identifying production quality.
A New Missing Values Estimation Algorithm in Wireless Sensor Networks Based on Convolution
Directory of Open Access Journals (Sweden)
Feng Liu
2013-04-01
Full Text Available Nowadays, with the rapid development of Internet of Things (IoT) applications, missing data have become very common in wireless sensor networks. This problem directly threatens the stability and usability of IoT applications built on wireless sensor networks. How to estimate missing values has attracted wide interest, and some solutions have been proposed. Unlike previous works, in this paper we propose a new convolution-based missing value estimation algorithm. Convolution, which is usually used in signal and image processing, can also be a practical and efficient way to estimate missing sensor data. The results show that the proposed algorithm is practical and effective, and can estimate missing values accurately.
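A minimal version of convolution-style gap filling (weighting a missing reading's neighbours with a smoothing kernel) might look as follows; the kernel and readings are illustrative, not the algorithm from the paper:

```python
import numpy as np

def fill_missing(series, kernel=(0.25, 0.5, 0.25)):
    """Estimate missing readings (NaN) from their neighbours using a
    smoothing kernel, renormalised over the values actually present."""
    filled = series.copy()
    k = np.array(kernel)
    for i in np.flatnonzero(np.isnan(series)):
        window = series[max(i - 1, 0):i + 2]      # neighbour window around gap
        weights = k[:len(window)]
        mask = ~np.isnan(window)                  # ignore other gaps
        filled[i] = np.sum(window[mask] * weights[mask]) / np.sum(weights[mask])
    return filled

# Temperature trace with one dropped reading
readings = np.array([20.0, 20.4, np.nan, 21.2, 21.6])
print(fill_missing(readings))
```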
DEFF Research Database (Denmark)
Jimenez Mena, Belen; Verrier, Etienne; Hospital, Frederic
We performed a simulation study of several estimators of the effective population size (Ne): NeH = estimator based on the rate of decrease in heterozygosity; NeT = estimator based on the temporal method; NeLD = linkage disequilibrium-based method. We first focused on NeH, which presented an increase in the variability of values over time. The distance from the mean and the median to the true Ne increased over time too. This was caused by the fixation of alleles through time due to genetic drift and the changes in the distribution of allele frequencies. We compared the three estimators of Ne under scenarios of 3 and 20 bi-allelic loci. Increasing the number of loci largely improved the performance of NeT and NeLD. We highlight the value of NeT and NeLD when large numbers of bi-allelic loci are available, which is nowadays the case for SNP markers.
HydrogeoEstimatorXL: an Excel-based tool for estimating hydraulic gradient magnitude and direction
Devlin, J. F.; Schillig, P. C.
2017-05-01
HydrogeoEstimatorXL is a free software tool for the interpretation of flow systems based on spatial hydrogeological field data from multi-well networks. It runs on the familiar Excel spreadsheet platform. The program accepts well location coordinates and hydraulic head data, and returns an analysis of the areal flow system in two dimensions based on (1) a single best-fit plane of the potentiometric surface and (2) three-point estimators, i.e., well triplets assumed to bound planar sections of the potentiometric surface. The software produces graphical outputs including histograms of hydraulic gradient magnitude and direction, groundwater velocity (based on site-average hydraulic properties), as well as mapped renditions of the estimator triangles and the velocity vectors associated with them. Within the software, a transect can be defined and the mass discharge of a groundwater contaminant crossing the transect can be estimated. This kind of analysis is helpful in gaining an overview of a site's hydrogeology, for problem definition, and as a review tool to check the reasonableness of other independent calculations.
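The three-point estimator mentioned above fits a plane through a well triplet; a minimal sketch, with hypothetical well coordinates and heads, is:

```python
import numpy as np

def three_point_gradient(wells):
    """Fit the plane h = a*x + b*y + c through three wells and return the
    hydraulic gradient magnitude and the flow direction in degrees from the
    +x axis. Groundwater flows down-gradient, i.e. along -(a, b)."""
    A = np.array([[x, y, 1.0] for x, y, _ in wells])
    h = np.array([hh for _, _, hh in wells])
    a, b, _ = np.linalg.solve(A, h)
    magnitude = float(np.hypot(a, b))
    direction = float(np.degrees(np.arctan2(-b, -a)))  # down-gradient bearing
    return magnitude, direction

# Well triplet: (x [m], y [m], head [m]) -- illustrative values
mag, ddir = three_point_gradient([(0.0, 0.0, 10.0),
                                  (100.0, 0.0, 9.8),
                                  (0.0, 100.0, 10.0)])
print(round(mag, 4), round(ddir, 1))
```

Repeating this over every well triplet in a network yields the histograms of gradient magnitude and direction the tool reports.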
Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM
Sheng, Hanlin; Zhang, Tianhong
2017-08-01
In view of the necessity of a highly precise and reliable thrust estimator to achieve direct thrust control of an aircraft engine, a GSA-LSSVM-based thrust estimator design solution is proposed, built on support vector regression (SVR), the least squares support vector machine (LSSVM), and a new optimization algorithm, the gravitational search algorithm (GSA), through integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and gives the developed model better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfils the need for direct thrust control of aircraft engines.
Power system dynamic state estimation using prediction based evolutionary technique
International Nuclear Information System (INIS)
Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan
2016-01-01
In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE self-adaptive differential evolution with prediction-based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions, on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance. - Highlights: • To estimate the states of the power system under a dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
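Brown's double exponential smoothing, used above at the prediction step, can be sketched as follows; the smoothing constant and data are illustrative:

```python
def brown_des(xs, alpha=0.5):
    """Brown's double exponential smoothing: a one-step-ahead forecast built
    from two stacked exponential smoothers (level + trend)."""
    s1 = s2 = xs[0]
    for x in xs:
        s1 = alpha * x + (1 - alpha) * s1    # first smoother
        s2 = alpha * s1 + (1 - alpha) * s2   # smoother of the smoother
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + trend                     # forecast for the next step

# A noiseless linear trend: the forecast converges toward the extrapolated line
data = [1.0, 2.0, 3.0, 4.0, 5.0]
print(round(brown_des(data), 3))
```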
Klatser, P. R.; de Wit, M. Y.; Fajardo, T. T.; Cellona, R. V.; Abalos, R. M.; de la Cruz, E. C.; Madarang, M. G.; Hirsch, D. S.; Douglas, J. T.
1989-01-01
Thirty-five previously untreated lepromatous patients receiving dapsone-based therapy were monitored throughout their 5-year period of treatment by serology and by pathology. Sequentially collected sera were used to evaluate the usefulness of four Mycobacterium leprae antigens as used in ELISA to
Deformation-Based Atrophy Estimation for Alzheimer’s Disease
DEFF Research Database (Denmark)
Pai, Akshay Sadananda Uppinakudru
Alzheimer’s disease (AD) - the most common form of dementia - is a term used for accelerated loss of memory and cognitive abilities severe enough to hamper day-to-day activities. One of the most globally accepted markers for AD is atrophy, mainly in the brain parenchyma. The goal of the PhD project...... is to develop a deformation-based atrophy estimation pipeline that is reliable and stable. We investigate the application of image registration using freeform deformation and stationary velocity fields in atrophy estimation. In the process, we propose a new multi-scale model for the transformation model...... and a new way to estimate atrophy from a deformation field. We demonstrate the performance of the proposed solution by applying it to the publicly available Alzheimer’s Disease Neuroimaging Initiative (ADNI) data and comparing it to existing state-of-the-art atrophy estimation methods....
Dynamic Mode Decomposition based on Kalman Filter for Parameter Estimation
Shibata, Hisaichi; Nonomura, Taku; Takaki, Ryoji
2017-11-01
With the development of computational fluid dynamics, large-scale data can now be obtained. In order to model physical phenomena from such data, it is necessary to extract features of the flow field. Dynamic mode decomposition (DMD) is a method which meets this requirement: DMD can compute the dominant eigenmodes of a flow field by approximating the system matrix. From this point of view, DMD can be considered as parameter estimation of the system matrix. To estimate such parameters, we propose a novel method based on Kalman filtering. Our numerical experiments indicate that the proposed method can estimate the parameters more accurately than standard DMD methods. With this method, it is also possible to improve the parameter estimation accuracy if the characteristics of the noise acting on the system are known.
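The system-matrix view of DMD can be sketched with the standard SVD-based formulation (not the paper's Kalman-filter variant); the two-state decaying oscillator below is a synthetic test signal:

```python
import numpy as np

def dmd_eigs(snapshots, r=2):
    """Standard (SVD-based) DMD: estimate the eigenvalues of the linear
    operator A mapping each snapshot to the next, via a rank-r projection."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# Two decaying oscillators sampled in time (rows: states, cols: snapshots)
t = np.arange(30)
data = np.vstack([np.cos(0.3 * t) * 0.95 ** t,
                  np.sin(0.3 * t) * 0.95 ** t])
eigs = dmd_eigs(data, r=2)
print(np.round(np.abs(eigs), 3))   # eigenvalue magnitudes ~ the 0.95 decay
```

The paper's contribution is to treat the entries of `A_tilde` as states to be tracked by a Kalman filter, which helps when the snapshots are noisy.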
Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration
Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.
2017-04-01
Groundwater movement is influenced by several factors and processes in the hydrological cycle, of which recharge is of high relevance. Since the amount of extractable aquifer water directly relates to the recharge amount, estimation of recharge is a prerequisite for groundwater resources management. Recharge is highly affected by water loss mechanisms, the major one being actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the impact of ETa on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of SEBAL-based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable; as such, the results could not serve to calibrate root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent recharge estimation in the first layer. The highest recharge estimated flux was for autumn
A Web-Based System for Bayesian Benchmark Dose Estimation.
Shao, Kan; Shapiro, Andrew J
2018-01-11
Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for the BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with the U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations. The BBMD system is a useful alternative tool for estimating BMDs, with additional functionalities for BMD analysis based on the most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable, and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend toward probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
Bases for the Creation of Electric Energy Price Estimate Model
International Nuclear Information System (INIS)
Toljan, I.; Klepo, M.
1995-01-01
The paper presents the basic principles for the creation and introduction of a new model for electric energy price estimation and its significant influence on the functioning of the tariff system. It also reviews the model presently used for electric energy price estimation, which is based on objectivized values of electric energy plants and of production, transmission, and distribution facilities, and proposes changes that would bring functional and organizational improvements to the electric energy system, the most complex subsystem of the whole power system. The model rests on the substantial and functional connection of the optimization and analysis system with economic dispatching of electric energy, including marginal cost estimation and its influence on the tariff system, as the main means of achieving better functioning quality of the electric energy system. (author). 10 refs., 2 figs
Soil Erosion Estimation Using Grid-based Computation
Directory of Open Access Journals (Sweden)
Josef Vlasák
2005-06-01
Full Text Available Soil erosion estimation is an important part of a land consolidation process. The universal soil loss equation (USLE) was presented by Wischmeier and Smith. The USLE computation uses several factors, namely R – rainfall factor, K – soil erodibility, L – slope length factor, S – slope gradient factor, C – cropping management factor, and P – erosion control management factor. The L and S factors are usually combined into one LS factor – the topographic factor. The single factors are determined from several sources, such as a DTM (Digital Terrain Model), BPEJ – soil type map, aerial and satellite images, etc. A conventional approach to the USLE computation, which is widely used in the Czech Republic, is based on the selection of characteristic profiles for which all above-mentioned factors must be determined. The result (G – annual soil loss) of such a computation is then applied to a whole area (slope) of interest. Another approach to the USLE computation uses grids as the main data structure. A prerequisite for a grid-based USLE computation is that each of the above-mentioned factors exists as a separate grid layer. The crucial step in this computation is the selection of an appropriate grid resolution (grid cell size). A large cell size can cause an undesirable precision degradation; too small a cell size can noticeably slow down the whole computation. Provided that the cell size is derived from the sources' precision, the appropriate cell size for the Czech Republic varies from 30 m to 50 m. In some cases, especially when new surveying was done, grid computations can be performed with higher accuracy, i.e. with a smaller grid cell size. In such cases, we have proposed a new method using a two-step computation. The first step uses a bigger cell size and is designed to identify higher erosion spots. The second step then uses a smaller cell size but performs the computation only for the area identified in the previous step. This decomposition allows a
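The grid-based USLE computation reduces to an element-wise product of the factor rasters; a minimal sketch with illustrative factor values:

```python
import numpy as np

# USLE on a grid: per-cell annual soil loss G = R * K * LS * C * P.
# Each factor is a raster layer of identical shape (values are illustrative).
shape = (3, 3)
R  = np.full(shape, 40.0)            # rainfall factor
K  = np.full(shape, 0.3)             # soil erodibility
LS = np.array([[0.5, 0.8, 1.2],      # topographic factor (varies with slope)
               [0.6, 1.0, 1.5],
               [0.7, 1.1, 1.9]])
C  = np.full(shape, 0.2)             # cropping management factor
P  = np.full(shape, 1.0)             # erosion control factor

G = R * K * LS * C * P               # element-wise: one soil-loss value per cell
print(np.round(G, 2))
```

The two-step refinement described above would re-run this product at a finer cell size only where the coarse `G` exceeds an erosion threshold.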
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
“blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research
The observer-based synchronization and parameter estimation of a ...
Indian Academy of Sciences (India)
Haipeng Su
2017-10-31
Oct 31, 2017 ... Chaotic system; observer-based synchronization; parameter estimation; single output. PACS No. 05.45.Gg. 1. Introduction. Chaos is a widespread phenomenon occurring in many nonlinear systems, such as communication system, meteorological system etc. Since Pecora and Carroll. [1] developed a ...
Estimate of Water Residence Times in Tudor Creek, Kenya Based ...
African Journals Online (AJOL)
Runoff in general was also too small to give reliable rating curves (correlation between rainfall and river runoff). For this reason, heat conservation was used for the calculation of water exchange. Although estimates of sea surface heat fluxes were based on coarse global climatology data with large seasonal variations in the ...
Islanding detection scheme based on adaptive identifier signal estimation method.
Bakhshi, M; Noroozian, R; Gharehpetian, G B
2017-11-01
This paper proposes a novel passive anti-islanding method for both inverter-based and synchronous machine-based distributed generation (DG) units. Unfortunately, when the active/reactive power mismatches are near zero, the majority of passive anti-islanding methods cannot detect the islanding situation correctly. This study introduces a new islanding detection method based on an exponentially damped signal estimation method. The proposed method uses an adaptive identifier to estimate the frequency deviation of the point of common coupling (PCC) link as a target signal, and can detect the islanding condition even with a near-zero active power imbalance. The main advantage of the adaptive identifier over other signal estimation methods is its small sampling window. The adaptive-identifier-based islanding detection method introduces a new detection index, termed the decision signal, obtained by estimating the oscillation frequency of the PCC frequency. In islanding conditions, the oscillation frequency of the PCC frequency reaches zero, so threshold setting for the decision signal is not a tedious job. Non-islanding transient events that can cause a significant deviation in the PCC frequency are considered in the simulations; these include different types of faults, load changes, capacitor bank switching, and motor starting. Further, for islanding events, the capability of the proposed islanding detection method is verified with near-zero active power mismatches. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Ekman estimates of upwelling at cape columbine based on ...
African Journals Online (AJOL)
Ekman estimates of upwelling at cape columbine based on measurements of longshore wind from a 35-year time-series. AS Johnson, G Nelson. Abstract. Cape Columbine is a prominent headland on the south-west coast of Africa at approximately 32°50´S, where there is a substantial upwelling tongue, enhancing the ...
An Approach to Quality Estimation in Model-Based Development
DEFF Research Database (Denmark)
Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter
2004-01-01
We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...
Estimating genetic correlations based on phenotypic data: a ...
Indian Academy of Sciences (India)
Knowledge of genetic correlations is essential to understand the joint evolution of traits through correlated responses to selection, a difficult task that is seldom very precise even with easy-to-breed species. Here, a simulation-based method to estimate genetic correlations and genetic covariances that relies only on ...
Estimating spacecraft attitude based on in-orbit sensor measurements
DEFF Research Database (Denmark)
Jakobsen, Britt; Lyn-Knudsen, Kevin; Mølgaard, Mathias
2014-01-01
filter (EKF) is used for quaternion-based attitude estimation. A Simulink simulation environment developed for AAUSAT3, containing a "truth model" of the satellite and the orbit environment, is used to test the performance. The performance is tested using different sensor noise parameters obtained both...
parameter extraction and estimation based on the pv panel outdoor
African Journals Online (AJOL)
PV panel under varying weather conditions to estimate the PV parameters. Outdoor performance of the PV module (AP-PM-15) was carried out several times. The .... Performance Analysis of Different Photovoltaic Technologies Based on MATLAB Simulation. In Northwest University Science, Faculty of Science Annual.
Fatigue Damage Estimation and Data-based Control for Wind Turbines
DEFF Research Database (Denmark)
Barradas Berglind, Jose de Jesus; Wisniewski, Rafal; Soltani, Mohsen
2015-01-01
The focus of this work is on fatigue estimation and data-based controller design for wind turbines. The main purpose is to include a model of the fatigue damage of the wind turbine components in the controller design and synthesis process. This study addresses an online fatigue estimation method based on hysteresis operators, which can be used in control loops. The authors propose a data-based model predictive control (MPC) strategy that incorporates an online fatigue estimation method through the objective function, where the ultimate goal is to reduce the fatigue damage of the wind turbine components. The outcome is an adaptive or self-tuning MPC strategy for wind turbine fatigue damage reduction, which relies on parameter identification on previous measurement data. The results of the proposed strategy are compared with a baseline model predictive controller.
Robustifying Correspondence Based 6D Object Pose Estimation
DEFF Research Database (Denmark)
Hietanen, Antti; Halme, Jussi; Buch, Anders Glent
2017-01-01
We propose two methods to robustify point correspondence based 6D object pose estimation. The first method, curvature filtering, is based on the assumption that low curvature regions provide false matches, and removing points in these regions improves robustness. The second method, region pruning, ... For the experiments, we evaluated three correspondence selection methods, Geometric Consistency (GC) [1], Hough Grouping (HG) [2], and Search of Inliers (SI) [3], and report systematic improvements for their robustified versions on two distinct datasets.
Estimation of pump operational state with model-based methods
International Nuclear Information System (INIS)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
2010-01-01
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
Directory of Open Access Journals (Sweden)
Luis F López-Cortés
Full Text Available Significant controversy still exists about ritonavir-boosted protease inhibitor monotherapy (mtPI/rtv) as a simplification strategy, used up to now to treat patients who have not experienced previous virological failure (VF) while on protease inhibitor (PI)-based regimens. We have evaluated the effectiveness of two mtPI/rtv regimens in an actual clinical practice setting, including patients who had experienced previous VF on PI-based regimens. This retrospective study analyzed 1060 HIV-infected patients with undetectable viremia who were switched to lopinavir/ritonavir or darunavir/ritonavir monotherapy. In cases in which the patient had previously experienced VF while on a PI-based regimen, the absence of major HIV protease resistance mutations to lopinavir or darunavir, respectively, was mandatory. The primary endpoint of this study was the percentage of participants with virological suppression after 96 weeks according to intention-to-treat analysis (non-complete/missing = failure). A total of 1060 patients were analyzed, including 205 with previous VF on PI-based regimens, 90 of whom were on complex therapies due to extensive resistance. The rates of treatment effectiveness (intention-to-treat analysis) and virological efficacy (on-treatment analysis) at week 96 were 79.3% (95% CI, 76.8-81.8) and 91.5% (95% CI, 89.6-93.4), respectively. No relationships were found between VF and earlier VF on PI-based regimens, the presence of major or minor protease resistance mutations, the previous time on viral suppression, CD4+ T-cell nadir, or HCV coinfection. Genotypic resistance tests were available for 49 of the 74 patients with VF, and only four patients presented new major protease resistance mutations. Switching to mtPI/rtv achieves sustained virological control in most patients, even in those with previous VF on PI-based regimens, as long as no major resistance mutations are present for the administered drug.
ACCES: Offline Accuracy Estimation for Fingerprint-Based Localization
DEFF Research Database (Denmark)
Nikitin, Artyom; Laoudias, Christos; Chatzimilioudis, Georgios
2017-01-01
In this demonstration we present ACCES, a novel framework that enables quality assessment of arbitrary fingerprint maps and offline accuracy estimation for the task of fingerprint-based indoor localization. Our framework considers the collected fingerprints disregarding the physical origin of the data. First, it applies a widely used statistical instrument, namely Gaussian Process Regression (GPR), for interpolation of the fingerprints. Then, to estimate the best achievable localization accuracy at any location, it utilizes the Cramer-Rao Lower Bound (CRLB) with the interpolated data as an input...
Artificial Neural Network Based State Estimators Integrated into Kalmtool
DEFF Research Database (Denmark)
Bayramoglu, Enis; Ravn, Ole; Poulsen, Niels Kjølstad
2012-01-01
In this paper we present a toolbox enabling easy evaluation and comparison of different filtering algorithms. The toolbox is called Kalmtool and is a set of MATLAB tools for state estimation of nonlinear systems. The toolbox now contains functions for Artificial Neural Network Based State Estimation, for the DD1 and DD2 filters, and for Unscented Kalman filters and several versions of particle filters. The toolbox requires MATLAB version 7, but no additional toolboxes are required.
Fault Severity Estimation of Rotating Machinery Based on Residual Signals
Directory of Open Access Journals (Sweden)
Fan Jiang
2012-01-01
Full Text Available Fault severity estimation is an important part of a condition-based maintenance system, which can monitor the performance of an operating machine and enhance its level of safety. In this paper, a novel method based on statistical properties and residual signals is developed for estimating the fault severity of rotating machinery. In the first stage, the fast Fourier transform (FFT) is applied to extract the so-called multifrequency-band energy (MFBE) from the vibration signals of rotating machinery at different fault severity levels. Since these features usually differ across working conditions with different fault sensitivities, a sensitive-feature-selection algorithm is defined in the second stage to construct the feature matrix and calculate the statistical parameter (the mean). In the last stage, the residual signals computed by the zero space vector are used to estimate the fault severity. Simulation and experimental results reveal that the proposed method based on statistics and residual signals is effective and feasible for estimating the severity of a rotating machine fault.
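The first-stage MFBE extraction can be sketched as follows. The band count, tone frequencies, and sampling rate below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def multiband_energy(signal, n_bands=8):
    """Split the FFT power spectrum into equal-width bands and return
    the energy in each band (a simple MFBE-style feature vector)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.sum() for band in bands])

# Synthetic vibration signal: a 50 Hz rotation tone plus a weaker
# fault-related 320 Hz component and noise (frequencies are illustrative).
fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 320 * t)
x += 0.1 * np.random.default_rng(1).standard_normal(t.size)

mfbe = multiband_energy(x)
print(mfbe.round(1))
```

A fault that grows in severity would shift energy between bands, which is what the sensitive-feature selection in the second stage exploits.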
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, there have occurred system failures in many power systems all over the world. They have resulted in a lack of power supply to a large number of recipients. To minimize the risk of occurrence of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, the current base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. In the paper, there is presented a method for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as a mean square error for deviations between the measurement waveforms and the waveforms calculated based on the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. In the paper, there is described a filter system used for filtering the noisy measurement waveforms. The calculation results of the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
Ensemble-based observation impact estimates using the NCEP GFS
Directory of Open Access Journals (Sweden)
Yoichiro Ota
2013-09-01
Full Text Available The impacts of the assimilated observations on the 24-hour forecasts are estimated with the ensemble-based method proposed by Kalnay et al., using an ensemble Kalman filter (EnKF). This method estimates the relative impact of observations in data assimilation, similar to the adjoint-based method proposed by Langland and Baker, but without using the adjoint model. It is implemented on the National Centers for Environmental Prediction (NCEP) Global Forecasting System EnKF, which has been used as part of the operational global data assimilation system at NCEP since May 2012. The result quantifies the overall positive impacts of the assimilated observations and the relative importance of the satellite radiance observations compared to other types of observations, especially for the moisture fields. A simple moving localisation based on the average wind, although not optimal, seems to work well. The method is also used to identify the cause of local forecast failure cases in the 24-hour forecasts. Data-denial experiments of the observations identified as producing a negative impact are performed, and forecast errors are reduced as estimated, thus validating the impact estimation.
Estimating evaporative vapor generation from automobiles based on parking activities.
Dong, Xinyi; Tschantz, Michael; Fu, Joshua S
2015-07-01
A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared to existing methods, two improvements are applied in this new approach to reduce the uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet, while in this study the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then adopted with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5-8% less than a calculation that does not consider parking activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
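The weighting the abstract describes (cumulative hourly generation weighted by the parking-duration distribution) might look like the following. All rates and shares are made-up placeholders, not the study's data:

```python
# Fraction of parking events lasting 1, 2, ..., 6 hours (illustrative).
duration_share = [0.35, 0.25, 0.15, 0.10, 0.08, 0.07]

# Hourly incremental vapor generation (g/h), driven by the hourly
# temperature rise; values are hypothetical.
hourly_increment = [0.8, 1.1, 1.3, 1.0, 0.6, 0.3]

# Cumulative vapor generated after each hour of parking.
cumulative, total = [], 0.0
for inc in hourly_increment:
    total += inc
    cumulative.append(total)

# Weight each duration bin's cumulative generation by its share of events.
weighted = sum(s * c for s, c in zip(duration_share, cumulative))
print(f"weighted vapor generation: {weighted:.2f} g/event")
```

Using the duration distribution rather than a single fleet-average duration is what produces the 5-8% difference the authors report.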
Estimation of base of middle phalanx size using anatomical landmarks.
Janssen, Stein J; Teunis, Teun; ter Meulen, Dirk P; Hageman, Michiel G J S; Ring, David
2014-08-01
To determine whether there is a measurable and reproducible relationship between the articular surface size of the middle phalanx base and the size of the middle phalanx head and proximal phalanx length of the same finger. The size of the articular surface of the middle phalanx base, the size of the middle phalanx head, and the proximal phalanx length were measured on 84 lateral radiographs by 3 observers. The ratio of the articular surface size of the middle phalanx base to the proximal phalanx length of the same finger was 0.17. The ratio of the articular surface size of the middle phalanx base to the size of the middle phalanx head of the same finger was 1.34. The intraclass correlation coefficient (ICC) among the 3 raters was 0.99 for proximal phalanx length and 0.88 for the size of the middle phalanx head. Knowledge of this relationship and these ratios allows accurate estimation of the percentage of articular surface involvement in a fracture of the middle phalanx base. The ICC was highest for measuring proximal phalanx length, making it the most reliable measurement for estimating the articular surface size. This quantitative estimate may be useful for clinical research and is applicable to patient care. Copyright © 2014 American Society for Surgery of the Hand. All rights reserved.
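Applied to a concrete case, the reported 0.17 ratio supports a simple estimate like the following; the example measurements (a 2 mm fragment, a 40 mm proximal phalanx) are hypothetical:

```python
# Ratios reported in the abstract.
BASE_TO_P1_LENGTH = 0.17   # base articular size / proximal phalanx length
BASE_TO_HEAD = 1.34        # base articular size / middle phalanx head size

def estimated_base_size(proximal_phalanx_length_mm):
    """Estimate middle-phalanx base articular size from P1 length."""
    return BASE_TO_P1_LENGTH * proximal_phalanx_length_mm

def fracture_involvement(fragment_mm, proximal_phalanx_length_mm):
    """Percent of the articular surface involved in a base fracture."""
    return 100 * fragment_mm / estimated_base_size(proximal_phalanx_length_mm)

print(f"{fracture_involvement(2.0, 40.0):.0f}% articular involvement")
```

The P1-length ratio is the more useful one clinically, since the abstract reports it had the highest inter-rater reliability (ICC 0.99).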
DEFF Research Database (Denmark)
Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders
is linked to physiology more directly than is age, and can be measured easier with less cost. In this work we used a single-species size-based model to estimate the fishing mortality (F) and the status of the stock, quantified by the ratio F/Fmsy between actual fishing mortality and the fishing mortality...
New, national bottom-up estimate for tree-based biological ...
Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha-1 yr-1) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating
Template-Based Estimation of Time-Varying Tempo
Directory of Open Access Journals (Sweden)
Peeters Geoffroy
2007-01-01
Full Text Available We present a novel approach to automatic estimation of tempo over time. This method aims at detecting tempo at the tactus level for percussive and nonpercussive audio. The front-end of our system is based on a proposed reassigned spectral energy flux for the detection of musical events. The dominant periodicities of this flux are estimated by a proposed combination of discrete Fourier transform and frequency-mapped autocorrelation function. The most likely meter, beat, and tatum over time are then estimated jointly using proposed meter/beat subdivision templates and a Viterbi decoding algorithm. The performances of our system have been evaluated on four different test sets among which three were used during the ISMIR 2004 tempo induction contest. The performances obtained are close to the best results of this contest.
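A minimal sketch of the periodicity-detection step on an onset/energy flux, using a plain autocorrelation in place of the paper's frequency-mapped autocorrelation and meter/beat template decoding:

```python
import numpy as np

def estimate_tempo(onset_flux, frame_rate, bpm_range=(60, 200)):
    """Pick the dominant periodicity of an onset flux by autocorrelation
    and convert the winning lag to beats per minute."""
    flux = onset_flux - onset_flux.mean()
    ac = np.correlate(flux, flux, mode="full")[flux.size - 1:]
    lo = int(frame_rate * 60 / bpm_range[1])   # shortest beat period (frames)
    hi = int(frame_rate * 60 / bpm_range[0])   # longest beat period (frames)
    best = lo + np.argmax(ac[lo:hi + 1])
    return 60.0 * frame_rate / best

# Synthetic flux: impulses every 0.5 s (120 BPM) at a 100 Hz frame rate.
frame_rate = 100
flux = np.zeros(1000)
flux[::50] = 1.0
print(round(estimate_tempo(flux, frame_rate)))   # -> 120
```

The full system additionally tracks how this periodicity changes over time and jointly decodes meter, beat, and tatum with a Viterbi pass.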
Event-based state estimation a stochastic perspective
Shi, Dawei; Chen, Tongwen
2016-01-01
This book explores event-based estimation problems. It shows how several stochastic approaches are developed to maintain estimation performance when sensors perform their updates at slower rates only when needed. The self-contained presentation makes this book suitable for readers with no more than a basic knowledge of probability analysis, matrix algebra and linear systems. The introduction and literature review provide information, while the main content deals with estimation problems from four distinct angles in a stochastic setting, using numerous illustrative examples and comparisons. The text elucidates both theoretical developments and their applications, and is rounded out by a review of open problems. This book is a valuable resource for researchers and students who wish to expand their knowledge and work in the area of event-triggered systems. At the same time, engineers and practitioners in industrial process control will benefit from the event-triggering technique that reduces communication costs ...
Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations
Kopka, Ryszard
2017-12-01
In this paper, new results on using only voltage measurements on supercapacitor terminals for estimation of accumulated energy are presented. For this purpose, a study based on application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. Parameter estimates of the models are then used to assess the amount of the energy accumulated in supercapacitor. The obtained results are compared with energy determined experimentally by measuring voltage and current on supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. Very high consistency between estimated and experimental results fully confirm suitability of the proposed approach and thus applicability of the fractional calculus to modelling of supercapacitor energy storage.
An RSS based location estimation technique for cognitive relay networks
Qaraqe, Khalid A.
2010-11-01
In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
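As background, a minimal sketch of RSS-based ranging under a log-distance path-loss model, one building block of such localization schemes. The model parameters below are assumptions for illustration; the paper's actual estimator (with the relayed signal and CRLB analysis) is more involved:

```python
# Invert the log-distance path-loss model
#   PL(d) = PL(d0) + 10 * n * log10(d / d0)
# to recover distance from a received signal strength (RSS) reading.
def rss_to_distance(rss_dbm, tx_power_dbm=0.0, path_loss_exp=3.0,
                    d0=1.0, pl_d0=40.0):
    path_loss = tx_power_dbm - rss_dbm           # total attenuation (dB)
    return d0 * 10 ** ((path_loss - pl_d0) / (10 * path_loss_exp))

# A signal received at -70 dBm under the assumed parameters:
print(f"estimated distance: {rss_to_distance(-70.0):.1f} m")   # -> 10.0 m
```

Combining several such range estimates (here, from the direct and relayed paths) is what yields the x and y coordinates of the source.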
Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations.
Kopka, Ryszard
2017-12-22
In this paper, new results on using only voltage measurements on supercapacitor terminals for estimation of accumulated energy are presented. For this purpose, a study based on application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. Parameter estimates of the models are then used to assess the amount of the energy accumulated in supercapacitor. The obtained results are compared with energy determined experimentally by measuring voltage and current on supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. Very high consistency between estimated and experimental results fully confirm suitability of the proposed approach and thus applicability of the fractional calculus to modelling of supercapacitor energy storage.
MRI-based intelligence quotient (IQ) estimation with sparse learning.
Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang
2015-01-01
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of large dataset with consistent scanning protocols for the IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. In this way, there is large variability in these different datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field according to our knowledge.
Marker-based estimation of heritability in immortal populations.
Kruijer, Willem; Boer, Martin P; Malosetti, Marcos; Flood, Pádraic J; Engel, Bas; Kooke, Rik; Keurentjes, Joost J B; van Eeuwijk, Fred A
2015-02-01
Heritability is a central parameter in quantitative genetics, from both an evolutionary and a breeding perspective. For plant traits heritability is traditionally estimated by comparing within- and between-genotype variability. This approach estimates broad-sense heritability and does not account for different genetic relatedness. With the availability of high-density markers there is growing interest in marker-based estimates of narrow-sense heritability, using mixed models in which genetic relatedness is estimated from genetic markers. Such estimates have received much attention in human genetics but are rarely reported for plant traits. A major obstacle is that current methodology and software assume a single phenotypic value per genotype, hence requiring genotypic means. An alternative that we propose here is to use mixed models at the individual plant or plot level. Using statistical arguments, simulations, and real data we investigate the feasibility of both approaches and how these affect genomic prediction with the best linear unbiased predictor and genome-wide association studies. Heritability estimates obtained from genotypic means had very large standard errors and were sometimes biologically unrealistic. Mixed models at the individual plant or plot level produced more realistic estimates, and for simulated traits standard errors were up to 13 times smaller. Genomic prediction was also improved by using these mixed models, with up to a 49% increase in accuracy. For genome-wide association studies on simulated traits, the use of individual plant data gave almost no increase in power. The new methodology is applicable to any complex trait where multiple replicates of individual genotypes can be scored. This includes important agronomic crops, as well as bacteria and fungi. Copyright © 2015 by the Genetics Society of America.
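The "traditional" within- versus between-genotype comparison mentioned above can be sketched with one-way ANOVA variance components; the simulated variances (Vg = 2, Ve = 1) are arbitrary, and this is the broad-sense estimate the abstract contrasts with marker-based mixed models:

```python
import numpy as np

def broad_sense_h2(values, genotypes):
    """Broad-sense heritability from a balanced design of replicated
    genotypes, via one-way ANOVA variance components."""
    values = np.asarray(values, float)
    genotypes = np.asarray(genotypes)
    ids = np.unique(genotypes)
    r = values.size / ids.size                      # replicates per genotype
    grand = values.mean()
    ss_between = sum(
        (genotypes == g).sum() * (values[genotypes == g].mean() - grand) ** 2
        for g in ids)
    ss_within = sum(
        ((values[genotypes == g] - values[genotypes == g].mean()) ** 2).sum()
        for g in ids)
    ms_between = ss_between / (ids.size - 1)
    ms_within = ss_within / (values.size - ids.size)
    var_g = max((ms_between - ms_within) / r, 0.0)  # genetic variance
    return var_g / (var_g + ms_within)

# Simulated trait: 50 genotypes x 4 replicates, Vg = 2, Ve = 1 (true h2 = 2/3).
rng = np.random.default_rng(2)
geno_effects = rng.normal(0, np.sqrt(2.0), 50)
genotypes = np.repeat(np.arange(50), 4)
values = geno_effects[genotypes] + rng.normal(0, 1.0, genotypes.size)
print(f"h2 estimate: {broad_sense_h2(values, genotypes):.2f}")
```

The marker-based alternative replaces the identity-by-genotype grouping with a marker-derived relatedness matrix in a mixed model, which is what allows narrow-sense estimates and individual-plant-level analysis.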
Tyre pressure monitoring using a dynamical model-based estimator
Reina, Giulio; Gentile, Angelo; Messina, Arcangelo
2015-04-01
In the last few years, various control systems have been investigated in the automotive field with the aim of increasing safety and stability, avoiding roll-over, and customising handling characteristics. One critical issue connected with their integration is the lack of state and parameter information. As an example, vehicle handling depends to a large extent on tyre inflation pressure. When inflation pressure drops, handling and comfort performance generally deteriorate. In addition, it results in increased fuel consumption and decreased tyre lifetime. Therefore, it is important to keep tyres within the normal inflation pressure range. This paper introduces a model-based approach to estimate tyre inflation pressure online. First, basic vertical dynamic modelling of the vehicle is discussed. Then, a parameter estimation framework for dynamic analysis is presented. Several important vehicle parameters, including tyre inflation pressure, can be estimated using the estimated states. This method aims to work during normal driving using information from standard sensors only. On the one hand, the driver is informed about the inflation pressure and warned of sudden changes. On the other hand, accurate estimation of the vehicle states is available as a possible input to onboard control systems.
Uav-Based Automatic Tree Growth Measurement for Biomass Estimation
Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.
2016-06-01
Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed in order to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aircraft Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data was collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and the in-situ measurements were correlated with the UAV data acquisition. The correlation aimed at the investigation of optimal conditions for a flight and parameter settings for image acquisition. The collected images were processed in a state-of-the-art tool, resulting in dense 3D point clouds. An algorithm was developed to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually, which allows for evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
Extrapolated HPGe efficiency estimates based on a single calibration measurement
International Nuclear Information System (INIS)
Winn, W.G.
1994-01-01
Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε0 of the base sample of volume V0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V0, and vice versa. Extrapolation of high and low efficiency estimates εh and εL provides an average estimate ε = ½[εh + εL] ± ½[εh − εL], where the uncertainty Δε = ½[εh − εL] brackets the limits of the maximum possible error. Both εh and εL diverge from ε0 as V deviates from V0, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
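The bracketing average described in the abstract above can be sketched in a few lines; the efficiency values used here are purely illustrative, not from the paper:

```python
# Bracketing average: the estimate is the midpoint of the high/low extrapolated
# efficiencies, and the quoted uncertainty is half their spread, bounding the
# maximum possible error. Input values are illustrative.
def bracketed_efficiency(eps_h, eps_l):
    estimate = 0.5 * (eps_h + eps_l)
    max_error = 0.5 * (eps_h - eps_l)
    return estimate, max_error

est, err = bracketed_efficiency(0.052, 0.044)
print(f"efficiency = {est:.3f} +/- {err:.3f}")  # efficiency = 0.048 +/- 0.004
```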
Neuromorphic Event-Based 3D Pose Estimation
Directory of Open Access Journals (Sweden)
David eReverter Valeiras
2016-01-01
Full Text Available Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30-60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion.
PREVIOUS SECOND TRIMESTER ABORTION
African Journals Online (AJOL)
PNLC
PREVIOUS SECOND TRIMESTER ABORTION: A risk factor for third trimester uterine rupture in three ... for accurate diagnosis of uterine rupture. KEY WORDS: Induced second trimester abortion - Previous uterine surgery - Uterine rupture. ..... scarred uterus during second trimester misoprostol- induced labour for a missed ...
Correction of Misclassifications Using a Proximity-Based Estimation Method
Directory of Open Access Journals (Sweden)
Shmulevich Ilya
2004-01-01
Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
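A sliding-window corrector of this kind can be sketched as follows. This is a hedged illustration in the spirit of the abstract, not the authors' exact operator or proximity matrix: `P[i, j]` is an assumed class-proximity distance (smaller means more similar), and each label is replaced by the class minimising total distance to its window:

```python
import numpy as np

# Illustrative proximity-based sliding-window label corrector (classes are nominal).
def correct_labels(labels, P, half_window=2):
    labels = np.asarray(labels)
    out = labels.copy()
    for t in range(len(labels)):
        lo, hi = max(0, t - half_window), min(len(labels), t + half_window + 1)
        window = labels[lo:hi]
        # Replace the centre label with the class of minimal total distance to the window.
        costs = [P[c, window].sum() for c in range(P.shape[0])]
        out[t] = int(np.argmin(costs))
    return out

# Three classes: 0 and 1 are close, 2 is far from both (hypothetical matrix).
P = np.array([[0, 1, 5],
              [1, 0, 5],
              [5, 5, 0]])
print(correct_labels([0, 0, 2, 0, 0], P))  # the isolated "2" is smoothed away
```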
Observer Based Fault Detection and Moisture Estimating in Coal Mill
DEFF Research Database (Denmark)
Odgaard, Peter Fogh; Mataji, Babak
2008-01-01
In this paper an observer-based method for detecting faults and estimating moisture content in the coal in coal mills is presented. Handling of faults and operation under special conditions, such as high moisture content in the coal, are of growing importance due to the increasing...... requirements to the general performance of power plants. Detection of faults and moisture content estimation are consequently of high interest in the handling of the problems caused by faults and moisture content. The coal flow out of the mill is the obvious variable to monitor, when detecting non-intended drops in the coal...... flow out of the coal mill. However, this variable is not measurable. Another estimated variable is the moisture content, which is only "measurable" during steady-state operations of the coal mill. Instead, this paper suggests a method where these unknown variables are estimated based on a simple energy...
A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine
Guo, T. H.; Musgrave, J.
1992-01-01
In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using
Estimating the Capacity of the Location-Based Advertising Channel
DEFF Research Database (Denmark)
Gidofalvi, Gyozo; Larsen, Hans Ravnkjær; Pedersen, Torben Bach
2008-01-01
Delivering "relevant" advertisements to consumers carrying mobile devices is regarded by many as one of the most promising mobile business opportunities. The relevance of a mobile ad depends on at least two factors: (1) the proximity of the mobile consumer to the product or service being advertised...... advertising channel, i.e., the number of relevant ads that can be delivered to mobile consumers. The estimations are based on a simulated mobile consumer population and simulated mobile ads. Both of the simulated data sets are realistic and derived based on real world data sources about population geo......, and (2) the match between the product or service and the interest of the mobile consumer. The interest of the mobile consumer can be either explicit (expressed by the mobile consumer) or implicit (inferred from user characteristics). This paper tries to empirically estimate the capacity of the mobile...
Bayesian Estimation-Based Pedestrian Tracking in Microcells
Directory of Open Access Journals (Sweden)
Yoshiaki Taniguchi
2013-01-01
Full Text Available We consider a pedestrian tracking system where sensor nodes are placed only at specific points so that the monitoring region is divided into multiple smaller regions referred to as microcells. In the proposed pedestrian tracking system, sensor nodes composed of pairs of binary sensors can detect pedestrian arrival and departure events. In this paper, we focus on pedestrian tracking in microcells. First, we investigate actual pedestrian trajectories in a microcell on the basis of observations using video sequences, after which we prepare a pedestrian mobility model. Next, we propose a method for pedestrian tracking in microcells based on the developed pedestrian mobility model. In the proposed method, we extend the Bayesian estimation to account for time-series information to estimate the correspondence between pedestrian arrival and departure events. Through simulations, we show that the tracking success ratio of the proposed method is increased by 35.8% compared to a combinatorial optimization-based tracking method.
Vehicle Sideslip Angle Estimation Based on Hybrid Kalman Filter
Directory of Open Access Journals (Sweden)
Jing Li
2016-01-01
Full Text Available Vehicle sideslip angle is essential for active safety control systems. This paper presents a new hybrid Kalman filter to estimate vehicle sideslip angle based on the 3-DoF nonlinear vehicle dynamic model combined with Magic Formula tire model. The hybrid Kalman filter is realized by combining square-root cubature Kalman filter (SCKF, which has quick convergence and numerical stability, with square-root cubature based receding horizon Kalman FIR filter (SCRHKF, which has robustness against model uncertainty and temporary noise. Moreover, SCKF and SCRHKF work in parallel, and the estimation outputs of two filters are merged by interacting multiple model (IMM approach. Experimental results show the accuracy and robustness of the hybrid Kalman filter.
Adaptive algorithm for mobile user positioning based on environment estimation
Directory of Open Access Journals (Sweden)
Grujović Darko
2014-01-01
Full Text Available This paper analyzes the challenges of realizing an infrastructure-independent, low-cost positioning method in cellular networks based on the RSS (Received Signal Strength) parameter, an auxiliary timing parameter, and environment estimation. The proposed algorithm has been evaluated using field measurements collected from a GSM (Global System for Mobile Communications) network, but it is technology independent and can also be applied in UMTS (Universal Mobile Telecommunication System) and LTE (Long-Term Evolution) networks.
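RSS-based positioning of this kind typically rests on inverting a log-distance path-loss model to turn a signal-strength reading into a range estimate. The sketch below illustrates that inversion; the reference power `p0_dbm`, reference distance `d0`, and path-loss exponent `n` are illustrative calibration values, not taken from the paper:

```python
# Log-distance path-loss inversion: RSS(d) = P0 - 10*n*log10(d/d0), solved for d.
# Calibration constants below are hypothetical.
def rss_to_distance(rss_dbm, p0_dbm=-40.0, d0=1.0, n=3.0):
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10 * n))

print(rss_to_distance(-70.0))  # 10.0 (metres from the base station)
```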
Comparison of physically based catchment models for estimating Phosphorus losses
Nasr, Ahmed Elssidig; Bruen, Michael
2003-01-01
As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with a potential for use in estimating phosphorous losses for use in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and were implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...
Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability
Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko
In this paper, we propose an algorithm to estimate sleep quality based on heart rate variability using chaos analysis. Polysomnography (PSG) is a conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect, estimating sleep quality from multiple channels. However, the recording process requires a lot of time and a controlled measurement environment, and analyzing PSG data is hard work because the huge amount of sensed data must be evaluated manually. On the other hand, these days some people make mistakes or cause accidents due to loss of regular sleep and of homeostasis. Therefore a simple home system for checking one's own sleep is required, and an estimation algorithm for such a system should be developed. We therefore propose an algorithm that estimates sleep quality based only on heart rate variability, which can be measured by a simple sensor such as a pressure sensor or an infrared sensor in an uncontrolled environment, by experimentally finding the relationship between chaos indices and sleep quality. A system including the estimation algorithm can inform a user of the patterns and quality of daily sleep, so that the user can arrange his or her schedule in advance, pay more attention based on the sleep results, and consult a doctor.
A novel rules based approach for estimating software birthmark.
Nazir, Shah; Shahzad, Sara; Khan, Sher Afzal; Alias, Norma Binti; Anwar, Sajid
2015-01-01
Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark.
Estimating Evapotranspiration Using an Observation Based Terrestrial Water Budget
Rodell, Matthew; McWilliams, Eric B.; Famiglietti, James S.; Beaudoing, Hiroko K.; Nigro, Joseph
2011-01-01
Evapotranspiration (ET) is difficult to measure at the scales of climate models and climate variability. While satellite retrieval algorithms do exist, their accuracy is limited by the sparseness of in situ observations available for calibration and validation, which themselves may be unrepresentative of 500m and larger scale satellite footprints and grid pixels. Here, we use a combination of satellite and ground-based observations to close the water budgets of seven continental scale river basins (Mackenzie, Fraser, Nelson, Mississippi, Tocantins, Danube, and Ubangi), estimating mean ET as a residual. For any river basin, ET must equal total precipitation minus net runoff minus the change in total terrestrial water storage (TWS), in order for mass to be conserved. We make use of precipitation from two global observation-based products, archived runoff data, and TWS changes from the Gravity Recovery and Climate Experiment satellite mission. We demonstrate that while uncertainty in the water budget-based estimates of monthly ET is often too large for those estimates to be useful, the uncertainty in the mean annual cycle is small enough that it is practical for evaluating other ET products. Here, we evaluate five land surface model simulations, two operational atmospheric analyses, and a recent global reanalysis product based on our results. An important outcome is that the water budget-based ET time series in two tropical river basins, one in Brazil and the other in central Africa, exhibit a weak annual cycle, which may help to resolve debate about the strength of the annual cycle of ET in such regions and how ET is constrained throughout the year. The methods described will be useful for water and energy budget studies, weather and climate model assessments, and satellite-based ET retrieval optimization.
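The residual computation described above is simple mass conservation, ET = P − Q − ΔTWS; the monthly values below are illustrative, not from the cited basins:

```python
# Basin-scale water budget closure: ET is the residual of precipitation,
# net runoff, and the change in terrestrial water storage (all in mm/month).
precip_mm = 80.0   # P: observation-based precipitation
runoff_mm = 25.0   # Q: net runoff from gauge archives
d_tws_mm = -10.0   # change in terrestrial water storage (e.g. from GRACE)

et_mm = precip_mm - runoff_mm - d_tws_mm
print(et_mm)  # 65.0
```

Note that a storage drawdown (negative ΔTWS) increases the inferred ET, since water leaving storage must have evaporated or run off.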
Upper Bound Performance Estimation for Copper Based Broadband Access
DEFF Research Database (Denmark)
Jensen, Michael; Gutierrez Lopez, Jose Manuel
2012-01-01
of copper based access connections at a household level by using Geographical Information System data. This can be combined with different configurations of DSLAMs distributions, in order to calculate the required number of active equipment points to guarantee certain QoS levels. This method can be used...... to define the limitations of copper based broadband access. A case study in a municipality in Denmark shows how the estimated network dimension to be able to provide video conference services to the majority of the population might be too high to be implemented in reality....
A novel ULA-based geometry for improving AOA estimation
Directory of Open Access Journals (Sweden)
Akbari Farida
2011-01-01
Full Text Available Due to its relatively simple implementation, the Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have uniform performance in all directions, and Angle of Arrival (AOA) estimation performance degrades considerably at angles close to endfire. In this article, a new configuration is proposed which can solve this problem. The Proposed Array (PA) configuration adds two elements to the ULA, above and below the array axis. By extending the signal model of the ULA to the new proposed ULA-based array, AOA estimation performance has been compared in terms of angular accuracy and resolution threshold using two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, the Root Mean Square Error (RMSE) of the detected angles decreases as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry provides uniformly accurate performance and higher resolution at middle angles as well as border ones. The PA also presents lower RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for the border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, the AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries, L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.
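The kind of MVDR scan compared in the article can be sketched for a plain half-wavelength-spaced ULA as follows; the array size, source angle, and noise level are illustrative, and this shows only the conventional ULA baseline, not the proposed geometry:

```python
import numpy as np

# MVDR spatial spectrum for a ULA with element spacing d = lambda/2.
def steering(theta_deg, n_elem):
    # Phase step between adjacent elements is pi*sin(theta) when d = lambda/2.
    return np.exp(1j * np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(n_elem))

def mvdr_spectrum(R, angles_deg, n_elem):
    Rinv = np.linalg.inv(R)
    return np.array([1.0 / np.real(steering(a, n_elem).conj() @ Rinv @ steering(a, n_elem))
                     for a in angles_deg])

# One far-field source at 20 degrees plus white noise (illustrative covariance).
n = 8
a_src = steering(20.0, n)
R = np.outer(a_src, a_src.conj()) + 0.1 * np.eye(n)
angles = np.arange(-90, 91)
print(angles[np.argmax(mvdr_spectrum(R, angles, n))])  # peak at the source angle, 20
```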
Estimation of Sideslip Angle Based on Extended Kalman Filter
Directory of Open Access Journals (Sweden)
Yupeng Huang
2017-01-01
Full Text Available The sideslip angle plays an extremely important role in vehicle stability control, but in production cars it cannot be obtained directly from a sensor because of sensor cost; it is therefore essential to estimate the sideslip angle indirectly from other vehicle motion parameters, and an estimation algorithm with real-time performance and accuracy is critical. The traditional estimation method based on the Kalman filter is valid in the vehicle's linear control region; however, on low-adhesion roads, vehicles exhibit obvious nonlinear characteristics. In this paper, an extended Kalman filtering algorithm is put forward in consideration of the nonlinear characteristics of the tire and is verified by joint CarSim and Simulink simulation, including simulations on wet cement and on ice and snow roads with a double lane change. To test and verify the effect of the extended Kalman filtering estimation algorithm, a real vehicle test was carried out on a limit test field. The experimental results show that the accuracy of the vehicle sideslip angle acquired by the extended Kalman filtering algorithm is obviously higher than that acquired by Kalman filtering in the nonlinear region.
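The extended Kalman filter at the core of such an estimator follows a generic predict/update pattern. The skeleton below is an illustrative sketch: in the paper, `f`, `h`, and their Jacobians would come from the 3-DoF vehicle model with a nonlinear tire model, whereas the demo here uses a trivial 1-D system:

```python
import numpy as np

# Generic EKF step: predict through the nonlinear process model f, then update
# with measurement z via the linearised observation model h.
def ekf_step(x, Pcov, u, z, f, F_jac, h, H_jac, Q, R):
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ Pcov @ F.T + Q
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new

# Demo on a trivial 1-D random walk: the state moves halfway toward the measurement.
I = np.array([[1.0]])
x, P = ekf_step(np.array([0.0]), I.copy(), None, np.array([1.0]),
                f=lambda x, u: x, F_jac=lambda x, u: I,
                h=lambda x: x, H_jac=lambda x: I,
                Q=np.zeros((1, 1)), R=I)
print(x)  # [0.5]
```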
[Chronological age estimation based on dental panoramic radiography].
Tóth, Zsuzsanna Olga; Udvar, Orsolya; Angyal, János
2014-09-01
Determination of dental age is a valuable tool in planning orthodontic treatment and can be used to estimate the chronological age of unidentified human beings. Among the various age estimation methods, one of the most widely accepted is the Demirjian method, which has already been modified for a selected Hungarian population. In this study we evaluated the association between the dental age determined by panoramic radiography and the chronological age. 199 panoramic radiographs taken from persons between the ages of 2.8 and 20.3 years were selected for the study. The dental ages were estimated with both the Demirjian method and the modified Demirjian method adapted to the Hungarian population, and the results were compared to the chronological ages in selected age groups. Furthermore, the angle of the mandible was registered on both sides with image-analysis software. Statistical analysis of the data was performed using SPSS software. Our results show that mean values of the mandibular angles exhibited a decreasing trend with age. The two age determination methods yielded different values. Between 3 and 9 years and in the age group between 15 and 17.3 years, the adapted Hungarian method proved to be more accurate than the Demirjian method. We established a mathematical function relating the two methods. We conclude that panoramic radiography-based dental age calculation is a reliable method for estimating chronological age, but the utility of the gonial angle has not been proved.
Ratio-based estimators for a change point in persistence.
Halunga, Andreea G; Osborn, Denise R
2012-11-01
We study estimation of the date of change in persistence, from [Formula: see text] to [Formula: see text] or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from [Formula: see text] to [Formula: see text]. A Monte Carlo study confirms the large sample downward bias and also finds substantial biases in moderate sized samples, partly due to properties at the end points of the search interval.
On soil textural classifications and soil-texture-based estimations
Ángel Martín, Miguel; Pachepsky, Yakov A.; García-Gutiérrez, Carlos; Reyes, Miguel
2018-02-01
The soil texture representation with the standard textural fraction triplet sand-silt-clay is commonly used to estimate soil properties. The objective of this work was to test the hypothesis that other fraction sizes in the triplets may provide a better representation of soil texture for estimating some soil parameters. We estimated the cumulative particle size distribution and bulk density from an entropy-based representation of the textural triplet with experimental data for 6240 soil samples. The results supported the hypothesis. For example, simulated distributions were not significantly different from the original ones in 25 and 85 % of cases when the sand-silt-clay triplet and the (very coarse + coarse + medium sand)-(fine + very fine sand)-(silt + clay) triplet were used, respectively. When the same standard and modified triplets were used to estimate the average bulk density, the coefficients of determination were 0.001 and 0.967, respectively. Overall, the textural triplet selection appears to be application and data specific.
Spacecraft Angular Velocity Estimation Algorithm Based on Orientation Quaternion Measurements
Directory of Open Access Journals (Sweden)
M. V. Li
2016-01-01
Full Text Available The spacecraft (SC) mission involves providing the appropriate orientation and stabilization of the associated axes in space. One of the main sources of information for the attitude control system is the angular rate sensor blocks. One way to improve the reliability of the system is to provide a backup of the control algorithms in case of failure of these blocks. To solve the problem of estimating the SC angular velocity vector in the inertial coordinate system when information from the angular rate sensors is lacking, we propose using orientation data from the star sensors at each clock cycle of the onboard digital computer. Equations in quaternions are used to describe the kinematics of rotary motion, and their approximate solution is used to estimate the angular velocity vector. Methods of modal control and multi-dimensional decomposition of a control object are used to solve the problem of observation and identification of the angular rates. These methods enabled us to synthesize the SC angular velocity vector estimation algorithm and to obtain the equations which relate the error quaternion to the calculated estimate of the angular velocity. Mathematical modeling was carried out to test the algorithm. Cases of different initial conditions were simulated, and the time between orientation quaternion measurements and the angular velocity of the model were varied. The algorithm was compared with a more accurate algorithm built on more complete equations. Graphs of the difference in angular velocity estimation depending on the number of iterations are presented; the difference is calculated from the results of the synthesized algorithm and of the algorithm based on the more accurate equations. Graphs of the error distribution for angular velocity estimation under changing initial conditions are also presented, and standard deviations of the estimation errors are calculated. The synthesized algorithm is somewhat inferior in estimation accuracy to the algorithm built on the more complete equations.
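The core idea, recovering angular velocity from two successive orientation quaternions, can be sketched with the small-angle approximation that such approximate kinematic solutions rely on. This is an illustrative sketch (scalar-first `(w, x, y, z)` convention, names hypothetical), not the synthesized algorithm itself:

```python
import numpy as np

# Hamilton product of two quaternions in (w, x, y, z) order.
def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def angular_velocity(q_prev, q_curr, dt):
    q_conj = q_prev * np.array([1.0, -1.0, -1.0, -1.0])  # inverse of a unit quaternion
    dq = quat_mul(q_conj, q_curr)                        # relative rotation over dt
    return 2.0 * dq[1:] / dt                             # small-angle approximation

# Rotation of 0.01 rad about z between star-sensor samples 0.1 s apart.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = np.array([np.cos(0.005), 0.0, 0.0, np.sin(0.005)])
print(angular_velocity(q0, q1, 0.1))  # approximately [0, 0, 0.1] rad/s
```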
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and deteriorate communication performance heavily. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
Learning Motion Features for Example-Based Finger Motion Estimation for Virtual Characters
Mousas, Christos; Anagnostopoulos, Christos-Nikolaos
2017-09-01
This paper presents a methodology for estimating the motion of a character's fingers based on the use of motion features provided by a virtual character's hand. In the presented methodology, firstly, the motion data is segmented into discrete phases. Then, a number of motion features are computed for each motion segment of a character's hand. The motion features are pre-processed using restricted Boltzmann machines, and by using the different variations of semantically similar finger gestures in a support vector machine learning mechanism, the optimal weights for each feature assigned to a metric are computed. The advantages of the presented methodology in comparison to previous solutions are the following: First, we automate the computation of optimal weights that are assigned to each motion feature counted in our metric. Second, the presented methodology achieves an increase (about 17%) in correctly estimated finger gestures in comparison to a previous method.
Small Area Model-Based Estimators Using Big Data Sources
Directory of Open Access Journals (Sweden)
Marchetti Stefano
2015-06-01
Full Text Available The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, have the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.
Marker-based estimation of genetic parameters in genomics.
Directory of Open Access Journals (Sweden)
Zhiqiu Hu
Full Text Available Linear mixed model (LMM analysis has been recently used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While the LMM analysis is computationally less intensive than the Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on the least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match well with the precision of estimation by the LMM methods for data sets with large sample sizes. Its major advantage is that with larger and larger samples, it continues to work with the increasing precision of estimation while the commonly used LMM methods are no longer able to work under our current typical computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative particularly when analyzing 'big' genomic data sets.
Simulation-based seismic loss estimation of seaport transportation system
International Nuclear Information System (INIS)
Ung Jin Na; Shinozuka, Masanobu
2009-01-01
Seaport transportation system is one of the major lifeline systems in modern society and its reliable operation is crucial for the well-being of the public. However, past experiences showed that earthquake damage to port components can severely disrupt terminal operation and thus negatively impact the regional economy. The main purpose of this study is to provide a methodology for estimating the effects of an earthquake on the performance of the operation system of a container terminal in seaports. To evaluate the economic loss of the damaged system, an analytical framework is developed by integrating simulation models for terminal operation and fragility curves of port components in the context of seismic risk analysis. For this purpose, a computerized simulation model is developed and verified with actual terminal operation records. Based on the analytical procedure to assess the seismic performance of the terminal, system fragility curves are also developed. This simulation-based loss estimation methodology can be used not only for estimating the seismically induced revenue loss but also as a decision-making tool to select a specific seismic retrofit technique on the basis of benefit-cost analysis.
Reconnaissance Estimates of Recharge Based on an Elevation-dependent Chloride Mass-balance Approach
Energy Technology Data Exchange (ETDEWEB)
Charles E. Russell; Tim Minor
2002-08-31
Significant uncertainty is associated with efforts to quantify recharge in arid regions such as southern Nevada. However, accurate estimates of groundwater recharge are necessary for understanding the long-term sustainability of groundwater resources and predicting groundwater flow rates and directions. Currently, the most widely accepted method for estimating recharge in southern Nevada is the Maxey and Eakin method. This method has been applied to most basins within Nevada and has been independently verified as a reconnaissance-level estimate of recharge through several studies. Recharge estimates derived from the Maxey and Eakin and other recharge methodologies ultimately based upon measures or estimates of groundwater discharge (outflow methods) should be augmented by a tracer-based aquifer-response method. The objective of this study was to improve an existing aquifer-response method that was based on the chloride mass-balance approach. Improvements were designed to incorporate spatial variability within recharge areas (rather than treating recharge as a lumped parameter), develop a more defensible lower limit of recharge, and differentiate local recharge from recharge emanating as interbasin flux. Seventeen springs, located in the Sheep Range, Spring Mountains, and on the Nevada Test Site, were sampled during the course of this study and their discharge was measured. The chloride and bromide concentrations of the springs were determined. Discharge and chloride concentrations from these springs were compared to estimates provided by previously published reports. A literature search yielded previously published estimates of chloride flux to the land surface. {sup 36}Cl/Cl ratios and discharge rates of the three largest springs in the Amargosa Springs discharge area were compiled from various sources. This information was utilized to determine an effective chloride concentration for recharging precipitation and its associated uncertainty via Monte Carlo simulations.
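The chloride mass-balance approach, and the Monte Carlo propagation of its uncertainty mentioned above, reduce to a simple ratio. The sketch below uses invented input ranges and our own function names, not the report's data: recharge in mm/yr equals the chloride flux to the land surface (mg/m2/yr) divided by the chloride concentration in recharging water (mg/L).

```python
import random

def cmb_recharge(cl_flux, cl_gw):
    """Chloride mass-balance recharge: R [mm/yr] = chloride flux to the
    land surface [mg/m2/yr] / chloride concentration in recharge [mg/L]."""
    return cl_flux / cl_gw

def recharge_uncertainty(cl_flux_range, cl_gw_range, n=10000, seed=1):
    """Monte Carlo propagation of input ranges (uniform assumption).
    Returns the ~2.5th percentile, median, and ~97.5th percentile."""
    rng = random.Random(seed)
    samples = [cmb_recharge(rng.uniform(*cl_flux_range),
                            rng.uniform(*cl_gw_range))
               for _ in range(n)]
    samples.sort()
    return samples[n // 40], samples[n // 2], samples[-n // 40]
```

For example, a flux of 100 mg/m2/yr and a recharge-water concentration of 20 mg/L give 5 mm/yr.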
Estimating Selectivity for Current Query of Moving Objects Using Index-Based Histogram
Chi, Jeong Hee; Kim, Sang Ho
Selectivity estimation is one of the query optimization techniques. Previous selectivity estimation techniques for moving objects have difficulty reflecting the location changes of moving objects in the synopsis. They therefore produce large errors when estimating selectivity for queries, because they are based on extended spatial synopses that do not consider the properties of moving objects. To reduce the estimation error, the existing techniques must rebuild the synopsis frequently, which in turn requires frequent reads of the whole database. In this paper, we propose a quadtree-based moving object histogram method to develop a selectivity estimation technique for moving object queries, and we analyze the performance of the proposed method through its implementation and evaluation. Our method can be used in various location management systems, such as vehicle location tracking systems, location-based services, telematics services, and emergency rescue services, in which the location information of moving objects changes over time.
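The idea of a quadtree-partitioned histogram for selectivity estimation can be illustrated with a minimal sketch. This is our own simplified construction (built once from a static point set, with uniformity assumed inside each leaf), not the authors' method, which additionally handles location updates:

```python
def build(points, box, depth=0, max_depth=4, leaf_cap=4):
    """Recursively split `box` into quadrants until a node holds at most
    `leaf_cap` points; like a histogram, nodes keep only counts."""
    x0, y0, x1, y1 = box
    node = {"box": box, "count": len(points), "kids": None}
    if len(points) > leaf_cap and depth < max_depth:
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        quads = [(x0, y0, xm, ym), (xm, y0, x1, ym),
                 (x0, ym, xm, y1), (xm, ym, x1, y1)]
        # half-open child boxes; points on the max boundary are dropped here
        node["kids"] = [build([p for p in points
                               if b[0] <= p[0] < b[2] and b[1] <= p[1] < b[3]],
                              b, depth + 1, max_depth, leaf_cap)
                        for b in quads]
    return node

def selectivity(node, query, total):
    """Estimated fraction of points inside query=(x0, y0, x1, y1),
    assuming a uniform distribution inside each leaf."""
    x0, y0, x1, y1 = node["box"]
    ox = max(0.0, min(x1, query[2]) - max(x0, query[0]))
    oy = max(0.0, min(y1, query[3]) - max(y0, query[1]))
    if ox == 0.0 or oy == 0.0 or node["count"] == 0:
        return 0.0
    if node["kids"] is None:
        frac = (ox * oy) / ((x1 - x0) * (y1 - y0))
        return node["count"] * frac / total
    return sum(selectivity(k, query, total) for k in node["kids"])
```

A query covering half the data space over uniformly spread points then yields an estimate near 0.5.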
Estimates of future climate based on SRES emission scenarios
Energy Technology Data Exchange (ETDEWEB)
Godal, Odd; Sygna, Linda; Fuglestvedt, Jan S.; Berntsen, Terje
2000-02-14
The preliminary emission scenarios in the Special Report on Emissions Scenarios (SRES) developed by the Intergovernmental Panel on Climate Change (IPCC) will eventually replace the old IS92 scenarios. By running these scenarios in a simple climate model (SCM) we estimate future temperature increase between 1.7 {sup o}C and 2.8 {sup o}C from 1990 to 2100. The global sea level rise over the same period is between 0.33 m and 0.45 m. Compared to the previous IPCC scenarios (IS92), the SRES scenarios generally result in changes in both the development over time and the level of emissions, concentrations, radiative forcing, and finally temperature change and sea level rise. The most striking difference between the IS92 scenarios and the SRES scenarios is the lower level of SO{sub 2} emissions. The range in CO{sub 2} emissions is also expected to be narrower in the new scenarios. The SRES scenarios result in a narrower range both for temperature change and sea level rise from 1990 to 2100 compared to the range estimated for the IS92 scenarios. (author)
Non-intrusive Load Disaggregation Based on Kernel Density Estimation
Sen, Wang; Dongsheng, Yang; Chuchen, Guo; Shengxian, Du
2017-05-01
Aiming at the problem of the high cost and difficult implementation of high-frequency non-intrusive load disaggregation methods, this paper proposes a new method based on kernel density estimation (KDE) for low-frequency NILM (non-intrusive load monitoring). The method first establishes power reference models of the electrical loads under different working conditions and possible appliance combinations; probability distributions are then calculated as appliance features by kernel density estimation. After that, the target power data is segmented at step changes, the distribution of each segment is compared with the reference models, and the most similar reference model is chosen as the decomposition result. The proposed approach was tested with data from the GREEND public data set and showed better performance in terms of energy disaggregation accuracy than many traditional NILM approaches, achieving more than 93% accuracy in simulation.
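A toy version of the matching step, with a reference density per appliance built by kernel density estimation and the most likely model chosen for an observed power step, might look like this. The appliance names and wattages are invented for illustration; they are not from the GREEND data set.

```python
import math

def gaussian_kde(samples, h):
    """Return a 1-D Gaussian kernel density estimator with bandwidth h."""
    n = len(samples)
    norm = n * h * math.sqrt(2 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm
    return pdf

def classify_step(step, models):
    """Pick the reference appliance whose power-step density is most
    likely at the observed step size (watts)."""
    return max(models, key=lambda name: models[name](step))

# Hypothetical reference measurements (watts) for two appliances
models = {
    "kettle": gaussian_kde([1980.0, 2010.0, 2000.0, 1995.0], h=15.0),
    "fridge": gaussian_kde([120.0, 130.0, 125.0, 128.0], h=5.0),
}
```

An observed step of about 2 kW would then be attributed to the kettle, and one of about 125 W to the fridge.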
METAPHOR: Probability density estimation for machine learning based photometric redshifts
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-06-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDFs. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, showing also the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
Foggy Scene Rendering Based on Transmission Map Estimation
Directory of Open Access Journals (Sweden)
Fan Guo
2014-01-01
Full Text Available Realistic rendering of foggy scenes is important in game development and virtual reality. Traditional methods have many parameters to control or require a long time to compute, and they are usually limited to depicting homogeneous fog, without considering scenes with heterogeneous fog. In this paper, a new rendering method based on transmission map estimation is proposed. We first generate a Perlin noise image as the density distribution texture of the heterogeneous fog. Then we estimate the transmission map using a Markov random field (MRF) model and the bilateral filter. Finally, the virtual foggy scene is realistically rendered with the generated Perlin noise image and the transmission map according to the atmospheric scattering model. Experimental results show that the rendered results of our approach are quite satisfactory.
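The atmospheric scattering model used for the final rendering blends scene radiance with airlight through a per-pixel transmission. A minimal per-pixel sketch for homogeneous fog follows (constant extinction coefficient beta; the paper's heterogeneous case would modulate beta with the Perlin-noise density texture, and the parameter values here are illustrative):

```python
import math

def transmission(depth, beta=0.05):
    """Beer-Lambert transmission t = exp(-beta * d) for scene depth d."""
    return math.exp(-beta * depth)

def add_fog(pixel, depth, airlight=(255, 255, 255), beta=0.05):
    """Blend an RGB pixel with airlight using the atmospheric scattering
    model I = J * t + A * (1 - t)."""
    t = transmission(depth, beta)
    return tuple(round(c * t + a * (1 - t)) for c, a in zip(pixel, airlight))
```

Pixels at zero depth are unchanged, while distant pixels fade toward the airlight colour.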
Underwater image enhancement through depth estimation based on random forest
Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han
2017-11-01
Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
Estimating Vehicle Stability Region Based on Energy Function
Directory of Open Access Journals (Sweden)
Yu-guang Yan
2015-01-01
Full Text Available To address the deficiencies of existing vehicle stability region estimates, a method of estimating the vehicle's spatial stability region was proposed based on a nonlinear vehicle dynamic model. With the Pacejka magic formula tire model, a nonlinear 3DOF vehicle model was deduced and verified through vehicle tests. The detection system and data processing are described in detail. In addition, the stability of the vehicle system was discussed using the Hurwitz criterion. By establishing an energy function for the vehicle system, the vehicle's stability region at 20 m/s was estimated based on the Lyapunov theorem and the vehicle system characteristics. A vehicle test under the same conditions shows that the calculated stability region defined by the Lyapunov and system stability theorems characterizes vehicle stability well and could be a valuable reference for vehicle stability evaluation.
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.
2012-03-11
The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
A power function method for estimating base flow.
Lott, Darline A; Stewart, Mark T
2013-01-01
Analytical base flow separation techniques are often used to determine the base flow contribution to total stream flow. Most analytical methods derive base flow from discharge records alone, without using basin-specific variables other than basin area. This paper derives a power function for estimating base flow, of the form aQ^b + cQ, an analytical method calibrated against an integrated basin variable, specific conductance, that relates base flow to total discharge and is consistent with the observed mathematical behavior of dissolved solids in stream flow with varying discharge. Advantages of the method are that it is uncomplicated, reproducible, and applicable to hydrograph separation in basins with limited specific conductance data. The power function relationship between base flow and discharge holds over a wide range of basin areas. It better replicates base flow determined by mass balance methods than analytical methods such as filters or smoothing routines that are not calibrated to natural tracers or to empirical basin- and gauge-specific variables. Also, it can be used with discharge during periods without specific conductance values, including separating base flow from quick flow for single events. However, it may overestimate base flow during very high flow events. Application of the geochemical mass balance and power function base flow separation methods to stream flow and specific conductance records from multiple gauges in the same basin suggests that analytical base flow separation methods must be calibrated at each gauge. Using average values of coefficients introduces a potentially significant and unknown error in base flow as compared with mass balance methods. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
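The functional form can be applied directly once calibrated. The coefficients below are placeholders; in the paper, a, b, and c are calibrated against specific conductance at each gauge.

```python
def base_flow(q, a, b, c):
    """Power-function base flow estimate BF = a*Q**b + c*Q, clipped so
    the separated base flow never exceeds total discharge Q."""
    return min(a * q ** b + c * q, q)
```

With hypothetical coefficients a=1, b=0.5, c=0.1, a discharge of 100 gives a base flow of 20; the clip matters at very low discharge, where the power term alone can exceed Q.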
Digital Repository Service at National Institute of Oceanography (India)
Chandramohan, P.; Nayak, B.U.; Raju, N.S.N.
lower values, Gumbel distribution appears to estimate the extreme wave height reasonably well and gives a realistic value for the study region. The extreme wave estimated based only on the monsoon wave data deviated significantly from the estimate based...
Drone based estimation of actual evapotranspiration over different forest types
Marzahn, Philip; Gampe, David; Castro, Saulo; Vega-Araya, Mauricio; Sanchez-Azofeifa, Arturo; Ludwig, Ralf
2017-04-01
Actual evapotranspiration (Eta) plays an important role in surface-atmosphere interactions. Traditionally, Eta is measured by means of lysimeters, eddy-covariance systems, or fiber optics, providing estimates that are spatially restricted to a footprint from a few square meters up to several hectares. In the past, several methods have been developed to derive Eta from multi-spectral remote sensing data using thermal and VIS/NIR satellite imagery of the land surface. While such approaches have their justification at coarser scales, they do not provide Eta information at the fine-resolution plant level over large areas, which is mandatory for the detection of water stress or tree mortality. In this study, we present a comparison of a drone-based assessment of Eta with eddy-covariance measurements over two different forest types: a deciduous forest in Alberta, Canada, and a tropical dry forest in Costa Rica. Drone-based estimates of Eta were calculated by applying the Triangle-Method proposed by Jiang and Islam (1999), which estimates actual evapotranspiration by means of the Normalized Difference Vegetation Index (NDVI) and land surface temperature (LST) provided by two camera systems (MicaSense RedEdge, FLIR TAU2 640) flown simultaneously on an octocopter. Results indicate a high transferability of the original approach of Jiang and Islam (1999), developed for coarse- to medium-resolution satellite imagery, to the high-resolution drone data, leading to a deviation in Eta estimates of 10% compared to the eddy-covariance measurements. In addition, the spatial footprint of the eddy-covariance measurement can be detected with this approach, which reveals the spatial heterogeneities of Eta due to the spatial distribution of different trees and understory vegetation.
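A heavily simplified sketch of the Triangle-Method logic, interpolating an evaporative fraction between the dry (hot) and wet (cool) edges and scaling the available energy, is shown below. Jiang and Islam (1999) derive the edges from the NDVI-LST scatter itself, which is omitted here; the edge temperatures and energy values are illustrative assumptions.

```python
def evaporative_fraction(lst, lst_dry, lst_wet):
    """Interpolate the evaporative fraction between the dry (hot) edge,
    where EF = 0, and the wet (cool) edge, where EF = 1."""
    if lst_dry <= lst_wet:
        raise ValueError("dry edge must be hotter than wet edge")
    ef = (lst_dry - lst) / (lst_dry - lst_wet)
    return min(1.0, max(0.0, ef))

def eta_estimate(ef, net_radiation, soil_heat_flux):
    """Actual evapotranspiration as EF times the available energy Rn - G
    (same units as the energy fluxes, e.g. W/m2)."""
    return ef * (net_radiation - soil_heat_flux)
```

A pixel on the wet edge evaporates at the potential rate; a pixel on the dry edge contributes nothing.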
A History-based Estimation for LHCb job requirements
Rauschmayr, Nathalie
2015-12-01
The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it will be to accomplish its task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like expected runtime, is defined beforehand by the Production Manager in the best case, and set to fixed arbitrary values by default. In the case of LHCb's Workload Management System, no mechanisms are provided which automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. Particularly in the context of multicore jobs this presents a major problem, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the reduction of the overall memory footprint; therefore, it also needs to be studied how the memory consumption of jobs can be estimated. A detailed workload analysis of past LHCb jobs is presented. It includes a study of job features and their correlation with runtime and memory consumption. Following the features, a supervised learning algorithm is developed based on history-based prediction. The aim is to learn over time how jobs' runtime and memory evolve under changes in experiment conditions and software versions. It will be shown that estimation can be notably improved if experiment conditions are taken into account.
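A minimal history-based predictor in the spirit described above might look like this. It is a sketch, not LHCb's implementation; the feature key (e.g. software version plus experiment conditions) and the default runtime are assumptions.

```python
from collections import defaultdict

class RuntimeEstimator:
    """Predict a job's runtime as the running mean of completed jobs
    sharing the same feature key, falling back to a default for keys
    with no history."""
    def __init__(self, default=3600.0):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)
        self.default = default

    def record(self, key, runtime):
        # Called when a job finishes; updates the history for its key
        self.sums[key] += runtime
        self.counts[key] += 1

    def predict(self, key):
        n = self.counts[key]
        return self.sums[key] / n if n else self.default
```

As jobs for a given software version and conditions complete, predictions for that key converge toward the observed mean instead of a fixed arbitrary value.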
DEFF Research Database (Denmark)
Brüel, Annemarie; Nyengaard, Jens Randel
2005-01-01
BACKGROUND: Counting the total number of cardiac myocytes has not previously been possible in ordinary histological sections using light microscopy (LM) due to difficulties in defining the myocyte borders properly. AIM: To describe a method by which the total number of cardiac myocytes is estimated in LM sections using design-based stereology. MATERIALS AND METHODS: From formalin-fixed left rat ventricles (LV), isotropic uniformly random sections were cut. The total number of myocyte nuclei per LV was estimated using the optical disector. Two-microm-thick serial paraffin sections were stained with antibodies against cadherin and type IV collagen to visualise the intercalated discs and the myocyte membranes, respectively. Using the physical disector in "local vertical windows" of the serial sections, the average number of nuclei per myocyte was estimated. RESULTS: The total number of myocyte nuclei...
International Nuclear Information System (INIS)
Morishita, Junji; Katsuragawa, Shigehiko; Kondo, Keisuke; Doi, Kunio
2001-01-01
An automated patient recognition method for correcting 'wrong' chest radiographs stored in a picture archiving and communication system (PACS) environment has been developed. The method is based on an image-matching technique that uses previous chest radiographs. For identification of a 'wrong' patient, the correlation value was determined for a previous image of a patient and a new, current image of the presumed corresponding patient. The current image was shifted horizontally and vertically and rotated, so that we could determine the best match between the two images. The results indicated that the correlation values between the current and previous images for the same, 'correct' patients were generally greater than those for different, 'wrong' patients. Although the two histograms for the same patient and for different patients overlapped at correlation values greater than 0.80, most parts of the histograms were separated. The correlation value was compared with a threshold value that was determined based on an analysis of the histograms of correlation values obtained for the same patient and for different patients. If the current image is considered potentially to belong to a 'wrong' patient, then a warning sign with the probability of a 'wrong' patient is provided to alert radiology personnel. Our results indicate that at least half of the 'wrong' images in our database can be identified correctly with the method described in this study. The overall performance in terms of a receiver operating characteristic curve showed a high performance of the system. The results also indicate that some readings of 'wrong' images for a given patient in the PACS environment can be prevented by use of the method we developed. Therefore an automated warning system for patient recognition would be useful in correcting 'wrong' images stored in the PACS environment.
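The core of the matching step, normalized cross-correlation maximized over small shifts, can be sketched as follows. The rotation search from the study is omitted, shifts wrap around for simplicity, and the image values are illustrative.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size grayscale images
    given as lists of rows; 1.0 means a perfect match."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    n = len(fa)
    ma, mb = sum(fa) / n, sum(fb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return num / (da * db) if da > 0 and db > 0 else 0.0

def best_match(prev, curr, max_shift=2):
    """Maximum correlation of prev against curr over small x/y shifts."""
    h, w = len(curr), len(curr[0])
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = [[curr[(y + dy) % h][(x + dx) % w] for x in range(w)]
                       for y in range(h)]
            best = max(best, ncc(prev, shifted))
    return best
```

A best-match value above a calibrated threshold (the study's histograms overlapped above about 0.80) would indicate the same patient; below it, a warning would be raised.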
Genetic Algorithm-based Affine Parameter Estimation for Shape Recognition
Directory of Open Access Journals (Sweden)
Yuxing Mao
2014-06-01
Full Text Available Shape recognition is a classically difficult problem because of the affine transformation between two shapes. The current study proposes an affine parameter estimation method for shape recognition based on a genetic algorithm (GA. The contributions of this study are focused on the extraction of affine-invariant features, the individual encoding scheme, and the fitness function construction policy for a GA. First, the affine-invariant characteristics of the centroid distance ratios (CDRs of any two opposite contour points to the barycentre are analysed. Using different intervals along the azimuth angle, the different numbers of CDRs of two candidate shapes are computed as representations of the shapes, respectively. Then, the CDRs are selected based on predesigned affine parameters to construct the fitness function. After that, a GA is used to search for the affine parameters with optimal matching between candidate shapes, which serve as actual descriptions of the affine transformation between the shapes. Finally, the CDRs are resampled based on the estimated parameters to evaluate the similarity of the shapes for classification. The experimental results demonstrate the robust performance of the proposed method in shape recognition with translation, scaling, rotation and distortion.
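The centroid distance ratio (CDR) feature can be sketched as follows. Note the caveat in the comment: strict affine invariance holds only when opposite sample points lie at opposite azimuths from the barycentre, which the contour resampling described in the abstract ensures; the coordinates below are illustrative.

```python
import math

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def cdrs(points):
    """Centroid distance ratios (CDRs) of opposite contour points.
    Expects an even number of points. Affine invariance requires the
    contour to be resampled so that opposite points sit at opposite
    azimuths from the barycentre; similarity invariance always holds."""
    c = centroid(points)
    half = len(points) // 2
    def dist(p):
        return math.hypot(p[0] - c[0], p[1] - c[1])
    return [dist(points[i]) / dist(points[i + half]) for i in range(half)]
```

Under a similarity transform (uniform scaling, rotation, translation) all centroid distances scale by the same factor, so the ratios are unchanged for any contour.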
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.
Wang, Fei; Li, Hong; Lu, Mingquan
2017-06-30
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.
Directory of Open Access Journals (Sweden)
Ömer Faruk Erin
2016-03-01
Full Text Available Objective: This study was designed to longitudinally demonstrate the rate and epidemiology of hospitalized burn patients in Sivas city center within 6 months. The second aim was to compare the results of the current study with those of a previously held community-based survey in the same region. Material and Methods: Patients who were hospitalized due to burn injuries in Sivas city for six months were longitudinally evaluated. Epidemiological data of these patients were analyzed. Results: During the course of the study, 87 patients (49 males and 38 females) were hospitalized. The ratio of burn patients to the total number of hospitalized patients was 0.38%. The most common etiologic factor was scalds (70.1%). Burns generally took place in the kitchen (41.4%) and living room (31.4%), and majority of the patients received cold water as first-aid treatment at the time of injury. The vast majority of patients were discharged from the hospital without the need of surgical intervention (83.9%), and the duration of treatment was between 1 and 14 days for 73.6% of the patients. Sixty patients (68.9%) had a total burn surface area under 10%. The total cost of the hospitalization period of these patients was 137.225 Turkish Lira (83.308–92.908$), and the average cost per patient was 1.577 Turkish Lira (957–1067$). Conclusion: Our study revealed a considerable inconsistency when compared with the results of the community-based survey, which had been previously conducted in the same region. We concluded that hospital-based studies are far from reflecting the actual burn trauma potential of a given district in the absence of a reliable, standard, nation-wide record system. Population-based surveys should be encouraged to make an accurate assessment of burn rates in countries lacking reliable record systems.
FPSoC-Based Architecture for a Fast Motion Estimation Algorithm in H.264/AVC
Directory of Open Access Journals (Sweden)
Obianuju Ndili
2009-01-01
Full Text Available There is an increasing need for high quality video on low power, portable devices. Possible target applications range from entertainment and personal communications to security and health care. While H.264/AVC answers the need for high quality video at lower bit rates, it is significantly more complex than previous coding standards and thus results in greater power consumption in practical implementations. In particular, motion estimation (ME) in H.264/AVC consumes the largest share of power in an H.264/AVC encoder. It is therefore critical to speed up integer ME in H.264/AVC via fast motion estimation (FME) algorithms and hardware acceleration. In this paper, we present our hardware-oriented modifications to a hybrid FME algorithm, our architecture based on the modified algorithm, and our implementation and prototype on a PowerPC-based Field Programmable System on Chip (FPSoC). Our results show that the modified hybrid FME algorithm on average outperforms previous state-of-the-art FME algorithms, while its losses when compared with FSME, in terms of PSNR performance and computation time, are insignificant. We show that although our implementation platform is FPGA-based, our implementation results compare favourably with previous architectures implemented on ASICs. Finally, we also show an improvement over some existing architectures implemented on FPGAs.
Supersensitive ancilla-based adaptive quantum phase estimation
Larson, Walker; Saleh, Bahaa E. A.
2017-10-01
The supersensitivity attained in quantum phase estimation is known to be compromised in the presence of decoherence. This is particularly patent at blind spots—phase values at which sensitivity is totally lost. One remedy is to use a precisely known reference phase to shift the operation point to a less vulnerable phase value. Since this is not always feasible, we present here an alternative approach based on combining the probe with an ancillary degree of freedom containing adjustable parameters to create an entangled quantum state of higher dimension. We validate this concept by simulating a configuration of a Mach-Zehnder interferometer with a two-photon probe and a polarization ancilla of adjustable parameters, entangled at a polarizing beam splitter. At the interferometer output, the photons are measured after an adjustable unitary transformation in the polarization subspace. Through calculation of the Fisher information and simulation of an estimation procedure, we show that optimizing the adjustable polarization parameters using an adaptive measurement process provides globally supersensitive unbiased phase estimates for a range of decoherence levels, without prior information or a reference phase.
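The blind-spot behaviour can already be seen in a bare two-outcome interferometer model. The sketch below computes the classical Fisher information for detection probabilities p± = (1 ± V cos φ)/2; it is an illustrative toy with an assumed visibility V, not the paper's two-photon ancilla scheme:

```python
import numpy as np

def fisher_information(phi, V=0.9):
    """Classical Fisher information for a two-outcome interferometer with
    detection probabilities p± = (1 ± V*cos(phi))/2. Illustrative toy
    (assumed visibility V), not the paper's entangled-ancilla scheme."""
    p_plus = (1 + V * np.cos(phi)) / 2
    p_minus = 1 - p_plus
    dp = -V * np.sin(phi) / 2  # derivative of p_plus w.r.t. phi
    return dp**2 / p_plus + dp**2 / p_minus

# Sensitivity (1/F bounds the estimator variance, by Cramer-Rao) is best
# near phi = pi/2 and collapses near phi = 0: a "blind spot".
F_mid = fisher_information(np.pi / 2)
F_blind = fisher_information(1e-6)
```

Shifting the operation point away from such blind spots is exactly what the reference-phase remedy does, and what the adjustable ancilla parameters achieve without a reference.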
REDRAW-Based Evapotranspiration Estimation in Chongli, North China
Zhang, Z.; Wang, Z.
2017-12-01
Evapotranspiration (ET) is a key component of the hydrological cycle, and spatial estimates of ET are important elements of atmospheric circulation and hydrologic models. Quantifying ET over a large region is significant for water resources planning, hydrologic water balances, water rights management, and water division. In this study, ET was estimated using the REDRAW model in Chongli in 2014. REDRAW is a satellite-based balance algorithm with a reference dry and wet limits model, developed to estimate ET. Remote sensing data obtained from MODIS and meteorological data from the China Meteorological Data Sharing Service System were used in the ET model. In order to analyze the distribution and time variation of ET over the study region, daily, monthly, and yearly ET were calculated for the study area, as was the ET of different land cover types. Monthly ET was low in winter and high in the other seasons, reaching its maximum in August and showing a high monthly difference. The ET value of water bodies was the highest and that of barren or sparse vegetation was the lowest, which accords with actual local conditions. Evaluating the spatiotemporal distribution of actual ET can assist in understanding the regularity of water consumption in the region and the effect of different land cover types, which helps to establish links between land use, water allocation, and water use planning in the study region. Due to the groundwater recession in north China, the evaluation of regional total water resources becomes increasingly essential, and the results of this study can be used to plan water use. As Chongli will prepare the ski slopes for the 2022 Winter Olympics, accurate estimation of actual ET can efficiently resolve water conflicts and relieve water scarcity.
Laparoscopy After Previous Laparotomy
Directory of Open Access Journals (Sweden)
Zulfo Godinjak
2006-11-01
Full Text Available Following abdominal surgery, extensive adhesions often occur and can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veress needle in the umbilical region is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section, or abdominal war injuries were the most common causes of previous laparotomy. During these operations, and while entering the abdominal cavity, we experienced no complications, although in 7 patients we converted to laparotomy following diagnostic laparoscopy. In all patients, the Veress needle and trocar were inserted in the umbilical region, i.e., the closed laparoscopy technique. In no patient were adhesions found in the umbilical region, and no abdominal organs were injured.
Taylor, Terence E; Lacalle Muls, Helena; Costello, Richard W; Reilly, Richard B
2018-01-01
Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have previously been employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates the inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity in the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be
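The power-law calibration step described above amounts to a linear fit in log-log space. The sketch below uses synthetic data; the function and variable names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def fit_power_law(envelope, flow):
    """Fit flow = a * envelope**b by linear regression in log-log space,
    mirroring the power-law calibration model described above."""
    b, log_a = np.polyfit(np.log(envelope), np.log(flow), 1)
    return np.exp(log_a), b

# Synthetic single calibration inhalation: flow = 2 * envelope^0.5
env = np.linspace(0.1, 1.0, 50)
flow = 2.0 * env**0.5
a, b = fit_power_law(env, flow)
est_flow = a * env**b  # apply the calibrated model to new envelopes
```

Once (a, b) are fitted from the single calibration recording, the model converts the acoustic envelope of any later inhalation directly into an estimated flow signal.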
Cloud Base Height Estimation from ISCCP Cloud-Type Classification Applied to A-Train Data
Directory of Open Access Journals (Sweden)
Yao Liang
2017-01-01
Full Text Available Cloud base height (CBH) is an important cloud macro parameter that plays a key role in global radiation balance and aviation flight. Building on a previous algorithm, CBH is estimated by combining measurements from CloudSat/CALIPSO and MODIS based on the International Satellite Cloud Climatology Project (ISCCP) cloud-type classification and a weighted distance algorithm. Additional constraints on cloud water path (CWP) and cloud top height (CTH) are introduced. The combined algorithm takes advantage of active and passive remote sensing to effectively estimate CBH in a wide-swath imagery where the cloud vertical structure details are known only along the curtain slice of the nonscanning active sensors. Comparisons between the estimated and observed CBHs show high correlation. The coefficient of association (R²) is 0.8602 with separation distance between donor and recipient points in the range of 0 to 100 km and falls off to 0.5856 when the separation distance increases to the range of 401 to 600 km. Also, differences are mainly within 1 km when separation distance ranges from 0 km to 600 km. The CBH estimation method was applied to the 3D cloud structure of Tropical Cyclone Bill, and the method is further assessed by comparing CTH estimated by the algorithm with the MODIS CTH product.
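The core of the donor/recipient idea can be sketched in a few lines: a passive-imager pixel borrows the CBH of the nearest active-sensor pixel of the same cloud type. This is a simplified stand-in (toy coordinates and types); the real algorithm also applies the CWP/CTH constraints and distance weighting:

```python
import numpy as np

def donate_cbh(recipient_xy, donors_xy, donors_cbh, donors_type, rec_type):
    """Assign a CBH to a passive-imager (recipient) pixel from the nearest
    active-sensor (donor) pixel sharing its ISCCP cloud type."""
    same = [k for k in range(len(donors_type)) if donors_type[k] == rec_type]
    dist = [np.hypot(donors_xy[k][0] - recipient_xy[0],
                     donors_xy[k][1] - recipient_xy[1]) for k in same]
    return donors_cbh[same[int(np.argmin(dist))]]

donors_xy = [(0, 0), (10, 0), (50, 50)]   # km coordinates (toy data)
donors_cbh = [1.2, 2.0, 0.8]              # km
donors_type = ["Cu", "St", "Cu"]
cbh = donate_cbh((8, 1), donors_xy, donors_cbh, donors_type, "Cu")
```

The falling R² with separation distance reported above is the natural cost of this borrowing: the farther the donor, the weaker its relationship to the recipient's true cloud base.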
Estimating cetacean carrying capacity based on spacing behaviour.
Directory of Open Access Journals (Sweden)
Janelle E Braithwaite
Full Text Available Conservation of large ocean wildlife requires an understanding of how they use space. In Western Australia, the humpback whale (Megaptera novaeangliae) population is growing at a minimum rate of 10% per year. An important consideration for conservation-based management in space-limited environments, such as coastal resting areas, is the potential expansion in area use by humpback whales if the carrying capacity of existing areas is exceeded. Here we determined the theoretical carrying capacity of a known humpback resting area based on the spacing behaviour of pods, where a resting area is defined as a sheltered embayment along the coast. Two separate approaches were taken to estimate this distance. The first used the median nearest neighbour distance between pods in relatively dense areas, giving a spacing distance of 2.16 km (± 0.94). The second estimated the spacing distance as the radius at which 50% of the population included no other pods, and was calculated as 1.93 km (range: 1.62–2.50 km). Using these values, the maximum number of pods able to fit into the resting area was 698 and 872 pods, respectively. Given an average observed pod size of 1.7 whales, this equates to a carrying capacity estimate of between 1187 and 1482 whales at any given point in time. This study demonstrates that whale pods do maintain a distance from each other, which may determine the number of animals that can occupy aggregation areas where space is limited. This requirement for space has implications when considering boundaries for protected areas or competition for space with the fishing and resources sectors.
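The capacity arithmetic itself is simple: pack one pod per circle of radius equal to the spacing distance, then multiply by mean pod size. The resting-area value below is a hypothetical placeholder (the abstract does not give the embayment's area), and the one-pod-per-circle packing rule is an assumed simplification of the paper's method:

```python
import math

def carrying_capacity(area_km2, spacing_km, mean_pod_size=1.7):
    """Pods that fit when each claims a circle of radius spacing_km,
    times the mean pod size. Back-of-envelope sketch only."""
    pods = area_km2 / (math.pi * spacing_km**2)
    return pods, pods * mean_pod_size

pods, whales = carrying_capacity(area_km2=10000.0, spacing_km=1.93)
```

With the placeholder area, the pod count lands on the same scale as the paper's 872-pod figure; substituting the actual embayment area and packing rule would reproduce the published estimates.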
Estimation of fire emissions from satellite-based measurements
Ichoku, C. M.; Kaufman, Y. J.
2004-12-01
Biomass burning is a worldwide phenomenon affecting many vegetated parts of the globe regularly. Fires emit large quantities of aerosol and trace gases into the atmosphere, thus influencing atmospheric chemistry and climate. Traditional methods of fire emissions estimation achieved only limited success, because they were based on peripheral information such as rainfall patterns, vegetation types and changes, agricultural practices, and surface ozone concentrations. During the last several years, rapid developments in satellite remote sensing have allowed more direct estimation of smoke emissions using remotely sensed fire data. However, current methods use fire pixel counts or burned areas, thereby depending on the accuracy of independent estimations of the biomass fuel loadings, combustion efficiency, and emission factors. With the enhanced radiometric range of its 4-micron fire channel, the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor, which flies aboard both of the Earth Observing System (EOS) Terra and Aqua satellites, is able to measure the rate of release of fire radiative energy (FRE) in MJ/s (something that older sensors could not do). MODIS also measures aerosol distribution. Taking advantage of these new resources, we have developed a procedure combining MODIS fire and aerosol products to derive FRE-based smoke emission coefficients (Ce in kg/MJ) for different regions of the globe. These coefficients simply multiply FRE from MODIS to derive the emitted smoke aerosol mass. Results from this novel methodology are very encouraging. For instance, it was found that the smoke total particulate mass emission coefficient for the Brazilian Cerrado ecosystem (approximately 0.022 kg/MJ) is about twice the value for North America or Australia, but about 50 percent lower than the value for Zambia in southern Africa.
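The emission calculation itself reduces to one multiplication. The sketch below applies the Cerrado coefficient quoted above to a hypothetical integrated FRE value (the FRE total is invented for illustration):

```python
# M_smoke = Ce * FRE: the emission coefficient (kg/MJ) times the
# integrated fire radiative energy (MJ) gives emitted smoke mass (kg).
Ce_cerrado = 0.022   # kg/MJ, value quoted in the abstract
fre_MJ = 5.0e4       # hypothetical integrated FRE for a fire event
smoke_kg = Ce_cerrado * fre_MJ
```

This directness is the method's appeal: no separate fuel loading, combustion efficiency, or emission factor estimates are needed once Ce is known for a region.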
Flexible Triangle Search Algorithm for Block-Based Motion Estimation
Directory of Open Access Journals (Sweden)
Andreas Antoniou
2007-01-01
Full Text Available A new fast algorithm for block-based motion estimation, the flexible triangle search (FTS) algorithm, is presented. The algorithm is based on the simplex method of optimization adapted to an integer grid. The proposed algorithm is highly flexible due to its ability to quickly change its search direction and to move towards the target of the search criterion. It is also capable of increasing or decreasing its search step size to allow coarser or finer search. Unlike other fast search algorithms, the FTS can escape from inferior local minima and thus converge to better solutions. The FTS was implemented as part of the H.264 encoder and was compared with several other block matching algorithms. The results obtained show that the FTS can reduce the number of block matching comparisons by around 30–60% with negligible effect on the image quality and compression ratio.
MVDR Algorithm Based on Estimated Diagonal Loading for Beamforming
Directory of Open Access Journals (Sweden)
Yuteng Xiao
2017-01-01
Full Text Available Beamforming algorithms are widely used in many signal processing fields. At present, the typical beamforming algorithm is MVDR (Minimum Variance Distortionless Response). However, the performance of the MVDR algorithm relies on an accurate covariance matrix, and it declines dramatically when the covariance matrix is inaccurate. To solve this problem, after studying the beamforming array signal model and the MVDR algorithm, we improve the MVDR algorithm with estimated diagonal loading for beamforming. An MVDR optimization model based on diagonal-loading compensation is established, and the interval of the diagonal loading compensation value is deduced on the basis of matrix theory. The optimal diagonal loading value in the interval is then determined experimentally. The experimental results show that, compared with existing algorithms, the proposed algorithm is practical and effective.
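The diagonally loaded MVDR weights have a closed form: w = (R + μI)⁻¹a / (aᴴ(R + μI)⁻¹a). The sketch below uses a toy array and an illustrative loading value μ; the paper's contribution is choosing μ from a deduced interval, which this sketch does not reproduce:

```python
import numpy as np

def mvdr_weights(R, a, loading=0.0):
    """MVDR beamformer weights with diagonal loading:
    w = (R + mu*I)^-1 a / (a^H (R + mu*I)^-1 a).
    Loading regularizes an inaccurate sample covariance matrix."""
    Rl = R + loading * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy 4-element array with a broadside steering vector.
n = 4
a = np.ones(n, dtype=complex)
R = np.eye(n) + 0.1 * np.ones((n, n))  # imperfect sample covariance
w = mvdr_weights(R, a, loading=0.05)
gain = abs(w.conj() @ a)  # distortionless constraint: unit gain
```

Whatever the loading value, the normalization preserves the distortionless constraint toward the steering direction; the loading only trades interference suppression for robustness.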
A frequency-based parameter for rapid estimation of magnitude
Atefi, Sanam; Heidari, Reza; Mirzaei, Noorbakhsh; Siahkoohi, Hamid Reza
2017-12-01
This study introduces a new frequency parameter, called τ_{fcwt}, which can be used to estimate earthquake magnitude on the basis of the first few seconds of P-waves, using the waveforms of earthquakes occurring in Japan. This new parameter is obtained using the continuous wavelet transform as a tool for extracting the frequency content carried by the first few seconds of the P-wave. The empirical relationship between the logarithm of τ_{fcwt} within the initial 4 s of a waveform and magnitude was obtained. To evaluate the precision of τ_{fcwt}, we also calculated the parameters τ_p^{max} and τ_c. The average absolute values of observed and estimated magnitude differences (|M_{est} − M_{obs}|) were 0.43, 0.49, and 0.66 units of magnitude, as determined using τ_p^{max}, τ_c, and τ_{fcwt}, respectively. For earthquakes with magnitudes greater than 6, these values were 0.34, 0.56, and 0.44 units of magnitude, as derived using τ_p^{max}, τ_c, and τ_{fcwt}, respectively. The τ_{fcwt} parameter was more precise in determining the magnitude of moderate and small earthquakes than the τ_c-based approach. For a general range of magnitudes, however, the τ_p^{max}-based method showed more acceptable precision than the other two parameters.
Dynamic soft tissue deformation estimation based on energy analysis
Gao, Dedong; Lei, Yong; Yao, Bin
2016-10-01
The needle placement accuracy of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions. It is therefore necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics, where a geometry model is presented to quantitatively approximate the volume of tissue deformation. An energy-based method is applied to the dynamic process of needle insertion into soft tissue, and the volume of a cone is used to quantitatively approximate the deformation on the surface of the soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interaction. A needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, was constructed, and an image-based method for measuring the depth and radius of the soft tissue surface deformation was introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is derived from the law of conservation of energy, with the volume of tissue deformation obtained using image-based measurements. The experiments were performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the energy-based analytical fitted model can predict the volume of soft tissue deformation; the root mean squared errors of the fitting model and experimental data are 0.61 and 0.25 at velocities of 2.50 mm/s and 5.00 mm/s. The estimated parameters of the soft tissue surface deformations are proven to be useful
International Nuclear Information System (INIS)
Li, Chengyu; Shao, Shuai; Yang, Lili; Yu, Mingliang
2016-01-01
Du and Lin (2015) argued that the estimation model of the economy-wide energy rebound effect proposed by Shao et al. (2014) should be revised and provided an alternative approach, which they considered to be more consistent with the definition of the rebound effect. However, in this comment, we do not find a valid correction or modification of our original model, because their criticism does not originate from the corresponding mechanism in Shao et al. (2014), and their estimation formula has a different benchmark from ours. Moreover, their data samples were also different from ours, making the results incomparable, and some results in the comment are irrational. Even allowing for the different estimation formulas of the two studies, using the same estimation method and data sample, the comparison shows that the problem they claimed in the estimation formula of our previous study does not really exist. We argue that this comment is not consistent with the principle of the rebound effect. Actually, their work can only be regarded as proposing an alternative approach for estimating the rebound effect. Therefore, their argument is not enough to overturn our previous study. - Highlights: • A reply to Du and Lin (2015), who questioned our previous study, is provided. • Their criticism does not originate from our corresponding mechanism. • Their estimation formula has a different benchmark from ours. • Different data samples in the two papers make their results incomparable. • Their argument is not enough to overturn our previous study.
DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA
Huang, G; Nix, AR; Armour, SMD
2010-01-01
Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel and noise variance estimates are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...
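The denoise estimator family mentioned above exploits the fact that the true channel impulse response is confined to the cyclic prefix, while the least-squares estimation noise spreads over all time-domain taps. A generic sketch (not the paper's exact formulation; the channel and noise here are synthetic):

```python
import numpy as np

def denoise_channel_estimate(ls_est, cp_len):
    """DFT-based denoising of an LS channel estimate: transform to the
    time domain, keep only taps within the cyclic prefix (where the true
    channel energy lies), zero the rest, and transform back."""
    h_time = np.fft.ifft(ls_est)
    h_time[cp_len:] = 0  # taps beyond the CP carry only noise
    return np.fft.fft(h_time)

rng = np.random.default_rng(0)
N, cp = 64, 8
h = np.zeros(N, dtype=complex)
h[:4] = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # 4-tap channel
H = np.fft.fft(h)
H_ls = H + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
H_dn = denoise_channel_estimate(H_ls, cp)
err_ls = np.linalg.norm(H_ls - H)
err_dn = np.linalg.norm(H_dn - H)  # denoising reduces the error
```

Zeroing the tail removes the fraction of noise energy that fell outside the cyclic prefix while leaving the channel untouched, which is why the denoised estimate is strictly better here.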
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In the absence of available population data, small area estimation models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures, including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that, without information from prior time periods, non-parametric tree-based models produced more accurate predictions than conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health, and development policies.
Model Based Analysis of the Variance Estimators for the Combined ...
African Journals Online (AJOL)
In this paper we study the variance estimators for the combined ratio estimator under an appropriate asymptotic framework. An alternative bias-robust variance estimator, different from that suggested by Valliant (1987), is derived. Several variance estimators are compared in an empirical study using a real population.
On the Estimation of Heritability with Family-Based and Population-Based Samples
Directory of Open Access Journals (Sweden)
Youngdoe Kim
2015-01-01
Full Text Available For a family-based sample, the phenotypic variance-covariance matrix can be parameterized to include the variance of a polygenic effect that is then estimated using a variance component analysis. However, with the advent of large-scale genomic data, the genetic relationship matrix (GRM) can be estimated and utilized to parameterize the variance of a polygenic effect for population-based samples. Therefore narrow-sense heritability, which is both population and trait specific, can be estimated with both population- and family-based samples. In this study we estimate heritability from both family-based and population-based samples, collected in Korea, and the heritability estimates from the pooled samples were, for height, 0.60; body mass index (BMI), 0.32; log-transformed triglycerides (log TG), 0.24; total cholesterol (TCHL), 0.30; high-density lipoprotein (HDL), 0.38; low-density lipoprotein (LDL), 0.29; systolic blood pressure (SBP), 0.23; and diastolic blood pressure (DBP), 0.24. Furthermore, we found differences in how heritability is estimated: in particular, the amount of variance attributable to common environment in twins can be substantial, which indicates that heritability estimates should be interpreted with caution.
The Software Cost Estimation Method Based on Fuzzy Ontology
Directory of Open Access Journals (Sweden)
Plecka Przemysław
2014-12-01
Full Text Available In the course of the sales process for Enterprise Resource Planning (ERP) systems, it often turns out that the standard system must be extended or changed (modified) according to specific customer requirements. Therefore, suppliers face the problem of determining the cost of the additional work. Most methods of cost estimation bring satisfactory results only at the stage of pre-implementation analysis. However, suppliers need to know the estimated cost as early as the stage of trade talks. During contract negotiations, they expect not only information about the cost of the work, but also about the risk of exceeding that cost and the margin of safety. One method that gives more accurate results at the stage of trade talks is based on an ontology of implementation costs. This paper proposes a modification of that method involving the use of fuzzy attributes, classes, instances, and relations in the ontology. The result provides not only the value of the work, but also the minimum and maximum expected cost and the most likely range of costs. This solution allows suppliers to negotiate contracts effectively and increases the chances of successful completion of the project.
Accurate tempo estimation based on harmonic + noise decomposition
Alonso, Miguel; Richard, Gael; David, Bertrand
2006-12-01
We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming stage searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as the tempo. Our proposal is validated using a manually annotated test base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
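The periodicity estimation step can be illustrated with a much simpler device: autocorrelation of an onset-strength signal, picking the most salient lag within a plausible BPM range. This is a minimal stand-in for the paper's salience calculation and multipath search, on a synthetic accent signal:

```python
import numpy as np

def estimate_tempo(onset_env, fs, bpm_range=(60, 180)):
    """Estimate tempo from an onset-strength (accentuation) signal via
    autocorrelation: the most salient lag in the BPM range wins."""
    ac = np.correlate(onset_env, onset_env, mode="full")[len(onset_env) - 1:]
    lo = int(fs * 60 / bpm_range[1])  # shortest candidate period (samples)
    hi = int(fs * 60 / bpm_range[0])  # longest candidate period (samples)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * fs / lag

# Synthetic accent signal: an impulse every 0.5 s (120 BPM) at fs = 100 Hz
fs = 100
env = np.zeros(fs * 10)
env[::fs // 2] = 1.0
bpm = estimate_tempo(env, fs)
```

Real music requires the additional machinery above (harmonic + noise separation, tracking consistency through time) because the accent signal is far noisier and the periodicity ambiguous across metrical levels.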
Radar-Derived Quantitative Precipitation Estimation Based on Precipitation Classification
Directory of Open Access Journals (Sweden)
Lili Yang
2016-01-01
Full Text Available A method for improving radar-derived quantitative precipitation estimation is proposed. Tropical vertical profiles of reflectivity (VPRs) are first determined from multiple VPRs. Upon identifying a tropical VPR, the event can be further classified as either tropical-stratiform or tropical-convective rainfall by a fuzzy logic (FL) algorithm. Based on the precipitation-type fields, the reflectivity values are converted into rainfall rate using a Z-R relationship. In order to evaluate the performance of this rainfall classification scheme, three experiments were conducted using three months of data and two study cases. In Experiment I, the Weather Surveillance Radar-1988 Doppler (WSR-88D) default Z-R relationship was applied. In Experiment II, the precipitation regime was separated into convective and stratiform rainfall using the FL algorithm, and the corresponding Z-R relationships were used. In Experiment III, the precipitation regime was separated into convective, stratiform, and tropical rainfall, and the corresponding Z-R relationships were applied. The results show that the rainfall rates obtained from all three experiments match closely with the gauge observations, although Experiment II reduced the underestimation seen in Experiment I. Experiment III significantly reduced this underestimation and generated the most accurate radar estimates of rain rate among the three experiments.
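The Z-R conversion at the heart of the scheme is a one-line power law, Z = aR^b, inverted for R. The sketch below uses the WSR-88D default convective pair (a=300, b=1.4) and the Rosenfeld tropical pair (a=250, b=1.2); the choice of 40 dBZ is illustrative:

```python
def rain_rate(z_dbz, a=300.0, b=1.4):
    """Invert Z = a * R**b for rain rate R (mm/h) from reflectivity in dBZ.
    Defaults are the WSR-88D convective relationship; the scheme above
    selects (a, b) per classified precipitation type."""
    z_linear = 10.0 ** (z_dbz / 10.0)  # dBZ -> linear mm^6/m^3
    return (z_linear / a) ** (1.0 / b)

# The same 40 dBZ echo maps to a higher rate under a tropical Z-R,
# which is why misclassifying tropical rain as default convective
# rain produces the underestimation discussed above.
r_default = rain_rate(40.0)
r_tropical = rain_rate(40.0, a=250.0, b=1.2)
```

Classification therefore matters not because the radar measurement changes, but because the (a, b) pair applied to the same reflectivity changes the retrieved rain rate substantially.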
Real-time yield estimation based on deep learning
Rahnemoonfar, Maryam; Sheppard, Clay
2017-05-01
Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation based on manual counting of fruits is a very time-consuming and expensive process, and it is not practical for large fields. Robotic systems, including Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields; however, efficient analysis of those data is still a challenging task. Computer vision approaches currently face several challenges in the automatic counting of fruits or flowers, including occlusion caused by leaves, branches, or other fruits, variance in natural illumination, and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination, and scale. Experimental results in comparison to the state-of-the-art show the effectiveness of our algorithm.
Capacitance Online Estimation Based on Adaptive Model Observer
Directory of Open Access Journals (Sweden)
Cen Zhaohui
2016-01-01
Full Text Available As basic components in electrical and electronic devices, capacitors are very common in electrical circuits. Conventional capacitors such as electrolytic capacitors are prone to degradation, aging, and fatigue due to long-time operation and external stresses, both mechanical and electrical. In this paper, a novel online capacitance measurement/estimation approach is proposed. Firstly, an Adaptive Model Observer (AMO) is designed based on the capacitor's circuit equations. Secondly, the AMO's stability and convergence are analysed and discussed. Finally, capacitors with different capacitances and different initial voltages in a buck converter topology are tested and validated. Simulation results demonstrate the effectiveness and superiority of the proposed approach.
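The idea of an adaptive observer for capacitance can be sketched with the simplest capacitor model, i = C dv/dt, and a gradient-type parameter update. This is an illustrative stand-in, not the paper's exact AMO equations (which the abstract does not give); the waveform, gain, and initial guess are assumptions:

```python
import numpy as np

def estimate_capacitance(v, i, dt, gamma, C0):
    """Gradient-type adaptive estimation of C from the model i = C*dv/dt:
    drive the estimate with the model output error at each sample."""
    C_hat = C0
    for k in range(1, len(v)):
        dv = (v[k] - v[k - 1]) / dt      # numerical derivative of voltage
        err = i[k] - C_hat * dv          # model output error
        C_hat += gamma * err * dv * dt   # gradient update
    return C_hat

# Simulated 100 uF capacitor driven by a 50 Hz sinusoidal voltage.
C_true, dt, f = 100e-6, 1e-4, 50.0
t = np.arange(0.0, 1.0, dt)
v = np.sin(2 * np.pi * f * t)
i = C_true * 2 * np.pi * f * np.cos(2 * np.pi * f * t)  # i = C dv/dt
C_hat = estimate_capacitance(v, i, dt, gamma=1e-3, C0=50e-6)
```

Starting from a 50% low initial guess, the estimate converges to the true value within the one-second window; a full AMO would additionally handle converter switching and measurement noise.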
Image Jacobian Matrix Estimation Based on Online Support Vector Regression
Directory of Open Access Journals (Sweden)
Shangqin Mao
2012-10-01
Full Text Available Research into robotic visual servoing is an important area in the field of robotics. It has proven difficult to achieve successful results for machine vision and robotics in unstructured environments without using any a priori camera or kinematic models. In uncalibrated visual servoing, image Jacobian matrix estimation methods can be divided into two groups: online methods and offline methods. The offline method is not appropriate for most natural environments. The online method is robust but rough. Moreover, if the image feature configuration changes, it needs to restart the approximating procedure. A novel approach based on an online support vector regression (OL-SVR) algorithm is proposed which overcomes these drawbacks and combines the virtues just mentioned.
SEE rate estimation based on diffusion approximation of charge collection
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. This paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.
Disk storage management for LHCb based on Data Popularity estimator
INSPIRE-00545541; Charpentier, Philippe; Ustyuzhanin, Andrey
2015-12-23
This paper presents an algorithm providing recommendations for optimizing the LHCb data storage. The LHCb data storage system is a hybrid system. All datasets are kept as archives on magnetic tapes. The most popular datasets are kept on disks. The algorithm takes the dataset usage history and metadata (size, type, configuration etc.) to generate a recommendation report. This article presents how we use machine learning algorithms to predict future data popularity. Using these predictions it is possible to estimate which datasets should be removed from disk. We use regression algorithms and time series analysis to find the optimal number of replicas for datasets that are kept on disk. Based on the data popularity and the number of replicas optimization, the algorithm minimizes a loss function to find the optimal data distribution. The loss function represents all requirements for data distribution in the data storage system. We demonstrate how our algorithm helps to save disk space and to reduce waiting times ...
Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm
Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd
2018-04-01
Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas. Meanwhile, soil resistivity is a measure of a soil's resistance to electrical flow. For a particular site, usually only limited N value data are available; in contrast, resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between N value and resistivity value. Yet no existing method is able to interpret resistivity data for estimation of N value. Thus, the aim is to develop a method for estimating N value using resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination (R²) and the mean absolute error (MAE). Analysis of the results found that this method can estimate N value (best R² = 0.85 and best MAE = 0.54) given that the constraint Δl̄_ref is satisfied. The results suggest that the ANN-PSO method can be used to estimate N value with good accuracy.
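A minimal sketch of the ANN-PSO idea, assuming a one-hidden-neuron network and a synthetic resistivity-to-N relation; the borehole data, network architecture, and swarm settings of the paper are not reproduced here.

```python
import math, random

# Toy sketch of ANN-PSO (not the authors' implementation): a particle swarm
# tunes the weights of a tiny one-hidden-neuron network so it fits a synthetic
# resistivity -> N-value relation. All constants are illustrative.
random.seed(0)

data = [(r, 2.0 + 10.0 / (1.0 + math.exp(-(r - 50.0) / 10.0)))  # synthetic N(r)
        for r in range(10, 100, 5)]

def predict(w, r):
    h = math.tanh(w[0] * r / 100.0 + w[1])   # hidden neuron
    return w[2] * h + w[3]                   # linear output

def mse(w):
    return sum((predict(w, r) - n) ** 2 for r, n in data) / len(data)

# PSO: positions are weight vectors; velocities move toward personal/global bests.
dim, n_part = 4, 20
pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_part)]
vel = [[0.0] * dim for _ in range(n_part)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=mse)
initial_err = mse(gbest)
for _ in range(200):
    for i in range(n_part):
        for d in range(dim):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if mse(pos[i]) < mse(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest + [gbest], key=mse)
final_err = mse(gbest)
print(initial_err, final_err)
```

The global best is monotone by construction, so the fit error can only improve over the random initialization.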
Implementation of D-Spline-Based Incremental Performance Parameter Estimation Method with ppOpen-AT
Directory of Open Access Journals (Sweden)
Teruo Tanaka
2014-01-01
Full Text Available In automatic performance tuning (AT), a primary aim is to optimize performance parameters that are suitable for certain computational environments in ordinary mathematical libraries. For AT, an important issue is to reduce the estimation time required for optimizing performance parameters. To reduce the estimation time, we previously proposed the Incremental Performance Parameter Estimation method (IPPE method). This method estimates optimal performance parameters by inserting suitable sampling points based on computational results for a fitting function. As the fitting function, we introduced d-Spline, which is highly adaptable and requires little estimation time. In this paper, we report the implementation of the IPPE method with ppOpen-AT, a scripting language (a set of directives) with features that reduce the workload of developers of mathematical libraries that have AT features. To confirm the effectiveness of the IPPE method for runtime-phase AT, we applied the method to sparse matrix-vector multiplication (SpMV), in which the block size of the sparse matrix structure blocked compressed row storage (BCRS) was used as the performance parameter. The results from the experiment show that the cost of AT using the IPPE method in the runtime phase was negligibly small. Moreover, using the obtained optimal value, the execution time of the SpMV mathematical library was reduced by 44% when comparing compressed row storage and BCRS (block size 8).
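The incremental-sampling idea can be sketched with a quadratic fitting function standing in for d-Spline; the cost model below is a synthetic placeholder for real runtime measurements, and the stopping rule is an assumption.

```python
# Illustrative sketch of incremental performance-parameter estimation (not the
# d-Spline/ppOpen-AT code): measure a cost function at a few sampled parameter
# values, fit a curve, and insert the next sample at the predicted minimum.

def cost(block_size):                    # synthetic runtime, minimum at 8
    return 0.02 * (block_size - 8) ** 2 + 1.0

def fit_quadratic(xs, ys):
    """Least-squares fit y = a*x^2 + b*x + c via 3x3 normal equations."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)
    sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[s(4), s(3), s(2)], [s(3), s(2), s(1)], [s(2), s(1), n]]
    rhs = [sy(2), sy(1), sy(0)]
    for i in range(3):                   # Gaussian elimination, partial pivot
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    x = [0.0] * 3
    for i in (2, 1, 0):                  # back substitution
        x[i] = (rhs[i] - sum(A[i][c] * x[c] for c in range(i + 1, 3))) / A[i][i]
    return x                             # a, b, c

samples = [1, 4, 16]                     # initial sampling points
for _ in range(4):                       # incrementally insert new points
    a, b, _c = fit_quadratic(samples, [cost(x) for x in samples])
    nxt = max(1, round(-b / (2 * a)))    # vertex of the fitted parabola
    if nxt in samples:
        break
    samples.append(nxt)

best = min(samples, key=cost)
print(best)
```

Because the synthetic cost is itself quadratic, the very first fit already places the next sample at the true optimum (block size 8), after which sampling stops.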
Estimation of Alcohol Concentration of Red Wine Based on Cole-Cole Plot
Watanabe, Kota; Taka, Yoshinori; Fujiwara, Osamu
To evaluate the quality of wine, we previously measured the complex relative permittivity of wine in the frequency range from 10 MHz to 6 GHz with a network analyzer, and suggested the possibility that the maturity and alcohol concentration of wine can be estimated simultaneously from the Cole-Cole plot. Although the absolute accuracy has not yet been examined, this method would enable one to estimate the alcohol concentration of alcoholic beverages simply, without any distillation equipment. In this study, to investigate the estimation accuracy of the alcohol concentration of wine from its Cole-Cole plots, we measured the complex relative permittivity of pure water and diluted ethanol solutions from 100 MHz to 40 GHz, and obtained the dependence of the Cole-Cole plot parameters on alcohol concentration and temperature. Using these results as calibration data, we estimated the alcohol concentration of red wine from its Cole-Cole plots and compared it with values measured by a distillation method. As a result, we have confirmed that the estimated alcohol concentration of red wine agrees with the measured results to within an absolute error of less than 1%.
Ferreira, Natália Noronha; Perez, Taciane Alvarenga; Pedreiro, Liliane Neves; Prezotti, Fabíola Garavello; Boni, Fernanda Isadora; Cardoso, Valéria Maria de Oliveira; Venâncio, Tiago; Gremião, Maria Palmira Daflon
2017-10-01
This work aimed to develop a calcium alginate hydrogel as a pH-responsive delivery system for sustained release of polymyxin B (PMX) through the vaginal route. Two samples of sodium alginate from different suppliers were characterized. The molecular weight and M/G ratio determined were approximately 107 kDa and 1.93 for alginate_S, and 32 kDa and 1.36 for alginate_V. Polymer rheological investigations were further performed through the preparation of hydrogels. Alginate_V was selected for subsequent incorporation of PMX due to the acquisition of a pseudoplastic viscous system able to acquire a differential structure in a simulated vaginal microenvironment (pH 4.5). The PMX-loaded hydrogel (hydrogel_PMX) was engineered based on polyelectrolyte complex (PEC) formation between alginate and PMX, followed by crosslinking with calcium chloride. This system exhibited a morphology with variable pore sizes, ranging from 100 to 200 μm, and adequate syringeability. The hydrogel's liquid uptake ability in an acid environment was minimized by the previous PEC formation. In vitro tests evidenced the hydrogels' mucoadhesiveness. PMX release was pH-dependent and the system was able to sustain the release for up to 6 days. A burst release was observed at pH 7.4 and drug release was driven by anomalous transport, as determined by the Korsmeyer-Peppas model. At pH 4.5, drug release correlated with the Weibull model and drug transport was driven by Fickian diffusion. The calcium alginate hydrogels engineered by the previous formation of PECs proved to be a promising platform for sustained release of cationic drugs through vaginal administration.
Family-Oriented Cardiac Risk Estimator: A Java Web-Based Applet
Crouch, Michael A.; Jadhav, Ashwin
2003-01-01
We developed a Java applet that calculates four different estimates of a person's 10-year risk for heart attack: (1) an estimate based on the Framingham equation; (2) the Framingham equation estimate modified by C-reactive protein (CRP) level; (3) the Framingham estimate modified by family history of heart disease in parents or siblings; and (4) the Framingham estimate modified by both CRP and family heart disease history. This web-based, family-oriented cardiac risk estimator uniquely considers family history and CRP ...
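The four estimates can be sketched as a base risk multiplied by optional factors; the base risk function, the CRP threshold, and the multipliers below are invented placeholders, not the applet's validated coefficients.

```python
# Hypothetical sketch of the four estimates described above. The base risk
# shape and all adjustment factors are invented for illustration only.

def base_framingham_risk(age, total_chol, hdl, systolic_bp, smoker):
    """Toy 10-year risk (fraction), loosely shaped like a points model."""
    points = ((age - 40) * 0.3 + (total_chol - 180) * 0.02
              + (140 - hdl) * 0.02 + (systolic_bp - 120) * 0.05
              + (4 if smoker else 0))
    return max(0.01, min(0.6, points / 100.0))

def adjusted_risk(base, crp_mg_l=None, family_history=False,
                  crp_factor=1.5, family_factor=2.0):
    """Multiply the base estimate by optional CRP / family-history factors."""
    risk = base
    if crp_mg_l is not None and crp_mg_l > 3.0:  # "high" CRP cutoff (assumed)
        risk *= crp_factor
    if family_history:
        risk *= family_factor
    return min(risk, 1.0)

base = base_framingham_risk(age=55, total_chol=220, hdl=40,
                            systolic_bp=140, smoker=True)
both = adjusted_risk(base, crp_mg_l=5.0, family_history=True)
print(base, both)
```

Estimates (2)-(4) correspond to calling `adjusted_risk` with only the CRP argument, only the family-history flag, or both.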
Analysis of flux estimates based on C-13-labelling experiments
DEFF Research Database (Denmark)
Christensen, Bjarke; Gombert, Andreas Karoly; Nielsen, Jens
2002-01-01
metabolism were estimated. In the various samples, the estimates of the central metabolic pathways, the tricarboxylic acid cycle, the oxidative pentose phosphate pathway and the anaplerotic pathway, showed an unprecedented reproducibility. The high reproducibility was obtained with fractional labellings...
Fast and Robust CD and DGD Estimation Based on Data-Aided Channel Estimation
DEFF Research Database (Denmark)
Pittalà, Fabio; Hauske, Fabian N.; Ye, Yabin
2011-01-01
In this paper data-aided (DA) frequency domain (FD) channel estimation in a 2×2 multi-input-multi-output (MIMO) system is investigated. Using orthogonal training sequences, fast and robust CD and DGD estimation is demonstrated for a 112 Gbit/s PDM-QPSK system over a wide range of combined linear...
Frequency domain based LS channel estimation in OFDM based Power line communications
Bogdanović, Mario
2015-01-01
This paper is focused on low voltage power line communication (PLC) realization, with an emphasis on channel estimation techniques. The Orthogonal Frequency Division Multiplexing (OFDM) scheme is the preferred technology in PLC systems because it effectively combats the frequency-selective fading properties of the PLC channel. As channel estimation is one of the crucial problems in OFDM-based PLC systems because of the problematic PLC environment of signal attenuation and interference, the improved LS est...
A Derivative Based Estimator for Semiparametric Index Models
Donkers, A.C.D.; Schafgans, M.
2003-01-01
This paper proposes a semiparametric estimator for single- and multiple-index models. It provides an extension of the average derivative estimator to the multiple index model setting. The estimator uses the average of the outer product of derivatives and is shown to be root-N consistent and
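The average-outer-product-of-derivatives idea can be sketched numerically; here the regression function is known analytically (a toy two-index model), whereas in practice it would be a nonparametric estimate.

```python
import math

# Sketch of the average outer product of derivatives: estimate gradients of the
# regression function at sample points, then average their outer products. For
# a model y = f(b1'x, b2'x), every gradient lies in span{b1, b2}, so the
# averaged matrix has rank at most two (the number of indices).

def g(x):                                   # toy two-index model
    u1 = x[0] + 2.0 * x[1]                  # first index, b1 = (1, 2, 0)
    u2 = x[1] - x[2]                        # second index, b2 = (0, 1, -1)
    return math.tanh(u1) + 0.5 * u2 ** 2

def gradient(f, x, h=1e-5):
    """Central-difference gradient of f at x."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

points = [(0.1 * i, 0.05 * i, -0.1 * i) for i in range(-5, 6)]
d = 3
M = [[0.0] * d for _ in range(d)]           # average outer product of gradients
for x in points:
    gr = gradient(g, x)
    for i in range(d):
        for j in range(d):
            M[i][j] += gr[i] * gr[j] / len(points)
print(M)
```

The resulting matrix is symmetric and (up to finite-difference error) singular, reflecting that only two index directions carry information.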
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).
EEG-based Workload Estimation Across Affective Contexts
Directory of Open Access Journals (Sweden)
Christian eMühl
2014-06-01
Full Text Available Workload estimation from electroencephalographic (EEG) signals offers a highly sensitive tool to adapt the human-computer interaction to the user state. To create systems that work reliably in the complexity of the real world, robustness against contextual changes (e.g., mood) has to be achieved. To study the resilience of state-of-the-art EEG-based workload classification against stress, we devise a novel experimental protocol in which we manipulated the affective context (stressful/non-stressful) while the participant solved a task with two workload levels. We recorded self-ratings, behavior, and physiology from 24 participants to validate the protocol. We test the capability of different, subject-specific workload classifiers using either frequency-domain, time-domain, or both feature varieties to generalize across contexts. We show that the classifiers are able to transfer between affective contexts, though performance suffers independent of the feature domain used. However, cross-context training is a simple and powerful remedy, allowing the extraction of features in all studied feature varieties that are more resilient to task-unrelated variations in signal characteristics. Especially for frequency-domain features, across-context training leads to a performance comparable to within-context training and testing. We discuss the significance of this result for neurophysiology-based workload detection in particular and for the construction of reliable passive brain-computer interfaces in general.
Center of Mass-Based Adaptive Fast Block Motion Estimation
Directory of Open Access Journals (Sweden)
Yeh Kuo-Liang
2007-01-01
Full Text Available This work presents an efficient adaptive algorithm based on center of mass (CEM) for fast block motion estimation. Binary transform, subsampling, and horizontal/vertical projection techniques are also proposed. As the conventional CEM calculation is computationally intensive, binary transform and subsampling approaches are proposed to simplify the CEM calculation; the binary transform center of mass (BITCEM) is then derived. The BITCEM motion types are classified by the percentage of (0,0) BITCEM motion vectors. Adaptive search patterns are allocated according to the BITCEM moving direction and the BITCEM motion type. Moreover, the BITCEM motion vector is utilized as the initial search point for near-still or slow BITCEM motion types. To support variable block sizes, the horizontal/vertical projections of a binary-transformed macroblock are utilized to determine whether the block requires segmentation. Experimental results indicate that the proposed algorithm is better than five conventional algorithms, namely three-step search (TSS), new three-step search (N3SS), four-step search (4SS), block-based gradient descent search (BBGDS), and diamond search (DS), in terms of speed or picture quality for eight benchmark sequences.
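The binary-transform centre-of-mass step can be sketched as follows; the thresholding rule (binarize against the block mean) and the flat-block fallback are assumptions, since the abstract does not spell them out.

```python
# Sketch of the BITCEM idea described above: threshold a block against its
# mean to get a binary map, then take the centroid of the 1-pixels.

def bitcem(block):
    """Centre of mass (row, col) of the binarized block."""
    h, w = len(block), len(block[0])
    mean = sum(sum(row) for row in block) / (h * w)
    ones = [(r, c) for r in range(h) for c in range(w) if block[r][c] > mean]
    if not ones:
        return ((h - 1) / 2.0, (w - 1) / 2.0)   # flat block: geometric centre
    return (sum(r for r, _ in ones) / len(ones),
            sum(c for _, c in ones) / len(ones))

# A bright 2x2 patch in the lower-right corner pulls the centroid that way.
block = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
cm = bitcem(block)
print(cm)
```

Comparing the BITCEM of the current and reference blocks then suggests a coarse motion direction at a fraction of the cost of a full CEM on grey values.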
Human action recognition based on estimated weak poses
Gong, Wenjuan; Gonzàlez, Jordi; Roca, Francesc Xavier
2012-12-01
We present a novel method for human action recognition (HAR) based on estimated poses from image sequences. We use 3D human pose data as additional information and propose a compact human pose representation, called a weak pose, in a low-dimensional space while still keeping the most discriminative information for a given pose. With predicted poses from image features, we map the problem from image feature space to pose space, where a Bag of Poses (BOP) model is learned for the final goal of HAR. The BOP model is a modified version of the classical bag of words pipeline, building the vocabulary from the most representative weak poses for a given action. Compared with standard k-means clustering, our vocabulary selection criterion is shown to be more efficient and robust against the inherent challenges of action recognition. Moreover, since for action recognition the ordering of the poses is discriminative, the BOP model incorporates temporal information: in essence, groups of consecutive poses are considered together when computing the vocabulary and assignment. We tested our method on two well-known datasets, HumanEva and IXMAS, to demonstrate that weak poses help improve action recognition accuracy. The proposed method is scene-independent and is comparable with state-of-the-art methods.
Application of genetic algorithm to hexagon-based motion estimation.
Kung, Chih-Ming; Cheng, Wan-Shu; Jeng, Jyh-Horng
2014-01-01
With the advances of science and technology, the development of networks, and the emergence of HDTV, the demand for audio and video applications has become increasingly important, and video coding technology is the key to meeting these requirements. Motion estimation, which removes the redundancy between video frames, plays an important role in video coding, so many experts devote themselves to the issue. Existing fast algorithms rely on the assumption that the matching error decreases monotonically as the searched point moves closer to the global optimum. A genetic algorithm, however, is not fundamentally limited by this restriction. This property helps the proposed scheme reach a mean square error closer to that of full search than those fast algorithms can. The aim of this paper is to propose a new technique that combines the hexagon-based search algorithm, which is faster than diamond search, with a genetic algorithm. Experiments are performed to demonstrate the encoding speed and accuracy of the hexagon-based search pattern method and the proposed method.
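The hexagon-based search (HEXBS) pattern that the paper builds on can be sketched on synthetic frames; the genetic-algorithm layer is omitted here, and the block size and frame content are toy values.

```python
# Minimal hexagon-based search sketch: move a large hexagon of candidate motion
# vectors until its centre is best, then refine with a small cross pattern.

def sad(cur, ref, bx, by, mx, my, n=8):
    """Sum of absolute differences between an n x n block and a shifted block."""
    total = 0
    for r in range(n):
        for c in range(n):
            total += abs(cur[by + r][bx + c] - ref[by + my + r][bx + mx + c])
    return total

def hexagon_search(cur, ref, bx, by):
    big = [(0, 0), (2, 0), (1, 2), (-1, 2), (-2, 0), (-1, -2), (1, -2)]
    small = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    mx = my = 0
    while True:   # move the large hexagon until the centre is best
        best = min(big, key=lambda p: sad(cur, ref, bx, by, mx + p[0], my + p[1]))
        if best == (0, 0):
            break
        mx, my = mx + best[0], my + best[1]
    best = min(small, key=lambda p: sad(cur, ref, bx, by, mx + p[0], my + p[1]))
    return mx + best[0], my + best[1]

# Reference frame with a bright square; in the current frame the square has
# moved right 3 and down 2, so the matching reference block is at offset (-3,-2).
W = 32
ref = [[10] * W for _ in range(W)]
for r in range(12, 18):
    for c in range(12, 18):
        ref[r][c] = 200
cur = [[10] * W for _ in range(W)]
for r in range(14, 20):
    for c in range(15, 21):
        cur[r][c] = 200
mv = hexagon_search(cur, ref, bx=14, by=14)
print(mv)
```

The large-hexagon loop terminates because the centre SAD strictly decreases with every move; ties resolve to the centre, which stops the search.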
Column density estimation: Tree-based method implementation
Valdivia, Valeska
2013-07-01
The radiative transfer plays a crucial role in several astrophysical processes. In particular, for the star formation problem it is well established that stars form in the densest and coolest regions of molecular clouds, so understanding the interstellar cycle becomes crucial. The physics of dense gas requires knowledge of the UV radiation that regulates the physics and the chemistry within the molecular cloud. The numerical modelling needs the calculation of column densities in any direction for each resolution element. In numerical simulations the cost of solving the radiative transfer problem is of order N^(5/3), where N is the number of resolution elements. The exact calculation is in general extremely expensive in terms of CPU time for relatively large simulations and impractical in parallel computing. We present our tree-based method for estimating column densities and the attenuation factor for the UV field. The method is inspired by the fact that any distant cell subtends a small angle and therefore its contribution to the screening will be diluted. This method is suitable for parallel computing and no communication is needed between different CPUs. It has been implemented into the RAMSES code, a grid-based solver with adaptive mesh refinement (AMR). We present the results of two tests and a discussion of the accuracy and the performance of this method. We show that the UV screening affects mainly the dense parts of molecular clouds, locally changing the Jeans mass and therefore affecting the fragmentation.
A quantitative approach for sex estimation based on cranial morphology.
Nikita, Efthymia; Michopoulou, Efrossyni
2018-03-01
This paper proposes a method for the quantification of the shape of sexually dimorphic cranial traits, namely the glabella, mastoid process and external occipital protuberance. The proposed method was developed using 165 crania from the documented Athens Collection and tested on 20 Cretan crania. It is based on digital photographs of the lateral view of the cranium, drawing of the profile of three sexually dimorphic structures and calculation of variables that express the shape of these structures. The combinations of variables that provide optimum discrimination between sexes are identified by means of binary logistic regression and discriminant analysis. The best cross-validated results are obtained when variables from all three structures are combined and range from 75.8 to 85.1% and 81.1 to 94.6% for males and females, respectively. The success rate is 86.3-94.1% for males and 83.9-93.5% for females when half of the sample is used for training and the rest for prediction. Correct classification for the Cretan material based upon the standards developed for the Athens sample was 80-90% for the optimum combinations of discriminant variables. The proposed method provides an effective way to capture quantitatively the shape of sexually dimorphic cranial structures; it gives more accurate results relative to other existing methods and it does not require specialized equipment. Equations for sex estimation based on combinations of variables are provided, along with instructions on how to use the method and Excel macros for calculation of discriminant variables with automated implementation of the optimum equations. © 2017 Wiley Periodicals, Inc.
Improved image registration by sparse patch-based deformation estimation.
Kim, Minjeong; Wu, Guorong; Wang, Qian; Lee, Seong-Whan; Shen, Dinggang
2015-01-15
Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potentially large anatomical differences across individual images, which limit registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation toward the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) we pick a small number of key points in the distinctive regions of the test subject; (2) for each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images, where each dictionary atom consists of the image intensity patch as well as its respective local deformation; (3) a small set of training image patches in the coupled dictionary are selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients; and (4) we
Directory of Open Access Journals (Sweden)
Meng Bao-Ping
2017-01-01
Full Text Available Animal husbandry is the main agricultural activity on the Tibetan Plateau, and above-ground biomass (AGB) is very important for monitoring productivity, administering grassland resources, and maintaining grazing balance. MODIS vegetation indices have been successfully used in numerous studies of grassland AGB estimation in the Tibetan Plateau area. However, there are considerable differences among AGB estimation models, both in the form of the models and in the accuracy of estimation. In this study, field measurements of AGB at Sangke Town, Gansu Province, China in four years (2013-2016) and MODIS indices (NDVI and EVI) are combined to construct AGB estimation models of alpine meadow grassland. The field-measured AGB are also used to evaluate the feasibility of applying models developed for large scales to a small area. The results show that (1) the differences in biomass were relatively large among the 5 sample areas of alpine meadow grassland in the study area during 2013-2016, with maximum and minimum biomass values of 3,963 kg DW/ha and 745.5 kg DW/ha, respectively, and a mean value of 1,907.7 kg DW/ha; the mean EVI value range (0.42-0.60) is slightly smaller than the NDVI's (0.59-0.75); (2) the optimum estimation model of grassland AGB in the study area is the exponential model based on MODIS EVI, with a root mean square error of 656.6 kg DW/ha and a relative estimation error (REE) of 36.3%; (3) the estimation errors of grassland AGB models previously constructed at different spatial scales (the Tibetan Plateau, the Gannan Prefecture, and Xiahe County) are higher than those of models directly constructed for the small area of this study by 9.5%-31.7%, with the REE increasing as the scale of the modeling study area increases. This study presents an improved monitoring algorithm for alpine natural grassland AGB estimation and provides a clear direction for future improvement of the grassland AGB estimation and grassland productivity from remote sensing
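The exponential EVI model form reported in result (2) can be fitted by log-linearization and ordinary least squares; the sample values below are synthetic, not the Sangke Town field data.

```python
import math

# Sketch of fitting AGB = a * exp(b * EVI) by taking logs:
#   ln(AGB) = ln(a) + b * EVI   ->   simple linear regression.
# The EVI/AGB pairs are synthetic illustrations.

evi = [0.42, 0.46, 0.50, 0.54, 0.58, 0.60]
agb = [800.0, 1000.0, 1300.0, 1700.0, 2200.0, 2500.0]   # kg DW/ha (synthetic)

x, y = evi, [math.log(v) for v in agb]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
     / sum((xi - xbar) ** 2 for xi in x))
a = math.exp(ybar - b * xbar)

pred = [a * math.exp(b * xi) for xi in x]
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, agb)) / n)
print(a, b, rmse)
```

The same two fitted coefficients and an RMSE/REE computation are all that is needed to compare model forms across study-area scales.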
Directory of Open Access Journals (Sweden)
Tao He
2015-05-01
Full Text Available Monitoring surface albedo at medium-to-fine resolution (<100 m) has become increasingly important for medium-to-fine scale applications and coarse-resolution data evaluation. This paper presents a method for estimating surface albedo directly from top-of-atmosphere reflectance. This is the first attempt to derive surface albedo for both snow-free and snow-covered conditions from medium-resolution data with a single approach. We applied this method to the multispectral data from the wide-swath Chinese HuanJing (HJ) satellites at a spatial resolution of 30 m to demonstrate the feasibility of these data for surface albedo monitoring over rapidly changing surfaces. Validation against ground measurements shows that the method is capable of accurately estimating surface albedo over both snow-free and snow-covered surfaces, with an overall root mean square error (RMSE) of 0.030 and r-square (R²) of 0.947. The comparison between HJ albedo estimates and the Moderate Resolution Imaging Spectroradiometer (MODIS) albedo product suggests that the HJ data and the proposed algorithm can generate robust albedo estimates over various land cover types, with an RMSE of 0.011-0.014. The accuracy of HJ albedo estimation improves with increasing view zenith angle, which further demonstrates the unique advantage of wide-swath satellite data in albedo estimation.
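The validation metrics quoted above (RMSE and R²) are standard; a minimal sketch on toy albedo values:

```python
import math

# RMSE and R^2 as used for the validation numbers above; the albedo values
# below are illustrative, not the HJ/MODIS data.

def rmse(pred, obs):
    n = len(pred)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)

def r_square(pred, obs):
    obar = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - obar) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

ground = [0.12, 0.15, 0.20, 0.35, 0.60, 0.75]   # tower-measured albedo (toy)
est    = [0.13, 0.14, 0.22, 0.33, 0.62, 0.74]   # satellite estimates (toy)
print(rmse(est, ground), r_square(est, ground))
```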
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares (RLS) based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
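A minimal sketch of one ingredient of the approach: treating the two unbalance components (F·cos φ, F·sin φ) as a constant state estimated by a linear Kalman filter from noisy samples of the force F·cos(ωt + φ). The rotor model, SEREP reduction, and the RLS input-force step of the paper are not reproduced.

```python
import math, random

# Toy Kalman filter: state x = [a, b] with a = F*cos(phi), b = F*sin(phi),
# measurement y(t) = a*cos(w*t) - b*sin(w*t) + noise, i.e. H = [cos, -sin].
random.seed(1)

F_true, phi_true, w = 3.0, 0.6, 2.0 * math.pi * 25.0   # amplitude, phase, 25 Hz
dt, sigma = 1e-3, 0.2

x = [0.0, 0.0]                       # state estimate [a, b]
P = [[10.0, 0.0], [0.0, 10.0]]       # state covariance
R = sigma ** 2
for k in range(2000):
    t = k * dt
    H = [math.cos(w * t), -math.sin(w * t)]
    y = F_true * math.cos(w * t + phi_true) + random.gauss(0.0, sigma)
    PHt = [P[0][0] * H[0] + P[0][1] * H[1],
           P[1][0] * H[0] + P[1][1] * H[1]]
    S = H[0] * PHt[0] + H[1] * PHt[1] + R        # innovation variance
    K = [PHt[0] / S, PHt[1] / S]                 # Kalman gain
    e = y - (H[0] * x[0] + H[1] * x[1])          # innovation
    x = [x[0] + K[0] * e, x[1] + K[1] * e]
    P = [[P[0][0] - K[0] * (H[0] * P[0][0] + H[1] * P[1][0]),
          P[0][1] - K[0] * (H[0] * P[0][1] + H[1] * P[1][1])],
         [P[1][0] - K[1] * (H[0] * P[0][0] + H[1] * P[1][0]),
          P[1][1] - K[1] * (H[0] * P[0][1] + H[1] * P[1][1])]]  # (I - KH)P

F_est = math.hypot(x[0], x[1])
phi_est = math.atan2(x[1], x[0])
print(F_est, phi_est)
```

After two seconds of simulated data the amplitude and phase estimates converge close to the true values despite the measurement noise.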
Ben Slama, Amine; Mouelhi, Aymen; Sahli, Hanene; Manoubi, Sondes; Mbarek, Chiraz; Trabelsi, Hedi; Fnaiech, Farhat; Sayadi, Mounir
2017-07-01
The diagnosis of vestibular neuritis (VN) presents many difficulties to traditional assessment methods. This paper deals with a fully automatic VN diagnostic system based on nystagmus parameter estimation using a pupil detection algorithm. A geodesic active contour model is implemented to find an accurate segmentation region of the pupil. The novelty of the proposed algorithm is to speed up the standard segmentation by using a specific mask located on the region of interest. This allows a drastic reduction in computing time and great performance and accuracy of the obtained results. After using this fast segmentation algorithm, the obtained estimated parameters are represented in temporal and frequency settings. A principal component analysis (PCA) selection procedure is then applied to obtain a reduced number of estimated parameters, which are used to train a multi neural network (MNN). Experimental results on 90 eye movement videos show the effectiveness and the accuracy of the proposed estimation algorithm versus previous work. Copyright © 2017 Elsevier B.V. All rights reserved.
Evaluation of satellite-based evapotranspiration estimates in China
Huang, Lei; Li, Zhe; Tang, Qiuhong; Zhang, Xuejun; Liu, Xingcai; Cui, Huijuan
2017-04-01
Accurate and continuous estimation of evapotranspiration (ET) is crucial for effective water resource management. We used the moderate resolution imaging spectroradiometer (MODIS) standard ET algorithm forced by the MODIS land products and the three-hourly solar radiation datasets to estimate daily actual evapotranspiration of China (ET_MOD) for the years 2001 to 2015. From the point scale validations using seven eddy covariance tower sites, the results showed that the agreement of ET_MOD estimates and observations was higher for monthly and daily values than that of instantaneous values. Under the major river basin and subbasin levels' comparisons with the variable infiltration capacity hydrological model estimates, the ET_MOD exhibited a slight overestimation in northern China and underestimation in southern China. The mean annual ET_MOD estimates agreed favorably with the hydrological model with coefficients of determination (R2) of 0.93 and 0.83 at major river basin and subbasin scale, respectively. At national scale, the spatiotemporal variations of ET_MOD estimates matched well with those ET estimates from various sources. However, ET_MOD estimates were generally lower than the other estimates in the Tibetan Plateau. This underestimation may be attributed to the plateau climate along with low air temperature and sparsely vegetated surface on the Tibetan Plateau.
A Modelling Framework for estimating Road Segment Based On-Board Vehicle Emissions
International Nuclear Information System (INIS)
Lin-Jun, Yu; Ya-Lan, Liu; Yu-Huan, Ren; Zhong-Ren, Peng; Meng, Liu Meng
2014-01-01
Traditional traffic emission inventory models aim to provide overall emissions at the regional level, which cannot meet planners' demand for detailed and accurate traffic emission information at the road segment level. Therefore, a road segment-based emission model for estimating light duty vehicle emissions is proposed, where floating car technology is used to collect information on the traffic condition of roads. The employed analysis framework consists of three major modules: the Average Speed and Average Acceleration Module (ASAAM), the Traffic Flow Estimation Module (TFEM), and the Traffic Emission Module (TEM). The ASAAM is used to obtain the average speed and the average acceleration of the fleet on each road segment using FCD. The TFEM is designed to estimate the traffic flow of each road segment in a given period, based on the speed-flow relationship and the traffic flow spatial distribution. Finally, the TEM estimates emissions from each road segment, based on the results of the previous two modules. Hourly on-road light-duty vehicle emissions for each road segment in Shenzhen's traffic network are obtained using this analysis framework. The temporal-spatial distribution patterns of the pollutant emissions of road segments are also summarized. The results show that high-emission road segments cluster in several important regions of Shenzhen. Also, road segments emit more during rush hours than in other periods. The presented case study demonstrates that the proposed approach is feasible and easy to use, helping planners make informed decisions by providing detailed road segment-based emission information.
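The three-module chain (ASAAM, then TFEM, then TEM) can be sketched schematically; the speed-flow relation and the emission factors below are invented placeholders, not the calibrated Shenzhen values.

```python
# Schematic sketch of the framework described above; every constant is a toy.

def average_speed(fcd_speeds_kmh):
    """ASAAM (simplified): mean of floating-car speeds on a road segment."""
    return sum(fcd_speeds_kmh) / len(fcd_speeds_kmh)

def traffic_flow(speed_kmh, free_flow_kmh=60.0, capacity_vph=1800.0):
    """TFEM (toy): flow rises as speed drops below free flow (congestion proxy)."""
    ratio = max(0.0, min(1.0, 1.0 - speed_kmh / free_flow_kmh))
    return capacity_vph * (0.2 + 0.8 * ratio)

def emissions_g(speed_kmh, flow_vph, length_km, hours=1.0):
    """TEM (toy): speed-dependent emission factor (g/veh/km) times travel."""
    ef = 20.0 if speed_kmh < 20 else (8.0 if speed_kmh < 50 else 5.0)
    return ef * flow_vph * hours * length_km

speeds = [22.0, 18.0, 25.0, 20.0]        # floating-car observations (toy)
v = average_speed(speeds)
q = traffic_flow(v)
e = emissions_g(v, q, length_km=1.5)
print(v, q, e)
```

Running the chain once per segment per hour yields the hourly segment-level emission map the abstract describes.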
Improving satellite-based post-fire evapotranspiration estimates in semi-arid regions
Poon, P.; Kinoshita, A. M.
2017-12-01
Climate change and anthropogenic factors contribute to the increased frequency, duration, and size of wildfires, which can alter ecosystem and hydrological processes. The loss of vegetation canopy and ground cover reduces interception and alters evapotranspiration (ET) dynamics in riparian areas, which can impact rainfall-runoff partitioning. Previous research evaluated the spatial and temporal trends of ET based on burn severity and observed an annual decrease of 120 mm on average for three years after fire. Building upon these results, this research focuses on the Coyote Fire in San Diego, California (USA), which burned a total of 76 km2 in 2003, to calibrate and improve satellite-based ET estimates in semi-arid regions affected by wildfire. The current work utilizes satellite-based products and techniques such as the Google Earth Engine Application Programming Interface (API). Various ET models (e.g., the Operational Simplified Surface Energy Balance model (SSEBop)) are compared to the latent heat flux from two AmeriFlux eddy covariance towers, Sky Oaks Young (US-SO3) and Old Stand (US-SO2), from 2000 to 2015. The Old Stand tower has a low burn severity and the Young Stand tower has a moderate to high burn severity. Both towers are used to validate spatial ET estimates. Furthermore, variables and indices such as the Enhanced Vegetation Index (EVI), Normalized Difference Moisture Index (NDMI), and Normalized Burn Ratio (NBR) are utilized to evaluate satellite-based ET through a multivariate statistical analysis at both sites. This point-scale study will help to improve ET estimates in spatially diverse regions. Results from this research will contribute to the development of a post-wildfire ET model for semi-arid regions. Accurate estimates of post-fire ET will provide a better representation of vegetation and hydrologic recovery, which can be used to improve hydrologic models and predictions.
Model-Based Material Parameter Estimation for Terahertz Reflection Spectroscopy
Kniffin, Gabriel Paul
Many materials such as drugs and explosives have characteristic spectral signatures in the terahertz (THz) band. These unique signatures imply great promise for spectral detection and classification using THz radiation. While such spectral features are most easily observed in transmission, real-life imaging systems will need to identify materials of interest from reflection measurements, often in non-ideal geometries. One important, yet commonly overlooked source of signal corruption is the etalon effect -- interference phenomena caused by multiple reflections from dielectric layers of packaging and clothing likely to be concealing materials of interest in real-life scenarios. This thesis focuses on the development and implementation of a model-based material parameter estimation technique, primarily for use in reflection spectroscopy, that takes the influence of the etalon effect into account. The technique is adapted from techniques developed for transmission spectroscopy of thin samples and is demonstrated using measured data taken at the Northwest Electromagnetic Research Laboratory (NEAR-Lab) at Portland State University. Further tests are conducted, demonstrating the technique's robustness against measurement noise and common sources of error.
Quantitative tectonic reconstructions of Zealandia based on crustal thickness estimates
Grobys, Jan W. G.; Gohl, Karsten; Eagles, Graeme
2008-01-01
Zealandia is a key piece in the plate reconstruction of Gondwana. The positions of its submarine plateaus are major constraints on the best fit and breakup involving New Zealand, Australia, Antarctica, and associated microplates. As the submarine plateaus surrounding New Zealand consist of extended and highly extended continental crust, classic plate tectonic reconstructions assuming rigid plates and narrow plate boundaries fail to reconstruct these areas correctly. However, to reconstruct the early breakup history, it is crucial to consider crustal stretching in a plate-tectonic reconstruction. We present a reconstruction of the basins around New Zealand (Great South Basin, Bounty Trough, and New Caledonia Basin) based on crustal balancing, an approach that takes into account the rifting and thinning processes affecting continental crust. In a first step, we computed a crustal thickness map of Zealandia using seismic, seismological, and gravity data. The crustal thickness map shows the submarine plateaus to have a uniform crustal thickness of 20-24 km and the basins to have a thickness of 12-16 km. We assumed that a reconstruction of Zealandia should close the basins and lead to a most uniform crustal thickness. We used the standard deviation of the reconstructed crustal thickness as a measure of uniformity. The reconstruction of the Campbell Plateau area shows that the amount of extension in the Bounty Trough and the Great South Basin is far smaller than previously thought. Our results indicate that the extension of the Bounty Trough and Great South Basin occurred simultaneously.
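The "most uniform crust" criterion above can be made concrete with a toy 1-D sketch: pick the basin closure that minimizes the standard deviation of the reconstructed thickness profile. The thickness values and basin geometry below are hypothetical, not the paper's data.

```python
import statistics

# Toy crustal-balancing objective (hypothetical numbers): plateau crust
# ~20-24 km, basin crust ~12-16 km, as in the abstract's thickness map.
profile_km = [22, 21, 14, 13, 15, 22, 23]   # present-day thickness along a transect
basin_idx = {2, 3, 4}                        # indices of the thinned basin cells

def reconstructed(profile, closure):
    """Restore pre-rift thickness by adding back 'closure' km in the basin."""
    return [t + closure if i in basin_idx else t for i, t in enumerate(profile)]

# Choose the closure minimizing the standard deviation of the reconstructed
# profile -- the paper's uniformity measure.
best = min(range(0, 12), key=lambda c: statistics.pstdev(reconstructed(profile_km, c)))
print(best)  # closure of 8 km makes the toy profile most uniform (~22 km)
```

In the actual study the same idea is applied to a 2-D crustal thickness map rather than a single transect.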
Zhang, Shunpu; Li, Zhong; Beland, Kevin; Lu, Guoqing
2016-07-21
Clustering is a common technique used by molecular biologists to group homologous sequences and study evolution. There remain issues such as how to cluster molecular sequences accurately and in particular how to evaluate the certainty of clustering results. We presented a model-based clustering method to analyze molecular sequences, described a subset bootstrap scheme to evaluate the certainty of the clusters, and showed an intuitive way using 3D visualization to examine clusters. We applied the above approach to analyze influenza viral hemagglutinin (HA) sequences. Nine clusters were estimated for highly pathogenic H5N1 avian influenza, in agreement with previous findings. The certainty that a given sequence was correctly assigned to a cluster was 1.0 in all cases, whereas the certainty for a given cluster was also very high (0.92-1.0), with an overall clustering certainty of 0.95. For influenza A H7 viruses, ten HA clusters were estimated and the vast majority of sequences could be assigned to a cluster with a certainty of more than 0.99. The certainties for clusters, however, varied from 0.40 to 0.98; such certainty variation is likely attributed to the heterogeneity of sequence data in different clusters. In both cases, the certainty values estimated using the subset bootstrap method are all higher than those calculated based upon the standard bootstrap method, suggesting our bootstrap scheme is applicable for the estimation of clustering certainty. We formulated a clustering analysis approach with the estimation of certainties and 3D visualization of sequence data. We analyzed two sets of influenza A HA sequences, and the results indicate that our approach is applicable for clustering analysis of influenza viral sequences.
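A minimal sketch of the bootstrap-certainty idea: resample the data many times, recluster each replicate, and report how often two originally co-clustered items stay together. This toy uses 1-D "features" and a tiny 2-means clusterer in place of the paper's model-based method; all numbers are made up.

```python
import random

random.seed(0)
# Hypothetical 1-D sequence features forming two well-separated groups.
data = [0.10, 0.20, 0.15, 0.90, 1.00, 0.95]

def two_means(points, iters=10):
    """Minimal 2-means clustering; returns a 0/1 label per point."""
    c0, c1 = min(points), max(points)
    for _ in range(iters):
        labels = [0 if abs(p - c0) <= abs(p - c1) else 1 for p in points]
        g0 = [p for p, l in zip(points, labels) if l == 0]
        g1 = [p for p, l in zip(points, labels) if l == 1]
        c0 = sum(g0) / len(g0) if g0 else c0
        c1 = sum(g1) / len(g1) if g1 else c1
    return labels

def certainty(i, j, reps=200):
    """Fraction of bootstrap replicates in which points i and j receive the
    same cluster label (both points are kept in every replicate)."""
    hits = 0
    for _ in range(reps):
        extras = [data[random.randrange(len(data))] for _ in range(len(data) - 2)]
        labels = two_means([data[i], data[j]] + extras)
        hits += labels[0] == labels[1]
    return hits / reps

certainty_01 = certainty(0, 1)   # points 0 and 1 both belong to the low group
print(certainty_01)              # close to 1.0 for well-separated clusters
```

The paper's subset bootstrap differs in how replicates are drawn, but the certainty it reports has this same "fraction of replicates preserving the assignment" flavor.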
Pipeline heating method based on optimal control and state estimation
Energy Technology Data Exchange (ETDEWEB)
Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu
2010-07-01
In production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low sea bed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in a pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, which is known as active heating system, aims at keeping the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are considered as the transient temperatures within a pipeline cross-section, and to use the optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermo-physical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem
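The Bayesian-filtering step can be illustrated with a scalar Kalman filter: a fluid temperature cooling toward the sea-bed temperature is reconstructed from noisy wall measurements. This is a deliberately simplified stand-in (one state, linear Newtonian cooling, made-up noise levels), not the paper's 2-D cross-section model.

```python
import random

random.seed(1)
# Scalar Kalman filter: track pipeline fluid temperature during a shutdown.
# Model: Newtonian cooling toward sea-bed temperature T_sea. All numbers
# (cooling factor a, noise variances q and r) are hypothetical.
T_sea, a, q, r = 4.0, 0.95, 0.05, 1.0
T_true, x, p = 60.0, 55.0, 10.0      # true state, state estimate, estimate variance

errors = []
for _ in range(100):
    # True dynamics and a noisy external-wall measurement.
    T_true = T_sea + a * (T_true - T_sea) + random.gauss(0, q ** 0.5)
    z = T_true + random.gauss(0, r ** 0.5)
    # Predict.
    x = T_sea + a * (x - T_sea)
    p = a * a * p + q
    # Update.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    errors.append(abs(x - T_true))

mean_err = sum(errors[-50:]) / 50
print(mean_err)  # filtered error well below the raw measurement std (1.0)
```

The filtered temperature (rather than the raw measurement) is what a controller such as the paper's optimal heating scheme would act on.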
Katayanagi, Nobuko; Fumoto, Tamon; Hayano, Michiko; Shirato, Yasuhito; Takata, Yusuke; Leon, Ai; Yagi, Kazuyuki
2017-12-01
Methane (CH4) is a greenhouse gas, and paddy fields are one of its main anthropogenic sources. In Japan, country-specific emission factors (EFs) have been applied since 2003 to estimate national-scale CH4 emission from paddy fields. However, these EFs did not consider the effects of factors that influence CH4 emission (e.g., amount of organic C inputs, field drainage rate, climate) and can therefore produce estimates with high uncertainty. To improve the reliability of national-scale estimates, we revised the EFs based on simulations by the DeNitrification-DeComposition-Rice (DNDC-Rice) model in a previous study. Here, we estimated total CH4 emission from paddy fields in Japan from 1990 to 2010 using these revised EFs and databases on independent variables that influence emission (organic C application rate, paddy area, proportions of paddy area for each drainage rate class and water management regime). CH4 emission ranged from 323 to 455 kt C yr-1 (1.1 to 2.2 times the range of 206 to 285 kt C yr-1 calculated using previous EFs). Although our method may have overestimated CH4 emissions, most of the abovementioned differences were presumably caused by underestimation by the previous method due to a lack of emission data from slow-drainage fields, lower organic C inputs than recent levels, neglect of regional climatic differences, and underestimation of the area of continuously flooded paddies. Our estimate (406 kt C in 2000) was higher than that by the IPCC Tier 1 method (305 kt C in 2000), presumably because regional variations in CH4 emission rates are not accounted for by the Tier 1 method. Copyright © 2017 Elsevier B.V. All rights reserved.
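The bookkeeping behind an EF-based national estimate is a stratified sum of area times emission factor. The strata, areas, and EFs below are hypothetical, chosen only to illustrate the arithmetic and units.

```python
# National CH4 estimate as a sum over strata of (paddy area) x (emission
# factor), stratified by drainage class and water regime as in the abstract.
# All EFs and areas are hypothetical.
strata = [
    # (drainage class, water regime, area in Mha, EF in kg C ha^-1 yr^-1)
    ("slow", "continuously flooded", 0.4, 300.0),
    ("slow", "midseason drained",    0.6, 180.0),
    ("fast", "continuously flooded", 0.3, 200.0),
    ("fast", "midseason drained",    1.0, 120.0),
]

# 1 Mha x 1 kg C ha^-1 = 1e6 kg C = 1 kt C, so the unit factors cancel.
total_ktC = sum(area * ef for _, _, area, ef in strata)
print(total_ktC)  # 408.0 kt C yr^-1
```

Revising the EFs (the per-stratum numbers) or the activity data (the areas) changes the total directly, which is why both were updated in the study.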
Realized range-based estimation of integrated variance
DEFF Research Database (Denmark)
Christensen, Kim; Podolskij, Mark
2007-01-01
solve this problem to get a consistent, mixed normal estimator, irrespective of non-trading effects. This estimator has varying degrees of efficiency over realized variance, depending on how many observations are used to construct the high-low. The methodology is applied to TAQ data and compared...
Direction of arrival estimation based on information geometry
Coutiño Minguez, M.A.; Pribic, R; Leus, G.J.T.; Dong, Min; Zheng, Thomas Fang
2016-01-01
In this paper, a new direction of arrival (DOA) estimation approach is devised using concepts from information geometry (IG). The proposed method uses geodesic distances in the statistical manifold of probability distributions parametrized by their covariance matrix to estimate the direction of
Speed estimator for induction motor drive based on synchronous ...
Indian Academy of Sciences (India)
sixteenth of time period is also proposed to enhance the speed of estimation. Computer simulation and experimentation on a 2.2 kW field-oriented-controlled induction motor drive are carried out to verify the performance of the proposed speed estimator.
Estimating security betas using prior information based on firm fundamentals
Cosemans, Mathijs; Frehen, Rik; Schotman, Peter; Bauer, Rob
We propose a hybrid approach for estimating beta that shrinks rolling window estimates towards firm-specific priors motivated by economic theory. Our method yields superior forecasts of beta that have important practical implications. First, hybrid betas carry a significant price of risk in the
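The shrinkage idea can be sketched as a precision-weighted average of a noisy rolling-window beta and a fundamentals-based prior. The weighting rule and all numbers below are illustrative, not the paper's estimator.

```python
# Shrink a rolling-window beta toward a firm-specific prior, weighting each
# by its precision (1/variance). Values are illustrative only.
def hybrid_beta(beta_roll, var_roll, beta_prior, var_prior):
    w = (1.0 / var_roll) / (1.0 / var_roll + 1.0 / var_prior)
    return w * beta_roll + (1.0 - w) * beta_prior

# Noisy OLS estimate 1.6 (variance 0.16) vs. a fundamentals-based prior of
# 1.0 (variance 0.04): the hybrid estimate leans on the tighter prior.
b = hybrid_beta(1.6, 0.16, 1.0, 0.04)
print(round(b, 3))  # 1.12
```

The tighter the prior (smaller `var_prior`), the more the rolling estimate is pulled toward it; with an uninformative prior the hybrid collapses back to the rolling-window beta.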
ERP services effort estimation strategies based on early requirements
Erasmus, I.P.; Daneva, Maia; Kalenborg, Axel; Trapp, Marcus
2015-01-01
ERP clients and vendors necessarily estimate their project interventions at a very early stage, before the full requirements to an ERP solution are known and often before a contract is finalized between a vendor/ consulting company and a client. ERP project estimation at the stage of early
SNP based heritability estimation using a Bayesian approach
DEFF Research Database (Denmark)
Krag, Kristian; Janss, Luc; Mahdi Shariati, Mohammad
2013-01-01
Differences in family structure were in general not found to influence the estimation of the heritability. For the sample sizes used in this study, a 10-fold increase of SNP density did not improve precision estimates compared with set-ups with a less dense distribution of SNPs. The methods used in this study...
Satellite-based annual evaporation estimates of invasive alien plant ...
African Journals Online (AJOL)
The Surface Energy Balance Algorithm for Land (SEBAL) model, using MODIS satellite imagery, was used to estimate the annual total ET at 250 m pixel resolution. ET was estimated for 3 climatically different years for the Western Cape and KwaZulu-Natal. The average annual ET from areas under IAPs, native vegetation, ...
Weighted-noise threshold based channel estimation for OFDM ...
Indian Academy of Sciences (India)
Existing optimal time-domain thresholds exhibit suboptimal behavior for completely unavailable KCS environments. This is because they involve consistent estimation of one or more KCS parameters, and corresponding estimation errors introduce severe degradation in MSE performance of the CE. To overcome the MSE ...
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
McGee, K P; Lake, D; Mariappan, Y; Manduca, A; Ehman, R L [Department of Radiology, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Hubmayr, R D [Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Mayo Clinic College of Medicine, 200 First Street, SW, Rochester, MN 55905 (United States); Ansell, K, E-mail: mcgee.kiaran@mayo.edu [Schaeffer Academy, 2700 Schaeffer Lane NE, Rochester, MN 55906 (United States)
2011-07-21
Magnetic resonance elastography (MRE) is a non-invasive phase-contrast-based method for quantifying the shear stiffness of biological tissues. Synchronous application of a shear wave source and motion encoding gradient waveforms within the MRE pulse sequence enable visualization of the propagating shear wave throughout the medium under investigation. Encoded shear wave-induced displacements are then processed to calculate the local shear stiffness of each voxel. An important consideration in local shear stiffness estimates is that the algorithms employed typically calculate shear stiffness using relatively high signal-to-noise ratio (SNR) MRE images and have difficulties at an extremely low SNR. A new method of estimating shear stiffness based on the principal spatial frequency of the shear wave displacement map is presented. Finite element simulations were performed to assess the relative insensitivity of this approach to decreases in SNR. Additionally, ex vivo experiments were conducted on normal rat lungs to assess the robustness of this approach in low SNR biological tissue. Simulation and experimental results indicate that calculation of shear stiffness by the principal frequency method is less sensitive to extremely low SNR than previously reported MRE inversion methods but at the expense of loss of spatial information within the region of interest from which the principal frequency estimate is derived.
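A toy 1-D version of the principal-frequency idea: take the dominant spatial frequency of a displacement profile via a DFT, convert it to a wavelength, and form the stiffness as mu = rho * (f * lambda)^2. The drive frequency, wavelength, sample spacing, and density are all hypothetical.

```python
import cmath, math

# Toy 1-D principal spatial frequency stiffness estimate (all numbers
# hypothetical: 200 Hz drive, 8 mm shear wavelength, tissue-like density).
n, dx = 64, 0.001                    # 64 samples at 1 mm spacing
f_drive, rho, lam_true = 200.0, 1000.0, 0.008
u = [math.sin(2 * math.pi * i * dx / lam_true) for i in range(n)]

# DFT magnitude; the principal spatial frequency is the peak bin.
mag = [abs(sum(u[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n)))
       for k in range(1, n // 2)]
k_peak = 1 + max(range(len(mag)), key=mag.__getitem__)
lam_est = n * dx / k_peak            # wavelength from the peak bin (m)
mu = rho * (f_drive * lam_est) ** 2  # shear stiffness (Pa)
print(lam_est, mu)                   # ~0.008 m -> ~2560 Pa
```

Because the estimate depends only on the location of the spectral peak, broadband noise spread across the other bins perturbs it little, which is the intuition behind the method's robustness at low SNR.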
Directory of Open Access Journals (Sweden)
Sergei Scherbov
2011-03-01
Full Text Available We study bias, standard errors, and distributions of characteristics of life tables for small populations. Theoretical considerations and simulations show that statistical efficiency of different methods is, above all, affected by the population size. Yet it is also significantly affected by the life table construction method and by a population's age composition. Study results are presented in the form of ready-to-use tables and relations, which may be useful in assessing the significance of estimates and differences in life expectancy across time and space for the territories with a small population size, when standard errors of life expectancy estimates may be high.
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2011-01-01
Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model, to simulate the runoff from a small catchment of Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width...
Estimation of an Examinee's Ability in the Web-Based Computerized Adaptive Testing Program IRT-CAT
Directory of Open Access Journals (Sweden)
Yoon-Hwan Lee
2006-11-01
Full Text Available We developed a program to estimate an examinee's ability in order to provide freely available access to a web-based computerized adaptive testing (CAT) program. We used PHP and JavaScript as the program languages, PostgreSQL as the database management system on an Apache web server, and Linux as the operating system. A system was constructed which allows for user input, searching within inputted items, and test creation. We performed an ability estimation on each test based on a Rasch model and 2- or 3-parameter logistic models. Our system provides an algorithm for a web-based CAT, replacing previous personal computer-based ones, and makes it possible to estimate an examinee's ability immediately at the end of the test.
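The Rasch ability estimate used at each CAT step can be sketched as a Newton-Raphson maximum-likelihood update: raise theta while the observed score exceeds the expected score. Item difficulties and responses below are made up.

```python
import math

# Maximum-likelihood ability estimate under the Rasch model via
# Newton-Raphson. Item difficulties and responses are hypothetical.
def rasch_ability(responses, difficulties, theta=0.0):
    """responses: 1 = correct, 0 = incorrect, one entry per administered item."""
    for _ in range(25):
        p = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(u - pi for u, pi in zip(responses, p))   # score residual
        info = sum(pi * (1.0 - pi) for pi in p)             # test information
        theta += grad / info
        if abs(grad) < 1e-10:
            break
    return theta

theta_hat = rasch_ability([1, 1, 1, 0, 0], [-1.0, -0.5, 0.0, 0.5, 1.0])
print(round(theta_hat, 3))  # a bit above 0: 3 of 5 symmetric items correct
```

In an adaptive test this estimate is recomputed after every response, and the next item is chosen near the current theta to maximize information.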
Evidence-based research: understanding the best estimate
Directory of Open Access Journals (Sweden)
Bauer JG
2016-09-01
Full Text Available Janet G Bauer,1 Sue S Spackman,2 Robert Fritz,2 Amanjyot K Bains,3 Jeanette Jetton-Rangel3 1Advanced Education Services, 2Division of General Dentistry, 3Center of Dental Research, Loma Linda University School of Dentistry, Loma Linda, CA, USA Introduction: Best estimates of intervention outcomes are used when uncertainties in decision making are evidenced. Best estimates are often, out of necessity, from a context of less than quality evidence or needing more evidence to provide accuracy. Purpose: The purpose of this article is to understand the best estimate behavior, so that clinicians and patients may have confidence in its quantification and validation. Methods: To discover best estimates and quantify uncertainty, critical appraisals of the literature, gray literature and its resources, or both are accomplished. Best estimates of pairwise comparisons are calculated using meta-analytic methods; multiple comparisons use network meta-analysis. Manufacturers provide margins of performance of proprietary material(s). Lower margin performance thresholds or requirements (functional failure of materials) are determined by a distribution of tests to quantify performance or clinical competency. The same is done for the high margin performance thresholds (estimated true value of success) and clinician-derived critical values (material failure to function clinically). This quantification of margins and uncertainties assists clinicians in determining if reported best estimates are progressing toward true value as new knowledge is reported. Analysis: The best estimate of outcomes focuses on evidence-centered care. In stochastic environments, we are not able to observe all events in all situations to know without uncertainty the best estimates of predictable outcomes. Point-in-time analyses of best estimates using quantification of margins and uncertainties do this. Conclusion: While study design and methodology are variables known to validate the quality of
Estimating mental fatigue based on electroencephalogram and heart rate variability
Zhang, Chong; Yu, Xiaolin
2010-01-01
The effects of a long-term mental arithmetic task on psychology are investigated by subjective self-reporting measures and an action performance test. Based on electroencephalogram (EEG) and heart rate variability (HRV), the impacts of prolonged cognitive activity on the central nervous system and autonomic nervous system are observed and analyzed. Wavelet packet parameters of EEG and power spectral indices of HRV are combined to estimate the change of mental fatigue. Then wavelet packet parameters of EEG which change significantly are extracted as features of brain activity in different mental fatigue states, and a support vector machine (SVM) algorithm is applied to differentiate the two mental fatigue states. The experimental results show that the long-term mental arithmetic task induces mental fatigue. The wavelet packet parameters of EEG and power spectral indices of HRV are strongly correlated with mental fatigue. The predominant activity of the autonomic nervous system of subjects shifts from parasympathetic to sympathetic activity after the task. Moreover, the slow waves of EEG increase, while the fast waves of EEG and the degree of disorder of the brain decrease compared with the pre-task state. The SVM algorithm can effectively differentiate the two mental fatigue states, achieving a maximum classification accuracy of 91%. The SVM algorithm could be a promising tool for the evaluation of mental fatigue. Fatigue, especially mental fatigue, is a common phenomenon in modern life and a persistent occupational hazard for professionals. Mental fatigue is usually accompanied by a sense of weariness, reduced alertness, and reduced mental performance, which can lead to accidents, decreased productivity in the workplace, and harm to health. Therefore, the evaluation of mental fatigue is important for occupational risk protection, productivity, and occupational health.
Distributed estimation based on observations prediction in wireless sensor networks
Bouchoucha, Taha
2015-03-19
We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.
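The letter's core observation can be demonstrated with a toy rate-constrained coding comparison: for a strongly correlated signal, one-step prediction errors ("innovations") span a much smaller range than the raw observations, so a quantizer with the same number of levels introduces less noise. The AR(1) signal model and quantizer ranges are made up.

```python
import random

random.seed(2)
# Toy comparison: quantize raw observations vs. closed-loop innovations.
n, levels = 2000, 16
x, obs = 0.0, []
for _ in range(n):
    x = 0.98 * x + random.gauss(0, 0.1)   # strongly correlated AR(1) signal
    obs.append(x)

def quantize(v, lo, hi, k):
    """Uniform quantizer with k levels on [lo, hi] (values clipped)."""
    step = (hi - lo) / k
    v = min(max(v, lo), hi - step / 2)
    return lo + (int((v - lo) / step) + 0.5) * step

# Scheme 1: each sensor quantizes its raw observation (range +/- 2.0).
mse_obs = sum((y - quantize(y, -2.0, 2.0, levels)) ** 2 for y in obs) / n

# Scheme 2: quantize the innovation relative to the value the fusion center
# can itself reconstruct (closed-loop prediction, range +/- 0.5).
recon, prev = [], 0.0
for y in obs:
    prev += quantize(y - prev, -0.5, 0.5, levels)
    recon.append(prev)
mse_inn = sum((y - r) ** 2 for y, r in zip(obs, recon)) / n
print(mse_obs, mse_inn)  # innovation coding yields the smaller MSE
```

The same number of bits per sample buys a finer quantization step over the narrower innovation range, which is the mechanism behind the MSE reduction reported in the letter.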
Robust single-trial ERP estimation based on spatiotemporal filtering.
Li, Ruijiang; Principe, Jose C; Bradley, Margaret; Ferrari, Vera
2007-01-01
Most spatiotemporal filtering methods for the problem of single-trial event-related potential (ERP) estimation rely on the analysis of the second-order statistics (SOS) of electroencephalograph (EEG) data. Due to the noisy nature of EEG, these methods often suffer from outliers in EEG. We combine a recently proposed spatiotemporal filtering method with the maximum correntropy criterion (MCC) for the single-trial estimation of the ERP amplitude. A study with real cognitive ERP data shows the robustness of the method with reduced estimation variance.
Convolution-based estimation of organ dose in tube current modulated CT
Tian, Xiaoyu; Segars, W. P.; Dixon, R. L.; Samei, Ehsan
2015-03-01
Among the various metrics that quantify radiation dose in computed tomography (CT), organ dose is one of the most representative quantities reflecting patient-specific radiation burden [1]. Accurate estimation of organ dose requires one to effectively model the patient anatomy and the irradiation field. As illustrated in previous studies, the patient anatomy factor can be modeled using a library of computational phantoms with representative body habitus [2]. However, the modeling of the irradiation field can be practically challenging, especially for CT exams performed with tube current modulation (TCM). The central challenge is to effectively quantify the scatter irradiation field created by the dynamic change of tube current. In this study, we present a convolution-based technique to effectively quantify the primary and scatter irradiation field for TCM examinations. The organ dose for a given clinical patient can then be rapidly determined using the convolution-based method, a patient-matching technique, and a library of computational phantoms. Fifty-eight adult patients were included in this study (age range: 18-70 y.o., weight range: 60-180 kg). One computational phantom was created based on the clinical images of each patient. Each patient was optimally matched against one of the remaining 57 computational phantoms using a leave-one-out strategy. For each computational phantom, the organ dose coefficients (CTDIvol-normalized organ dose) under fixed tube current were simulated using a validated Monte Carlo simulation program. Such organ dose coefficients were multiplied by a scaling factor, (CTDIvol)organ,convolution, that quantifies the regional irradiation field. The convolution-based organ dose was compared with the organ dose simulated from the Monte Carlo program with TCM profiles explicitly modeled on the original phantom created based on patient images. The estimation error was within 10% across all organs and modulation profiles for abdominopelvic examination. This strategy
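The convolution step can be sketched as smearing the tube-current profile along z with a scatter kernel, averaging the result over the organ's z-extent, and scaling a precomputed organ dose coefficient. The kernel, TCM profile, organ extent, and coefficient below are all hypothetical.

```python
# Sketch of the convolution-based scaling: smear a tube-current-modulation
# (TCM) profile with a scatter kernel, then scale a fixed-current organ dose
# coefficient by the organ-averaged result. All numbers are hypothetical.
tcm = [100, 120, 180, 220, 180, 130, 100, 90]   # mA per slice along z
kernel = [0.1, 0.2, 0.4, 0.2, 0.1]               # normalized scatter spread

def convolve_same(signal, kern):
    h = len(kern) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kern):
            k = min(max(i + j - h, 0), len(signal) - 1)  # clamp at the edges
            acc += w * signal[k]
        out.append(acc)
    return out

field = convolve_same(tcm, kernel)        # regional irradiation field along z
organ_slices = range(2, 5)                # organ spans slices 2-4
ref_mA = sum(tcm) / len(tcm)              # reference current behind CTDIvol
scale = sum(field[i] for i in organ_slices) / len(organ_slices) / ref_mA
organ_dose = 12.0 * scale                 # hypothetical coefficient (mGy) x scaling
print(round(scale, 3), round(organ_dose, 2))  # 1.245 14.94
```

Because the convolution and averaging are cheap, the organ dose for a new patient can be produced nearly instantly once a matched phantom's coefficients exist, in contrast to a full Monte Carlo run.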
Lightning NOx Estimates from Space-Based Lightning Imagers
Koshak, William J.
2017-01-01
The intense heating of air by a lightning channel, and subsequent rapid cooling, leads to the production of lightning nitrogen oxides (NOx = NO + NO2) as discussed in Chameides [1979]. In turn, the lightning nitrogen oxides (or "LNOx" for brevity) indirectly influences the Earth's climate because the LNOx molecules are important in controlling the concentration of ozone (O3) and hydroxyl radicals (OH) in the atmosphere. Climate is most sensitive to O3 in the upper troposphere, and LNOx is the most important source of NOx in the upper troposphere at tropical and subtropical latitudes; hence, lightning is a useful parameter to monitor for climate assessments. The National Climate Assessment (NCA) program was created in response to the Congressionally-mandated Global Change Research Act (GCRA) of 1990. Thirteen US government organizations participate in the NCA program which examines the effects of global change on the natural environment, human health and welfare, energy production and use, land and water resources, human social systems, transportation, agriculture, and biological diversity. The NCA focuses on natural and human-induced trends in global change, and projects major trends 25 to 100 years out. In support of the NCA, the NASA Marshall Space Flight Center (MSFC) continues to assess lightning-climate inter-relationships. This activity applies a variety of NASA assets to monitor in detail the changes in both the characteristics of ground- and space- based lightning observations as they pertain to changes in climate. In particular, changes in lightning characteristics over the conterminous US (CONUS) continue to be examined by this author using data from the Tropical Rainfall Measuring Mission Lightning Imaging Sensor. In this study, preliminary estimates of LNOx trends derived from TRMM/LIS lightning optical energy observations in the 17 yr period 1998-2014 are provided. This represents an important first step in testing the ability to make remote retrievals
Response Surface Model (RSM)-based Benefit Per Ton Estimates
The tables below are updated versions of the tables appearing in The influence of location, source, and emission type in estimates of the human health benefits of reducing a ton of air pollution (Fann, Fulcher and Hubbell 2009).
Trajectory-Based Operations (TBO) Cost Estimation, Phase I
National Aeronautics and Space Administration — The Innovation Laboratory, Inc., proposes to build a tool to estimate airline costs under TBO. This tool includes a cost model that explicitly reasons about traffic...
Estimating Track Capacity Based on Rail Stresses and Metal Fatigue.
2011-09-21
This paper describes a framework to evaluate the structural capacity of railroad track to train-induced loads. The framework is applied to estimate structural performance in terms of allowable limits for crosstie spacing. Evaluation of the load-carry...
Global stereo matching algorithm based on disparity range estimation
Li, Jing; Zhao, Hong; Gu, Feifei
2017-09-01
Global stereo matching algorithms achieve high accuracy in disparity map estimation, but the time consumed in the optimization process remains a major bottleneck, especially for image pairs with high resolution and a large baseline. To improve the computational efficiency of the global algorithms, a disparity range estimation scheme for global stereo matching is proposed in this paper to estimate the disparity map of rectified stereo images. The projective geometry of a parallel binocular stereo setup is investigated to reveal a relationship between the two disparities at each pixel in rectified stereo images with different baselines, which can be used to quickly predict a disparity map in a long-baseline setting from the map estimated in the short-baseline one. Then, the drastically reduced disparity ranges at each pixel under the long-baseline setting can be determined from the predicted disparity map. Furthermore, the disparity range estimation scheme is introduced into graph cuts with expansion moves to estimate the precise disparity map, which greatly reduces the computational cost without loss of accuracy, especially for dense global stereo matching, compared to the traditional algorithm. Experimental results with the Middlebury stereo datasets are presented to demonstrate the validity and efficiency of the proposed algorithm.
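The baseline relationship can be sketched directly: for a rectified parallel setup, disparity scales with the baseline, so a short-baseline disparity map predicts the long-baseline one, and the expensive long-baseline search can be restricted to a narrow band around the prediction. Values below are illustrative.

```python
# Predict long-baseline disparities from a short-baseline disparity map and
# restrict the per-pixel search range. All values are illustrative.
b_short, b_long = 30.0, 120.0          # baselines (mm); disparity ~ baseline
d_short = [[2, 3, 3], [4, 4, 5]]       # disparity map from the short baseline
margin = 2                              # +/- search band in pixels

ratio = b_long / b_short
ranges = [[(int(d * ratio) - margin, int(d * ratio) + margin) for d in row]
          for row in d_short]
# Instead of searching, say, 0..64 candidate disparities per pixel, the
# global matcher now evaluates only 2*margin + 1 candidates per pixel.
print(ranges[0][0])  # (6, 10) for d_short = 2 and ratio = 4
```

The label-space reduction is what shrinks the graph-cut optimization cost, since expansion moves iterate over the candidate disparity labels.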
He, Tao; Liang, Shunlin; Wang, Dongdong; Chen, Xiaona; Song, Dan-Xia; Jiang, Bo
2015-01-01
Monitoring surface albedo at medium-to-fine resolution (<100 m) has become increasingly important for medium-to-fine scale applications and coarse-resolution data evaluation. This paper presents a method for estimating surface albedo directly using top-of-atmosphere reflectance. This is the first attempt to derive surface albedo for both snow-free and snow-covered conditions from medium-resolution data with a single approach. We applied this method to the multispectral data from the wide-...
International Nuclear Information System (INIS)
Tal, Balazs; Bencze, Attila; Zoletnik, Sandor; Veres, Gabor; Por, Gabor
2011-01-01
Time delay estimation (TDE) methods are well-known techniques for investigating poloidal flows in hot magnetized plasmas through the propagation properties of turbulent structures in the medium. One of these methods is based on estimating the time lag at which the cross-correlation function (CCF) estimate reaches its maximum value. The uncertainty of the peak location determines the smallest resolvable flow velocity modulation, and therefore the standard deviation of the time delay imposes an important limitation on the measurements. In this article, the relative standard deviation of the CCF estimate and the standard deviation of its peak location are calculated analytically using a simple model of turbulent signals. This model assumes independent (non-interacting) overlapping events (coherent structures) with randomly distributed spatio-temporal origins moving with the background flow. The result of our calculations is a general formula for the CCF variance, which is valid not only in the high event density limit but for arbitrary event densities. Our formula reproduces the well-known expression for high event densities previously published in the literature. We also present a derivation of the variance of the time delay estimate, which turns out to be inversely proportional to the applied time window. The derived formulas were tested on real plasma measurements. The calculations extend the earlier work of Bencze and Zoletnik [Phys. Plasmas 12, 052323 (2005)], where the autocorrelation-width technique was developed. Additionally, we show that velocities calculated by a TDE method carry a broadband noise originating from this variance; its power spectral density cannot be decreased by coarsening the time resolution, and it can be coherent with the noise of other velocity measurements in which the same turbulent structures are used. This noise should not be confused with the impact of zero mean frequency zonal flow
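The CCF-peak form of TDE described above can be sketched in a few lines (a schematic, not the paper's estimator; the lag-ordering convention follows NumPy's `correlate`):

```python
import numpy as np

def time_delay(x, y, dt):
    """Estimate the delay of y relative to x as the lag (in seconds) at
    which their cross-correlation function peaks."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    # np.correlate(y, x, 'full')[k] peaks where y best matches x shifted
    # by lag k; lags run from -(n-1) to n-1 samples.
    ccf = np.correlate(y, x, mode="full")
    lags = np.arange(-n + 1, n)
    return lags[np.argmax(ccf)] * dt
```

The abstract's point about variance applies to exactly this peak location: with finite time windows the argmax jitters, and that jitter shows up as broadband noise in the inferred velocity.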
Model-based estimation of finite population total in stratified sampling
African Journals Online (AJOL)
The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...
Method of estimation of cloud base height using ground-based digital stereophotography
Chulichkov, Alexey I.; Andreev, Maksim S.; Emilenko, Aleksandr S.; Ivanov, Victor A.; Medvedev, Andrey P.; Postylyakov, Oleg V.
2015-11-01
Errors in the retrieval of atmospheric composition by optical methods (DOAS et al.) are strongly influenced by cloudiness during the measurements. Information on cloud characteristics helps to adjust the optical model of the atmosphere used to interpret the measurements and to reduce the retrieval errors. For the reconstruction of some geometrical characteristics of clouds, a method was developed based on taking pictures of the sky with a pair of digital photo cameras and subsequently processing the obtained sequence of stereo frames to obtain the cloud base height. Since the directions of the optical axes of the stereo cameras are not exactly known, a procedure for adjusting the obtained frames was developed which uses photographs of the night starry sky. In the second step, the method of morphological image analysis is used to determine the relative shift of the coordinates of a cloud fragment between frames. The shift is used to estimate the sought cloud base height. The proposed method can be used for automatic processing of stereo data to obtain the cloud base height. The report describes a mathematical model of the stereophotography measurement, poses and solves the problem of adjusting the optical axes of the cameras, describes the method of searching for cloud fragments in the other frame by morphological image analysis, and formulates and solves the problem of estimating the cloud base height. Theoretical investigation shows that for a stereo base of 60 m and shooting with a resolution of 1600x1200 pixels over a 60° field of view, the errors do not exceed 10% for cloud base heights up to 4 km. Optimization of camera settings can further improve the accuracy. An experimental setup available to the authors, with a stereo base of 17 m and a resolution of 640x480 pixels, preliminarily confirmed the theoretical accuracy estimates in comparison with a laser rangefinder.
Directory of Open Access Journals (Sweden)
Nadine Schur
Full Text Available BACKGROUND: After many years of neglect, schistosomiasis control is going to scale. The strategy of choice is preventive chemotherapy, that is, the repeated large-scale administration of praziquantel (a safe and highly efficacious drug) to at-risk populations. The frequency of praziquantel administration is based on endemicity, which usually is defined by prevalence data summarized at an arbitrarily chosen administrative level. METHODOLOGY: For an ensemble of 29 West and East African countries, we determined the annualized praziquantel treatment needs for the school-aged population, adhering to World Health Organization guidelines. Different administrative levels of prevalence aggregation were considered: country, province, district, and pixel level. Previously published results on spatially explicit schistosomiasis risk in the selected countries were employed to classify each area into distinct endemicity classes that govern the frequency of praziquantel administration. PRINCIPAL FINDINGS: Estimates of infection prevalence adjusted for the school-aged population in 2010 revealed that most countries are classified as moderately endemic for schistosomiasis (prevalence 10-50%), while four countries (i.e., Ghana, Liberia, Mozambique, and Sierra Leone) are highly endemic (>50%). Overall, 72.7 million annualized praziquantel treatments (50% confidence interval (CI): 68.8-100.7 million) are required for the school-aged population if country-level schistosomiasis prevalence estimates are considered, and 81.5 million treatments (50% CI: 67.3-107.5 million) if estimation is based on a more refined spatial scale at the provincial level. CONCLUSIONS/SIGNIFICANCE: Praziquantel treatment needs may be over- or underestimated depending on the level of spatial aggregation. The distribution of schistosomiasis in Ethiopia, Liberia, Mauritania, Uganda, and Zambia is rather uniform, and hence country-level risk estimates are sufficient to calculate treatment needs. On the
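The endemicity-class logic driving the treatment-needs calculation, and the aggregation effect the abstract highlights, can be sketched as follows (a hedged illustration: the class cut-offs follow the abstract, while the annualized treatment factors are our rough reading of WHO preventive-chemotherapy guidance, not figures from the study):

```python
def endemicity_class(prevalence):
    """WHO-style endemicity class from school-aged prevalence (fraction)."""
    if prevalence >= 0.50:
        return "high"        # >50% in the abstract's classification
    if prevalence >= 0.10:
        return "moderate"    # 10-50%
    return "low"

# Illustrative annualized treatments per school-aged child: yearly for high,
# every other year for moderate, roughly twice over six school years for low.
ANNUAL_FACTOR = {"high": 1.0, "moderate": 0.5, "low": 2.0 / 6.0}

def annual_treatments(units):
    """Annualized praziquantel treatments over spatial units, each given
    as a (school_age_population, prevalence) pair."""
    return sum(pop * ANNUAL_FACTOR[endemicity_class(prev)] for pop, prev in units)

# Aggregation level matters: one highly endemic and one low-endemic district.
districts = [(1000, 0.60), (1000, 0.05)]
fine = annual_treatments(districts)                    # district-level estimate
pooled_prev = sum(p * v for p, v in districts) / 2000  # country-level pooling
coarse = annual_treatments([(2000, pooled_prev)])
```

Pooling the two districts into one "moderate" country masks the high-endemicity district, which is exactly the over/underestimation the abstract attributes to the choice of aggregation level.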
Sparse estimation of model-based diffuse thermal dust emission
Irfan, Melis O.; Bobin, Jérôme
2018-03-01
Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution, estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.
Connecting Satellite-Based Precipitation Estimates to Users
Huffman, George J.; Bolvin, David T.; Nelkin, Eric
2018-01-01
Beginning in 1997, the Merged Precipitation Group at NASA Goddard has distributed gridded global precipitation products built by combining satellite and surface gauge data. This started with the Global Precipitation Climatology Project (GPCP), then the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA), and recently the Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement (GPM) mission (IMERG). This 20+-year (and on-going) activity has yielded an important set of insights and lessons learned for making state-of-the-art precipitation data accessible to the diverse communities of users. Merged-data products critically depend on the input sensors and the retrieval algorithms providing accurate, reliable estimates, but it is also important to provide ancillary information that helps users determine suitability for their application. We typically provide fields of estimated random error, and recently reintroduced the quality index concept at user request. Also at user request we have added a (diagnostic) field of estimated precipitation phase. Over time, increasingly more ancillary fields have been introduced for intermediate products that give expert users insight into the detailed performance of the combination algorithm, such as individual merged microwave and microwave-calibrated infrared estimates, the contributing microwave sensor types, and the relative influence of the infrared estimate.
Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon
2010-10-01
An acoustic vector sensor provides measurements of both the pressure and the particle velocity of the sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity-vector-based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
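The intensity-based DOA estimator the paper analyzes can be sketched as a 2-D, noise- and reverberation-free illustration (function names and the unit characteristic-impedance convention are our assumptions):

```python
import numpy as np

def doa_azimuth(p, vx, vy):
    """Azimuth DOA from vector-sensor samples via the time-averaged active
    intensity I = <p * v>.  For a plane wave, I points along the propagation
    direction, so the source direction is its opposite."""
    ix, iy = np.mean(p * vx), np.mean(p * vy)
    return np.arctan2(-iy, -ix)

# Plane wave arriving from azimuth 40 degrees (rho * c taken as 1):
# v = p * k_hat, with propagation unit vector k_hat pointing away from
# the source, i.e. k_hat = -(cos(theta), sin(theta)).
theta = np.deg2rad(40.0)
t = np.linspace(0.0, 1.0, 1000)
p = np.sin(2 * np.pi * 5 * t)
vx, vy = -np.cos(theta) * p, -np.sin(theta) * p
```

The paper's contribution is what happens to this averaged intensity when reverberant reflections are superimposed: the average acquires an RIR-dependent bias term that this idealized sketch does not model.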
Lee, Whanhee; Kim, Ho; Hwang, Sunghee; Zanobetti, Antonella; Schwartz, Joel D; Chung, Yeonseung
2017-09-07
Rich literature has reported a nonlinear association between temperature and mortality. One important feature of the temperature-mortality association is the minimum mortality temperature (MMT). The commonly used approach for estimating the MMT is to take the temperature at which mortality is minimized in the estimated temperature-mortality association curve. An approximate bootstrap approach has also been proposed to calculate the standard errors and the confidence interval for the MMT. However, the statistical properties of these methods were not fully studied. Our research assessed the statistical properties of the previously proposed methods for various types of temperature-mortality association. We also suggest an alternative approach that provides point and interval estimates for the MMT and improves upon the previous approach when some prior knowledge on the MMT is available. We compare the previous and alternative methods through a simulation study and an application. In addition, as the MMT is often used as a reference temperature to calculate cold- and heat-related relative risks (RRs), we examined how the uncertainty in the MMT affects the estimation of the RRs. The previously proposed method of estimating the MMT as a point (denoted Argmin2) may increase the bias or mean squared error for some types of temperature-mortality association. The approximate bootstrap method for calculating the confidence interval (denoted Empirical1) achieves near 95% coverage, but the interval can be unnecessarily long for some types of association. We show that an alternative approach (denoted Empirical2), applicable when some prior knowledge on the MMT is available, works better, reducing the bias and mean squared error in point estimation and achieving near 95% coverage while shortening the interval estimates. The Monte Carlo simulation-based approach to estimate the
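The argmin-type point estimate and the approximate bootstrap interval discussed above can be sketched as follows (a schematic assuming the fitted curve is a basis expansion with asymptotically normal coefficients; all names are ours, and this is not the paper's Argmin2/Empirical1 implementation):

```python
import numpy as np

def mmt_argmin(temps, fitted_log_rr):
    """Point estimate of the MMT: the grid temperature minimizing the
    fitted temperature-mortality curve."""
    return temps[np.argmin(fitted_log_rr)]

def mmt_bootstrap_ci(temps, coef, cov, basis, n_boot=2000, seed=0, level=0.95):
    """Approximate (parametric) bootstrap CI for the MMT: resample the
    curve coefficients from their asymptotic normal distribution and
    recompute the argmin for each draw."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(coef, cov, size=n_boot)
    mmts = temps[np.argmin(draws @ basis.T, axis=1)]   # one argmin per draw
    lo, hi = np.quantile(mmts, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi
```

The abstract's point about overly long intervals corresponds to flat fitted curves: when the minimum is shallow, the resampled argmins scatter widely and the quantile interval balloons, which is what restricting the search with prior knowledge (the Empirical2 idea) mitigates.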
Degenerated-Inverse-Matrix-Based Channel Estimation for OFDM Systems
Directory of Open Access Journals (Sweden)
Makoto Yoshida
2009-01-01
Full Text Available This paper addresses time-domain channel estimation for pilot-symbol-aided orthogonal frequency division multiplexing (OFDM) systems. By using a cyclic sinc-function matrix uniquely determined by the Nc transmitted subcarriers, the performance of our proposed scheme approaches that of perfect channel state information (CSI), within a maximum of 0.4 dB degradation, regardless of the delay spread of the channel, the Doppler frequency, and the subcarrier modulation. Furthermore, reducing the matrix size by splitting the dispersive channel impulse response into clusters makes the degenerated inverse matrix estimator (DIME) feasible for broadband, high-quality OFDM transmission systems. In addition to a theoretical analysis of the normalized mean squared error (NMSE) performance of DIME, computer simulations over realistic non-sample-spaced channels showed that DIME is robust for intersymbol interference (ISI) channels and fast time-variant channels, where a minimum mean squared error (MMSE) estimator does not work well.
Novel multireceiver communication systems configurations based on optimal estimation theory
Kumar, Rajendra
1992-01-01
A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates the various phase processes received at the different receivers with coupled phase-locked loops, wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration minimizes the effective radio loss at the combiner output, and thus maximizes the energy-per-bit to noise power spectral density ratio. A novel adaptive algorithm for estimating the signal model parameters when these are not known a priori is also presented.
Directory of Open Access Journals (Sweden)
Davinia Font
2015-04-01
Full Text Available This paper presents a method for vineyard yield estimation based on the analysis of high-resolution images obtained with artificial illumination at night. First, the paper assesses different pixel-based segmentation methods for detecting reddish grapes: threshold-based, Mahalanobis distance, Bayesian classifier, linear color model segmentation, and histogram segmentation, in order to obtain the best estimate of the area of the grape clusters under these illumination conditions. The color spaces tested were the original RGB and Hue-Saturation-Value (HSV). The best segmentation method in the case of a non-occluded reddish table-grape variety was threshold segmentation applied to the H layer, with an area estimation error of 13.55%, improved to 10.01% by morphological filtering. Secondly, after segmentation, two procedures for yield estimation based on a previous calibration procedure are proposed: (1) the number of pixels corresponding to a cluster of grapes is computed and converted directly into a yield estimate; and (2) the area of a cluster of grapes is converted into a volume by means of a solid of revolution, and this volume is converted into a yield estimate; the yield errors obtained were 16% and −17%, respectively.
Font, Davinia; Tresanchez, Marcel; Martínez, Dani; Moreno, Javier; Clotet, Eduard; Palacín, Jordi
2015-01-01
This paper presents a method for vineyard yield estimation based on the analysis of high-resolution images obtained with artificial illumination at night. First, the paper assesses different pixel-based segmentation methods for detecting reddish grapes: threshold-based, Mahalanobis distance, Bayesian classifier, linear color model segmentation, and histogram segmentation, in order to obtain the best estimate of the area of the grape clusters under these illumination conditions. The color spaces tested were the original RGB and Hue-Saturation-Value (HSV). The best segmentation method in the case of a non-occluded reddish table-grape variety was threshold segmentation applied to the H layer, with an area estimation error of 13.55%, improved to 10.01% by morphological filtering. Secondly, after segmentation, two procedures for yield estimation based on a previous calibration procedure are proposed: (1) the number of pixels corresponding to a cluster of grapes is computed and converted directly into a yield estimate; and (2) the area of a cluster of grapes is converted into a volume by means of a solid of revolution, and this volume is converted into a yield estimate; the yield errors obtained were 16% and −17%, respectively. PMID:25860071
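The H-layer threshold segmentation and the pixel-count calibration (procedure 1) can be sketched in pure NumPy (the hue cut-off and the kg-per-pixel ratio below are placeholder values, not the paper's calibration):

```python
import numpy as np

def rgb_to_hue(img):
    """Hue (degrees) for an RGB array of shape (..., 3), values in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(-1), img.min(-1)
    d = np.where(mx == mn, 1e-9, mx - mn)   # avoid division by zero on grays
    h = np.select([mx == r, mx == g],
                  [(g - b) / d % 6, (b - r) / d + 2],
                  (r - g) / d + 4)
    return 60.0 * h

def grape_mask(img, h_max=20.0):
    """Threshold segmentation on the H layer: reddish pixels have hue
    near 0 or near 360 degrees."""
    hue = rgb_to_hue(img)
    return (hue <= h_max) | (hue >= 360.0 - h_max)

def yield_estimate(mask, kg_per_pixel):
    """Procedure (1): segmented pixel count times a previously calibrated
    kg-per-pixel ratio."""
    return mask.sum() * kg_per_pixel
```

Procedure (2) would replace the last step by revolving the cluster area into a volume before applying the calibration; the segmentation front end is the same.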
Memristor based crossbar memory array sneak path estimation
Naous, Rawan
2014-07-01
Gateless memristor arrays have the advantage of offering high-density systems; however, their main limitation is current leakage, i.e., the sneak path. Several techniques have been used to address this problem, mainly concentrating on spatial and temporal solutions for setting a dynamic threshold. In this paper, a novel approach is taken that utilizes channel estimation and detection theory, primarily building on OFDM concepts of pilot settings, to benefit from prior read values in estimating the noise level and to use that estimate to further enhance the reliability and accuracy of the read-out process.
Topology-Based Estimation of Missing Smart Meter Readings
Directory of Open Access Journals (Sweden)
Daisuke Kodaira
2018-01-01
Full Text Available Smart meters often fail to measure or transmit the data they record when measuring energy consumption, known as meter readings, owing to faulty measuring equipment or unreliable communication modules. Existing studies do not address successive and non-periodical missing meter readings. This paper proposes a method whereby missing readings observed at a node are estimated by using circuit theory principles that leverage the voltage and current data from adjacent nodes. A case study is used to demonstrate the ability of the proposed method to successfully estimate the missing readings over an entire day during which outages and unpredictable perturbations occurred.
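A hedged sketch of the circuit-theory idea: the paper leverages voltage and current data from adjacent nodes, while the toy version below illustrates only the simplest balance form, where the feeder total equals the sum of downstream loads plus losses (all names and the loss-fraction parameter are illustrative):

```python
def estimate_missing_load(feeder_kw, sibling_kws, loss_fraction=0.0):
    """Kirchhoff-style balance at a distribution node: the missing meter's
    reading is the residual of the feeder total after subtracting the
    metered sibling loads (and an assumed loss share)."""
    return feeder_kw * (1.0 - loss_fraction) - sum(sibling_kws)

def estimate_missing_series(feeder, siblings, loss_fraction=0.0):
    """Apply the balance interval-by-interval, which handles successive,
    non-periodical gaps (feeder: per-interval totals; siblings: one
    per-interval series per adjacent metered node)."""
    return [f * (1.0 - loss_fraction) - sum(s)
            for f, s in zip(feeder, zip(*siblings))]
```

Because each interval is solved independently, an outage spanning an entire day (as in the paper's case study) is filled as easily as a single missing reading, provided the adjacent nodes kept reporting.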
Line impedance estimation using model based identification technique
DEFF Research Database (Denmark)
Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus
2011-01-01
The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions......-passive behaviour of the proposed method comes from combining the non-intrusive behaviour of the passive methods with the better accuracy of the active methods. The simulation results reveal the good accuracy of the proposed method....
Deformation-Based Atrophy Estimation for Alzheimer’s Disease
DEFF Research Database (Denmark)
Pai, Akshay Sadananda Uppinakudru
Alzheimer’s disease (AD), the most common form of dementia, is a term used for accelerated memory loss and decline in cognitive abilities severe enough to hamper day-to-day activities. One of the most globally accepted markers for AD is atrophy, mainly in the brain parenchyma. The goal of the PhD project...... and a new way to estimate atrophy from a deformation field. We demonstrate the performance of the proposed solution by applying it to the publicly available Alzheimer’s Disease Neuroimaging Initiative (ADNI) data and compare it to existing state-of-the-art atrophy estimation methods....
Dungen, van den C.; Hoeymans, N.; Boshuizen, H.C.; Akker, van den M.; Biermans, M.C.; Boven, van K.; Brouwer, H.J.; Verheij, R.A.; Waal, de M.W.; Schellevis, F.G.; Westert, G.P.
2011-01-01
Background: General practice-based registration networks (GPRNs) provide information on morbidity rates in the population. Morbidity rate estimates from different GPRNs, however, reveal considerable, unexplained differences. We studied the range and variation in morbidity estimates, as well as the
Time-of-flight estimation based on covariance models
van der Heijden, Ferdinand; Tuquerres, G.; Regtien, Paulus P.L.
We address the problem of estimating the time-of-flight (ToF) of a waveform that is heavily disturbed by additional reflections from nearby objects. These additional reflections cause interference patterns that are difficult to predict. The introduction of a model for the reflection in terms of a
Estimation of toxicity using a Java based software tool
A software tool has been developed that will allow a user to estimate the toxicity for a variety of endpoints (such as acute aquatic toxicity). The software tool is coded in Java and can be accessed using a web browser (or alternatively downloaded and run as a stand-alone applic...
Estimate-Merge-Technique-based algorithms to track an underwater ...
Indian Academy of Sciences (India)
D V A N Ravi Kumar
2017-07-04
Jul 4, 2017 ... Abstract. Bearing-only passive target tracking is a well-known underwater defence issue, dealt with in the recent past using conventional nonlinear estimators like the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). It is nowadays treated with derivatives of the EKF, the UKF, and a highly ...
Estimating Outdoor Illumination Conditions Based on Detection of Dynamic Shadows
DEFF Research Database (Denmark)
Madsen, Claus B.; Lal, Brajesh Behari
2013-01-01
into the image stream to achieve realistic Augmented Reality, where the shading and the shadowing of virtual objects are consistent with the real scene. Other techniques require the presence of a known object, a light probe, in the scene for estimating illumination. The technique proposed here works in general...
Estimation of methane generation based on anaerobic digestion ...
African Journals Online (AJOL)
... comparable (within 14%) to the amount estimated by laboratory-scale anaerobic digestion experiment (1.43 Gg methane/month). It is a worthwhile undertaking to further investigate the potential of commercially producing methane from Kiteezi landfill as an alternative source of green and clean energy for urban masses.
A Semantics-Based Approach to Construction Cost Estimating
Niknam, Mehrdad
2015-01-01
A construction project requires collaboration of different organizations such as owner, designer, contractor, and resource suppliers. These organizations need to exchange information to improve their teamwork. Understanding the information created in other organizations requires specialized human resources. Construction cost estimating is one of…
Robust estimators based on generalization of trimmed mean
Czech Academy of Sciences Publication Activity Database
Adam, Lukáš; Bejda, P.
(2018) ISSN 0361-0918 Institutional support: RVO:67985556 Keywords : Breakdown point * Estimators * Geometric median * Location * Trimmed mean Subject RIV: BA - General Mathematics Impact factor: 0.457, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/adam-0481224.pdf
Parameter extraction and estimation based on the PV panel outdoor ...
African Journals Online (AJOL)
This work presents a novel approach to predict the voltage-current (V-I) characteristics of a PV panel under varying weather conditions to estimate the PV parameters. Outdoor performance of the PV module (AP-PM-15) was carried out for several times. The experimental data obtained are validated and compared with the ...
Estimate-Merge-Technique-based algorithms to track an underwater ...
Indian Academy of Sciences (India)
... at the same time require a much smaller number of computations than the PF, showing that these filters can serve as an optimal estimator. A testimony to the aforementioned advantages of the proposed novel methods is given by carrying out Monte Carlo simulation in MATLAB R2009a for a typical wartime scenario ...
The Performance of Estimators Based on the Propensity Score
Huber, M.; Lechner, M.; Wunsch, C.
2013-01-01
We investigate the finite sample properties of a large number of estimators for the average treatment effect on the treated that are suitable when adjustment for observed covariates is required, like inverse probability weighting, kernel and other variants of matching, as well as different
Removing label ambiguity in learning-based visual saliency estimation.
Li, Jia; Xu, Dong; Gao, Wen
2012-04-01
Visual saliency is a useful clue to depict visually important image/video contents in many multimedia applications. In visual saliency estimation, a feasible solution is to learn a "feature-saliency" mapping model from the user data obtained by manually labeling activities or eye-tracking devices. However, label ambiguities may also arise due to the inaccurate and inadequate user data. To process the noisy training data, we propose a multi-instance learning to rank approach for visual saliency estimation. In our approach, the correlations between various image patches are incorporated into an ordinal regression framework. By iteratively refining a ranking model and relabeling the image patches with respect to their mutual correlations, the label ambiguities can be effectively removed from the training data. Consequently, visual saliency can be effectively estimated by the ranking model, which can pop out real targets and suppress real distractors. Extensive experiments on two public image data sets show that our approach outperforms 11 state-of-the-art methods remarkably in visual saliency estimation.
Weighted-noise threshold based channel estimation for OFDM ...
Indian Academy of Sciences (India)
invariant within one OFDM symbol duration (Coleri et al 2002; Rosati et al 2012). [Figure 1: MSE vs. SNR plots in ITU-TU6 with exact and estimated KCS, comparing the noivar, SOT, weighted-noise, and MMSE estimators.]
Tobacco smuggling estimates based on pack examination in Ukraine
Directory of Open Access Journals (Sweden)
Tatiana I Andreeva
2017-05-01
Cigarette pack examination as a part of tobacco surveillance allows estimating the proportion of cigarettes brought from other countries, part of which could be smuggled. This information can be used for counterbalancing the industry's statements, which usually overestimate the level of cigarette smuggling.
Estimate-Merge-Technique-based algorithms to track an underwater ...
Indian Academy of Sciences (India)
D V A N Ravi Kumar
2017-07-04
Jul 4, 2017 ... named Pre-Merge UKF and Post-Merge UKF, which differ in the way the feedback to the individual UKFs is applied. These novel methods have the advantage of a lower root mean square estimation error in position and velocity compared with the EKF and UKF, and at the same time require much lesser ...
Estimating heritability for cause specific mortality based on twin studies
DEFF Research Database (Denmark)
Scheike, Thomas; Holst, Klaus K.; Hjelmborg, Jacob B.
2014-01-01
be used to achieve sensible estimates of the dependence within monozygotic and dizygotic twin pairs that may vary over time. These dependence measures can subsequently be decomposed into a genetic and environmental component using random effects models. We here present several novel models that in essence...
Weighted-noise threshold based channel estimation for OFDM ...
Indian Academy of Sciences (India)
2016-08-26
Aug 26, 2016 ... Orthogonal frequency division multiplexing (OFDM) technology is the key to evolving telecommunication standards including 3GPP-LTE Advanced and WiMAX. The reliability of any OFDM system increases with improved mean square error (MSE) performance of its channel estimator (CE). Particularly, a least ...
Estimation of methane generation based on anaerobic digestion ...
African Journals Online (AJOL)
Drake
This is due to inadequate data, such as the estimated amount of refuse projected to be generated ... the main GHG liberated from landfills is a big threat to our environment, because its global warming potential is ..... Content in Controlling Landfill Gas Production and Its Application to a Model for Landfill Refuse Decomposition.
Nutrient based estimation of acid-base balance in vegetarians and non-vegetarians.
Deriemaeker, Peter; Aerenhouts, Dirk; Hebbelinck, Marcel; Clarys, Peter
2010-03-01
A first objective of the present study was to estimate the acid-base balance of the food intake in vegetarians and non-vegetarians. A second objective was to evaluate whether additional input of specific food items to the existing potential renal acid load (PRAL) list was necessary for the comparison of the two dietary patterns. Thirty vegetarians between the ages of 18 and 30 years were matched for sex, age and BMI with 30 non-vegetarians. Based on 3-day food diaries, the acid-base status of the food intake was estimated using the PRAL method. Mean PRAL values as estimated with the standard table yielded an alkaline load of -5.4 +/- 14.4 mEq/d in the vegetarians compared to an acid load of 10.3 +/- 14.4 mEq/d in the non-vegetarians. Estimates based on the extended table again yielded an alkaline load for the vegetarians, compared to an acid load of 13.8 +/- 17.1 mEq/d for the non-vegetarians. Overall, vegetarian food intake produces more alkaline outcomes than non-vegetarian diets. The use of the standard PRAL table was sufficient for discrimination between the two diets.
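The PRAL computation underlying these estimates is a linear combination of daily nutrient intakes; a sketch using the commonly cited Remer-Manz coefficients (the study's standard or extended table may differ in detail):

```python
def pral(protein_g, phosphorus_mg, potassium_mg, calcium_mg, magnesium_mg):
    """Potential renal acid load (mEq/day) from daily nutrient intakes.
    Positive values indicate an acid load, negative values an alkaline load."""
    return (0.49 * protein_g + 0.037 * phosphorus_mg
            - 0.021 * potassium_mg - 0.013 * calcium_mg
            - 0.026 * magnesium_mg)

# A protein- and phosphorus-rich (omnivorous-style) intake yields an acid
# load, while high potassium/magnesium (plant-rich) intakes push PRAL negative.
acid = pral(100, 1400, 3000, 1000, 300)
```

Extending the table, as the study's second objective investigates, amounts to adding nutrient data for food items missing from the standard list before summing the daily intakes fed into this formula.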
This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when using prognostics to make critical decisions.
DOA Estimation of Multiple LFM Sources Using a STFT-based and FBSS-based MUSIC Algorithm
Directory of Open Access Journals (Sweden)
K. B. Cui
2017-12-01
Full Text Available Direction of arrival (DOA) estimation is an important problem in array signal processing. An effective multiple signal classification (MUSIC) method based on the short-time Fourier transform (STFT) and forward/backward spatial smoothing (FBSS) techniques is addressed for the DOA estimation of multiple time-frequency (t-f) joint LFM sources. Previous work in this area, e.g., the STFT-MUSIC algorithm, cannot resolve sources whose t-f signatures are largely or completely joint, because it can only select single-source t-f points. The proposed method constructs the spatial t-f distributions (STFDs) by selecting multiple-source t-f points and uses the FBSS technique to solve the problem of rank loss. In this way, the STFT-FBSS-MUSIC algorithm can resolve largely or completely t-f joint LFM sources. In addition, the proposed algorithm has quite low computational complexity when resolving multiple LFM sources because it reduces the number of eigendecompositions and spectrum searches. The performance of the proposed method is compared with that of existing t-f based MUSIC algorithms through computer simulations, and the results show its good performance.
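The FBSS-plus-MUSIC core of such an approach can be sketched for a narrowband uniform linear array (a minimal illustration; the STFT-based multi-source t-f point selection that precedes it in the paper is omitted, and all names are ours):

```python
import numpy as np

def fbss(R, L):
    """Forward/backward spatial smoothing of an M x M array covariance into
    an L x L subarray covariance, restoring rank for coherent sources."""
    M = R.shape[0]
    J = np.fliplr(np.eye(M))
    Rb = J @ R.conj() @ J                      # backward covariance
    K = M - L + 1                              # number of overlapping subarrays
    return sum(R[k:k + L, k:k + L] + Rb[k:k + L, k:k + L]
               for k in range(K)) / (2 * K)

def music_spectrum(Rs, n_src, angles_deg, d=0.5):
    """MUSIC pseudo-spectrum over candidate angles for a ULA with element
    spacing d in wavelengths."""
    L = Rs.shape[0]
    w, V = np.linalg.eigh(Rs)                  # eigenvalues in ascending order
    En = V[:, :L - n_src]                      # noise subspace
    th = np.deg2rad(np.asarray(angles_deg))
    A = np.exp(2j * np.pi * d * np.outer(np.arange(L), np.sin(th)))
    return 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2
```

For LFM sources, the covariance fed into `fbss` would be the STFD built from selected t-f points rather than the plain sample covariance used in the test below.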
A Study on Parametric Wave Estimation Based on Measured Ship Motions
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam; Iseki, Toshio
2011-01-01
The paper studies parametric wave estimation based on the ‘wave buoy analogy’, and data and results obtained from the training ship Shioji-maru are compared with estimates of the sea states obtained from other measurements and observations. Furthermore, the estimating characteristics of the parametric model are discussed by considering the results of a similar estimation concept based on Bayesian modelling. The purpose of the latter comparison is not to favour one estimation approach over the other, but rather to highlight some of the advantages and disadvantages of the two approaches.
Complexity estimates based on integral transforms induced by computational units
Czech Academy of Sciences Publication Activity Database
Kůrková, Věra
2012-01-01
Roč. 33, September (2012), s. 160-167 ISSN 0893-6080 R&D Projects: GA ČR GAP202/11/1368 Institutional research plan: CEZ:AV0Z10300504 Institutional support: RVO:67985807 Keywords : neural networks * estimates of model complexity * approximation from a dictionary * integral transforms * norms induced by computational units Subject RIV: IN - Informatics, Computer Science Impact factor: 1.927, year: 2012
Spectrum-based estimators of the bivariate Hurst exponent
Czech Academy of Sciences Publication Activity Database
Krištoufek, Ladislav
2014-01-01
Roč. 90, č. 6 (2014), art. 062802 ISSN 1539-3755 R&D Projects: GA ČR(CZ) GP14-11402P Institutional support: RVO:67985556 Keywords : bivariate Hurst exponent * power-law cross-correlations * estimation Subject RIV: AH - Economics Impact factor: 2.288, year: 2014 http://library.utia.cas.cz/separaty/2014/E/kristoufek-0436818.pdf
Nonparametric Estimation of Information-Based Measures of Statistical Dispersion
Czech Academy of Sciences Publication Activity Database
Košťál, Lubomír; Pokora, Ondřej
2012-01-01
Roč. 14, č. 7 (2012), s. 1221-1233 ISSN 1099-4300 R&D Projects: GA ČR(CZ) GAP103/11/0282; GA ČR(CZ) GBP304/12/G069; GA ČR(CZ) GPP103/12/ P558 Institutional support: RVO:67985823 Keywords : statistical dispersion * entropy * Fisher information * nonparametric density estimation * neuronal activity Subject RIV: FH - Neurology Impact factor: 1.347, year: 2012
Model-Based Optimizing Control and Estimation Using Modelica Model
Directory of Open Access Journals (Sweden)
L. Imsland
2010-07-01
Full Text Available This paper reports on experiences from case studies in using Modelica/Dymola models interfaced to control and optimization software, as process models in real time process control applications. Possible applications of the integrated models are in state- and parameter estimation and nonlinear model predictive control. It was found that this approach is clearly possible, providing many advantages over modeling in low-level programming languages. However, some effort is required in making the Modelica models accessible to NMPC software.
Complementarity based a posteriori error estimates and their properties
Czech Academy of Sciences Publication Activity Database
Vejchodský, Tomáš
2012-01-01
Roč. 82, č. 10 (2012), s. 2033-2046 ISSN 0378-4754 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : error majorant * a posteriori error estimates * method of hypercircle Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2012 http://www.sciencedirect.com/science/article/pii/S0378475411001509
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
National Aeronautics and Space Administration — Model-based prognostics approaches capture system knowledge in the form of physics-based models of components that include how they fail. These methods consist of...
Transformer winding temperature estimation based on tank surface temperature
Guo, Wenyu; Wijaya, Jaury; Martin, Daniel; Lelekakis, Nick
2011-04-01
Power transformers are among the most valuable assets of the electrical grid. Since the largest units cost in the order of millions of dollars, it is desirable to operate them in such a manner that extends their remaining lives. Operating these units at high temperature will cause excessive insulation ageing in the windings. Consequently, it is necessary to study the thermal performance of these expensive items. Measuring or estimating the winding temperature of power transformers is beneficial to a utility because this provides them with the data necessary to make informed decisions on how best to use their assets. Fiber optic sensors have recently become viable for the direct measurement of winding temperatures while a transformer is energized. However, it is only practical to install a fiber optic temperature sensor during the manufacture of a transformer. For transformers operating without fiber optic sensors, the winding temperature can be estimated with calculations using the temperature of the oil at the top of the transformer tank. When the oil temperature measurement is not easily available, the temperature of the tank surface may be used as an alternative. This paper shows how surface temperature may be utilized to estimate the winding temperature within a transformer designed for research purposes.
Crop coefficient approaches based on fixed estimates of leaf resistance are not appropriate for estimating water use of citrus
CSIR Research Space (South Africa)
Taylor, N. J.; Mahohoma, W.; Vahrmeijer, J. T.; Gush, M. B.; Allen, R. G.; Annandale, J. G.
2015-03-01
Full Text Available
National Research Council Canada - National Science Library
D'Mello, Tiffany A; Yamane, Grover K
2007-01-01
.... Until recently, gender-specific weight standards based on height were in place. However, in June 2006 the USAF implemented a new set of height-weight limits utilizing body mass index (BMI) criteria...
Camera Coverage Estimation Based on Multistage Grid Subdivision
Directory of Open Access Journals (Sweden)
Meizhen Wang
2017-04-01
Full Text Available Visual coverage is one of the most important quality indexes for depicting the usability of an individual camera or camera network. It is the basis for camera network deployment, placement, coverage-enhancement, planning, etc. Precision and efficiency critically influence applications, especially those involving several cameras. This paper proposes a new method to efficiently estimate camera coverage. First, the geographic area covered by the camera and its minimum bounding rectangle (MBR), ignoring obstacles, are computed from the camera parameters. Second, the MBR is divided into grids of an initial size. The status of the four corners of each grid is evaluated by a line-of-sight (LOS) algorithm: a corner covered by the camera (obstacles considered) is represented by 1, otherwise by 0, so the status of a grid is a code combining four 0s and 1s. If the code is not homogeneous (not four 0s or four 1s), the grid is divided into four sub-grids, and the process repeats until a specified maximum subdivision level is reached or the codes become homogeneous. Finally, total camera coverage is estimated from the size and status of all grids. Experimental results illustrate that the proposed method’s accuracy approaches that of dividing the whole coverage area into the smallest grids at the maximum level, while its efficiency is close to that of using only the initial grids; it thus balances efficiency and accuracy. The initial grid size and the maximum level are two critical parameters of the proposed method, and can be chosen by weighing efficiency against accuracy.
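The subdivide-while-mixed loop can be sketched with a toy coverage predicate; here a plain disc stands in for the camera footprint and its line-of-sight test, and the grid sizes are invented:

```python
def cell_area(covers, x0, y0, size, level, max_level):
    """Covered area of one grid cell, refined recursively while its
    4-corner code (0/1 per corner) is not homogeneous."""
    corners = [covers(x0, y0), covers(x0 + size, y0),
               covers(x0, y0 + size), covers(x0 + size, y0 + size)]
    if all(corners):
        return size * size                        # code 1111: fully covered
    if not any(corners):
        return 0.0                                # code 0000: not covered
    if level == max_level:
        return size * size * sum(corners) / 4.0   # mixed cell at last level
    h = size / 2.0                                # otherwise split into 4 sub-grids
    return sum(cell_area(covers, x, y, h, level + 1, max_level)
               for x in (x0, x0 + h) for y in (y0, y0 + h))

def estimate_coverage(covers, x0, y0, width, n_init=8, max_level=4):
    """Divide the MBR into an initial n_init x n_init grid, then refine."""
    step = width / n_init
    return sum(cell_area(covers, x0 + i * step, y0 + j * step, step, 0, max_level)
               for i in range(n_init) for j in range(n_init))

# Toy footprint: a disc of radius 1 centred in a 2 x 2 MBR.
inside = lambda x, y: (x - 1.0) ** 2 + (y - 1.0) ** 2 <= 1.0
area = estimate_coverage(inside, 0.0, 0.0, 2.0)
print(area)   # close to pi
```

Only mixed cells are refined, which is why the cost stays close to that of the initial grid while the accuracy approaches that of the finest grid.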
Porada, Philipp; Pöschl, Ulrich; Kleidon, Axel; Beer, Christian; Weber, Bettina
2017-04-01
Lichens and bryophytes have been shown to release significant amounts of nitrous oxide (N2O), which is a strong greenhouse gas and atmospheric ozone-depleting agent. Relative contributions of lichens and bryophytes to nitrous oxide emissions are largest in dryland and tundra regions, with potential implications for the nitrogen balance of these ecosystems. So far, this estimate is based on large-scale values of net primary productivity of lichens and bryophytes, which are derived from empirical upscaling of field measurements. Productivity is then converted to nitrous oxide emissions by empirical relationships between productivity and respiration, as well as respiration and nitrous oxide release. Alternatively, we quantify nitrous oxide emissions using a global process-based non-vascular vegetation model of lichens and bryophytes. The model simulates photosynthesis and respiration of lichens and bryophytes directly as a function of climatic conditions, such as light and temperature. Nitrous oxide emissions are then derived from simulated respiration, assuming a fixed relationship between the two fluxes, which is based on laboratory experiments under varying environmental conditions. Our approach yields a global estimate of 0.27 (0.19-0.35) Tg N2O yr-1 released by lichens and bryophytes. This is at the lower end of the range of a previous, empirical estimate, but corresponds to about 50% of the atmospheric deposition of nitrous oxide into the oceans or 25% of the atmospheric deposition on land. We conclude that, while productivity of lichens and bryophytes at large scale is relatively well constrained, improved estimates of their respiration may help to reduce uncertainty of predicted N2O emissions. This is particularly important for quantifying the spatial distribution of N2O emissions by lichens and bryophytes, since simulated respiration shows a different global pattern than productivity. We find that both physiological variation among species as well as
Estimation of piping temperature fluctuations based on external strain measurements
International Nuclear Information System (INIS)
Morilhat, P.; Maye, J.P.
1993-01-01
Due to the difficulty of carrying out measurements at the inner side of nuclear reactor piping subjected to thermal transients, temperature and stress variations in the pipe walls are estimated by means of external thermocouples and strain gauges. This inverse problem is solved by spectral analysis: since the wall harmonic transfer function (the response to a harmonic load) is known, the inner-side signal is obtained by convolving the measured strain with the inverse transfer function of the system. The method enables detection of internal temperature fluctuations in a frequency range beyond the scope of the thermocouples. (authors). 5 figs., 3 refs
Evaporation estimation for Lake Nasser based on remote sensing technology
Hassan, Mohamed
2013-01-01
Lake Nasser in Upper Egypt is of great importance for Egypt as it represents a large reservoir for the country’s freshwater resources. Precise studying of all elements contributing to the water balance of Lake Nasser is very crucial for better management of Egypt’s water resources. Evaporation is considered an important factor of the water balance system that causes a huge loss of the lake’s waters. In this study, the evaporation rate for Lake Nasser is estimated using the surface energy balanc...
DEFF Research Database (Denmark)
Soliman, Hammam Abdelaal Hammam; Wang, Huai; Gadalla, Brwene Salah Abdelkarim
2015-01-01
challenges. A capacitance estimation method based on an Artificial Neural Network (ANN) algorithm is therefore proposed in this paper. The implemented ANN estimates the capacitance of the DC-link capacitor in a back-to-back converter. Analysis of the error of the capacitance estimation is also given...
Image-based aircraft pose estimation: a comparison of simulations and real-world data
Breuers, Marcel G. J.; de Reus, Nico
2001-10-01
The problem of estimating aircraft pose information from monocular image data is considered using a Fourier descriptor based algorithm. The dependence of pose estimation accuracy on image resolution and aspect angle is investigated through simulations using sets of synthetic aircraft images. Further evaluation shows that good pose estimation accuracy can be obtained in real-world image sequences.
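Invariant Fourier descriptors of a closed contour can be sketched as follows. Ellipses stand in for aircraft silhouettes, and the descriptor length and invariance normalizations are the standard textbook choices, not necessarily those of the paper:

```python
import numpy as np

def fourier_descriptors(contour, k=8):
    """Translation/rotation/scale-invariant Fourier descriptors of a
    closed contour given as complex points x + iy."""
    F = np.fft.fft(contour)
    F[0] = 0.0                         # drop DC -> translation invariance
    mags = np.abs(F)                   # drop phase -> rotation invariance
    feats = np.concatenate([mags[1:k + 1], mags[-k:]])  # pos./neg. harmonics
    return feats / mags[1]             # normalize -> scale invariance

def sample_ellipse(a, b, n=64, phase=0.0, scale=1.0, shift=0j):
    t = 2 * np.pi * np.arange(n) / n
    z = a * np.cos(t) + 1j * b * np.sin(t)
    return scale * z * np.exp(1j * phase) + shift

# The same "silhouette" seen rotated, scaled and shifted should give
# nearly identical descriptors; a different shape should not.
d1 = fourier_descriptors(sample_ellipse(2.0, 1.0))
d2 = fourier_descriptors(sample_ellipse(2.0, 1.0, phase=0.7, scale=3.0, shift=5 + 2j))
d3 = fourier_descriptors(sample_ellipse(2.0, 0.3))

print(np.linalg.norm(d1 - d2), np.linalg.norm(d1 - d3))
```

In a pose-estimation setting, descriptors of the observed silhouette would be matched against a library of descriptors precomputed from synthetic views over a grid of aspect angles.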
Energy Technology Data Exchange (ETDEWEB)
Barut, Murat, E-mail: muratbarut27@yahoo.co [Nigde University, Department of Electrical and Electronics Engineering, 51245 Nigde (Turkey)
2010-10-15
This study offers a novel extended Kalman filter (EKF) based estimation technique for the on-line estimation of uncertainties in the stator and rotor resistances, inherent to speed-sensorless high-efficiency control of induction motors (IMs) over a wide speed range, while also extending the limited number of state and parameter estimates possible with a conventional single EKF algorithm. To this end, the introduced technique uses a single EKF algorithm that consecutively executes two inputs derived from two individual extended IM models based on the stator resistance and rotor resistance estimates, unlike previous approaches that require two separate EKF algorithms operating in a switching or braided manner; it therefore has an advantage over earlier EKF schemes in this regard. The proposed EKF-based technique performs on-line estimation of the stator currents, the rotor flux, the rotor angular velocity, and the load torque including the viscous-friction term, together with the rotor and stator resistances. It is combined with speed-sensorless direct vector control of the IM and tested in simulations under 12 challenging scenarios generated via step and/or linear variations of the velocity reference, the load torque, the stator resistance, and the rotor resistance at high and zero speed, assuming that the measured stator phase currents and voltages are available. Even under these variations, the performance of the speed-sensorless direct vector control system built on the novel EKF-based estimation technique is observed to be quite good.
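The joint state-and-parameter EKF idea can be illustrated on a deliberately tiny model: a scalar system with one unknown, slowly varying parameter stands in for the augmented induction-motor models, and all numerical values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = 0.85                        # unknown "resistance-like" parameter
Q = np.diag([1e-4, 1e-6])            # process noise (state, parameter)
Rm = 1e-2                            # measurement noise variance

z = np.array([0.0, 0.5])             # augmented estimate [x_hat, a_hat]
P = np.eye(2)
x = 0.0
for k in range(400):
    u = np.sin(0.1 * k)              # persistently exciting input
    x = a_true * x + u               # true system x[k+1] = a*x[k] + u[k]
    y = x + rng.normal(0.0, np.sqrt(Rm))   # noisy measurement

    # EKF predict: f(z) = [a_hat*x_hat + u, a_hat]
    Fj = np.array([[z[1], z[0]], [0.0, 1.0]])  # Jacobian of f at the prior
    z = np.array([z[1] * z[0] + u, z[1]])
    P = Fj @ P @ Fj.T + Q

    # EKF update with y = x + v, H = [1, 0]
    H = np.array([[1.0, 0.0]])
    S = (H @ P @ H.T)[0, 0] + Rm
    K = P @ H.T / S                  # 2x1 Kalman gain
    z = z + K[:, 0] * (y - z[0])
    P = (np.eye(2) - K @ H) @ P

print(z[1])                          # converges toward a_true
```

Augmenting the state with the parameter and linearizing at each step is the same mechanism the paper's extended IM models rely on; the small process noise on the parameter lets the filter track slow resistance drift.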
A History-based Estimation for LHCb job requirements
Rauschmayr, Nathalie
2015-01-01
The main goal of a Workload Management System (WMS) is to find and allocate resources for the given tasks. The more and better job information the WMS receives, the easier it is to accomplish its task, which directly translates into higher utilization of resources. Traditionally, the information associated with each job, like expected runtime, is defined beforehand by the Production Manager in the best case, and otherwise set to fixed, arbitrary default values. In the case of LHCb's Workload Management System, no mechanisms are provided which automate the estimation of job requirements. As a result, much more CPU time is normally requested than actually needed. Particularly in the context of multicore jobs this presents a major problem, since single- and multicore jobs shall share the same resources. Consequently, grid sites need to rely on estimations given by the VOs in order not to decrease the utilization of their worker nodes when making multicore job slots available. The main reason for going to multicore jobs is the red...
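A minimal history-based requirement estimator might look like this; the percentile, safety margin and default values are invented for illustration, not LHCb's:

```python
from collections import defaultdict

class RuntimeEstimator:
    """Derive the CPU-time request from the history of similar jobs:
    a high percentile of past runtimes plus a safety margin, falling
    back to a fixed default when no history exists."""
    def __init__(self, default=48 * 3600, quantile=0.95, margin=1.1):
        self.history = defaultdict(list)
        self.default, self.quantile, self.margin = default, quantile, margin

    def record(self, job_type, runtime):
        self.history[job_type].append(runtime)

    def estimate(self, job_type):
        runs = sorted(self.history[job_type])
        if not runs:
            return self.default          # no history: arbitrary default
        q = runs[min(len(runs) - 1, int(self.quantile * len(runs)))]
        return q * self.margin           # percentile plus safety margin

est = RuntimeEstimator()
for r in [3600, 4000, 3700, 5200, 3900]:     # past runtimes in seconds
    est.record("MCSimulation", r)
print(est.estimate("MCSimulation"))          # ~5720 s instead of the default
print(est.estimate("Unknown"))               # falls back to the default
```

Keying the history by job type (production, simulation, etc.) is what lets the request shrink from the arbitrary default toward what jobs of that type actually consume.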
Observer-Based Fault Estimation and Accomodation for Dynamic Systems
Zhang, Ke; Shi, Peng
2013-01-01
Due to the increasing security and reliability demands of actual industrial process control systems, the study of fault diagnosis and fault-tolerant control of dynamic systems has received considerable attention. Fault accommodation (FA) is one of the effective methods that can be used to enhance system stability and reliability, so it has been widely and deeply investigated and has become a hot topic in recent years. Fault detection is used to monitor whether a fault occurs, which is the first step in FA. On the basis of fault detection, fault estimation (FE) is utilized to determine online the magnitude of the fault, which is a very important step because the additional controller is designed using the fault estimate. Compared with fault detection, the design difficulties of FE increase considerably, so research on FE and accommodation is very challenging. Although there have been advancements reported on FE and accommodation for dynamic systems, the common methods at the present stage have design difficulties, whi...
ARIMA based Value Estimation in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
A. Amidi
2014-10-01
Full Text Available Due to the widespread inaccuracy of wireless sensor network (WSN) data, it is essential to ensure that the data is as complete, clean and precise as possible. To address data gaps and replace erroneous data, temporal correlation modelling can be applied, which exploits the temporal correlation of successive readings and is also energy efficient. In this research, the suitability of adapting the ARIMA model to a WSN context is scrutinized, as technological requirements demand special considerations. The necessity of applying a smoothing technique is explored and an appropriate method is selected. Additionally, the available options with regard to ARIMA set-up are discussed, in terms of achieving accurate and energy-friendly predictions. The effect of sufficient historical data and the importance of the predictions’ life span on estimation accuracy are additionally investigated. Finally, an adaptive, online and energy-efficient system is proposed for maintaining the accuracy of the model that simultaneously detects outliers and events as well as substituting any missing or erroneous data with estimated values.
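The estimate-and-substitute loop can be sketched with the simplest member of the ARIMA family, an AR(1) model fitted by least squares; all readings and the threshold are invented:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1] (AR(1))."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
           / sum((a - mx) ** 2 for a in xs))
    return my - phi * mx, phi            # (c, phi)

def clean(series, c, phi, threshold=3.0):
    """One-step-ahead prediction; substitute gaps (None) and readings
    deviating more than `threshold` from the prediction. The first
    reading is assumed valid."""
    out, prev = [series[0]], series[0]
    for x in series[1:]:
        pred = c + phi * prev
        if x is None or abs(x - pred) > threshold:
            x = pred                     # replace missing/erroneous value
        out.append(x)
        prev = x
    return out

# Fit on clean historical data, then clean a new window with a gap and a spike.
history = [20.0, 20.2, 20.5, 20.4, 20.7, 20.9, 21.0, 20.8, 21.1, 21.2]
c, phi = fit_ar1(history)
readings = [21.0, 21.2, None, 21.3, 35.0, 21.4]
cleaned = clean(readings, c, phi)
print(cleaned)
```

The same residual test that fills gaps also flags the spike at 35.0 as an outlier, which is the dual outlier/event-detection role described in the abstract.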
Estimations of climate sensitivity based on top-of-atmosphere radiation imbalance
Directory of Open Access Journals (Sweden)
B. Lin
2010-02-01
Full Text Available Large climate feedback uncertainties limit the accuracy in predicting the response of the Earth's climate to the increase of CO2 concentration within the atmosphere. This study explores a potential to reduce uncertainties in climate sensitivity estimations using energy balance analysis, especially the top-of-atmosphere (TOA) radiation imbalance. The time-scales studied generally cover from decade to century; that is, middle-range climate sensitivity is considered, which is directly related to the climate issue caused by atmospheric CO2 change. The significant difference between the current analysis and previous energy balance models is that the current study targets the boundary condition problem instead of solving the initial condition problem. Additionally, climate system memory and deep ocean heat transport are considered. The climate feedbacks are obtained based on the constraints of the TOA radiation imbalance and surface temperature measurements of the present climate. In this study, the TOA imbalance value of 0.85 W/m² is used. Note that this imbalance value has large uncertainties. Based on this value, a positive climate feedback with a feedback coefficient ranging from -1.3 to -1.0 W/m²/K is found. The range of the feedback coefficient is determined by climate system memory: the longer the memory, the stronger the positive feedback. The estimated time constant of the climate is large (70~120 years), mainly owing to deep ocean heat transport, implying that the system may not be in an equilibrium state under the external forcing of the industrial era. For the doubled-CO2 climate (or 3.7 W/m² forcing), the estimated global warming would be 3.1 K if the current estimate of 0.85 W/m² TOA net radiative heating could be confirmed. With accurate long-term measurements of TOA radiation, the analysis method suggested by this study provides a great potential in the
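The headline numbers follow from the equilibrium energy-balance relation ΔT = F/|λ|. This is the back-of-envelope form only, ignoring the memory and ocean-transport terms of the study:

```python
# Equilibrium warming dT = F / |lambda| for doubled CO2, evaluated over
# the feedback-coefficient range quoted in the abstract.
F = 3.7                                        # W/m^2, doubled-CO2 forcing
warming = [round(F / abs(lam), 1) for lam in (-1.3, -1.2, -1.0)]
print(warming)   # [2.8, 3.1, 3.7] K across the feedback range
```

The mid-range coefficient of about -1.2 W/m²/K reproduces the ~3.1 K warming quoted for the doubled-CO2 case.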
Fogleman, Nicholas D; Naaz, Farah; Knight, Lindsay K; Stoica, Teodora; Patton, Samantha C; Olson-Madden, Jennifer H; Barnhart, Meghan C; Hostetter, Trisha A; Forster, Jeri; Brenner, Lisa A; Banich, Marie T; Depue, Brendan E
2017-09-30
Post-traumatic stress disorder (PTSD) and mild traumatic brain injury (mTBI) are two of the most common consequences of combat deployment. Estimates of comorbidity of PTSD and mTBI are as high as 42% in combat exposed Operation Enduring Freedom, Operation Iraqi Freedom and Operation New Dawn (OEF/OIF/OND) Veterans. Combat deployed Veterans with PTSD and/or mTBI exhibit deficits in classic executive function (EF) tasks. Similarly, the extant neuroimaging literature consistently indicates abnormalities of the ventromedial prefrontal cortex (vmPFC) and amygdala/hippocampal complex in these individuals. While studies examining deficits in classical EF constructs and aberrant neural circuitry have been widely replicated, it is surprising that little research examining reward processing and decision-making has been conducted in these individuals, specifically, because the vmPFC has long been implicated in underlying such processes. Therefore, the current study employed the modified Iowa Gambling Task (mIGT) and structural neuroimaging to assess whether behavioral measures related to reward processing and decision-making were compromised and related to cortical morphometric features of OEF/OIF/OND Veterans with PTSD, mTBI, or co-occurring PTSD/mTBI. Results indicated that gray matter morphometry in the lateral prefrontal cortex (lPFC) predicted performance on the mIGT among all three groups and was significantly reduced, as compared to the control group. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Nauheimer, Lars; Metzler, Dirk; Renner, Susanne S
2012-09-01
The family Araceae (3790 species, 117 genera) has one of the oldest fossil records among angiosperms. Ecologically, members of this family range from free-floating aquatics (Pistia and Lemna) to tropical epiphytes. Here, we infer some of the macroevolutionary processes that have led to the worldwide range of this family and test how the inclusion of fossil (formerly occupied) geographical ranges affects biogeographical reconstructions. Using a complete genus-level phylogeny from plastid sequences and outgroups representing the 13 other Alismatales families, we estimate divergence times by applying different clock models and reconstruct range shifts under different models of past continental connectivity, with or without the incorporation of fossil locations. Araceae began to diversify in the Early Cretaceous (when the breakup of Pangea was in its final stages), and all eight subfamilies existed before the K/T boundary. Early lineages persist in Laurasia, with several relatively recent entries into Africa, South America, South-East Asia and Australia. Water-associated habitats appear to be ancestral in the family, and DNA substitution rates are especially high in free-floating Araceae. Past distributions inferred when fossils are included differ in nontrivial ways from those without fossils. Our complete genus-level time-scale for the Araceae may prove to be useful for ecological and physiological studies. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.
Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.
Directory of Open Access Journals (Sweden)
Michael F Sloma
2017-11-01
Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.
Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.
Sloma, Michael F; Mathews, David H
2017-11-01
Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.
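The partition-function idea behind such pairing probabilities can be illustrated by brute force on a toy sequence. The sequence and the flat one-term energy model are invented, and real algorithms (McCaskill-style, as in RNAstructure) use dynamic programming rather than enumeration:

```python
import math

SEQ = "GGGAAAUCC"          # invented toy sequence
CANON = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def enum(i, j):
    """All non-crossing sets of canonical pairs on SEQ[i..j]; position i
    is either unpaired or paired with some k, keeping at least 3
    unpaired bases inside a hairpin loop."""
    if j - i < 4:
        return [[]]
    structs = list(enum(i + 1, j))                 # i unpaired
    for k in range(i + 4, j + 1):
        if (SEQ[i], SEQ[k]) in CANON:
            for inner in enum(i + 1, k - 1):
                for outer in enum(k + 1, j):
                    structs.append([(i, k)] + inner + outer)
    return structs

# Boltzmann weights: a flat -1 kcal/mol per pair, kT ~ 0.6 kcal/mol at 37 C.
kT = 0.6
weight = lambda s: math.exp(len(s) / kT)

structures = enum(0, len(SEQ) - 1)
Z = sum(weight(s) for s in structures)             # the partition function
prob = {}
for s in structures:
    for pair in s:
        prob[pair] = prob.get(pair, 0.0) + weight(s) / Z

# Keep pairs predicted to be probable; here the threshold recovers the
# dominant three-pair helix.
confident = sorted(p for p, q in prob.items() if q > 0.4)
print(confident)
```

Each pair's probability is the Boltzmann-weighted fraction of structures containing it, which is exactly the quantity that thresholding exploits: probable pairs are more likely to appear in the true structure than improbable ones.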
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based
Erkut, Selim; Küçükesmen, Hakki Cenker; Eminkahyagil, Neslihan; Imirzalioglu, Pervin; Karabulut, Erdem
2007-01-01
This study evaluated the effect of two different types of provisional luting agents (RelyX Temp E, eugenol-based; RelyX Temp NE, eugenol-free) on the shear bond strengths between human dentin and two different resin-based luting systems (RelyX ARC-Single Bond and Duo Link-One Step) after cementation with two different techniques (dual bonding and conventional technique). One hundred human molars were trimmed parallel to the original long axis, to expose flat dentin surfaces, and were divided into three groups. After the related surface treatments for each specimen, the resin-based luting agent was applied in a silicone cylindrical mold (3.5 x 4 mm), placed on the bonding-agent-treated dentin surfaces and polymerized. In the control group (n = 20), the specimens were further divided into two groups (n = 10), and the two resin-based luting systems were immediately applied following the manufacturer's protocols: RelyX ARC-Single Bond (Group I C) and Duo Link-One Step (Group II C). In the provisionalization group (n = 40), the specimens were further divided into four subgroups of 10 specimens each (Groups I N, I E and II N, II E). In Groups I N and II N, eugenol-free (RelyX NE), and in Groups I E and II E, eugenol-based (RelyX E) provisional luting agents (PLA) were applied on the dentin surface. The dentin surfaces were cleaned with a fluoride-free pumice, and the resin-based luting systems RelyX ARC (Groups I N and E) and Duo Link (Groups II N and E) were applied. In the dual bonding group (n = 40), the specimens were divided into four subgroups of 10 specimens each (Groups I ND, ED and II ND, ED). The specimens were treated with Single Bond (Groups I ND and ED) or One Step (Groups II ND and ED). After the dentin bonding agent treatment, RelyX Temp NE was applied to Groups I ND and II ND, and RelyX Temp E was applied to Groups I ED and II ED. The dentin surfaces were then cleaned as described in the provisionalization group, and the resin-based luting systems
Hénin, Emilie; Tod, Michel; Trillet-Lenoir, Véronique; Rioufol, Catherine; Tranchand, Brigitte; Girard, Pascal
2009-01-01
More and more anticancer chemotherapies are now available as oral formulations. This relatively new route of administration in oncology leads to problems with patient education and non-compliance. The aim of this study was to explore the performance of the 'inverse problem', namely, estimation of compliance from pharmacokinetics. For this purpose, we developed and evaluated a method to estimate patient compliance with an oral chemotherapy in silico (i) from an a priori population pharmacokinetic model; (ii) with limited optimal pharmacokinetic information collected on day 1; and (iii) from a single pharmacokinetic sample collected after multiple doses. Population pharmacokinetic models, including estimates of all fixed and random effects obtained from a prior dataset, and sparse samples taken after the first dose, were combined to provide the individual POSTHOC Bayesian pharmacokinetic parameter estimates. Sampling times on day 1 were chosen according to a D-optimal design. Individual pharmacokinetic profiles were simulated according to various dose-taking scenarios. To characterize compliance over the n previous dosing times (supposedly known without error), 2^n different compliance scenarios of doses taken/not taken were considered. The observed concentration value was compared with concentrations predicted from the model and each compliance scenario. To discriminate between different compliance profiles, we used the Euclidean distance between the observed pharmacokinetic values and the predicted values simulated without residual errors. This approach was evaluated in silico and applied to imatinib and capecitabine, the pharmacokinetics of which are described in the literature, and which have quite different pharmacokinetic characteristics (imatinib has an elimination half-life of 17 hours, and alpha-fluoro-beta-alanine [FBAL], the metabolite of capecitabine, has an elimination half-life of 3 hours). 1000 parameter sets were drawn according to population
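The scenario-scoring step can be sketched as follows. A one-compartment bolus model with invented dose and volume (and an imatinib-like 17 h half-life) stands in for the full population PK model with Bayesian individual estimates:

```python
from itertools import product
import math

def concentration(taken, t_obs, dose=100.0, tau=24.0,
                  ke=math.log(2) / 17.0, V=50.0):
    """Superposition of first-order elimination after each taken dose.
    dose/V are invented; 17 h mimics the imatinib half-life."""
    return sum(dose / V * math.exp(-ke * (t_obs - i * tau))
               for i, took in enumerate(taken) if took and i * tau <= t_obs)

n = 5                                    # five scheduled daily doses
t_obs = n * 24.0                         # one sample, 24 h after the last dose
true_taken = (1, 1, 0, 1, 1)             # the patient skipped dose 3
observed = concentration(true_taken, t_obs)

# Score all 2^n compliance scenarios against the single observation and
# keep the closest; with one sample the Euclidean distance reduces to
# the absolute difference.
best = min(product((0, 1), repeat=n),
           key=lambda sc: abs(concentration(sc, t_obs) - observed))
print(best)   # recovers (1, 1, 0, 1, 1)
```

Because each earlier dose contributes a distinctly attenuated exponential, the 2^n scenario concentrations are well separated here; in practice residual error and parameter uncertainty blur this separation, which is why the paper evaluates the approach statistically.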
Multiphase flow parameter estimation based on laser scattering
International Nuclear Information System (INIS)
Vendruscolo, Tiago P; Fischer, Robert; Martelli, Cicero; Da Silva, Marco J; Rodrigues, Rômulo L P; Morales, Rigoberto E M
2015-01-01
The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flows is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rates of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied in safety applications due to its quick response time. (paper)
Multiphase flow parameter estimation based on laser scattering
Vendruscolo, Tiago P.; Fischer, Robert; Martelli, Cicero; Rodrigues, Rômulo L. P.; Morales, Rigoberto E. M.; da Silva, Marco J.
2015-07-01
The flow of multiple constituents inside a pipe or vessel, known as multiphase flow, is commonly found in many industry branches. The measurement of the individual flow rates in such flows is still a challenge, which usually requires a combination of several sensor types. However, in many applications, especially in industrial process control, it is not necessary to know the absolute flow rates of the respective phases, but rather to continuously monitor flow conditions in order to quickly detect deviations from the desired parameters. Here we show how a simple and low-cost sensor design can achieve this, by using machine-learning techniques to distinguish the characteristic patterns of oblique laser light scattered at the phase interfaces. The sensor is capable of estimating individual phase fluxes (as well as their changes) in multiphase flows and may be applied in safety applications due to its quick response time.
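The pattern-classification idea can be illustrated with a toy example. The regime names, feature definitions and all numbers below are invented for illustration, and a nearest-centroid rule stands in for whatever machine-learning technique the sensor actually uses.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scattering-signal features per flow regime (values are made up):
# [mean intensity, intensity std, dominant fluctuation frequency (Hz)]
REGIME_MEANS = {"bubbly": [0.8, 0.05, 40.0],
                "slug": [0.5, 0.20, 5.0],
                "annular": [0.3, 0.10, 15.0]}

def synth_samples(n=50):
    """Generate labelled synthetic feature vectors around each regime mean."""
    X, y = [], []
    for name, mu in REGIME_MEANS.items():
        X.append(np.array(mu) + rng.normal(0, [0.03, 0.01, 1.0], (n, 3)))
        y += [name] * n
    return np.vstack(X), y

def nearest_centroid(train_X, train_y, x):
    """Classify a scattering-pattern feature vector by the closest class centroid."""
    labels = sorted(set(train_y))
    cents = {l: train_X[[i for i, t in enumerate(train_y) if t == l]].mean(axis=0)
             for l in labels}
    return min(cents, key=lambda l: np.linalg.norm(cents[l] - x))

X, y = synth_samples()
print(nearest_centroid(X, y, np.array([0.52, 0.21, 6.0])))  # → slug
```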
A Fast DOA Estimation Algorithm Based on Polarization MUSIC
Directory of Open Access Journals (Sweden)
R. Guo
2015-04-01
Full Text Available A fast DOA estimation algorithm developed from MUSIC, which also benefits from processing the signals' polarization information, is presented. Besides performance enhancement in precision and resolution, the proposed algorithm can be applied to various forms of polarization-sensitive arrays, without specific requirements on the array pattern. By exploiting the continuity of the spatial spectrum, a huge amount of the computation incurred in calculating the 4-D spatial spectrum is averted. A performance and computational complexity analysis of the proposed algorithm is presented, along with simulation results. Compared with conventional MUSIC, the proposed algorithm shows a considerable advantage in precision and resolution, with a low computational complexity proportional to that of a conventional 2-D MUSIC.
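For reference, a conventional (scalar, non-polarization) 1-D MUSIC spectrum search can be sketched as below; the array geometry, source angle and noise level are illustrative assumptions, and the paper's fast 4-D polarization variant is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, true_doa = 8, 200, 20.0               # sensors, snapshots, source angle (deg)

def steering(theta_deg, m=M):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

# One narrowband source in white noise
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(steering(true_doa), s)
X += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                      # sample covariance matrix
w, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, :-1]                              # noise subspace (one source assumed)

# MUSIC pseudo-spectrum: peaks where the steering vector is orthogonal
# to the noise subspace
grid = np.arange(-90.0, 90.25, 0.25)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
print(grid[int(np.argmax(spectrum))])       # ≈ 20.0, the simulated DOA
```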
METHODICAL BASES OF ESTIMATION GLOMERULAR FILTRATION RATE IN UROLOGICAL PRACTICE
Directory of Open Access Journals (Sweden)
M. M. Batiushin
2017-01-01
Full Text Available The article presents a review of methodological issues in the estimation of glomerular filtration rate in urological practice. The author examines the current international and national recommendations, in particular those of KDIGO, the Scientific Society of Nephrologists of Russia and the Association of Urologists of Russia, as well as the results of a comparative analysis of different methods of assessing glomerular filtration rate. It is shown that the currently available calculation-based methods of assessing glomerular filtration rate have advantages over clearance techniques. The advantages and disadvantages of calculating glomerular filtration rate by the Cockcroft-Gault formula and the MDRD equation are discussed. The author lists the pathological conditions in urological practice in which there is a need to assess glomerular filtration rate, and gives nomograms and links to online calculators for quick and easy calculation of glomerular filtration rate.
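The two formulas compared in the review can be sketched directly. This uses the widely published forms of the equations (Cockcroft-Gault in mL/min; 4-variable IDMS-traceable MDRD in mL/min/1.73 m²); clinical use should of course rely on validated calculators.

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    cl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    return cl * 0.85 if female else cl

def mdrd_gfr(age, scr_mg_dl, female, black=False):
    """Estimated GFR (mL/min/1.73 m^2) by the 4-variable MDRD study equation
    (IDMS-traceable version with the 175 coefficient)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# 60-year-old, 70 kg man with serum creatinine 1.0 mg/dL
print(round(cockcroft_gault(60, 70, 1.0, female=False), 1))  # → 77.8
print(round(mdrd_gfr(60, 1.0, female=False), 1))
```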
Robust Homography Estimation Based on Nonlinear Least Squares Optimization
Directory of Open Access Journals (Sweden)
Wei Mou
2014-01-01
Full Text Available The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography computation step. There have been numerous attempts to filter out these erroneous correspondences, but perfect matching is unlikely to be achieved in all cases. To deal with this problem, we propose a nonlinear least squares optimization approach to compute the homography such that false matches have little or no effect on the computed homography. Unlike standard homography computation algorithms, our method formulates not only the keypoints' geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be simultaneously identified while the homography is computed. Experiments show that the proposed approach performs well even in the presence of a large number of outliers.
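A hedged sketch of the robust idea, using a generic robust loss (soft-L1) in place of the paper's specific descriptor-aware cost function, and synthetic correspondences with deliberate false matches:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])

def project(H, pts):
    """Apply a homography to an (n, 2) array of points."""
    q = (H @ np.c_[pts, np.ones(len(pts))].T).T
    return q[:, :2] / q[:, 2:3]

src = rng.uniform(0, 200, (40, 2))
dst = project(H_true, src)
dst[:8] += rng.uniform(50, 100, (8, 2))       # 20% gross outliers (false matches)

def residuals(h, src, dst):
    H = np.append(h, 1.0).reshape(3, 3)        # fix H[2, 2] = 1
    return (project(H, src) - dst).ravel()

h0 = np.eye(3).ravel()[:8]                     # start from the identity
fit = least_squares(residuals, h0, loss="soft_l1", f_scale=1.0, args=(src, dst))
H_est = np.append(fit.x, 1.0).reshape(3, 3)

err = np.abs(project(H_est, src[8:]) - dst[8:]).max()
print("max inlier reprojection error:", round(float(err), 3))
```

The robust loss bounds each correspondence's influence, so the gross outliers cannot dominate the fit the way they would under a plain squared-error cost.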
State-Estimation Algorithm Based on Computer Vision
Bayard, David; Brugarolas, Paul
2007-01-01
An algorithm, and software to implement the algorithm, are being developed as a means to estimate the state (that is, the position and velocity) of an autonomous vehicle relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products.
Estimating spacecraft attitude based on in-orbit sensor measurements
DEFF Research Database (Denmark)
Jakobsen, Britt; Lyn-Knudsen, Kevin; Mølgaard, Mathias
2014-01-01
The attitude estimation performance of an extended Kalman filter (EKF) is assessed using sensor measurements obtained both from a controlled environment on Earth as well as in-orbit. By using sensor noise parameters obtained on Earth as the expected parameters in the attitude estimation, and simulating the environment using the sensor noise parameters from space, it is possible to assess whether the EKF can be designed solely on Earth or whether an in-orbit tuning/update of the algorithm is needed. Generally, sensor noise variances are larger in the in-orbit measurements than in the measurements obtained on ground. From Monte Carlo simulations with varying settings of the satellite inertia and initial time, attitude and rotational velocity, the EKF proves to be robust against noisy or lacking sensor data. It is apparent from the comparison of noise parameters from Earth and space that an EKF tuned using Earth measurements of sensor variances will attain an acceptable performance when operated in Low Earth Orbit.
Fingerstroke time estimates for touchscreen-based mobile gaming interaction.
Lee, Ahreum; Song, Kiburm; Ryu, Hokyoung Blake; Kim, Jieun; Kwon, Gyuhyun
2015-12-01
The growing popularity of gaming applications and ever-faster mobile carrier networks have called attention to an intriguing issue that is closely related to command input performance. A challenging mirroring game service, which simultaneously provides game service to both PC and mobile phone users, allows them to play games against each other with very different control interfaces. Thus, for efficient mobile game design, it is essential to apply a new predictive model for measuring how potential touch input compares to the PC interfaces. The present study empirically tests the keystroke-level model (KLM) for predicting the time performance of basic interaction controls on the touch-sensitive smartphone interface (i.e., tapping, pointing, dragging, and flicking). A modified KLM, tentatively called the fingerstroke-level model (FLM), is proposed, with time estimates derived from regression models. Copyright © 2015 Elsevier B.V. All rights reserved.
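The KLM/FLM prediction rule itself is simple: a task's time is the sum of its unit-operator times. The operator values below are placeholders for illustration, not the regression estimates reported in the study.

```python
# Illustrative operator times (seconds); placeholders, not the paper's estimates
FLM_OPERATORS = {"tap": 0.20, "point": 0.35, "drag": 0.60,
                 "flick": 0.30, "mental": 1.35}

def predict_time(sequence):
    """KLM-style prediction: total task time is the sum of unit-operator times."""
    return sum(FLM_OPERATORS[op] for op in sequence)

# e.g. decide on a target, point at it, drag it, then flick to confirm
print(round(predict_time(["mental", "point", "drag", "flick"]), 2))  # → 2.6
```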
Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.
2018-01-01
The impact of air temperature on Maximum Precipitation (MP) estimation, through changes in the moisture-holding capacity of air, was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California by means of the MP estimation approach, which utilizes a physically based regional atmospheric model. For the MP estimation, they selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. This study conducted two types of numerical experiments in addition to the MP estimation of the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C in 0.5 °C increments for the two most severe maximized historical storm events, in addition to application of the ABCS + RHM method, to investigate the sensitivity of the basin-average precipitation over the ARW to air temperature rise. In this investigation, a monotonic increase in the maximum 72-h basin-average precipitation over the ARW with air temperature rise was found for both storm events. The second numerical experiment used specific amounts of air temperature rise that are assumed to occur under future climate change conditions. Air temperature was increased by those specified amounts uniformly on the entire lateral boundaries, in addition to application of the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under a changing climate. The results of the second numerical experiment show that temperature increases in the future climate may amplify the MP estimate over the ARW. The MP estimate may increase by 14.6% by the middle of the 21st century and by 27.3% by the end of the 21st century compared to the historical period.
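The physical link between air temperature and moisture-holding capacity can be illustrated with the Magnus approximation to the Clausius-Clapeyron relation. This is only a back-of-the-envelope check on the direction and rough size of the effect, not the regional atmospheric model used in the study.

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# Moisture-holding capacity rises roughly 7% per degree C (Clausius-Clapeyron),
# so a few degrees of warming substantially raises the precipitable-water ceiling.
for dt in (0.0, 2.0, 4.0):
    ratio = saturation_vapor_pressure(15.0 + dt) / saturation_vapor_pressure(15.0)
    print(f"+{dt:.0f} C: x{ratio:.2f} moisture-holding capacity")
```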
Abattoir-based estimates of mycobacterial infections in Cameroon.
Egbe, N F; Muwonge, A; Ndip, L; Kelly, R F; Sander, M; Tanya, V; Ngwa, V Ngu; Handel, I G; Novak, A; Ngandalo, R; Mazeri, S; Morgan, K L; Asuquo, A; Bronsvoort, B M de C
2016-04-14
Mycobacteria cause major diseases including human tuberculosis, bovine tuberculosis and Johne's disease. In livestock, the dominant species is M. bovis, the cause of bovine tuberculosis (bTB), a disease of global zoonotic importance. In this study, we estimated the prevalence of mycobacteria in slaughter cattle in Cameroon. A total of 2,346 cattle were examined in a cross-sectional study at four abattoirs in Cameroon. Up to three lesions per animal were collected for further study, and a retropharyngeal lymph node was collected from a random sample of non-lesioned animals. Samples were cultured on Lowenstein-Jensen media and the BACTEC MGIT 960 system, and identified using the Hain® Genotype kits. A total of 207/2,346 cattle were identified with bTB-like lesions, representing 4.0% (45/1,129), 11.3% (106/935), 23.8% (38/160) and 14.8% (18/122) of the cattle in the Bamenda, Ngaoundere, Garoua and Maroua abattoirs, respectively. The minimum estimated prevalence of M. bovis was 2.8% (1.9-3.9), 7.7% (6.1-9.6), 21.3% (15.2-28.4) and 13.1% (7.7-20.4) in the four abattoirs, respectively. One M. tuberculosis and three M. bovis strains were recovered from non-lesioned animals. The high prevalence of M. bovis is of public health concern and, in the absence of a viable vaccine as an alternative, limits the potential control options in this setting.
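Prevalence figures with interval estimates of this kind can be reproduced with a standard Wilson score interval; the sketch below uses the Bamenda lesion counts from the abstract (45/1,129), though the paper's own intervals may have been computed with a different method.

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score 95% confidence interval for a prevalence of x positives in n."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# bTB-like lesions in the Bamenda abattoir: 45 of 1,129 cattle
lo, hi = wilson_ci(45, 1129)
print(f"{45/1129:.1%} ({lo:.1%}-{hi:.1%})")
```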
Estimation model for evaporative emissions from gasoline vehicles based on thermodynamics.
Hata, Hiroo; Yamada, Hiroyuki; Kokuryo, Kazuo; Okada, Megumi; Funakubo, Chikage; Tonokura, Kenichi
2018-03-15
In this study, we conducted seven-day diurnal breathing loss (DBL) tests on gasoline vehicles. We propose a model, based on the theory of thermodynamics, that can represent the experimental results of the current and previous studies. The experiments were performed using 14 physical parameters to determine the dependence of total emissions on temperature, fuel tank fill level, and fuel vapor pressure. In most cases, total emissions after an apparent breakthrough were proportional to the difference between the minimum and maximum environmental temperatures during the day, the empty space in the fuel tank, and the fuel vapor pressure. Volatile organic compounds (VOCs) were measured using gas chromatography with mass spectrometric and flame ionization detection (GC-MS/FID) to determine the ozone formation potential (OFP) of after-breakthrough gas emitted to the atmosphere. Using the experimental results, we constructed a thermodynamic model for estimating the amount of evaporative emissions after a fully saturated canister breakthrough has occurred, and compared the thermodynamic model with previous models. Finally, the total annual evaporative emissions and OFP in Japan were estimated and compared for each model. Copyright © 2017 Elsevier B.V. All rights reserved.
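A first-order ideal-gas version of such a thermodynamic model can be sketched as follows. All numbers (Antoine-type coefficients, molar mass, tank headspace) are illustrative assumptions, not the paper's fitted parameters; it only captures why emissions scale with the temperature swing, headspace volume, and vapor pressure.

```python
R = 8.314          # J/(mol K)
P_ATM = 101325.0   # Pa
M_HC = 0.0664      # kg/mol, assumed molar mass of gasoline vapor

def vapor_pressure(t_c, a=6.8, b=1200.0, c=232.0):
    """Antoine-type fuel vapor pressure in Pa (illustrative coefficients)."""
    return 133.322 * 10 ** (a - b / (t_c + c))

def diurnal_vapor_loss(v_headspace_m3, t_min_c, t_max_c):
    """First-order ideal-gas estimate of hydrocarbon vapor (kg) vented from the
    tank headspace as it warms from t_min to t_max at atmospheric pressure."""
    t1, t2 = t_min_c + 273.15, t_max_c + 273.15
    n_out = P_ATM * v_headspace_m3 / R * (1.0 / t1 - 1.0 / t2)  # moles expelled
    y_avg = 0.5 * (vapor_pressure(t_min_c) + vapor_pressure(t_max_c)) / P_ATM
    return M_HC * y_avg * n_out

# 40 L of empty tank space warming from 15 to 35 deg C
print(round(1000 * diurnal_vapor_loss(0.040, 15.0, 35.0), 2), "g")
```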
Further Rehabilitating CIV-based Black Hole Mass Estimates in Quasars
Brotherton, Michael S.; Runnoe, Jessie C.; Shang, Zhaohui; Varju, Melinda
2016-06-01
Virial black hole masses are routinely estimated for high-redshift quasars using the C IV λ1549 emission line in single-epoch spectra that provide a gas velocity and a continuum luminosity. Such masses are very uncertain, however, especially because C IV likely possesses a non-virial component that varies with the Eddington ratio. We have previously used the λ1400 feature, a blend of Si IV and O IV] emission that does not suffer the problems of C IV, to rehabilitate C IV-based masses by providing a correction term. The C IV profile itself, however, provides enough information to correct the black hole masses and remove the effects of the non-virial component. We use Mg II-based black hole masses to calibrate and test a new C IV-based black hole mass formula, using only C IV and continuum measurements, that is superior to existing formulations, as well as to test for additional dependencies on luminosity.
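Single-epoch virial mass estimates of this type follow a standard form, log M = a + b·log(λL) + 2·log(FWHM). The sketch below uses illustrative coefficients close to published C IV calibrations, not the corrected formula derived in the paper.

```python
import math

def virial_mass(fwhm_kms, l1350_erg_s, a=6.66, b=0.53):
    """Single-epoch virial black hole mass (solar masses):
    log M = a + b*log10(L1350 / 1e44 erg/s) + 2*log10(FWHM / 1000 km/s).
    The coefficients a and b are illustrative, not the paper's calibration."""
    return 10 ** (a + b * math.log10(l1350_erg_s / 1e44)
                  + 2.0 * math.log10(fwhm_kms / 1000.0))

# A luminous quasar with C IV FWHM of 5000 km/s
m = virial_mass(fwhm_kms=5000.0, l1350_erg_s=1e46)
print(f"{m:.2e} Msun")
```

The quadratic dependence on line width is exactly why a non-virial component in the C IV profile biases the mass so strongly.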
Markov Chain-Based Acute Effect Estimation of Air Pollution on Elder Asthma Hospitalization
Directory of Open Access Journals (Sweden)
Li Luo
2017-01-01
Full Text Available Background. Asthma causes a substantial economic and health care burden and is susceptible to air pollution. This is particularly significant for elderly asthma patients (older than 65). The aim of this study is to investigate the Markov-based acute effects of air pollution on elderly asthma hospitalizations, in the form of transition probabilities. Methods. A retrospective, population-based study design was used to assess temporal patterns in hospitalizations for asthma in a region of Sichuan province, China. Approximately 12 million residents were covered during this period. Relative risk analysis and a Markov chain model were employed for daily hospitalization state estimation. Results. Among PM2.5, PM10, NO2, and SO2, only SO2 was significant. When air pollution is severe, the transition probability from a low-admission state (previous day) to a high-admission state (next day) is 35.46%, whereas it is 20.08% when air pollution is mild. In particular, for the female-cold subgroup, the corresponding figures are 30.06% and 0.01%, respectively. Conclusions. SO2 was a significant risk factor for elderly asthma hospitalization. When air pollution worsened, the transition probabilities from each state to high-admission states increased dramatically. This phenomenon appeared more evident in the female-cold subgroup (cold-season female admissions). Based on our work, admission forecasting, asthma intervention, and corresponding healthcare allocation can be carried out.
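The transition-probability estimation behind such results can be sketched as a maximum-likelihood count over an observed chain of daily admission states; the state sequence below is invented for illustration.

```python
import numpy as np

# Hypothetical daily admission states: 0 = low-admission, 1 = high-admission
days = [0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0]

def transition_matrix(states, n_states=2):
    """Maximum-likelihood transition probabilities from an observed chain:
    count day-to-day transitions, then normalize each row."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states, states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

P = transition_matrix(days)
print(P[0, 1])  # probability low-admission day -> high-admission day: 0.5
```

In the study, separate matrices would be estimated for severe and mild pollution days and the P[0, 1] entries compared.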
Passivity-based control and estimation in networked robotics
Hatanaka, Takeshi; Fujita, Masayuki; Spong, Mark W
2015-01-01
Highlighting the control of networked robotic systems, this book synthesizes a unified passivity-based approach to an emerging cross-disciplinary subject. Thanks to this unified approach, readers can access various state-of-the-art research fields by studying only the background foundations associated with passivity. In addition to the theoretical results and techniques, the authors provide experimental case studies on testbeds of robotic systems including networked haptic devices, visual robotic systems, robotic network systems and visual sensor network systems. The text begins with an introduction to passivity and passivity-based control together with the other foundations needed in this book. The main body of the book consists of three parts. The first examines how passivity can be utilized for bilateral teleoperation and demonstrates the inherent robustness of the passivity-based controller against communication delays. The second part emphasizes passivity’s usefulness for visual feedback control ...
U.S. Environmental Protection Agency — Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression...
Kroenig, Malte; Schaal, Kathrin; Benndorf, Matthias; Soschynski, Martin; Lenz, Philipp; Krauss, Tobias; Drendel, Vanessa; Kayser, Gian; Kurz, Philipp; Werner, Martin; Wetterauer, Ulrich; Schultze-Seemann, Wolfgang; Langer, Mathias; Jilg, Cordula A
2016-01-01
Objective. In this study, we compared prostate cancer detection rates between MRI-TRUS fusion targeted and systematic biopsies using a robot-guided, software-based transperineal approach. Methods and Patients. 52 patients received an MRI/TRUS fusion targeted biopsy followed by a systematic volume-adapted biopsy using the same robot-guided transperineal approach. The primary outcome was the detection rate of clinically significant disease (Gleason grade ≥ 4). Secondary outcomes were the detection rate of all cancers, sampling efficiency and utility, and the serious adverse event rate. Patients received no antibiotic prophylaxis. Results. From 52 patients, 519 targeted biopsies from 135 lesions and 1561 random biopsies were generated (total n = 2080). The overall detection rate of clinically significant PCa was 44.2% (23/52) and 50.0% (26/52) for target and random biopsy, respectively. Sampling efficiency, as the median number of cores needed to detect clinically significant prostate cancer, was 9 for target (IQR: 6-14.0) and 32 (IQR: 24-32) for random biopsy. The utility, as the number of additional clinically significant PCa cases detected by either strategy, was 0% (0/52) for target and 3.9% (2/52) for random biopsy. Conclusions. MRI/TRUS fusion-based target biopsy did not show an advantage in the overall detection rate of clinically significant prostate cancer.
Petrarca, Mateus H; Fernandes, José O; Godoy, Helena T; Cunha, Sara C
2016-12-01
With the aim of developing a new gas chromatography-mass spectrometry method to analyze 24 pesticide residues in baby foods at the levels imposed by established regulation, two simple, rapid and environmentally friendly sample preparation techniques based on QuEChERS (quick, easy, cheap, effective, rugged and safe) were compared: QuEChERS with dispersive liquid-liquid microextraction (DLLME) and QuEChERS with dispersive solid-phase extraction (d-SPE). Both sample preparation techniques achieved suitable performance criteria, including selectivity, linearity, acceptable recovery (70-120%) and precision (≤20%). A higher enrichment factor was observed for DLLME, and consequently better limits of detection and quantification were obtained. Nevertheless, d-SPE provided a more effective removal of matrix co-extractives from extracts than DLLME, which contributed to lower matrix effects. Twenty-two commercial fruit-based baby food samples were analyzed by the developed method, with procymidone detected in one sample at a level above the legal limit established by the EU. Copyright © 2016 Elsevier Ltd. All rights reserved.
Addressing Single and Multiple Bad Data in the Modern PMU-based Power System State Estimation
DEFF Research Database (Denmark)
Khazraj, Hesam; Silva, Filipe Miguel Faria da; Bak, Claus Leth
2017-01-01
Detection and analysis of bad data is an important component of static state estimation. This paper addresses single and multiple bad data in modern phasor measurement unit (PMU)-based power system static state estimation. To accomplish this objective, available approaches in the PMU-based ...
Statistical inference for remote sensing-based estimates of net deforestation
Ronald E. McRoberts; Brian F. Walters
2012-01-01
Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...
Estimating Global Impervious Surface based on Social-economic Data and Satellite Observations
Zeng, Z.; Zhang, K.; Xue, X.; Hong, Y.
2016-12-01
Impervious surface areas around the globe are expanding and significantly altering the surface energy balance, the hydrological cycle and ecosystem services. Many studies have underlined the importance of impervious surface, ranging from hydrological modeling to contaminant transport monitoring and urban development estimation. Accurate estimation of the global impervious surface is therefore important for both the physical and social sciences. Given the limited coverage of high-spatial-resolution imagery and ground surveys, using satellite remote sensing and geospatial data to estimate global impervious areas is a practical approach. Based on the previous work on area-weighted imperviousness for the north branch of the Chicago River provided by HDR, this study developed a method to determine the percentage of impervious surface using the latest global land cover categories from multi-source satellite observations, population density and gross domestic product (GDP) data. Percent impervious surface at 30-meter resolution was mapped. We found that 1.33% of the CONUS (105,814 km2) and 0.475% of the land surface (640,370 km2) are impervious surfaces. To test the utility and practicality of the proposed method, the National Land Cover Database (NLCD) 2011 percent developed imperviousness for the conterminous United States was used to evaluate our results. The average difference between the imperviousness derived from our method and the NLCD data across the CONUS is 1.14%, and the differences between our results and the NLCD data are within ±1% over 81.63% of the CONUS. The distribution of the global impervious surface map indicates that impervious surfaces are primarily concentrated in China, India, Japan, the USA and Europe, which are highly populated and/or developed. This study proposes a straightforward way of mapping global imperviousness, which can provide useful information for hydrologic modeling and other applications.
International Nuclear Information System (INIS)
Yap, J.T.; Chen, C.T.; Cooper, M.
1995-01-01
The authors have previously developed a knowledge-based method of factor analysis to analyze dynamic nuclear medicine image sequences. In this paper, the authors analyze dynamic PET cerebral glucose metabolism and neuroreceptor binding studies. These methods have shown the ability to reduce the dimensionality of the data, enhance the image quality of the sequence, and generate meaningful functional images and their corresponding physiological time functions. The new information produced by the factor analysis has now been used to improve the estimation of various physiological parameters. A principal component analysis (PCA) is first performed to identify statistically significant temporal variations and remove the uncorrelated variations (noise) due to Poisson counting statistics. The statistically significant principal components are then used to reconstruct a noise-reduced image sequence as well as provide an initial solution for the factor analysis. Prior knowledge such as the compartmental models or the requirement of positivity and simple structure can be used to constrain the analysis. These constraints are used to rotate the factors to the most physically and physiologically realistic solution. The final result is a small number of time functions (factors) representing the underlying physiological processes and their associated weighting images representing the spatial localization of these functions. Estimation of physiological parameters can then be performed using the noise-reduced image sequence generated from the statistically significant PCs and/or the final factor images and time functions. These results are compared to the parameter estimation using standard methods and the original raw image sequences. Graphical analysis was performed at the pixel level to generate comparable parametric images of the slope and intercept (influx constant and distribution volume)
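The PCA-based noise reduction step can be sketched on synthetic data: build a dynamic sequence from two underlying time functions, add noise, and reconstruct from the statistically significant components only (a truncated SVD; the factor rotation and physiological constraints described above are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_frames = 500, 40
t = np.linspace(0, 60, n_frames)

# Two underlying "physiological" time functions mixed across pixels
factors = np.vstack([np.exp(-t / 20.0), 1.0 - np.exp(-t / 5.0)])  # (2, frames)
weights = rng.uniform(0, 1, (n_pixels, 2))                        # spatial maps
clean = weights @ factors
noisy = clean + 0.3 * rng.standard_normal(clean.shape)            # noise proxy

# Keep only the statistically significant components and reconstruct
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = (U[:, :2] * s[:2]) @ Vt[:2]

err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(denoised - clean)
print(err_den < err_noisy)  # → True: truncation removes most of the noise
```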
Lyapunov Based Estimation of Flight Stability Boundary under Icing Conditions
Directory of Open Access Journals (Sweden)
Binbin Pei
2017-01-01
Full Text Available The current flight boundary used for envelope protection in icing conditions is usually defined by critical values of state parameters; however, such a method takes into account neither the interrelationship of the parameters nor the effect of external disturbances. This paper proposes constructing the stability boundary of an aircraft in icing conditions by analyzing the region of attraction (ROA) around the equilibrium point. A nonlinear icing effect model is proposed based on existing wind tunnel test results. On this basis, the iced polynomial short-period model can be further derived to obtain the stability boundary under icing conditions using ROA analysis. Simulation results for a series of icing severities demonstrate that, regardless of the icing severity, the boundary of the calculated ROA can be treated as an estimate of the stability boundary around an equilibrium point. The proposed methodology is believed to be a promising way to perform ROA analysis and stability boundary construction for aircraft in icing conditions, and it will provide theoretical support for multiple boundary protection of icing-tolerant flight.
Geomagnetic matching navigation algorithm based on robust estimation
Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan
2017-08-01
The outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly degrade its reliability. A novel algorithm that can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and the principle of robust estimation underlying it is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with a Taylor series expansion of the geomagnetic information, a mathematical expression for the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is thereby converted to the solution of nonlinear equations. Finally, Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is reduced to 7.75% of that of the conventional mean square difference (MSD) algorithm, and to 18.39% of that of the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° when the outlier is 400 nT, while the other two algorithms fail to match.
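The down-weighting idea behind such a robust weight function can be illustrated with a Huber weight and iteratively reweighted estimation; the weight function and the residual data below are illustrative, not the paper's design.

```python
def huber_weight(r, k=1.345):
    """Huber weight: 1 for small residuals, k/|r| beyond the threshold,
    so outliers are down-weighted instead of dominating the fit."""
    return 1.0 if abs(r) <= k else k / abs(r)

def irls_mean(values, iters=50):
    """Iteratively reweighted estimate of a location parameter."""
    mu = sum(values) / len(values)
    for _ in range(iters):
        w = [huber_weight(v - mu) for v in values]
        mu = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return mu

# Geomagnetic matching residuals (nT) with one 40 nT outlier
residuals = [0.3, -0.5, 0.1, 0.4, -0.2, 40.0]
print(round(irls_mean(residuals), 2))                   # ≈ 0.29, barely perturbed
print(round(sum(residuals) / len(residuals), 2))        # → 6.68, pulled by the outlier
```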
ESTIMATION OF BIOMASS POTENTIAL BASED ON CLASSIFICATION AND HEIGHT INFORMATION
Directory of Open Access Journals (Sweden)
S. Müller
2013-05-01
Full Text Available On the way to making the energy supply independent of fossil resources, more and more renewable energy sources have to be explored. Biomass has become an important energy resource during recent years and its consumption is rising steadily. Common sources of biomass are agricultural production and forestry, but the production from these sources is stagnating due to limited space. For new sources of biomass, such as those arising from landscape conservation, the location and available amount of biomass are unknown. Normally, there are no reliable data sources giving information about the objects of interest, such as hedges, vegetation along streets, railways and rivers, field margins and ruderal sites. There is a great demand for an inventory of these biomass sources, which could be met by applying remote sensing technology. As the biomass objects considered here are sometimes only a few meters wide, spectral unmixing is applied to separate the different material mixtures reflected in one image pixel. The spectral images are assumed to have a spatial resolution of 5-20 m with multispectral or hyperspectral band configurations. Combining the identified material fractions with height information and GIS data afterwards gives estimates of the location of biomass objects. The method is applied to test data from a Sentinel-2 simulation and the results are evaluated visually.
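Linear spectral unmixing of a mixed pixel can be sketched with nonnegative least squares; the endmember spectra and abundance fractions below are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra: 6 bands (rows) x 3 materials (columns),
# e.g. vegetation, soil, impervious surface -- all values are made up
E = np.array([
    [0.05, 0.20, 0.30],
    [0.08, 0.25, 0.32],
    [0.05, 0.30, 0.35],
    [0.40, 0.35, 0.38],
    [0.50, 0.40, 0.40],
    [0.30, 0.45, 0.42],
])

true_fractions = np.array([0.6, 0.3, 0.1])
pixel = E @ true_fractions             # noise-free mixed-pixel spectrum

fractions, _ = nnls(E, pixel)          # nonnegative least-squares abundances
print(np.round(fractions, 3))          # → [0.6 0.3 0.1]
```

With real imagery the fractions would be noisy, and a sum-to-one constraint is often added.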
Error bounds for surface area estimators based on Crofton's formula
DEFF Research Database (Denmark)
Kiderlen, Markus; Meschenmoser, Daniel
2009-01-01
According to Crofton’s formula, the surface area S(A) of a sufficiently regular compact set A in R^d is proportional to the mean of all total projections pA(u) on a linear hyperplane with normal u, uniformly averaged over all unit vectors u. In applications, pA(u) is only measured in k directions and the mean is approximated by a finite weighted sum Ŝ(A) of the total projections in these directions. The choice of the weights depends on the selected quadrature rule. We define an associated zonotope Z (depending only on the projection directions and the quadrature rule), and show that the relative error Ŝ(A)/S(A) is bounded from below by the inradius of Z and from above by the circumradius of Z. Applying a strengthened isoperimetric inequality due to Bonnesen, we show that the rectangular quadrature rule does not give the best possible error bounds for d = 2. In addition, we derive asymptotic results in the sense that the relative error of the surface area estimator is very close to the minimal error.
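In the plane (d = 2), Crofton's (Cauchy's) formula reduces to: perimeter = π times the mean total projection over directions in [0, π). A sketch of the estimator with the rectangular quadrature rule, applied to the unit square:

```python
import math

def square_width(theta):
    """Total projection (shadow length) of the unit square onto a line
    in direction theta."""
    return abs(math.cos(theta)) + abs(math.sin(theta))

def crofton_perimeter(width_fn, k):
    """Cauchy-Crofton estimator: pi times the mean total projection,
    approximated by a rectangular quadrature rule over k directions."""
    return math.pi * sum(width_fn(math.pi * i / k) for i in range(k)) / k

print(round(crofton_perimeter(square_width, 4), 3))    # → 3.792, coarse (k = 4)
print(round(crofton_perimeter(square_width, 180), 2))  # → 4.0, the true perimeter
```

The k = 4 case shows the quadrature error the paper's zonotope bounds quantify; refining the directions drives the estimate to the exact value.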
Bernstein approximations in glasso-based estimation of biological networks
Purutcuoglu, Vilda; Agraz, Melih; Wit, Ernst
The Gaussian graphical model (GGM) is one of the common dynamic modelling approaches in the construction of gene networks. In the inference of this model, the interactions between genes can be detected mainly via graphical lasso (glasso) or coordinate-descent-based approaches. Although these methods
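The GGM idea, that network edges correspond to nonzero entries of the precision (inverse covariance) matrix, can be sketched without the l1 penalty: estimate partial correlations and threshold them. This is a simplification of glasso for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# A sparse "true" precision matrix: genes 0-1 and 1-2 interact, 0-2 do not
K = np.array([[1.0, 0.4, 0.0],
              [0.4, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
cov = np.linalg.inv(K)
X = rng.multivariate_normal(np.zeros(3), cov, size=5000)  # expression data

K_hat = np.linalg.inv(np.cov(X.T))            # empirical precision matrix
partial = -K_hat / np.sqrt(np.outer(np.diag(K_hat), np.diag(K_hat)))
np.fill_diagonal(partial, 1.0)
edges = np.abs(partial) > 0.1                 # threshold in place of the l1 penalty
print(edges[0, 1], edges[1, 2], edges[0, 2])  # → True True False
```

glasso replaces the naive inversion and threshold with a penalized maximum-likelihood estimate, which also works when there are more genes than samples.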
Statistical amplitude scale estimation for quantization-based watermarking
Shterev, I.D.; Lagendijk, I.L.; Heusdens, R.
2004-01-01
Quantization-based watermarking schemes are vulnerable to amplitude scaling. Therefore the scaling factor has to be accounted for either at the encoder, or at the decoder, prior to watermark decoding. In this paper we derive the marginal probability density model for the watermarked and attacked
Time-based position estimation in monolithic scintillator detectors
Tabacchini, V.; Borghi, G.; Schaart, D.R.
2015-01-01
Gamma-ray detectors based on bright monolithic scintillation crystals coupled to pixelated photodetectors are currently being considered for several applications in the medical imaging field. In a typical monolithic detector, both the light intensity and the time of arrival of the earliest
Agent-based Security and Efficiency Estimation in Airport Terminals
Janssen, S.A.M.
We investigate the use of an Agent-based framework to identify and quantify the relationship between security and efficiency within airport terminals. In this framework, we define a novel Security Risk Assessment methodology that explicitly models attacker and defender behavior in a security
Estimation of incidences of infectious diseases based on antibody measurements
DEFF Research Database (Denmark)
Simonsen, J; Mølbak, K; Falkenhorst, G
2009-01-01
Owing to under-ascertainment it is difficult if not impossible to determine the incidence of a given disease based on cases notified to routine public health surveillance. This is especially true for diseases that are often present in mild forms as for example diarrhoea caused by foodborne...
Satellite-based estimation of rainfall erosivity for Africa
Vrieling, A.; Sterk, G.; Jong, S.M. de
2010-01-01
Rainfall erosivity is a measure for the erosive force of rainfall. Rainfall kinetic energy determines the erosivity and is in turn greatly dependent on rainfall intensity. Attempts for its large-scale mapping are rare. Most are based on interpolation of erosivity values derived from rain gauge
Estimation and applications of size-based distributions in forestry
Jeffrey H. Gove
2003-01-01
Size-based distributions arise in several contexts in forestry and ecology. Simple power relationships (e.g., basal area and diameter at breast height) between variables are one such area of interest arising from a modeling perspective. Another, probability proportional to size sampling (PPS), is found in the most widely used methods for sampling standing or dead and...
Crushing damage estimation for pavement with lightly cementitious bases
CSIR Research Space (South Africa)
De Beer, Morris
2014-07-01
Full Text Available the 1990s. Typically, crushing failure starts at the surface of the cementitious base layer, and could extend to 50 mm deep, depending on tyre load/stress conditions. Recently developed crushing damage relationships for 2, 5, 10, 15 and 20 mm level...
Index cost estimate based BIM method - Computational example for sports fields
Zima, Krzysztof
2017-07-01
The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, the geometry of construction objects and unit costs of sports facilities is shown. The calculations of the Index Cost Estimate Based BIM method, using Case Based Reasoning, are presented as well. The article presents local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of cost calculations based on the CBR method is presented as the final result of the calculations.
Setoguchi, Yasuhiro; Izumi, Shinyu; Nakamura, Hidenori; Hanada, Shigeo; Marumo, Kazuyoshi; Kurosaki, Atsuko; Akata, Shouichi
2015-01-01
To investigate the potential beneficial effects of guideline-based pharmacological therapy on pulmonary function and quality of life (QOL) in Japanese chronic obstructive pulmonary disease (COPD) patients without prior treatment. Multicenter survey, open-label study of 49 Japanese COPD patients aged ≥ 40 years; outpatients with >10 pack years of smoking history; ratio of forced expiratory volume in 1 s (FEV1)/forced vital capacity (FVC) patients. Significant changes over time were not observed for FEV1 and FVC, indicating lung function at initiation of treatment was maintained during the observation period. COPD assessment test scores showed statistical and clinical improvements. Cough, sputum, breathlessness, and shortness of breath were significantly improved. Lung function and QOL of untreated Japanese COPD patients improved and improvements were maintained by performing a therapeutic intervention that conformed to published guidelines.
Estimation of Critical Gap Based on Raff's Definition
Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang
2014-01-01
Critical gap is an important parameter used to calculate the capacity and delay of minor road in gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, it is assumed that two events are independent between vehicles’ arrival of major stream and vehicles’ arrival of minor stream. The headways of major stream follow M3 distribution. Based on Raff’s definition of critical gap, two calculation models are derived, which are named M3 def...
Directory of Open Access Journals (Sweden)
Andrew C. Elton
2017-01-01
Full Text Available Salmonella meningitis is a rare manifestation of meningitis typically presenting in neonates and the elderly. This infection typically associates with foodborne outbreaks in developing nations and AIDS-endemic regions. We report a case of a 19-year-old male presenting with altered mental status after a 3-day absence from work at a Wisconsin tourist area. He was febrile, tachycardic, and tachypneic with a GCS of 8. The patient was intubated and a presumptive diagnosis of meningitis was made. Treatment was initiated with ceftriaxone, vancomycin, acyclovir, dexamethasone, and fluid resuscitation. A lumbar puncture showed cloudy CSF with Gram-negative rods. He was admitted to the ICU. CSF culture confirmed Salmonella enterica subsp. I (enterica) Enteritidis (A). Based on this finding, a 4th-generation HIV antibody/p24 antigen test was sent. When this returned positive, a CD4 count was obtained and showed 3 cells/mm3, confirming AIDS. The patient ultimately received 38 days of ceftriaxone, was placed on elvitegravir, cobicistat, emtricitabine, and tenofovir alafenamide (Genvoya) for HIV/AIDS, and was discharged neurologically intact after a 44-day admission.
vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments.
Ma, Jingming; Dykes, Carrie; Wu, Tao; Huang, Yangxin; Demeter, Lisa; Wu, Hulin
2010-05-18
The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments
Directory of Open Access Journals (Sweden)
Demeter Lisa
2010-05-01
Full Text Available Abstract Background The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
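The regression idea can be sketched independently of the vFitness implementation: the log ratio of two competing variants grows roughly linearly in time, and its least-squares slope over all observations, rather than just two time points, estimates the relative fitness. The counts below are invented for illustration, not data from the paper:

```python
import math

# Hypothetical variant counts from a growth competition assay,
# sampled on days 0..4 (invented numbers)
days = [0, 1, 2, 3, 4]
mutant = [100, 150, 230, 340, 520]
wildtype = [100, 90, 82, 74, 66]

# The log ratio of the competing variants grows roughly linearly in
# time; its least-squares slope over ALL time points estimates the
# relative fitness, replacing the two-point calculation.
y = [math.log(m / w) for m, w in zip(mutant, wildtype)]
n = len(days)
mx, my = sum(days) / n, sum(y) / n
slope = (sum((x - mx) * (v - my) for x, v in zip(days, y))
         / sum((x - mx) ** 2 for x in days))
print(round(slope, 3))
```

Using all five points dampens the sampling noise that a two-point calculation would pass straight through to the fitness estimate.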
River bathymetry estimation based on the floodplains topography.
Bureš, Luděk; Máca, Petr; Roub, Radek; Pech, Pavel; Hejduk, Tomáš; Novák, Pavel
2017-04-01
A topographic model including river bathymetry (bed topography) is required for hydrodynamic simulation, water quality modelling, flood inundation mapping, sediment transport, and ecological and geomorphologic assessments. The most common way to create river bathymetry is spatial interpolation of discrete points or cross-section data. The quality of the generated bathymetry depends on the quality of the measurements, on the technology used and on the size of the input dataset. Extensive measurements are often time consuming and expensive. Another option for creating river bathymetry is to use mathematical modelling methods. In the presented contribution we create a river bathymetry model based on analytical curves, which are bent into the shape of the cross sections. For the best description of the river bathymetry we need to know the values of the model parameters, which we find using global optimization methods. These global optimization schemes are based on heuristics inspired by natural processes. We use a new type of DE (differential evolution) to solve the inverse problems related to the parameters of the mathematical model of river bed surfaces. The presented analysis discusses the dependence of the model parameters on selected characteristics: (1) topographic characteristics (slope and curvature in the left and right floodplains) determined from the DTM 5G digital terrain model; (2) the optimization scheme; (3) the type of analytical curves used. The novel approach is applied to three parts of the Vltava river in the Czech Republic. Each part of the river is described by a point field measured with a River Surveyor M9 ADCP probe. This work was supported by the Technology Agency of the Czech Republic, programme Alpha (project TA04020042 - New technologies bathymetry of rivers and reservoirs to determine their storage
Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation
Directory of Open Access Journals (Sweden)
Jaehoon Jung
2016-01-01
Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.
ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro
2014-01-01
ABSTRACT We propose an approach for estimating individual growth curves based on the birthday information of Japanese Thoroughbred horses, with consideration of the seasonal compensatory growth that is a typical characteristic of seasonally breeding animals. The compensatory growth patterns appear only during the winter and spring seasons in the life of growing horses, and the meeting point between winter and spring depends on the birthday of each horse. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Based on these equations, a parameter denoting the birthday information was added to model the individual growth curves for each horse by shifting the meeting points in the compensatory growth periods. A total of 5,594 and 5,680 body weight and age measurements of Thoroughbred colts and fillies, respectively, and 3,770 withers height and age measurements of both sexes were used in the analyses. The results of predicted error difference and the Akaike Information Criterion showed that the individual growth curves using birthday information fit the body weight and withers height data better than those not using it. The individual growth curve for each horse would be a useful tool for the feeding management of young Japanese Thoroughbreds in compensatory growth periods. PMID:25013356
Numerical Model based Reliability Estimation of Selective Laser Melting Process
DEFF Research Database (Denmark)
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason for which is the unreliability of the process. ... A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
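The Monte Carlo step can be illustrated with a deliberately toy surrogate in place of the paper's calibrated finite-volume model: sample the uncertain process inputs, push each sample through the model, and read off a range for the output. All distributions and the depth formula below are hypothetical:

```python
import random
import statistics

random.seed(42)

def melt_depth(power, speed):
    # Toy surrogate standing in for the calibrated finite-volume model:
    # depth grows with linear energy density (purely illustrative, mm).
    return 0.05 * power / speed

# Assumed input uncertainties (hypothetical values, not the paper's)
samples = []
for _ in range(10_000):
    p = random.gauss(200.0, 10.0)   # laser power, W
    v = random.gauss(1000.0, 50.0)  # scan speed, mm/s
    samples.append(melt_depth(p, v))

ordered = sorted(samples)
mean = statistics.mean(samples)
lo, hi = ordered[250], ordered[-251]  # approximate 95% interval
print(round(mean, 4), round(lo, 4), round(hi, 4))
```

The spread between `lo` and `hi` is the predicted output range from which a process reliability statement can be made; the real study replaces `melt_depth` with the calibrated simulation.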
Temporal overlap estimation based on interference spectrum in CARS microscopy
Zhang, Yongning; Jiang, Junfeng; Liu, Kun; Huang, Can; Wang, Shuang; Zhang, Xuezhi; Liu, Tiegen
2018-01-01
Coherent anti-Stokes Raman scattering (CARS) microscopy has attracted much attention because of advantages such as noninvasiveness, label-free operation, chemical specificity and intrinsic three-dimensional spatial resolution. However, achieving temporal overlap of the pump and Stokes pulses remains difficult owing to the ultrafast optical pulses used in CARS microscopy. We combine the interference spectrum of the residual pump in the Stokes path with the nonlinear Schrödinger equation (NLSE) to realize the temporal overlap of the pump and Stokes pulses. First, based on the interference spectrum of the pump pulse and the residual pump in the Stokes path, we determine the optical delay at which the optical path difference between the pump path and the Stokes path is zero. Then the relative optical delay between the Stokes pulse and the residual pump in the PCF can be calculated by the NLSE. Using the spectral interference and the NLSE, temporal overlap of the pump and Stokes pulses can be realized easily and the imaging speed in CARS microscopy improved.
Improving Radium-based Estimates of Submarine Groundwater Discharge
Hughes, A. L.; Wilson, A. M.
2011-12-01
Groundwater discharge is vital for the exchange of solutes between salt marshes and estuaries, and radium isotopes are frequently used as tracers of groundwater flow paths and discharge in coastal systems. Considerable spatial and temporal variability in porewater radium activity has hindered the accuracy of this tracer. In porewater, radium activity is a complex function of production by parent isotopes in and grain size of the aquifer material, individual decay rates, porewater salinity, temperature, redox- and pH-dependent adsorption and desorption, sediment Fe- and Mn-oxide/hydroxide coatings, and groundwater transport (advection and dispersion). In order to resolve the primary factors controlling porewater radium activity in an intertidal salt marsh, where high salinity and reducing conditions prevail, and sediment oxide coatings vary from winter to summer, a field and modeling study was conducted at a salt marsh island within North Inlet Salt Marsh, Georgetown, South Carolina. This site was previously developed as part of a larger study to understand the links between salt marsh groundwater dynamics and acute marsh dieback. Porewater and surface water samples were collected from November 2009 - February 2011. Shallow sediment samples were collected in winter and summer 2010, and deeper sediments were split from cores collected during site development. Measurements of water temperature, salinity, mV, and pH were taken in the field, and radium isotopes were measured via delayed-coincidence counter or gamma spectrometry. Surface-bound sediment radium activity was determined by desorption experiments. Iron and manganese oxide coatings on surface sediments were isolated through a sequential leaching process, and the leachate analyzed via ICP-AES. Finally, a 3-D groundwater flow model was developed using SUTRA, a U.S.G.S. numerical model, which was modified to account for changes in total stress resulting from tidal loading of the marsh surface and for complex
Otsuka, J; Kawai, Y; Sugaya, N
2001-11-21
In most studies of molecular evolution, the nucleotide base at a site is assumed to change with the apparent rate under functional constraint, and the comparison of base changes between homologous genes is thought to yield the evolutionary distance corresponding to the site-average change rate multiplied by the divergence time. However, this view is not sufficiently successful in estimating the divergence time of species, but mostly results in the construction of tree topology without a time-scale. In the present paper, this problem is investigated theoretically by considering that observed base changes are the results of comparing the survivals through selection of mutated bases. In the case of weak selection, the time course of base changes due to mutation and selection can be obtained analytically, leading to a theoretical equation showing how the selection has influence on the evolutionary distance estimated from the enumeration of base changes. This result provides a new method for estimating the divergence time more accurately from the observed base changes by evaluating both the strength of selection and the mutation rate. The validity of this method is verified by analysing the base changes observed at the third codon positions of amino acid residues with four-fold codon degeneracy in the protein genes of mammalian mitochondria; i.e. the ratios of estimated divergence times are fairly well consistent with a series of fossil records of mammals. Throughout this analysis, it is also suggested that the mutation rates in mitochondrial genomes are almost the same in different lineages of mammals and that the lineage-specific base-change rates indicated previously are due to the selection probably arising from the preference of transfer RNAs to codons.
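For contrast, the conventional site-average picture that the authors criticize can be written down directly: the standard Jukes-Cantor correction turns an observed fraction p of differing bases into an evolutionary distance with no selection term, and a divergence time then follows from an assumed mutation rate. The rate value below is hypothetical:

```python
import math

def jukes_cantor_distance(p):
    # Site-average evolutionary distance with no selection term:
    # corrects the observed fraction p of differing bases for multiple
    # substitutions at the same site (Jukes-Cantor model).
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# Example: 20% of compared third-codon positions differ
d = jukes_cantor_distance(0.20)

# Divergence time under an assumed per-site change rate (hypothetical)
mu = 1e-8  # substitutions per site per year
T = d / (2.0 * mu)
print(round(d, 4), T)
```

The paper's point is that selection on the surviving mutations biases such distances, so both the selection strength and the mutation rate must be estimated rather than folding everything into one site-average rate.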
A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations.
Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao
2015-11-24
In this paper we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the situation of multiple measurement vectors (MMV). A novel sparse DOA estimation method which converts the MMV problem to an SMV problem is proposed. This method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem. The MMV problem can be changed to a single measurement vector (SMV) problem by using a linear combination of the eigenvectors of the array covariance matrix in the signal subspace as a new SMV for the sparse solution calculation, so the complexity of the proposed algorithm is smaller than that of other DOA estimation algorithms for MMV. Meanwhile, it can overcome the limitation of conventional sparsity-based DOA estimation approaches that the unknown directions must belong to a predefined discrete angular grid, and can thus further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated based on the SRBWEV algorithm. In this proposed algorithm, the largest and second-largest inner products between the measurement vector or its residual and the atoms in the dictionary are utilized to further modify the DOA estimate, according to the principle of the Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by the grid effect with lower complexity.
RELIABLE ORIENTATION FIELD ESTIMATION OF FINGERPRINT BASED ON ADAPTIVE NEIGHBORHOOD ANALYSIS
Shoba Dyre; C P Sumathi
2017-01-01
Fingerprint orientation estimation is an important step in feature extraction and classification. However, reliable extraction of fingerprint orientation data is still a challenge for poor-quality images. In this paper, a gradient-based estimation of the orientation field, based on analysis of orientation consistency in the neighborhood for regularizing the orientation field, is proposed. Experimental results are analyzed and compared with other existing gradient-based methods used in this wo...
Bayesian estimation of faults geometry based on seismic catalog data
Holschneider, M.; Ben-Zion, Y.
2006-12-01
The geometrical properties of fault structures are important to many issues, including seismic hazard. The resolution of fault geometry at depth with waveform data is very difficult, and in many situations the only source of information is seismic catalogs, i.e. the origin times, magnitudes and hypocenter locations (with location errors) of earthquakes. In this work we discuss a method based on Bayesian inversion of catalog data to calculate the likelihood that the hypocenters in a given area reside on various narrow tabular zones. We scan the space of all possible zones by using a Hesse (normal form) representation of the possible fault zones. This allows us to compute an a posteriori probability density in the space of fault zones given the catalog data. An additional refinement of the method consists in using an "exclusion principle" whereby an earthquake prevents new events from happening for some time in a disk region with size related to the magnitude of the earlier earthquake. The time dependency of this exclusion region models the healing and post-seismic stress increase of a slipping zone. We discuss the theoretical properties and resolution power of these inversion procedures with the help of synthetic data.
Estimation of critical gap based on Raff's definition.
Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang
2014-01-01
Critical gap is an important parameter used to calculate the capacity and delay of the minor road in the gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, it is assumed that vehicle arrivals in the major stream and vehicle arrivals in the minor stream are independent, and that the headways of the major stream follow an M3 distribution. Based on Raff's definition of the critical gap, two calculation models are derived, named the M3 definition model and the revised Raff's model. Both models use the total rejected coefficient. The calculation models are compared by simulation and the new models are found to be valid. The conclusion reveals that the M3 definition model is simple and valid, while the revised Raff's model strictly obeys the definition of Raff's critical gap and its field of application is more extensive than that of the original Raff's model, giving a more accurate result. The M3 definition model and the revised Raff's model yield consistent results.
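Raff's definition identifies the critical gap as the point where the cumulative count of accepted gaps shorter than t meets the count of rejected gaps longer than t. A naive grid-scan sketch of that crossing, with made-up gap data in seconds (this is the plain Raff construction, not one of the paper's M3-based models):

```python
def raff_critical_gap(accepted, rejected, step=0.01):
    # Raff's definition: the critical gap t_c is the value of t at which
    # the number of accepted gaps shorter than t equals the number of
    # rejected gaps longer than t (the crossing of the two cumulative
    # curves). A simple grid scan over the observed range of gaps.
    lo, hi = min(accepted + rejected), max(accepted + rejected)
    best_t, best_diff = lo, float("inf")
    for i in range(int((hi - lo) / step) + 1):
        t = lo + i * step
        diff = abs(sum(g < t for g in accepted) - sum(g > t for g in rejected))
        if diff < best_diff:
            best_t, best_diff = t, diff
    return best_t

# Hypothetical gap observations (seconds) at a minor-road approach
accepted = [4.1, 4.8, 5.2, 5.9, 6.4, 7.0, 8.3]
rejected = [1.2, 1.9, 2.5, 3.1, 3.8, 4.4, 5.0]
t_c = raff_critical_gap(accepted, rejected)
print(round(t_c, 2))
```

With real data the crossing is usually read off the two cumulative curves; the grid scan above is just the most transparent way to locate it.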
Chamidah, Nur; Rifada, Marisa
2016-03-01
There is a significant correlation between the weight and height of children; therefore, simultaneous model estimation is better than a partial single-response approach. In this study we investigate the pattern of sex differences in the growth curves of children from birth up to two years of age in Surabaya, Indonesia, based on a biresponse model. The data were collected in a longitudinal representative sample of the Surabaya population of healthy children and consist of two response variables, i.e. weight (kg) and height (cm), while the predictor variable is age (months). Based on the generalized cross validation criterion, modeling with the biresponse model using a local linear estimator gives optimal bandwidths of 1.41 and 1.56 for the boys' and girls' growth curves, with determination coefficients (R2) of 99.99% and 99.98%, respectively. Both the boys' and girls' curves satisfy the goodness-of-fit criterion, i.e. the determination coefficient tends to one. There is also a difference in the pattern of the growth curves: the boys' median growth curve is higher than the girls'.
Directory of Open Access Journals (Sweden)
Sheng Bi
2016-03-01
Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single-pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
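A block match algorithm of the kind mentioned can be sketched as an exhaustive SAD (sum of absolute differences) search; the frames and block geometry below are synthetic toys, not the paper's setup:

```python
def block_match(ref, cur, bx, by, bsize, search):
    # Exhaustive block matching: find the displacement (dx, dy) into the
    # reference frame that minimizes the sum of absolute differences (SAD)
    # for the block of side bsize at (bx, by) in the current frame.
    h, w = len(ref), len(ref[0])
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy <= h - bsize and 0 <= bx + dx <= w - bsize):
                continue  # candidate block falls outside the reference frame
            sad = sum(abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
                      for y in range(bsize) for x in range(bsize))
            if sad < best[2]:
                best = (dx, dy, sad)
    return best

# Tiny synthetic frames: a bright 2x2 patch shifted right by one pixel
ref = [[0] * 8 for _ in range(8)]
ref[3][3] = ref[3][4] = ref[4][3] = ref[4][4] = 255
cur = [[0] * 8 for _ in range(8)]
cur[3][4] = cur[3][5] = cur[4][4] = cur[4][5] = 255
mv = block_match(ref, cur, 3, 3, 4, 2)
print(mv)
```

Restricting the search window (and using coarser-than-exhaustive search patterns) is what buys the reported reduction in motion estimation time.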
International Nuclear Information System (INIS)
Ozturk, H.K.; Canyurt, O.E.; Hepbasli, A.; Utlu, Z.
2004-01-01
The main objective of the present study is to develop energy input estimation equations for the residential-commercial sector (RCS) in order to estimate future projections based on the genetic algorithm (GA) notion, and to examine the effect of the design parameters on the energy input of the sector. For this purpose, the Turkish RCS is given as an example. The GA Energy Input Estimation Model (GAEIEM) is used to estimate Turkey's future residential-commercial energy input demand based on gross domestic product (GDP), population, imports, exports, house production, cement production and basic household appliance consumption figures. It may be concluded that the three forms of models proposed here can be used as alternative solution and estimation techniques to those already available. It is also expected that this study will be helpful in developing highly applicable and productive planning for energy policies. (author)
Uncertainty quantification of phase-based motion estimation on noisy sequence of images
Sarrafi, Aral; Mao, Zhu
2017-04-01
Optical measurement and motion estimation based on the acquired sequence of images is one of the most recent sensing techniques developed in the last decade or so. As a modern non-contact sensing technique, motion estimation and optical measurements provide a full-field awareness without any mass loading or change of stiffness in structures, which is unavoidable using other conventional transducers (e.g. accelerometers, strain gauges, and LVDTs). Among several motion estimation techniques prevalent in computer vision, phase-based motion estimation is one of the most reliable and accurate methods. However, contamination of the sequence of images with numerous sources of noise is inevitable, and the performance of the phase-based motion estimation could be affected due to the lighting changes, image acquisition noise, and the camera's intrinsic sensor noise. Within this context, the uncertainty quantification (UQ) of the phase-based motion estimation (PME) has been investigated in this paper. Based on a normality assumption, a framework has been provided in order to characterize the propagation of the uncertainty from the acquired images to the estimated motion. The established analytical solution is validated via Monte-Carlo simulations using a set of simulation data. The UQ model in the paper is able to predict the order statistics of the noise influence, in which the uncertainty bounds of the estimated motion are given, after processing the contaminated sequence of images.
Estimating variability in placido-based topographic systems.
Kounis, George A; Tsilimbaris, Miltiadis K; Kymionis, George D; Ginis, Harilaos S; Pallikaris, Ioannis G
2007-10-01
To describe a new software tool for the detailed presentation of corneal topography measurements variability by means of color-coded maps. Software was developed in Visual Basic to analyze and process a series of 10 consecutive measurements obtained by a topographic system on calibration spheres, and individuals with emmetropic, low, high, and irregular astigmatic corneas. Corneal surface was segmented into 1200 segments and the coefficient of variance of each segment's keratometric dioptric power was used as the measure of variability. The results were presented graphically in color-coded maps (Variability Maps). Two topographic systems, the TechnoMed C-Scan and the TOMEY Topographic Modeling System (TMS-2N), were examined to demonstrate our method. Graphic representation of coefficient of variance offered a detailed representation of examination variability both in calibration surfaces and human corneas. It was easy to recognize an increase in variability, as the irregularity of examination surfaces increased. In individuals with high and irregular astigmatism, a variability pattern correlated with the pattern of corneal topography: steeper corneal areas possessed higher variability values compared with flatter areas of the same cornea. Numerical data permitted direct comparisons and statistical analysis. We propose a method that permits a detailed evaluation of the variability of corneal topography measurements. The representation of the results both graphically and quantitatively improves interpretability and facilitates a spatial correlation of variability maps with original topography maps. Given the popularity of topography based custom refractive ablations of the cornea, it is possible that variability maps may assist clinicians in the evaluation of corneal topography maps of patients with very irregular corneas, before custom ablation procedures.
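The per-segment variability measure is straightforward to compute: for each segment, take the coefficient of variation of its keratometric power across the 10 repeated measurements. Below is a sketch with a 3-segment toy surface (the actual tool is written in Visual Basic; this is an illustrative Python version with invented values):

```python
import statistics

def variability_map(measurements):
    # measurements: repeated scans, each a list of per-segment keratometric
    # powers (diopters). Returns the coefficient of variation (stdev / mean)
    # of each segment across scans, the quantity shown in the color maps.
    n_seg = len(measurements[0])
    return [
        statistics.stdev([scan[s] for scan in measurements])
        / statistics.mean([scan[s] for scan in measurements])
        for s in range(n_seg)
    ]

# Toy example: 10 scans of a 3-segment surface; segment 1 is perfectly stable
scans = [[43.0 + 0.1 * (i % 3), 44.0, 45.0 + 0.05 * i] for i in range(10)]
cv = variability_map(scans)
print([round(c, 4) for c in cv])
```

In the paper the same quantity is computed for 1200 segments and rendered as a color-coded map alongside the original topography map.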
Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar
Directory of Open Access Journals (Sweden)
Zhenxin Cao
2018-02-01
Full Text Available The estimation problem for target velocity is addressed in this paper in the scenario with a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with knowledge of the target position. Then, in the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao lower bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and the velocity estimation performance can be further improved by increasing either the number of radar antennas or the information accuracy of the target position. Furthermore, compared with the existing methods, a better estimation performance can be achieved.
Stand-scale soil respiration estimates based on chamber methods in a Bornean tropical rainforest
Kume, T.; Katayama, A.; Komatsu, H.; Ohashi, M.; Nakagawa, M.; Yamashita, M.; Otsuki, K.; Suzuki, M.; Kumagai, T.
2009-12-01
This study was undertaken to estimate stand-scale soil respiration in an aseasonal tropical rainforest on Borneo Island. To this aim, we identified critical and practical factors explaining spatial variations in soil respiration based on the soil respiration measurements conducted at 25 points in a 40 × 40 m subplot of a 4 ha study plot for five years in relation to soil, root, and forest structural factors. Consequently, we found a significant positive correlation between soil respiration and forest structural parameters. The most important factor was the mean DBH within 6 m of the measurement points, which had a significant linear relationship with soil respiration. Using the derived linear regression and an inventory dataset, we estimated the 4 ha-scale soil respiration. The 4 ha-scale estimation (6.0 μmol m-2 s-1) was nearly identical to the subplot scale measurements (5.7 μmol m-2 s-1), which were roughly comparable to the nocturnal CO2 fluxes calculated using the eddy covariance technique. To confirm the spatial representativeness of soil respiration estimates in the subplot, we performed variogram analysis. Semivariance of DBH(6) in the 4 ha plot showed that there was autocorrelation within a separation distance of about 20 m, and that the spatial dependence was unclear at separation distances greater than 20 m. This confirmed that the 40 × 40 m subplot could represent the whole forest structure in the 4 ha plot. In addition, we discuss characteristics of the stand-scale soil respiration at this site by comparing with those of other forests reported in previous literature in terms of the soil C balance. Soil respiration at our site was noticeably greater, relative to the incident litterfall amount, than soil respiration in other tropical and temperate forests, probably owing to the larger total belowground C allocation by emergent trees. Overall, this study suggests the arrangement of emergent trees and their belowground C allocation could be
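The regression-based upscaling described above reduces to fitting a line and evaluating it on inventory data. A minimal sketch, with invented DBH values, coefficients and noise levels (not the study's data):

```python
import numpy as np

# Hypothetical illustration of regression-based upscaling: fit soil
# respiration (umol m-2 s-1) against mean DBH within 6 m of each of 25
# measurement points, then apply the fit to a stand-wide inventory.
rng = np.random.default_rng(0)
dbh_6m = rng.uniform(5.0, 40.0, 25)                   # mean DBH (cm), invented
resp = 2.0 + 0.12 * dbh_6m + rng.normal(0, 0.3, 25)   # synthetic respiration

slope, intercept = np.polyfit(dbh_6m, resp, 1)        # least-squares linear fit

# Upscale: evaluate the regression on inventory-derived DBH values.
inventory_dbh = rng.uniform(5.0, 40.0, 1000)
stand_resp = np.mean(intercept + slope * inventory_dbh)
print(round(float(stand_resp), 2))
```

The stand-scale estimate is simply the regression evaluated over the inventory, mirroring how the subplot relationship was extrapolated to the 4 ha plot.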
Hongkui Li; Tongli Lu; Jianwu Zhang
2016-01-01
This paper focuses on developing an estimation method for clutch drag torque in a wet dual clutch transmission (DCT). The modelling of clutch drag torque is investigated. As the main factor affecting the clutch drag torque, the dynamic viscosity of the oil is discussed. The paper proposes an estimation method for clutch drag torque based on recursive least squares, utilizing the dynamic equations of the gear-shifting synchronization process. The results demonstrate that the estimation method has good accuracy and efficiency.
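The recursive least squares (RLS) idea behind such an estimator can be sketched on a generic linear-in-parameters model; the two-parameter regressor below is a hypothetical stand-in, not the paper's clutch dynamics:

```python
import numpy as np

# Minimal recursive least squares (RLS) sketch: estimate unknown parameters
# of y = phi^T theta + noise from a stream of measurements.
rng = np.random.default_rng(1)
theta_true = np.array([0.8, -0.3])       # unknown parameters (e.g. viscosity terms)
theta = np.zeros(2)                      # running RLS estimate
P = np.eye(2) * 1000.0                   # covariance: large initial uncertainty

for _ in range(500):
    phi = rng.normal(size=2)             # regressor (e.g. speed-difference terms)
    y = phi @ theta_true + rng.normal(0, 0.01)   # noisy torque-like measurement
    K = P @ phi / (1.0 + phi @ P @ phi)  # gain vector
    theta = theta + K * (y - phi @ theta)        # correct estimate
    P = P - np.outer(K, phi @ P)                 # shrink covariance

print(np.round(theta, 2))
```

Each new measurement updates the estimate in constant time, which is why RLS suits online drag-torque estimation during a gear shift.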
International Nuclear Information System (INIS)
Kövesárki, P; Brock, I C; Quiroz, A E Nuncio
2012-01-01
This paper introduces a probability density estimator based on Green's function identities. A density model is constructed under the sole assumption that the probability density is differentiable. The method is implemented as a binary likelihood estimator for classification purposes, so issues such as mis-modeling and overtraining are also discussed. The identity behind the density estimator can be interpreted as a real-valued, non-scalar kernel method which is able to reconstruct differentiable density functions.
Tensor-Based Methods for Blind Spatial Signature Estimation in Multidimensional Sensor Arrays
Gomes, P.R.B.; Almeida, A.L.F. de; Costa, J.P.C.L. da; Mota, J.C.M.; Lima, D.V. de; Galdo, G. del
2017-01-01
The estimation of spatial signatures and spatial frequencies is crucial for several practical applications such as radar, sonar, and wireless communications. In this paper, we propose two generalized iterative estimation algorithms for the case in which a multidimensional (R-D) sensor array is used at the receiver. The first tensor-based algorithm is an R-D blind spatial signature estimator that operates in scenarios where the source's covariance matrix is nondiagonal and unknown. The second t...
Zhang, Ke; Jiang, Bin; Shi, Peng
2017-02-01
In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MASs. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 performance with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with an AP.
Multi-view space object recognition and pose estimation based on kernel regression
Directory of Open Access Journals (Sweden)
Zhang Haopeng
2014-10-01
Full Text Available The application of high-performance imaging sensors in space-based space surveillance systems makes it possible to recognize space objects and estimate their poses using vision-based methods. In this paper, we propose a kernel regression-based method for joint multi-view space object recognition and pose estimation. We built a new simulated satellite image dataset named BUAA-SID 1.5 to test our method using different image representations. We evaluated our method for recognition-only tasks, pose estimation-only tasks, and joint recognition and pose estimation tasks. Experimental results show that our method outperforms state-of-the-art methods in space object recognition, and can recognize space objects and estimate their poses effectively and robustly in the presence of noise and varying lighting conditions.
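The abstract does not specify the regression details, but the basic Nadaraya-Watson form of kernel regression can be sketched on synthetic 1-D data (the angles, feature values and bandwidth are invented):

```python
import numpy as np

# Nadaraya-Watson kernel regression: predict a query value as a
# kernel-weighted average of training targets.
def kernel_regress(x_train, y_train, x_query, bandwidth=0.3):
    # Gaussian kernel weights between each query point and all training samples
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

x = np.linspace(0, np.pi, 50)            # e.g. viewing angles (synthetic)
y = np.sin(x)                            # e.g. a pose-dependent feature
xq = np.array([0.5, 1.5, 2.5])
print(np.round(kernel_regress(x, y, xq), 2))
```

In the joint recognition/pose setting, the same weighted-average machinery would operate on image feature vectors rather than scalars.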
Karbalaee, Negar; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan
2017-04-01
This study explores using Passive Microwave (PMW) rainfall estimation for spatial and temporal adjustment of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). The PERSIANN-CCS algorithm collects information from infrared images to estimate rainfall. PERSIANN-CCS is one of the algorithms used in the Integrated Multisatellite Retrievals for GPM (Global Precipitation Measurement) for time periods when PMW rainfall estimates are limited or unavailable. Continued improvement of PERSIANN-CCS will support the Integrated Multisatellite Retrievals for GPM for current as well as retrospective estimations of global precipitation. This study takes advantage of the high spatial and temporal resolution of the GEO-based PERSIANN-CCS estimation and the more effective, but less frequently sampled, PMW estimation. The Probability Matching Method (PMM) was used to adjust the rainfall distribution of GEO-based PERSIANN-CCS toward that of the PMW rainfall estimation. The results show that a significant improvement of global PERSIANN-CCS rainfall estimation is obtained.
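The Probability Matching Method amounts to quantile matching: each GEO-based value is replaced by the target-distribution value at the same cumulative probability. A sketch with synthetic gamma-distributed rain rates (placeholders, not PERSIANN-CCS or PMW data):

```python
import numpy as np

# Probability (quantile) matching: map each source value to the target value
# with the same cumulative probability, so the source's distribution is
# reshaped toward the target's while preserving ranks.
def probability_match(source, target):
    # mid-rank of each source value -> quantile in the target distribution
    quantiles = (np.argsort(np.argsort(source)) + 0.5) / len(source)
    return np.quantile(target, quantiles)

rng = np.random.default_rng(2)
ccs = rng.gamma(2.0, 1.0, 10000)    # stand-in for GEO-based rain rates
pmw = rng.gamma(2.0, 2.0, 10000)    # stand-in for PMW rain rates (wetter)

adjusted = probability_match(ccs, pmw)
print(round(float(adjusted.mean()), 1), round(float(pmw.mean()), 1))
```

After matching, the adjusted field keeps the GEO product's spatial ranking but inherits the PMW rain-rate distribution.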
Evaluation of a morphing based method to estimate muscle attachment sites of the lower extremity
Pellikaan, P.; van der Krogt, Marjolein; Carbone, Vincenzo; Fluit, René; Vigneron, L.M.; van Deun, J.; Verdonschot, Nicolaas Jacobus Joseph; Koopman, Hubertus F.J.M.
2014-01-01
To generate subject-specific musculoskeletal models for clinical use, the location of muscle attachment sites needs to be estimated with accurate, fast and preferably automated tools. For this purpose, an automatic method was used to estimate the muscle attachment sites of the lower extremity, based
Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates
DEFF Research Database (Denmark)
Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals. The m...
Two stage DOA and Fundamental Frequency Estimation based on Subspace Techniques
DEFF Research Database (Denmark)
Zhou, Zhenhua; Christensen, Mads Græsbøll; So, Hing-Cheung
2012-01-01
In this paper, the problem of fundamental frequency and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signal is addressed. The estimation procedure consists of two stages. Firstly, by making use of the subspace technique and Markov-based eigenanalysis, a multi-channel...
Image-based change estimation for land cover and land use monitoring
Jeremy Webb; C. Kenneth Brewer; Nicholas Daniels; Chris Maderia; Randy Hamilton; Mark Finco; Kevin A. Megown; Andrew J. Lister
2012-01-01
The Image-based Change Estimation (ICE) project resulted from the need to provide estimates and information for land cover and land use change over large areas. The procedure uses Forest Inventory and Analysis (FIA) plot locations interpreted using two different dates of imagery from the National Agriculture Imagery Program (NAIP). In order to determine a suitable...
Yuan, Ke-Hai; Bentler, Peter M.
2002-01-01
Examined the asymptotic distributions of three reliability coefficient estimates: (1) sample coefficient alpha; (2) reliability estimate of a composite score following factor analysis; and (3) maximal reliability of a linear combination of item scores after factor analysis. Findings show that normal theory based asymptotic distributions for these…
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
Estimating canopy bulk density and canopy base height for interior western US conifer stands
Seth A. Ex; Frederick W. Smith; Tara L. Keyser; Stephanie A. Rebain
2016-01-01
Crown fire hazard is often quantified using effective canopy bulk density (CBD) and canopy base height (CBH). When CBD and CBH are estimated using nonlocal crown fuel biomass allometries and uniform crown fuel distribution assumptions, as is common practice, values may differ from estimates made using local allometries and nonuniform...
Posture estimation of a space object base on line reconstruction from stereo images
Shang, Ke; Sun, Xiao; Tian, Jinwen; Ming, Delie
2015-12-01
This paper proposes a novel posture estimation method composed of two stages. The first stage reconstructs lines from stereo images, and the second stage estimates the posture from the reconstructed lines. The accuracy of line detection is better than that of point detection, so our method achieves better accuracy than methods based on points.
Numerosity estimation in visual stimuli in the absence of luminance-based cues.
Directory of Open Access Journals (Sweden)
Peter Kramer
2011-02-01
Full Text Available Numerosity estimation is a basic preverbal ability that humans share with many animal species and that is believed to be foundational to numeracy skills. It is notoriously difficult, however, to establish whether numerosity estimation is based on numerosity itself, or on one or more non-numerical cues like (in visual stimuli) spatial extent and density. Frequently, different non-numerical cues are held constant on different trials. This strategy, however, still allows numerosity estimation to be based on a combination of non-numerical cues rather than on any particular one by itself. Here we introduce a novel method, based on second-order (contrast-based) visual motion, to create stimuli that exclude all first-order (luminance-based) cues to numerosity. We show that numerosities can be estimated almost as well in second-order motion as in first-order motion. The results show that numerosity estimation need not be based on first-order spatial filtering, first-order density perception, or any other processing of luminance-based cues to numerosity. Our method can be used as an effective tool to control non-numerical variables in studies of numerosity estimation.
Response-based estimation of sea state parameters - Influence of filtering
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2007-01-01
Reliable estimation of the on-site sea state parameters is essential to decision support systems for safe navigation of ships. The wave spectrum can be estimated from procedures based on measured ship responses. The paper deals with two procedures—Bayesian Modelling and Parametric Modelling...... parameters—are carried out for a large container vessel. The study shows that filtering has an influence on the estimations, since high-frequency components of the wave excitations are not estimated as accurately as lower frequency components....
Fundamental Frequency Estimation using Polynomial Rooting of a Subspace-Based Method
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2010-01-01
We consider the problem of estimating the fundamental frequency of periodic signals such as audio and speech. A novel estimation method based on polynomial rooting of the harmonic MUltiple SIgnal Classification (HMUSIC) is presented. By applying polynomial rooting, we obtain two significant...... improvements compared to HMUSIC. First, by using the proposed method we can obtain an estimate of the fundamental frequency without doing a grid search as in HMUSIC. This is because the fundamental frequency is estimated as the argument of the root lying closest to the unit circle. Second, we obtain...
HAAR TRANSFORM BASED ESTIMATION OF CHLOROPHYLL AND STRUCTURE OF THE LEAF
Abhinav Arora; R. Menaka; Shivangi Gupta; Archit Mishra
2013-01-01
In this paper, the health of a plant is estimated using various non-destructive image processing techniques. Chlorophyll content is detected based on colour image processing, and the Haar transform is applied to extract the leaf size and related parameters.
Estimation of direction of arrival of a moving target using subspace based approaches
Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.
2016-05-01
In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimation of the direction of arrival of moving targets using acoustic signatures. Three subspace-based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares-Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) and Total Least Squares-ESPRIT (TLS-ESPRIT) - are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace-based approaches over TDE-based techniques. Amongst the compared methods, LS-ESPRIT showed the best performance.
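The baseline TDE idea the subspace methods are compared against can be sketched with generalized cross-correlation; PHAT weighting is used here as one common choice, and the white-noise signal and 7-sample delay are synthetic stand-ins for vehicle acoustics:

```python
import numpy as np

# Generalized cross-correlation with PHAT weighting: whiten the cross
# spectrum so only phase (i.e. delay) information remains, then pick the
# lag of the correlation peak.
def gcc_phat(x1, x2):
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    R = X1 * np.conj(X2)
    R /= np.abs(R) + 1e-12                # PHAT: keep phase only
    cc = np.fft.irfft(R, n)
    lags = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))
    return int(np.argmax(lags)) - n // 2  # delay of x1 relative to x2, samples

rng = np.random.default_rng(3)
s = rng.normal(size=1024)
x2 = s
x1 = np.roll(s, 7)                        # x1 lags x2 by 7 samples
est_delay = gcc_phat(x1, x2)
print(est_delay)
```

The estimated inter-sensor delay, combined with array geometry, then yields a bearing, which is the step where subspace methods take over in the paper.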
RSPF-based Prognosis Framework for Estimation of Remaining Useful Life in Energy Storage Devices
National Aeronautics and Space Administration — This paper presents a case study where a RSPF-based prognosis framework is applied to estimate the remaining useful life of an energy storage device (Li-Ion...
Carletta, N.; Mullendore, G. L.; Xi, B.; Feng, Z.; Dong, X.
2013-12-01
Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which can affect cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study is to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from individual radars. Three methods were used to identify the LMD and validated against dual-Doppler derived vertical mass divergence fields. The best method for locating the LMD was determined to be the method that uses a horizontal reflectivity texture-based technique to determine convective cores and a multi-layer echo identification to determine anvil locations. The methodology was found to work in many but not all cases. The methodology works best when applied to convective systems with mature updrafts, and is most accurate with convective lines and single cells. A time lag is present in the reflectivity based LMD compared to the vertical mass divergence based LMD because the reflectivity method is dependent on anvil growth. This methodology was then applied to archived NEXRAD 3D mosaic radar data. The regions of analysis were chosen to coincide with the observation regions for the Deep Convective Clouds and Chemistry Experiment (DC3): the Colorado Foothills, Southern Plains (OK/TX), and Southeast US (AL). These three regions provide a wide variety of convection. The dates analyzed were from May and June of 2012 so the results can be compared to future DC3 studies. The variability of detrainment heights for the early convective season for these different geographical regions will be presented.
Kalayeh, H. M.; Landgrebe, D. A.
1983-01-01
A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
Attigala, Lakshmi; Wysocki, William P; Duvall, Melvin R; Clark, Lynn G
2016-08-01
We explored phylogenetic relationships among the twelve lineages of the temperate woody bamboo clade (tribe Arundinarieae) based on plastid genome (plastome) sequence data. A representative sample of 28 taxa was used and maximum parsimony, maximum likelihood and Bayesian inference analyses were conducted to estimate the Arundinarieae phylogeny. All the previously recognized clades of Arundinarieae were supported, with Ampelocalamus calcareus (Clade XI) as sister to the rest of the temperate woody bamboos. Well-supported sister relationships between Bergbambos tessellata (Clade I) and Thamnocalamus spathiflorus (Clade VII) and between Kuruna (Clade XII) and Chimonocalamus (Clade III) were revealed by the current study. The plastome topology was tested by taxon removal experiments and alternative hypothesis testing and the results supported the current plastome phylogeny as robust. Neighbor-net analyses showed few phylogenetic signal conflicts, but suggested some potentially complex relationships among these taxa. Analyses of morphological character evolution of rhizomes and reproductive structures revealed that pachymorph rhizomes were most likely the ancestral state in Arundinarieae. In contrast, leptomorph rhizomes either evolved once with reversions to the pachymorph condition or multiple times in Arundinarieae. Further, pseudospikelets evolved independently at least twice in the Arundinarieae, but the ancestral state is ambiguous. Copyright © 2016 Elsevier Inc. All rights reserved.
Kerfdr: a semi-parametric kernel-based approach to local false discovery rate estimation
Directory of Open Access Journals (Sweden)
Robin Stephane
2009-03-01
Full Text Available Abstract Background The use of current high-throughput genetic, genomic and post-genomic data leads to the simultaneous evaluation of a large number of statistical hypotheses and, at the same time, to the multiple-testing problem. As an alternative to the too conservative Family-Wise Error-Rate (FWER), the False Discovery Rate (FDR) has emerged over the last ten years as more appropriate for handling this problem. However, one drawback of the FDR is that it is tied to a given rejection region for the considered statistics, attributing the same value to statistics that are close to the boundary and to those that are not. As a result, the local FDR has recently been proposed to quantify the specific probability for a given null hypothesis to be true. Results In this context we present a semi-parametric approach based on kernel estimators which is applied to different high-throughput biological data such as patterns in DNA sequences, gene expression and genome-wide association studies. Conclusion The proposed method has practical advantages over existing approaches: it considers complex heterogeneities in the alternative hypothesis, takes into account prior information (from expert judgment or previous studies) by allowing a semi-supervised mode, and deals with truncated distributions such as those obtained in Monte-Carlo simulations. This method has been implemented and is available through the R package kerfdr via the CRAN or at http://stat.genopole.cnrs.fr/software/kerfdr.
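The local FDR quantity itself is easy to sketch: with a known null density f0, a kernel estimate of the marginal f, and a null proportion p0, lfdr(x) = p0·f0(x)/f(x). The z-scores, p0 and bandwidth below are synthetic choices for illustration, not kerfdr's actual estimation procedure (which also estimates p0 and handles truncation):

```python
import numpy as np

# Schematic local-FDR computation for a two-component mixture:
# f(x) = p0*f0(x) + (1-p0)*f1(x), with f0 = N(0,1) as the known null.
rng = np.random.default_rng(4)
p0 = 0.8
z = np.concatenate([rng.normal(0, 1, 8000),      # true nulls
                    rng.normal(3, 1, 2000)])     # true alternatives

def kde(x, data, h=0.2):
    # Gaussian kernel density estimate of the marginal f at points x
    k = np.exp(-0.5 * ((x[:, None] - data[None, :]) / h) ** 2)
    return k.mean(axis=1) / (h * np.sqrt(2 * np.pi))

grid = np.array([0.0, 3.0])                      # a null-like and an alt-like score
f0 = np.exp(-0.5 * grid ** 2) / np.sqrt(2 * np.pi)
lfdr = np.clip(p0 * f0 / kde(grid, z), 0, 1)
print(np.round(lfdr, 2))
```

Scores near the null center get lfdr near 1 (almost surely null), while scores deep in the alternative region get lfdr near 0.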
DEFF Research Database (Denmark)
Pittalà, Fabio; Hauske, Fabian N.; Ye, Yabin
2012-01-01
Efficient channel estimation for signal equalization and OPM based on short CAZAC sequences with QPSK and 8PSK constellation formats is demonstrated in a 224-Gb/s PDM 16-QAM optical linear transmission system.
Photometry-based estimation of the total number of stars in the Universe.
Manojlović, Lazo M
2015-07-20
A novel photometry-based estimation of the total number of stars in the Universe is presented. The estimation method is based on the energy conservation law and actual measurements of the extragalactic background light levels. By assuming that every radiated photon is kept within the Universe volume, i.e., by approximating the Universe as an integrating cavity without losses, the total number of stars in the Universe of about 6×10²² has been obtained.
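The energy-conservation argument can be reproduced as a back-of-envelope calculation. All input values below are rough, generic literature-order numbers chosen for illustration, not the paper's measured quantities:

```python
import math

# Back-of-envelope star count from energy conservation: total EBL energy
# stored in the "lossless cavity" Universe, divided by the lifetime energy
# output of a typical star.
I_ebl = 60e-9                   # extragalactic background light, W m^-2 sr^-1 (assumed)
c = 3.0e8                       # speed of light, m/s
u = 4 * math.pi * I_ebl / c     # radiation energy density, J m^-3

R = 4.4e26                      # radius of observable Universe, m (assumed)
V = 4.0 / 3.0 * math.pi * R**3  # cavity volume, m^3

L_avg = 0.3 * 3.8e26            # assumed mean stellar luminosity, W
t = 3.2e17                      # assumed mean emission time (~10 Gyr), s

N = u * V / (L_avg * t)         # number of stars needed to supply the EBL
print(f"{N:.1e}")
```

With these generic inputs the count lands around 10²², the same order of magnitude as the paper's 6×10²².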
Risk-based surveillance: Estimating the effect of unwarranted confounder adjustment
DEFF Research Database (Denmark)
Willeberg, Preben; Nielsen, Liza Rosenbaum; Salman, Mo
2011-01-01
We estimated the effects of confounder adjustment as a part of the underlying quantitative risk assessments on the performance of a hypothetical example of a risk-based surveillance system, in which a single risk factor would be used to identify high risk sampling units for testing. The differences...... considered for their appropriateness, if the risk estimates are to be used for informing risk-based surveillance systems....
Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation
Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou
2018-03-01
Auto-regressive (AR) spectral estimation is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. It is shown that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than that of fast Fourier transform (FFT)-based methods, provided the data length is not too short. It enables about a 3-fold improvement over FFT at a moderate spatial resolution.
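A minimal Yule-Walker AR spectral estimator illustrates the idea: fit AR coefficients from the autocorrelation sequence and read off the peak of the resulting spectrum. The noisy complex tone stands in for a Brillouin gain trace, and the model order is an arbitrary choice:

```python
import numpy as np

# Yule-Walker AR spectral estimation: solve R a = r for the AR coefficients,
# then evaluate the AR power spectrum 1/|1 - sum a_k e^{-j2*pi*f*k}|^2 on a
# fine grid and locate its peak.
def ar_peak(x, order, freqs):
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[k:], np.conj(x[:n - k])) / n for k in range(order + 1)])
    R = np.array([[r[i - j] if i >= j else np.conj(r[j - i])
                   for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])                 # AR coefficients
    E = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    psd = 1.0 / np.abs(1.0 - E @ a) ** 2
    return freqs[np.argmax(psd)]

rng = np.random.default_rng(5)
f_true = 0.123                                    # normalized frequency
t = np.arange(256)
x = np.exp(2j * np.pi * f_true * t) \
    + 0.05 * (rng.normal(size=256) + 1j * rng.normal(size=256))

f_est = ar_peak(x, 4, np.linspace(0.0, 0.5, 5001))
print(round(float(f_est), 3))
```

Because the AR spectrum is evaluated on an arbitrarily fine grid, the peak location is not tied to the FFT bin spacing, which is the source of the accuracy gain the abstract describes.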
High Resolution DOA Estimation Using Unwrapped Phase Information of MUSIC-Based Noise Subspace
Ichige, Koichi; Saito, Kazuhiko; Arai, Hiroyuki
This paper presents a high resolution Direction-Of-Arrival (DOA) estimation method using unwrapped phase information of the MUSIC-based noise subspace. Superresolution DOA estimation methods such as MUSIC, Root-MUSIC and ESPRIT have received great attention because of their excellent performance in estimating the DOAs of incident signals. Those methods achieve high accuracy in estimating DOAs in a good propagation environment, but may fail in severe environments with low Signal-to-Noise Ratio (SNR), a small number of snapshots, or incident waves coming from close angles. In the MUSIC method, the spectrum is calculated based on the absolute value of the inner product between the array response and the noise eigenvectors, meaning that MUSIC employs only amplitude characteristics and does not use any phase characteristics. Recalling that phase characteristics play an important role in signal and image processing, we expect that DOA estimation accuracy can be further improved by using phase information in addition to the MUSIC spectrum. This paper develops a procedure to obtain an accurate spectrum for DOA estimation using unwrapped and differentiated phase information of the MUSIC-based noise subspace. Performance of the proposed method is evaluated through computer simulation in comparison with some conventional estimation methods.
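For reference, the baseline MUSIC pseudospectrum the paper builds on can be sketched for a uniform linear array; the 8-sensor scenario, source angles, SNR and snapshot count are illustrative only:

```python
import numpy as np

# Classic MUSIC for a uniform linear array (ULA): project steering vectors
# onto the noise subspace and take the reciprocal of the projection norm.
rng = np.random.default_rng(6)
M, d, T = 8, 0.5, 200                         # sensors, spacing (wavelengths), snapshots
doas = np.deg2rad([-20.0, 35.0])              # illustrative source angles

def steer(theta):
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

S = rng.normal(size=(2, T)) + 1j * rng.normal(size=(2, T))
noise = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
X = steer(doas) @ S + noise

w, V = np.linalg.eigh(X @ X.conj().T / T)     # eigenvalues ascending
En = V[:, :M - 2]                             # noise subspace (2 sources assumed known)

grid = np.deg2rad(np.arange(-90.0, 90.0, 0.1))
p = 1.0 / np.linalg.norm(En.conj().T @ steer(grid), axis=0) ** 2
pk = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1   # local maxima
est = np.sort(np.rad2deg(grid[pk[np.argsort(p[pk])[-2:]]]))
print(np.round(est, 1))
```

Note that `p` uses only the magnitude of the noise-subspace projection; the paper's contribution is to additionally exploit the unwrapped phase of that projection.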
Root-MUSIC Based Angle Estimation for MIMO Radar with Unknown Mutual Coupling
Directory of Open Access Journals (Sweden)
Jianfeng Li
2014-01-01
Full Text Available Direction of arrival (DOA) estimation for multiple-input multiple-output (MIMO) radar with unknown mutual coupling is studied, and an algorithm for DOA estimation based on root multiple signal classification (Root-MUSIC) is proposed. Firstly, according to the Toeplitz structure of the mutual coupling matrix, output data of some specified sensors are selected to eliminate the influence of the mutual coupling. Then a reduced-dimension transformation is applied to lower the computational burden as well as obtain a Vandermonde structure of the direction matrix. Finally, Root-MUSIC can be adopted for the angle estimation. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT)-like algorithm and the MUSIC-like algorithm. Furthermore, the proposed algorithm has lower complexity than both. The simulation results verify the effectiveness of the algorithm, and the theoretical estimation error of the algorithm is also derived.
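The Root-MUSIC step at the end of the pipeline can be sketched for a plain uniform linear array (not the paper's MIMO geometry or mutual-coupling handling): form the polynomial from the noise-subspace projector and take the roots closest to the unit circle:

```python
import numpy as np

# Root-MUSIC for a ULA: the MUSIC null-spectrum becomes a polynomial whose
# roots nearest the unit circle encode the DOAs, so no grid search is needed.
rng = np.random.default_rng(7)
M, d, T, K = 8, 0.5, 200, 2
doas = np.deg2rad([-10.0, 25.0])              # illustrative source angles
A = np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(doas))
S = rng.normal(size=(K, T)) + 1j * rng.normal(size=(K, T))
X = A @ S + 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))

R = X @ X.conj().T / T
_, V = np.linalg.eigh(R)
C = V[:, :M - K] @ V[:, :M - K].conj().T      # noise-subspace projector

# polynomial coefficients are the sums of the diagonals of C
coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
roots = np.roots(coeffs)
roots = roots[np.abs(roots) < 1.0]            # keep roots inside the unit circle
picked = roots[np.argsort(np.abs(np.abs(roots) - 1.0))[:K]]
est = np.sort(np.rad2deg(np.arcsin(-np.angle(picked) / (2 * np.pi * d))))
print(np.round(est, 1))
```

Roots come in reciprocal-conjugate pairs with identical angles, so keeping only the inside-the-circle members loses no DOA information.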
An Estimator for Attitude and Heading Reference Systems Based on Virtual Horizontal Reference
DEFF Research Database (Denmark)
Wang, Yunlong; Soltani, Mohsen; Hussain, Dil muhammed Akbar
2016-01-01
The output of the attitude determination systems suffers from large errors in case of accelerometer malfunctions. In this paper, an attitude estimator, based on Virtual Horizontal Reference (VHR), is designed for an Attitude Heading and Reference System (AHRS) to cope with this problem. The VHR...... makes it possible to correct the output of roll and pitch of the attitude estimator in the situations without accelerometer measurements, which cannot be achieved by the conventional nonlinear attitude estimator. The performance of VHR is tested both in simulation and hardware environment to validate...... their estimation performance. Moreover, the hardware test results are compared with that of a high-precision commercial AHRS to verify the estimation results. The implemented algorithm has shown high accuracy of attitude estimation that makes the system suitable for many applications....
DEFF Research Database (Denmark)
Montazeri, Najmeh; Nielsen, Ulrik Dam
2014-01-01
An accurate estimation of the ocean wave directional spectrum at the location of an advancing ship is very useful for the ship master to improve operation and safety in a seaway. Research has been conducted to obtain sea state estimates by the Wave Buoy Analogy. The method deals with processing...... the ship’s wave-induced responses based on different statistical inferences including parametric and non-parametric approaches. This paper considers a concept to improve the estimate obtained by the parametric method for sea state estimation. The idea is illustrated by an analysis made on full...... are considered as the input of the estimation process. A comparison is made between the results and also with some in-hand outputs from other estimation sources, e.g., wave radar measurements and sea surface elevation by microwave sensors. The discussed and analyzed procedure could also lead to an automatic...
Adaptive Distance Estimation Based on RSSI in 802.15.4 Network
Directory of Open Access Journals (Sweden)
M. Botta
2013-12-01
Full Text Available This paper deals with the distance estimation issue in 802.15.4 wireless sensor networks. The distance between two sensor nodes is estimated from the signal strength of received frames using the log-normal shadowing radio propagation model (LNSM). The basic problems of signal-strength-based systems are the variation of the RSSI parameter and the correct calibration of the coefficients of the LNSM model. Static and dynamic calibration methods for distance estimation in wireless networks have already been introduced. Static calibration has a significant drawback in adapting to dynamic environment changes. The proposed work provides an experimental validation of a dynamic calibration method for distance estimation in wireless sensor networks. This dynamic estimation of radio environment parameters results in significant improvements in distance estimation accuracy in comparison with the known static methods.
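The LNSM inversion used for RSSI ranging is a one-line formula: RSSI(d) = RSSI(d0) - 10·n·log10(d/d0) plus shadowing noise, solved for d. The reference power and path-loss exponent below are typical textbook values, not calibrated measurements:

```python
import math

# Log-normal shadowing model (LNSM) inversion for RSSI-based ranging:
#   RSSI(d) = RSSI(d0) - 10*n*log10(d/d0)  =>  d = d0 * 10^((RSSI(d0)-RSSI)/(10n))
def distance_from_rssi(rssi, rssi_d0=-45.0, d0=1.0, n=2.7):
    return d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * n))

# forward model at 8 m (noise-free), then invert it
n_exp, d = 2.7, 8.0
rssi = -45.0 - 10 * n_exp * math.log10(d / 1.0)
print(round(distance_from_rssi(rssi), 1))
```

With a dynamic calibration scheme as in the paper, `rssi_d0` and `n` would be re-estimated online as the radio environment changes, rather than fixed at static values.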
Interpretation and Implications of Previous Sea Pay Estimates
2015-04-01
...a simple example of the Navy paying “rents” in the labor market (Figure 13: when disutility of general Navy duty and sea duty are perfectly...) ...their true demand for a product. This mechanism allows companies to tailor prices to specific segments of their market and, in so doing, to maximize... entire use of bandwidth. Telecommunications companies use this distinction to construct a market segmentation technique that is at the heart of the non...
Yang, Lin; Guo, Peng; Yang, Aiying; Qiao, Yaojun
2018-02-01
In this paper, we propose a blind third-order dispersion estimation method based on the fractional Fourier transform (FrFT) in optical fiber communication systems. By measuring the chromatic dispersion (CD) at different wavelengths, this method can estimate the dispersion slope and further calculate the third-order dispersion. The simulation results demonstrate that the estimation error is less than 2% in 28 GBaud dual polarization quadrature phase-shift keying (DP-QPSK) and 28 GBaud dual polarization 16 quadrature amplitude modulation (DP-16QAM) systems. Through simulations, the proposed third-order dispersion estimation method is shown to be robust against nonlinear and amplified spontaneous emission (ASE) noise. In addition, to reduce the computational complexity, a searching step with coarse and fine granularity is chosen to search for the optimal order of the FrFT. The third-order dispersion estimation method based on the FrFT can be used to monitor the third-order dispersion in optical fiber systems.
Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function
LIU, Liang; WEI, Ping; LIAO, Hong Shu
Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional approaches. However, the performance of compressive sensing (CS)-based estimation methods decreases when the true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, under the condition of a large array, we search for an approximately convex range around the true DOAs to guarantee that the DML function is convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function with a large array and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch containing missing areas, and the missing intensities are estimated by retrieving the patch's phase with the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases needed to reconstruct the missing areas.
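The core ER iteration, alternating between a Fourier-magnitude constraint and a known-pixel constraint, can be sketched as follows. This is a simplified illustration with an oracle magnitude (here taken from the ground-truth patch); the paper's magnitude-estimation scheme from similar known patches is not reproduced.

```python
import numpy as np

def error_reduction(magnitude, known, mask, n_iter=200):
    """One ER loop: enforce the Fourier magnitude, then re-impose the
    known pixels, estimating intensities only in the missing region."""
    x = np.where(mask, 0.0, known)  # start with the missing area zeroed
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))   # magnitude constraint
        x = np.real(np.fft.ifft2(X))
        x = np.where(mask, x, known)               # spatial constraint
    return x

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
mask = np.zeros((16, 16), bool)
mask[6:10, 6:10] = True                            # missing area
rec = error_reduction(np.abs(np.fft.fft2(patch)), patch, mask)
err = np.abs(rec - patch)[mask].mean()
```

Each pass projects the estimate onto the set of images with the target magnitude and then onto the set of images agreeing with the known pixels, which is why the residual error is non-increasing over iterations.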
Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.
Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew
2017-08-10
When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters such as the window size often require careful optimization to balance noise error, dynamic range, and linearity of the response coefficient under different photon fluxes, and the method must be replaced by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center-of-gravity calculation window floats with the incoming pixels from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear response coefficient, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
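The baseline center-of-gravity computation that the abstract refers to can be sketched in a few lines; the stream-processing refinement itself is not reproduced here, and the Gaussian spot parameters below are illustrative.

```python
import numpy as np

def center_of_gravity(window):
    """Centroid (row, col) of a spot as the intensity-weighted mean position."""
    w = window.astype(float)
    total = w.sum()
    rows, cols = np.indices(w.shape)
    return (rows * w).sum() / total, (cols * w).sum() / total

# Synthetic Gaussian spot centred at (4.3, 5.7) on an 11x11 window.
r, c = np.indices((11, 11))
spot = np.exp(-((r - 4.3) ** 2 + (c - 5.7) ** 2) / 2.0)
cy, cx = center_of_gravity(spot)
```

On noisy frames, the window size trade-off the abstract mentions appears here directly: a larger window admits more background pixels into the weighted sum, while a smaller one truncates the spot and biases the estimate.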
PAC learning using Nadaraya-Watson estimator based on orthonormal systems
Energy Technology Data Exchange (ETDEWEB)
Qiao, Hongzhu [Fort Valley State College, GA (United States). Dept. of Mathematics and Physics; Rao, N.S.V.; Protopopescu, V. [Oak Ridge National Lab., TN (United States)
1997-08-01
Regression or function classes of Euclidean type with compact support and certain smoothness properties are shown to be PAC learnable by the Nadaraya-Watson estimator based on complete orthonormal systems. While requiring more smoothness than typical PAC formulations, this estimator is computationally efficient, easy to implement, and known to perform well in a number of practical applications. The sample sizes necessary for PAC learning of regressions or functions under sup-norm cost are derived for a general orthonormal system. The result covers the widely used estimators based on Haar wavelets, trigonometric functions, and Daubechies wavelets.
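For readers unfamiliar with the estimator itself, the classical kernel form of Nadaraya-Watson regression is a locally weighted average; this sketch uses a Gaussian kernel rather than the orthonormal-system construction analyzed in the paper, and the bandwidth is an assumed value.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    """f(x) = sum_i K_h(x - x_i) y_i / sum_i K_h(x - x_i)."""
    d = (x_query[:, None] - x_train[None, :]) / bandwidth
    k = np.exp(-0.5 * d ** 2)          # Gaussian kernel weights
    return (k * y_train).sum(axis=1) / k.sum(axis=1)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 400))
y = np.sin(x) + 0.1 * rng.normal(size=400)
xq = np.array([np.pi / 2, np.pi, 3 * np.pi / 2])
fq = nadaraya_watson(x, y, xq, bandwidth=0.3)
```

The smoothness assumptions in the paper control how fast such a weighted average converges to the true regression function in sup norm as the sample size grows.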
Estimation of Multiple Pitches in Stereophonic Mixtures using a Codebook-based Approach
DEFF Research Database (Denmark)
Hansen, Martin Weiss; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2017-01-01
In this paper, a method for multi-pitch estimation of stereophonic mixtures of multiple harmonic signals is presented. The method is based on a signal model which takes the amplitude and delay panning parameters of the sources in a stereophonic mixture into account. Furthermore, the method is based on the extended invariance principle (EXIP) and a codebook of realistic amplitude vectors. For each fundamental frequency candidate in each of the sources, the amplitude estimates are mapped to entries in the codebook, and the pitch and model order are estimated jointly. The performance of the proposed method...
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.
Li, D.; Durand, M. T.; Margulis, S. A.
2012-12-01
Snowpack serves as a critical water resource and an important climate indicator. Accurately estimating snow water equivalent (SWE) and melt timing has both civil and scientific merits. Physical-process-based multi-layer land surface models (LSMs) characterize snowpack by tracking the energy balance and mass balance in each layer. However, in terms of the number of layers used to model the snowpack stratigraphy, as well as the complexity of the simulated mass/energy exchanges in each layer, significant differences exist among LSMs. Previous work has largely focused on assessing the impact of layering and stratigraphy representation on mass and energy balance, with little attention paid to the implications of these factors for predicted microwave brightness temperature (Tb). In this paper, three LSMs with different snow layer schemes, SSiB (3-layer), CoLM (5-layer), and SNOWPACK (N-layer), are coupled to the Microwave Emission Model of Layered Snowpacks (MEMLS) radiative transfer model (RTM) to simulate the snowpack mass/energy budgets and microwave signature over a full season. The simulations are performed at five in-situ gauge locations in the Kern River Basin, Sierra Nevada, CA, where large snow events are known to occur that can be problematic to represent with a small number of snow layers. Particular emphasis is placed on assessing the impact of the layering scheme on the results. Preliminary results show that even for SSiB, which has a relatively simple empirical layering scheme, the modeled annual SWE can be highly correlated with the in-situ SWE (r² = 0.91) if the precipitation bias is corrected. Also, the comparison between the Tb simulated by SSiB+MEMLS and the downscaled AMSR-E Tb measurements shows a correlation coefficient of 0.94 during the snow accumulation season (Oct to Apr) if the grain growth parameters and the soil-snow reflectivity are properly calibrated. Future work includes comparing SWE and Tb from all three models and...
Estimating Uncertainty of Point-Cloud Based Single-Tree Segmentation with Ensemble Based Filtering
Directory of Open Access Journals (Sweden)
Matthew Parkan
2018-02-01
Individual tree crown segmentation from airborne laser scanning data is a key problem in forest remote sensing. Focusing on single-layered spruce- and fir-dominated coniferous forests, this article addresses the problem of directly estimating 3D segment shape uncertainty (i.e., without field/reference surveys) using a probabilistic approach. First, a coarse segmentation (marker-controlled watershed) is applied. Then, the 3D alpha hull and several descriptors are computed for each segment. Based on these descriptors, the alpha hulls are grouped to form ensembles (i.e., groups of similar tree shapes). By examining how frequently regions of a shape occur within an ensemble, it is possible to assign a shape probability to each point within a segment. The shape probability can subsequently be thresholded to obtain improved (filtered) tree segments. Results indicate this approach can be used to produce segmentation reliability maps. A comparison with manually segmented tree crowns also indicates that the approach produces more reliable tree shapes than the initial (unfiltered) segmentation.
Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates.
Directory of Open Access Journals (Sweden)
Caroline A Curtis
Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. We compiled expert-knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the 'plant characteristics' information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate data from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation. Our results show that climatic limits inferred from distribution data are consistently broader than USDA PLANTS experts' knowledge and likely provide more robust estimates of climatic tolerance, especially for...
H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.
Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
This paper addresses the problem of H∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of the Filippov solution, the stochastic memristor-based neural networks are transformed into systems with interval parameters. This paper is the first to investigate the H∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and stochastic analysis techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H∞ performance. An explicit expression for the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce the control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results.
Bootstrap-based confidence estimation in PCA and multivariate statistical process control
DEFF Research Database (Denmark)
Babamoradi, Hamid
Traditional/asymptotic confidence estimation has limited applicability, since it needs statistical theories to estimate the confidences, and these are not available for all indicators/parameters. Furthermore, even when theories are available for a specific indicator/parameter, they are based on assumptions that do not always hold in practice. The aim of this thesis was to illustrate the concept of bootstrap-based confidence estimation in PCA and MSPC. In particular, it shows how to build bootstrap-based confidence limits in these areas to be used as alternatives to the traditional/asymptotic limits, and recommendations on how to build rational confidence limits are given. Two NIR datasets were used to study the effect of outliers and bimodal distributions on the bootstrap-based limits. The results showed that bootstrapping can give reasonable estimates of the distributions of scores and loadings. It can also...
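The basic bootstrap idea behind such limits, resampling the data with replacement and taking percentiles of the recomputed statistic, can be sketched as follows. This is a generic percentile bootstrap on a scalar statistic, not the thesis's PCA/MSPC procedure; the sample and replicate counts are assumed.

```python
import numpy as np

def bootstrap_limits(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence limits for `statistic`,
    without assuming any sampling distribution."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([statistic(data[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    return np.quantile(reps, alpha / 2), np.quantile(reps, 1 - alpha / 2)

rng = np.random.default_rng(2)
sample = rng.normal(10.0, 2.0, size=200)
lo, hi = bootstrap_limits(sample, np.mean)
```

The appeal noted in the abstract is visible here: nothing about the code depends on a distributional theory for the statistic, so the same loop applies to scores, loadings, or any monitoring indicator.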
DEFF Research Database (Denmark)
Eigaard, Ole Ritzau; Bastardie, Francois; Breen, Mike
2016-01-01
This study assesses the seabed pressure of towed fishing gears and models the physical impact (area and depth of seabed penetration) from trip-based information on vessel size, gear type, and catch. Traditionally, fishing pressures are calculated top-down by making use of large-scale statistics... An industry-based survey covering 13 countries provided the basis for estimating the relative impact-area contributions from individual gear components, whereas sediment penetration was estimated based on a literature review. For each gear group, a vessel size–gear size relationship was estimated... for the definition, estimation, and monitoring of fishing pressure indicators, which are discussed in the context of an ecosystem approach to fisheries management...
Petrie, Joshua G; Eisenberg, Marisa C; Ng, Sophia; Malosh, Ryan E; Lee, Kyu Han; Ohmit, Suzanne E; Monto, Arnold S
2017-12-15
Household cohort studies are an important design for the study of respiratory virus transmission. Inferences from these studies can be improved through the use of mechanistic models to account for household structure and risk, as an alternative to traditional regression models. We adapted a previously described individual-based transmission hazard (TH) model and assessed its utility for analyzing data from a household cohort maintained in part for the study of influenza vaccine effectiveness (VE). Households with ≥4 individuals, including ≥2 children, were included; estimates were compared with those from Cox proportional hazards (PH) models. For each individual, TH models estimated hazards of infection from the community and from each infected household contact. Influenza A(H3N2) infection was laboratory-confirmed in 58 (4%) subjects. VE estimates from both models were similarly low overall (Cox PH: 20%, 95% confidence interval: -57, 59; TH: 27%, 95% credible interval: -23, 58) and highest for children...
Aandahl, R. Zachariah; Reyes, Josephine F.; Sisson, Scott A.; Tanaka, Mark M.
2012-01-01
Variable numbers of tandem repeats (VNTR) typing is widely used for studying the bacterial cause of tuberculosis. Knowledge of the mutation rate of VNTR loci facilitates the study of the evolution and epidemiology of Mycobacterium tuberculosis. Previous studies have applied population genetic models to estimate the mutation rate, leading to estimates varying widely from around ... to ... per locus per year. Resolving this issue using more detailed models and statistical methods would lead to improved inference in the molecular epidemiology of tuberculosis. Here, we use a model-based approach that incorporates two alternative forms of a stepwise mutation process for VNTR evolution within an epidemiological model of disease transmission. Using this model in a Bayesian framework, we estimate the mutation rate of VNTR in M. tuberculosis from four published data sets of VNTR profiles from Albania, Iran, Morocco and Venezuela. In the first variant, the mutation rate increases linearly with repeat number (linear model); in the second, the mutation rate is constant across repeat numbers (constant model). We find that under the constant model the mean mutation rate per locus is ... (95% CI: ..., ...), and under the linear model the mean mutation rate per locus per repeat unit is ... (95% CI: ..., ...). These new estimates represent a high rate of mutation at VNTR loci compared to previous estimates. To compare the two models, we use posterior predictive checks to ascertain which of the two is better able to reproduce the observed data. From this procedure we find that the linear model performs better than the constant model. The general framework we use allows the possibility of extending the analysis to more complex models in the future. PMID:22761563
State of charge estimation for lithium-ion pouch batteries based on stress measurement
International Nuclear Information System (INIS)
Dai, Haifeng; Yu, Chenchen; Wei, Xuezhe; Sun, Zechang
2017-01-01
State of charge (SOC) estimation is one of the important tasks of a battery management system (BMS). In contrast to previous work, a novel method of SOC estimation for pouch lithium-ion battery cells based on stress measurement is proposed. Through a comprehensive experimental study, we find that the stress of the battery during charge/discharge is composed of a static stress and a dynamic stress. The static stress, which is the stress measured in the equilibrium state, corresponds to SOC; this phenomenon facilitates the design of our stress-based SOC estimation. The dynamic stress, on the other hand, is influenced by multiple factors including charge accumulation or depletion, current, and operation history, so a multiple regression model of the dynamic stress is established. Based on the relationship between static stress and SOC, together with the dynamic stress model, the SOC estimation method is founded. Experimental results show that the stress-based method performs well with good accuracy, and this method offers a novel perspective for SOC estimation. - Highlights: • A state of charge estimator based on stress measurement is proposed. • The stress during charge and discharge is investigated with comprehensive experiments. • Effects of SOC, current, and operation history on battery stress are studied. • A multiple regression model of the dynamic stress is established.
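The inversion step, subtracting a modelled dynamic component and then mapping the remaining static stress back to SOC through a calibration curve, can be sketched as below. The stress curve, units, and values are entirely hypothetical; the paper's regression model for the dynamic stress is not reproduced.

```python
import numpy as np

# Hypothetical calibration: static stress (kPa) measured at known SOC points,
# assumed monotone in SOC so that it can be inverted by interpolation.
soc_grid = np.linspace(0.0, 1.0, 11)
static_stress = 20 + 80 * soc_grid ** 1.5

def soc_from_stress(measured_stress, dynamic_stress_estimate):
    """Subtract the modelled dynamic component, then invert the
    static stress-SOC calibration by interpolation."""
    s = measured_stress - dynamic_stress_estimate
    return np.interp(s, static_stress, soc_grid)

soc = soc_from_stress(measured_stress=65.3, dynamic_stress_estimate=5.0)
```

Monotonicity of the static stress-SOC relationship is the key assumption: without it, the interpolation-based inversion would not return a unique SOC.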
Tensor-Based Methods for Blind Spatial Signature Estimation in Multidimensional Sensor Arrays
Directory of Open Access Journals (Sweden)
Paulo R. B. Gomes
2017-01-01
The estimation of spatial signatures and spatial frequencies is crucial for several practical applications such as radar, sonar, and wireless communications. In this paper, we propose two generalized iterative estimation algorithms for the case in which a multidimensional (R-D) sensor array is used at the receiver. The first tensor-based algorithm is an R-D blind spatial signature estimator that operates in scenarios where the sources' covariance matrix is nondiagonal and unknown. The second tensor-based algorithm is formulated for the case in which the sources are uncorrelated and exploits the dual symmetry of the covariance tensor. Additionally, a new tensor-based formulation is proposed for an L-shaped array configuration. Simulation results show that our proposed schemes outperform state-of-the-art matrix-based and tensor-based techniques.
Zhu, Jun-Wei; Yang, Guang-Hong; Zhang, Wen-An; Yu, Li
2017-10-17
This paper studies the observer-based fault-tolerant tracking control problem for linear multiagent systems with multiple faults and mismatched disturbances. A novel distributed intermediate-estimator-based fault-tolerant tracking protocol is presented. The leader's input is nonzero and unavailable to the followers. By applying a projection technique, the mismatched disturbances are separated into matched and unmatched components. For each node, a tracking error system is established, for which an intermediate estimator driven by the relative output measurements is constructed to estimate the sensor faults and a combined signal of the leader's input, process faults, and matched disturbance component. Based on the estimation, a fault-tolerant tracking protocol is designed to eliminate the effects of the combined signal. Besides, the effect of the unmatched disturbance component can be attenuated by directly adjusting some specified parameters. Finally, a simulation example of aircraft demonstrates the effectiveness of the designed tracking protocol.
Directory of Open Access Journals (Sweden)
Aihua Liu
2017-01-01
A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for coprime array configurations with holes in their virtual array. The virtual symmetric nonuniform linear array (VSNLA) of the coprime array signal model is introduced; with the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) applied only to the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are clearly not fully exploited. To effectively utilize the extent of DoFs offered by the coarray configuration, a compressive-sensing-based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimate, and a modified iterative initial-DOA-estimation-based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimate, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed method can efficiently improve the DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.
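The holes in the coprime coarray that motivate the interpolation step can be computed directly. This sketch builds a standard coprime pair and its difference coarray; the pair (3, 5) is an illustrative choice, not taken from the paper.

```python
import numpy as np

def coprime_positions(M, N):
    """Sensor positions of a coprime pair: multiples of N (M sensors)
    and multiples of M (2N sensors), in half-wavelength units."""
    return np.unique(np.concatenate([N * np.arange(M), M * np.arange(2 * N)]))

def difference_coarray(pos):
    """All pairwise lags p_i - p_j of the physical positions."""
    return np.unique((pos[:, None] - pos[None, :]).ravel())

pos = coprime_positions(M=3, N=5)          # coprime pair (3, 5)
lags = difference_coarray(pos)
nonneg = lags[lags >= 0]
holes = np.setdiff1d(np.arange(nonneg.max() + 1), nonneg)
```

Spatial smoothing MUSIC can only use the continuous run of lags before the first hole, which is exactly the DoF loss the interpolation algorithm is designed to recover.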
Parallel Factor-Based Model for Two-Dimensional Direction Estimation
Directory of Open Access Journals (Sweden)
Nizar Tayem
2017-01-01
Two-dimensional (2D) direction-of-arrival (DOA) estimation of elevation and azimuth angles for noncoherent, mixed coherent and noncoherent, and coherent sources using three extended parallel uniform linear arrays (ULAs) is proposed. Most existing schemes have drawbacks in estimating 2D DOAs for multiple narrowband incident sources: the use of a large number of snapshots, estimation failure for elevation and azimuth angles in the range typical of mobile communication, and difficulty with coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.
Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data
Directory of Open Access Journals (Sweden)
Wei-Kuang Lai
2016-02-01
Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze the signals from cellular networks, e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs), and periodic location updates (PLUs). For traffic information estimation, analytic models are proposed to estimate the traffic flow from the numbers of HOs and NLUs, and the traffic density from the numbers of CAs and PLUs. The vehicle speeds can then be estimated from the estimated traffic flows and densities. For vehicle speed forecasting, a back-propagation neural network is used to predict the future vehicle speed from the current traffic information (i.e., the vehicle speeds estimated from CFVD). In the experimental environment, this study adopted practical traffic information (i.e., traffic flow and vehicle speed) from the Taiwan Area National Freeway Bureau as the input characteristics of the traffic simulation program, and referred to mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results show that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for an intelligent transportation system.
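The flow/density/speed chain can be illustrated with the classical traffic relation speed = flow / density. The paper's analytic models are not given in the abstract, so the conversion factors below (phones per vehicle, call rate, all counts) are invented for illustration only.

```python
# Hypothetical counts over one observation interval on one road segment.
# Flow is inferred from boundary-crossing events (HOs + NLUs), density from
# in-segment events (CAs + PLUs); then speed = flow / density.

def estimate_speed(hos, nlus, cas, plus_, interval_h, segment_km,
                   phones_per_vehicle=1.2, call_rate_per_h=0.5):
    flow = (hos + nlus) / phones_per_vehicle / interval_h          # veh/h
    density = (cas + plus_) / (call_rate_per_h * interval_h
                               * phones_per_vehicle) / segment_km  # veh/km
    return flow / density                                          # km/h

v = estimate_speed(hos=110, nlus=34, cas=3, plus_=2,
                   interval_h=0.25, segment_km=2.0)
```

With these toy numbers the segment carries 480 veh/h at about 16.7 veh/km, giving roughly 28.8 km/h; in practice the calibration constants would come from operator statistics.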
SHTEREOM I: SIMPLE WINDOWS®-BASED SOFTWARE FOR STEREOLOGY. VOLUME AND NUMBER ESTIMATIONS
Directory of Open Access Journals (Sweden)
Emin Oğuzhan Oğuz
2011-05-01
Stereology was defined by Weibel (1970) as "a body of mathematical methods relating three-dimensional parameters defining the structure to two-dimensional measurements obtainable on sections of the structure." SHTEREOM I is a simple Windows-based software package for stereological estimation. In this first part, we describe the implementation of the number and volume estimation tools for unbiased design-based stereology. The software is written in Visual Basic and can be used on personal computers running Microsoft Windows® operating systems, connected to a conventional camera attached to a microscope and a microcator or a simple dial gauge. Microsoft .NET Framework version 1.1 also needs to be installed for full use. The features of the SHTEREOM I software are illustrated through examples of stereological estimations of volume and particle number at different magnifications (4X–100X). Point-counting grids are available for area estimations and for use with the most efficient volume estimation tool, the Cavalieri technique, and are applied to lizard testicle volume. An unbiased counting frame system is available for number estimations of the objects under investigation, and an on-screen manual stepping module for number estimation through the optical fractionator method is also available, measuring increments along the X and Y axes of the microscope stage for the estimation of rat brain hippocampal pyramidal neurons.
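The Cavalieri estimator mentioned above reduces to a one-line formula: the object volume is the section spacing times the area associated with each grid point times the total number of points hitting the object. The section counts and dimensions below are hypothetical, for illustration only.

```python
# Cavalieri volume estimate: V = t * a_p * sum(P), where t is the section
# spacing, a_p the area per grid point, and P the point counts per section.
def cavalieri_volume(points_per_section, t_mm, a_p_mm2):
    return t_mm * a_p_mm2 * sum(points_per_section)

# Hypothetical counts from 8 systematic sections through a small organ.
V = cavalieri_volume([3, 7, 12, 15, 14, 10, 6, 2], t_mm=0.5, a_p_mm2=0.25)
```

The estimate is unbiased as long as the first section is placed at a uniformly random offset within the spacing t, which is the design-based property the software relies on.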
Two-Step Time of Arrival Estimation for Pulse-Based Ultra-Wideband Systems
Directory of Open Access Journals (Sweden)
H. Vincent Poor
2008-05-01
In cooperative localization systems, wireless nodes need to exchange accurate position-related information, such as time-of-arrival (TOA) and angle-of-arrival (AOA) measurements, in order to obtain accurate location information. One alternative for providing accurate position-related information is to use ultra-wideband (UWB) signals. The high time resolution of UWB signals presents a potential for very accurate positioning based on TOA estimation. However, it is challenging to realize very accurate positioning systems in practical scenarios, due to both complexity/cost constraints and adverse channel conditions such as multipath propagation. In this paper, a two-step TOA estimation algorithm is proposed for UWB systems in order to provide accurate TOA estimation under practical constraints. To speed up the estimation process, the first step estimates a coarse TOA of the received signal based on the received signal energy. Then, in the second step, the arrival time of the first signal path is estimated by means of a hypothesis-testing approach. The proposed scheme uses low-rate correlation outputs and is able to perform accurate TOA estimation within reasonable time intervals. Simulation results are presented to analyze the performance of the estimator.
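The two-step structure, a coarse energy-based search followed by a fine first-path search inside the selected block, can be sketched on an idealized noiseless pulse. The simple threshold in step two stands in for the paper's hypothesis test, and the sampling rate, block size, and pulse are all assumed values.

```python
import numpy as np

def two_step_toa(signal, fs, block, threshold_ratio=0.5):
    """Step 1: coarse TOA from the first energy block above threshold.
    Step 2: fine TOA as the first sample in that block exceeding a
    fraction of the block's peak (a stand-in for the hypothesis test)."""
    energy = np.add.reduceat(signal ** 2, np.arange(0, len(signal), block))
    coarse = np.argmax(energy > threshold_ratio * energy.max()) * block
    seg = np.abs(signal[coarse:coarse + block])
    fine = coarse + np.argmax(seg > threshold_ratio * seg.max())
    return fine / fs

fs = 1e9                                   # 1 GHz sampling (assumed)
sig = np.zeros(4000)
true_idx = 1234
sig[true_idx:true_idx + 40] = 1.0          # idealized first-path pulse
toa = two_step_toa(sig, fs, block=100)
```

The coarse step only touches block energies, which is what keeps the search cheap; the fine step then examines a single block at full resolution.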
An estimation framework for building information modeling (BIM)-based demolition waste by type.
Kim, Young-Chan; Hong, Won-Hwa; Park, Jae-Woo; Cha, Gi-Wook
2017-12-01
Most existing studies on demolition waste (DW) quantification lack an official standard for estimating the amount and type of DW; the existing literature is therefore limited in estimating DW with a consistent classification system. Building information modeling (BIM) is a technology that can generate and manage all the information required during the life cycle of a building, from design to demolition. Nevertheless, there has been a lack of research regarding its application to the demolition stage of a building. For an effective waste management plan, the estimation of the type and volume of DW should begin at the building design stage; however, the lack of tools hinders early estimation. This study proposes a BIM-based framework that estimates DW in the early design stages, to achieve effective and streamlined planning, processing, and management. Specifically, the construction materials in the Korean construction classification system were matched with those in the BIM library. Based on this matching, estimates of DW by type were calculated by applying weight-per-unit-volume factors and rates of DW volume change. To verify the framework, its operation was demonstrated by means of actual BIM modeling, and its results were compared with those available in the literature. This study is expected to contribute not only to the estimation of DW at the building level, but also to the automated estimation of DW at the district level.
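The mapping step, BIM material volumes multiplied by weight-per-unit-volume factors and volume-change (bulking) rates, can be sketched as a small lookup. All factor values below are invented placeholders, not the coefficients used in the study.

```python
# Assumed conversion factors per material (placeholders, not from the paper).
WEIGHT_T_PER_M3 = {"concrete": 2.4, "brick": 1.9, "timber": 0.6}
BULKING_RATE = {"concrete": 1.4, "brick": 1.3, "timber": 1.2}

def demolition_waste(bim_volumes_m3):
    """Per-material waste mass (t) and bulked volume (m^3) from BIM volumes."""
    out = {}
    for mat, vol in bim_volumes_m3.items():
        out[mat] = {"mass_t": vol * WEIGHT_T_PER_M3[mat],
                    "volume_m3": vol * BULKING_RATE[mat]}
    return out

dw = demolition_waste({"concrete": 120.0, "brick": 45.0, "timber": 12.0})
```

In the framework described above, the volumes on the left-hand side would come directly from the BIM model's quantity take-off, which is what allows the estimate to be produced at the design stage.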
Lin, Chin-Teng; Tsai, Shu-Fang; Ko, Li-Wei
2013-10-01
Motion sickness is a common experience for many people. Several previous studies have indicated that motion sickness has a negative effect on driving performance and sometimes leads to serious traffic accidents because of a decline in a person's ability to maintain self-control. This safety issue has motivated us to find a way to prevent vehicle accidents. Our target was to determine a set of valid motion sickness indicators that would predict the occurrence of a person's motion sickness as early as possible. A successful method for the early detection of motion sickness would help us construct a cognitive monitoring system; such a system could alert people before they become sick and prevent them from being distracted by motion sickness symptoms while driving or riding in a car. In our past research, we investigated the physiological changes that occur during the transition of a passenger's cognitive state using electroencephalography (EEG) power spectrum analysis, and we found that the EEG power responses in the left and right motor, parietal, lateral occipital, and occipital midline brain areas were more highly correlated with subjective sickness levels than those in other brain areas. In this paper, we propose the use of a self-organizing neural fuzzy inference network (SONFIN) to estimate a driver's/passenger's sickness level based on EEG features extracted online from five motion-sickness-related brain areas, in either real or virtual vehicle environments. The results show that the proposed learning system is capable of extracting a set of valid motion sickness indicators from EEG dynamics, and that through SONFIN, a neuro-fuzzy prediction model, we successfully translated these indicators into motion sickness levels. The overall performance of the proposed EEG-based learning system achieves an average prediction accuracy of ~82%.
Image-Based Localization Aided Indoor Pedestrian Trajectory Estimation Using Smartphones.
Zhou, Yan; Zheng, Xianwei; Chen, Ruizhi; Xiong, Hanjiang; Guo, Sheng
2018-01-17
Accurately determining pedestrian location in indoor environments using consumer smartphones is a significant step in the development of ubiquitous localization services. Many different map-matching methods have been combined with pedestrian dead reckoning (PDR) to achieve low-cost and bias-free pedestrian tracking. However, this works only in areas with dense map constraints and the error accumulates in open areas. In order to achieve reliable localization without map constraints, an improved image-based localization aided pedestrian trajectory estimation method is proposed in this paper. The image-based localization recovers the pose of the camera from the 2D-3D correspondences between the 2D image positions and the 3D points of the scene model, previously reconstructed by a structure-from-motion (SfM) pipeline. This enables us to determine the initial location and eliminate the accumulative error of PDR when an image is successfully registered. However, the image is not always registered since the traditional 2D-to-3D matching rejects more and more correct matches when the scene becomes large. We thus adopt a robust image registration strategy that recovers initially unregistered images by integrating 3D-to-2D search. In the process, the visibility and co-visibility information is adopted to improve the efficiency when searching for the correspondences from both sides. The performance of the proposed method was evaluated through several experiments and the results demonstrate that it can offer highly acceptable pedestrian localization results in long-term tracking, with an error of only 0.56 m, without the need for dedicated infrastructures.
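The core fusion idea, in which PDR accumulates drift until a successful image registration supplies an absolute fix, can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the step detection, heading estimation, and SfM-based image registration are all assumed to happen elsewhere, and all positions and step lengths below are made up.

```python
import math

def pdr_update(pos, heading_rad, step_length_m):
    """Advance a 2-D position by one detected step (basic PDR)."""
    x, y = pos
    return (x + step_length_m * math.sin(heading_rad),
            y + step_length_m * math.cos(heading_rad))

def track(steps, image_fixes):
    """Fuse PDR steps with occasional absolute image-based fixes.

    steps: list of (heading_rad, step_length_m) per detected step.
    image_fixes: dict mapping step index -> absolute (x, y) position
                 recovered by image registration; resets PDR drift.
    """
    pos = (0.0, 0.0)
    trajectory = [pos]
    for i, (heading, length) in enumerate(steps):
        pos = pdr_update(pos, heading, length)
        if i in image_fixes:  # successful registration: drift is reset
            pos = image_fixes[i]
        trajectory.append(pos)
    return trajectory

# Walk 4 steps of 0.7 m due north; an image fix at step 2 corrects drift.
traj = track([(0.0, 0.7)] * 4, {2: (0.05, 2.1)})
```

Between fixes the error grows with each step; every registered image clamps it back to the accuracy of the image-based pose.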
Ontology-Based Representation and Reasoning in Building Construction Cost Estimation in China
Directory of Open Access Journals (Sweden)
Xin Liu
2016-08-01
Full Text Available Cost estimation is one of the most critical tasks for building construction project management. The existing building construction cost estimation methods of many countries, including China, require information from several sources, including material, labor, and equipment, and tend to be manual, time-consuming, and error-prone. To solve these problems, a building construction cost estimation model based on ontology representation and reasoning is established, which includes three major components, i.e., concept model ontology, work item ontology, and construction condition ontology. Using this model, the cost estimation information is modeled into OWL axioms and SWRL rules that leverage the semantically rich ontology representation to reason about cost estimation. Based on the OWL axioms and SWRL rules, the cost estimation information can be translated into a set of concept models, work items, and construction conditions associated with the specific construction conditions. The proposed method is demonstrated in Protégé 3.4.8 through case studies based on the Measurement Specifications of Building Construction and Decoration Engineering taken from GB 50500-2013 (the Chinese national mandatory specifications). Finally, this research discusses the limitations of the proposed method and future research directions. The proposed method can help a building construction cost estimator extract information more easily and quickly.
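The flavor of rule-based cost reasoning can be mimicked outside an ontology. The paper encodes this knowledge as OWL axioms and SWRL rules in Protégé; the plain-Python stand-in below only illustrates the idea of mapping a work item plus construction conditions to a cost, and every name and number in it is hypothetical.

```python
# Hypothetical unit costs per m2 for two work items.
WORK_ITEM_BASE_COST = {
    "brick_wall": 58.0,
    "concrete_slab": 112.0,
}

CONDITION_RULES = [
    # (predicate over conditions, multiplier) -- each pair plays the
    # role a SWRL rule plays in the paper's model.
    (lambda c: c.get("winter_construction"), 1.15),
    (lambda c: c.get("height_m", 0) > 20, 1.10),
]

def estimate_cost(item, quantity_m2, conditions):
    """Base cost times every condition multiplier whose rule fires."""
    cost = WORK_ITEM_BASE_COST[item] * quantity_m2
    for applies, factor in CONDITION_RULES:
        if applies(conditions):
            cost *= factor
    return round(cost, 2)

c = estimate_cost("brick_wall", 100.0, {"winter_construction": True})
```

An ontology adds what this sketch lacks: shared vocabulary, consistency checking, and rules that can be queried and extended without touching code.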
Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries
Directory of Open Access Journals (Sweden)
Zhongyue Zou
2014-08-01
Full Text Available Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike the existing literature, this work evaluates different aspects of the SOC estimation, such as the estimation error distribution, the estimation rise time, and the estimation time consumption. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then established to evaluate the four methods. The urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the drive situations of an electrified vehicle, and a genetic algorithm is utilized to identify the optimal parameters of the model of the Li-ion battery. The simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed according to the four aspects to evaluate the four model-based SOC estimation methods.
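One family of model-based methods combines coulomb counting as the process model with a voltage measurement as the correction, in a Kalman filter. The scalar sketch below uses a hypothetical linear OCV-SOC relation rather than the paper's full equivalent-circuit model; all constants are illustrative.

```python
CAPACITY_AS = 2.3 * 3600          # cell capacity in ampere-seconds (toy)
OCV_SLOPE, OCV_OFFSET = 0.8, 3.2  # ocv = OCV_SLOPE*soc + OCV_OFFSET (toy)

def soc_kf(soc, p, current_a, dt_s, ocv_meas, q=1e-7, r=1e-4):
    """One scalar Kalman-filter step for SOC estimation."""
    # Predict: coulomb counting (discharge current positive).
    soc_pred = soc - current_a * dt_s / CAPACITY_AS
    p_pred = p + q
    # Update with the (linearized) OCV measurement.
    h = OCV_SLOPE
    k = p_pred * h / (h * h * p_pred + r)      # Kalman gain
    soc_new = soc_pred + k * (ocv_meas - (h * soc_pred + OCV_OFFSET))
    return soc_new, (1 - k * h) * p_pred

# Start with a wrong initial SOC guess of 0.9; the OCV reading of 3.84 V
# implies a true SOC of 0.8, which the filter converges toward.
soc, p = 0.9, 1e-2
for _ in range(100):
    soc, p = soc_kf(soc, p, 1.0, 1.0, ocv_meas=3.84)
```

The paper's evaluation criteria map directly onto such a loop: how the error is distributed, how many steps the "rise" from 0.9 toward 0.8 takes, and how much computation each step costs.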
Energy Technology Data Exchange (ETDEWEB)
Irazola, L; Terron, J; Sanchez-Doblado, F [Departamento de Fisiologia Medica y Biofisica, Universidad de Sevilla (Spain); Servicio de Radiofisica, Hospital Universitario Virgen Macarena, Sevilla (Spain); Domingo, C; Romero-Exposito, M [Departament de Fisica, Universitat Autonoma de Barcelona, Bellaterra (Spain); Garcia-Fuste, M [Health and Safety Department, ALBA Synchrotron Light Source, Cerdanyola del Valles (Spain); Sanchez-Nieto, B [Instituto de Fisica, Pontificia Universidad Catolica de Chile, Santiago (Chile); Bedogni, R [Laboratori Nazionali di Frascati, Istituto Nazionale di Fisica Nucleare (INFN) (Italy)
2015-06-15
Purpose: Previous measurements with Bonner spheres [1] showed that normalized neutron spectra are equal for the majority of the existing linacs [2]. This information, in addition to the thermal neutron fluences obtained in the characterization procedure [3], would make it possible to estimate neutron doses accidentally received by exposed workers, without the need for an extra experimental measurement. Methods: Monte Carlo (MC) simulations demonstrated that the thermal neutron fluence distribution inside the bunker is quite uniform, as a consequence of multiple scattering in the walls [4]. Although the inverse square law is approximately valid for the fast component, a more precise calculation can be obtained with a generic fast-fluence distribution map around the linac, from MC simulations [4]. Thus, measurements of thermal neutron fluences performed during the characterization procedure [3], together with a generic unitary spectrum [2], allow the total neutron fluences and H*(10) to be estimated at any point [5]. As an example, we compared estimations with Bonner sphere measurements [1] for two points in five facilities: 3 Siemens (15–23 MV), Elekta (15 MV) and Varian (15 MV). Results: Thermal neutron fluences obtained from the characterization are within 0.2–1.6×10⁶ cm⁻²·Gy⁻¹ for the five studied facilities. This implies ambient dose equivalents ranging from 0.27–2.01 mSv/Gy 50 cm from the isocenter and 0.03–0.26 mSv/Gy at the detector location, with an average deviation of ±12.1% with respect to the Bonner measurements. Conclusion: The good results obtained demonstrate that neutron fluence and H*(10) can be estimated based on: (a) the characterization procedure established for patient risk estimation in each facility, (b) a generic unitary neutron spectrum and (c) a generic MC map distribution of the fast component. [1] Radiat. Meas. (2010) 45:1391–1397; [2] Phys. Med. Biol. (2012) 57:6167–6191; [3] Med. Phys. (2015) 42
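The estimation recipe described in the record (measured thermal fluence, generic unit spectrum, inverse-square-scaled fast component) amounts to simple arithmetic once fluence-to-dose coefficients are fixed. The sketch below uses entirely hypothetical coefficients and fluences, chosen only so the result lands in the reported mSv/Gy range.

```python
def ambient_dose_estimate(thermal_fluence_per_gy, thermal_coeff_psv_cm2,
                          fast_fluence_ref_per_gy, fast_coeff_psv_cm2,
                          ref_dist_cm, dist_cm):
    """H*(10) per unit photon dose at dist_cm from the isocenter.

    The fluence-to-dose coefficients (pSv*cm^2) would come from the
    generic unit spectrum; the fast fluence is scaled roughly as 1/r^2
    from a reference point (a generic MC map would refine this).
    """
    thermal = thermal_fluence_per_gy * thermal_coeff_psv_cm2
    fast = (fast_fluence_ref_per_gy * (ref_dist_cm / dist_cm) ** 2
            * fast_coeff_psv_cm2)
    return (thermal + fast) * 1e-9   # pSv per Gy -> mSv per Gy

# Hypothetical: thermal fluence 1e6 cm^-2/Gy, fast fluence 5e6 cm^-2/Gy
# at the 50 cm reference, evaluated 100 cm from the isocenter.
h10 = ambient_dose_estimate(1.0e6, 10.0, 5.0e6, 300.0, 50.0, 100.0)
```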
A survey on OFDM channel estimation techniques based on denoising strategies
Directory of Open Access Journals (Sweden)
Pallaviram Sure
2017-04-01
Full Text Available Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency domain pilot-aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS based techniques are computationally less complex and, unlike MMSE ones, do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of the channel estimator incorporating MMSE based techniques is better than that obtained with LS based techniques. To enhance the MSE performance using LS based techniques, a variety of denoising strategies have been developed in the literature, which are applied to the LS-estimated channel impulse response (CIR). The advantage of denoising threshold based LS techniques is that they do not require KCS but still render near-optimal MMSE performance. In this paper, a detailed survey of various existing denoising strategies, with a comparative discussion of these strategies, is presented.
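The LS-plus-threshold-denoising idea can be demonstrated end to end on a toy 8-subcarrier link. This is a generic sketch of threshold denoising, not any particular scheme from the surveyed literature; the channel taps, pilot values, and threshold are all made up.

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT, adequate for a toy 8-point example."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(v * cmath.exp(s * 2j * cmath.pi * i * k / n)
               for i, v in enumerate(x)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def ls_estimate(rx_pilots, tx_pilots):
    """LS channel estimate: divide received pilots by transmitted ones."""
    return [y / x for y, x in zip(rx_pilots, tx_pilots)]

def denoise_cir(h_freq, threshold):
    """Zero CIR taps below the threshold, then go back to frequency."""
    cir = dft(h_freq, inverse=True)
    cir = [tap if abs(tap) >= threshold else 0.0 for tap in cir]
    return dft(cir)

# Toy link: 2-tap channel plus one small spurious tap that stands in
# for noise leaking into the LS-estimated CIR.
true_cir = [1.0, 0.5] + [0.0] * 6
noisy_cir = list(true_cir)
noisy_cir[5] = 0.03
tx = [1.0] * 8
rx = [h * x for h, x in zip(dft(noisy_cir), tx)]
h_ls = ls_estimate(rx, tx)
h_dn = denoise_cir(h_ls, threshold=0.1)
```

The thresholding exploits the channel's sparsity in the delay domain rather than its statistics, which is exactly why these LS variants need no KCS.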
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for the subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent, while this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of the velocity estimation for models with large perturbations, as well as to guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM, and the FTIM. The estimated velocity distributions, the relative errors, and the elapsed time all demonstrate the validity of the proposed DDM.
Improved rapid magnitude estimation for a community-based, low-cost MEMS accelerometer network
Chung, Angela I.; Cochran, Elizabeth S.; Kaiser, Anna E.; Christensen, Carl M.; Yildirim, Battalgazi; Lawrence, Jesse F.
2015-01-01
Immediately following the Mw 7.2 Darfield, New Zealand, earthquake, over 180 Quake‐Catcher Network (QCN) low‐cost micro‐electro‐mechanical systems accelerometers were deployed in the Canterbury region. Using data recorded by this dense network from 2010 to 2013, we significantly improved the QCN rapid magnitude estimation relationship. The previous scaling relationship (Lawrence et al., 2014) did not accurately estimate the magnitudes of nearby (<35 km) events. The new scaling relationship estimates earthquake magnitudes within 1 magnitude unit of the GNS Science GeoNet earthquake catalog magnitudes for 99% of the events tested, within 0.5 magnitude units for 90% of the events, and within 0.25 magnitude units for 57% of the events. These magnitudes are reliably estimated within 3 s of the initial trigger recorded on at least seven stations. In this report, we present the methods used to calculate a new scaling relationship and demonstrate the accuracy of the revised magnitude estimates using a program that is able to retrospectively estimate event magnitudes using archived data.
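A scaling relationship of this kind is, at heart, a regression of catalog magnitude on amplitude and distance terms. The single-predictor sketch below is a simplified stand-in; the actual QCN relationship has its own functional form and fitted coefficients, and the synthetic "catalog" here is fabricated to follow the assumed model exactly.

```python
import math

def fit_scaling(events):
    """Ordinary least squares for M = a + b*log10(peak_amp * dist_km).

    events: list of (peak_amp, dist_km, catalog_magnitude) tuples.
    """
    xs = [math.log10(amp * dist) for amp, dist, _ in events]
    ys = [m for _, _, m in events]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Synthetic catalog generated from M = 2.0 + 1.0*log10(amp*dist).
events = [(0.01, 10.0, 1.0), (0.1, 10.0, 2.0), (1.0, 100.0, 4.0)]
a, b = fit_scaling(events)
```

Refitting such coefficients on the dense 2010-2013 Canterbury recordings is what fixed the bias the earlier relationship showed for nearby events.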
Image-based non-contact monitoring of skin texture changed by piloerection for emotion estimation
Uchida, Mihiro; Akaho, Rina; Ogawa, Keiko; Tsumura, Norimichi
2018-02-01
In this paper, we identify effective feature values of skin texture, captured by a non-contact camera, to monitor piloerection on the skin for emotion estimation. Recently, emotion estimation has been required for service robots to interact with humans more naturally. There are many studies on emotion estimation, and additional methods are required because a few methods alone may not give enough information. In a previous study, a device had to be fixed on the subject's arm to detect piloerection, but contact monitoring can itself be stressful and distract the subject from concentrating on the stimuli and evoking strong emotion. We therefore focused on piloerection, which can be observed with non-contact methods. Piloerection is observed as goose bumps on the skin when the subject is emotionally moved, scared, and so on. This phenomenon is caused by contraction of the arrector pili muscles with the activation of the sympathetic nervous system, and it changes the skin texture. Skin texture is important in the cosmetic industry for evaluating skin condition. Therefore, we expected that evaluating the condition of the skin texture would be effective for emotion estimation. The evaluations were performed by extracting feature values from skin textures captured with a high-resolution camera; effective feature values should correlate strongly with the degree of piloerection. In this paper, we found that the standard deviation of short-line inclination angles in the texture is well correlated with the degree of piloerection.
Hypersonic entry vehicle state estimation using nonlinearity-based adaptive cubature Kalman filters
Sun, Tao; Xin, Ming
2017-05-01
Guidance, navigation, and control of a hypersonic vehicle landing on the Mars rely on precise state feedback information, which is obtained from state estimation. The high uncertainty and nonlinearity of the entry dynamics make the estimation a very challenging problem. In this paper, a new adaptive cubature Kalman filter is proposed for state trajectory estimation of a hypersonic entry vehicle. This new adaptive estimation strategy is based on the measure of nonlinearity of the stochastic system. According to the severity of nonlinearity along the trajectory, the high degree cubature rule or the conventional third degree cubature rule is adaptively used in the cubature Kalman filter. This strategy has the benefit of attaining higher estimation accuracy only when necessary without causing excessive computation load. The simulation results demonstrate that the proposed adaptive filter exhibits better performance than the conventional third-degree cubature Kalman filter while maintaining the same performance as the uniform high degree cubature Kalman filter but with lower computation complexity.
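The third-degree cubature rule that the adaptive filter falls back to evaluates the Gaussian integrals with 2n symmetric points of equal weight. The sketch below is restricted to diagonal covariances to avoid a matrix square root (a full CKF uses a Cholesky factor of the dense covariance); the state, covariance, and nonlinearity are arbitrary examples.

```python
import math

def cubature_points_diag(mean, var_diag):
    """Third-degree cubature points for a diagonal covariance.

    2n points at mean +/- sqrt(n*var_i) along each axis, each with
    weight 1/(2n).
    """
    n = len(mean)
    pts = []
    for i in range(n):
        step = math.sqrt(n * var_diag[i])
        for sign in (1.0, -1.0):
            p = list(mean)
            p[i] += sign * step
            pts.append(p)
    return pts, 1.0 / (2 * n)

def propagate(mean, var_diag, f):
    """Predicted mean of f(x) under the third-degree cubature rule."""
    pts, w = cubature_points_diag(mean, var_diag)
    dim = len(f(pts[0]))
    return [w * sum(f(p)[k] for p in pts) for k in range(dim)]

# For a linear f the third-degree rule is exact: the mean maps through.
m = propagate([1.0, 2.0], [0.5, 0.5], lambda x: [2 * x[0], x[0] + x[1]])
```

The adaptive scheme in the paper switches between this 2n-point rule and a more expensive high-degree rule depending on a measured severity of nonlinearity along the trajectory.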
Directory of Open Access Journals (Sweden)
Shaolong Chen
2016-01-01
Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, the parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and performs well for parameter estimation of nonlinear systems.
Gross domestic product estimation based on electricity utilization by artificial neural network
Stevanović, Mirjana; Vujičić, Slađana; Gajić, Aleksandar M.
2018-01-01
The main goal of this paper was to estimate gross domestic product (GDP) based on electricity utilization using an artificial neural network (ANN). The electricity utilization was analyzed based on different sources such as renewable, coal, and nuclear sources. The ANN was trained with two training algorithms, namely the extreme learning method and the back-propagation algorithm, in order to produce the best prediction of the GDP. According to the results, it can be concluded that the ANN model with the extreme learning method can produce an acceptable prediction of the GDP based on the electricity utilization.
Berendes, Todd; Sengupta, Sailes K.; Welch, Ron M.; Wielicki, Bruce A.; Navar, Murgesh
1992-01-01
A semiautomated methodology is developed for estimating cumulus cloud base heights on the basis of high spatial resolution Landsat MSS data, using various image-processing techniques to match cloud edges with their corresponding shadow edges. The cloud base height is then estimated by computing the separation distance between the corresponding generalized Hough transform reference points. The differences between the cloud base heights computed by these means and a manual verification technique are of the order of 100 m or less; accuracies of 50-70 m may soon be possible via EOS instruments.
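Once a cloud edge has been matched with its shadow edge, the height estimate itself is simple trigonometry. The geometry sketched below is the standard cloud-shadow construction and an assumption on our part; the paper's method recovers the separation distance via generalized Hough transform reference points, while here it is simply given.

```python
import math

def cloud_base_height(separation_m, solar_elevation_deg):
    """Cloud base height from the horizontal cloud-to-shadow separation.

    With the sun at elevation angle e, a cloud base at height h casts
    its shadow a horizontal distance h/tan(e) away, so h = d*tan(e).
    """
    return separation_m * math.tan(math.radians(solar_elevation_deg))

h = cloud_base_height(1000.0, 45.0)   # 1 km separation, sun at 45 deg
```

At 30 m Landsat MSS-class pixels, a one-pixel error in the matched separation maps to tens of meters of height error, consistent with the 50-100 m accuracies quoted above.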
Response to health insurance by previously uninsured rural children.
Tilford, J M; Robbins, J M; Shema, S J; Farmer, F L
1999-08-01
To examine the healthcare utilization and costs of previously uninsured rural children. Four years of claims data from a school-based health insurance program located in the Mississippi Delta. All children who were not Medicaid-eligible or were uninsured were eligible for limited benefits under the program. The 1987 National Medical Expenditure Survey (NMES) was used to compare utilization of services. The study represents a natural experiment in the provision of insurance benefits to a previously uninsured population. Premiums for the claims cost were set with little or no information on expected use of services. Claims from the insurer were used to form a panel data set. Mixed model logistic and linear regressions were estimated to determine the response to insurance for several categories of health services. The use of services increased over time and approached the level of utilization in the NMES. Conditional medical expenditures also increased over time. Actuarial estimates of claims cost greatly exceeded actual claims cost. The provision of a limited medical, dental, and optical benefit package cost approximately $20-$24 per member per month in claims paid. An important uncertainty in providing health insurance to previously uninsured populations is whether a pent-up demand exists for health services. Evidence of a pent-up demand for medical services was not found in this study of rural school-age children. States considering partnerships with private insurers to implement the State Children's Health Insurance Program could lower premium costs by assembling basic data on previously uninsured children.
DEFF Research Database (Denmark)
Schur, Nadine; Hürlimann, Eveline; Garba, Amadou
2011-01-01
Schistosomiasis is a water-based disease that is believed to affect over 200 million people with an estimated 97% of the infections concentrated in Africa. However, these statistics are largely based on population re-adjusted data originally published by Utroska and colleagues more than 20 years...
A multivariate family-based association test using generalized estimating equations : FBAT-GEE
Lange, C; Silverman, SK; Xu; Weiss, ST; Laird, NM
In this paper we propose a multivariate extension of family-based association tests based on generalized estimating equations. The test can be applied to multiple phenotypes and to phenotypic data obtained in longitudinal studies without making any distributional assumptions for the phenotypic
Evaluation of Satellite Based Rainfall Estimation over Major River Basins in Africa
Bitew, M. M.; Gebremichael, M.
2012-12-01
The accuracy of satellite rainfall estimates is poorly known over Africa because of sparse ground-based observations. We examined four widely used high-resolution satellite products: the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) near-real-time product (TMPA 3B42RT), the TMPA post-real-time research version 7 (TMPA 3B42v7), the Climate Prediction Center's morphing technique (CMORPH), and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN). The main objective of the evaluation was to assess the performance of the satellite-based estimates in capturing the overall climatological patterns of rainfall over Africa at various spatio-temporal scales, and the inter-comparison of the estimates across the various climatological regimes in Africa. In the tropical, complex-terrain region of East Africa, the results show poor skill of satellite rainfall in capturing the elevation-dependent rainfall structure; microwave-based CMORPH and 3B42RT estimates provide relatively accurate estimates of rainfall in high-elevation areas but show excessive overestimation at low elevations, and merging GTS-based rain gauges with the satellite-only products deteriorated the accuracy of rainfall estimation in high-elevation areas of the Blue Nile. In this study we present the findings over seven other large and sparsely gauged river basins: the Senegal (419,659 km2), Jubba (497,655 km2), Volta (407,093 km2), Ogooue (223,656 km2), Ubangi (613,202 km2), Okavango (721,277 km2), and Kasai (925,172 km2) river basins, representing different topography and climate systems between 25° N and 25° S. The accuracy of these products is assessed using ground-based GPCC datasets and through inter-comparison among the products between 2003 and 2011 at a 25 km by 25 km and 3 h resolution. Based on these datasets we present the annual, seasonal, and monthly spatial structure of rainfall in terms of depth, rainy days
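Evaluations like this typically reduce to a few standard verification statistics computed on paired satellite-gauge series. The sketch below uses made-up numbers, and the particular metrics and rain threshold are illustrative choices, not necessarily the study's.

```python
import math

def validation_stats(satellite, gauge, rain_threshold=1.0):
    """Bias ratio, RMSE, and probability of detection (POD) for paired
    satellite and gauge rainfall series (units: mm per period)."""
    n = len(gauge)
    bias = sum(satellite) / sum(gauge)
    rmse = math.sqrt(sum((s - g) ** 2
                         for s, g in zip(satellite, gauge)) / n)
    hits = sum(1 for s, g in zip(satellite, gauge)
               if s >= rain_threshold and g >= rain_threshold)
    obs = sum(1 for g in gauge if g >= rain_threshold)
    pod = hits / obs if obs else float("nan")
    return bias, rmse, pod

sat = [0.0, 2.0, 4.0, 10.0]   # hypothetical satellite series (mm)
gg = [0.0, 2.0, 2.0, 8.0]     # hypothetical gauge series (mm)
bias, rmse, pod = validation_stats(sat, gg)
```

Aggregating such statistics by season, elevation band, and basin is what exposes the product-specific behaviors described above.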
van Dieen, J.H.; Kingma, I.
2005-01-01
Estimates of spinal forces are quite sensitive to model assumptions, especially regarding antagonistic co-contraction. Optimization based models predict co-contraction to be absent, while electromyography (EMG) based models take co-contraction into account, but usually assume equal activation of
Model-Based Load Estimation for Predictive Condition Monitoring of Wind Turbines
DEFF Research Database (Denmark)
Perisic, Nevena; Pedersen, Bo Juul; Grunnet, Jacob Deleuran
The main objective of this paper is to present a Load Observer Tool (LOT) for condition monitoring of structural extreme and fatigue loads on the main wind turbine (WTG) components. LOT uses well-known methods from system identification, state estimation and fatigue analysis in a novel approach...... for application in condition monitoring. Fatigue loads are estimated online using a load observer and grey box models which include relevant WTG dynamics. Identification of model parameters and calibration of observer are performed offline using measurements from WTG prototype. Signal processing of estimated load...... signal is performed online, and a Load Indicator Signal (LIS) is formulated as a ratio between current estimated accumulated fatigue loads and its expected value based only on a priori knowledge (WTG dynamics and wind climate). LOT initialisation is based on a priori knowledge and can be obtained using...
2-D DOA Estimation of LFM Signals Based on Dechirping Algorithm and Uniform Circle Array
Directory of Open Access Journals (Sweden)
K. B. Cui
2017-04-01
Full Text Available Based on the Dechirping algorithm and a uniform circle array (UCA), a new 2-D direction of arrival (DOA) estimation algorithm for linear frequency modulation (LFM) signals is proposed in this paper. The algorithm uses the idea of Dechirping: it regards the signal to be estimated, as received by the reference sensor, as the reference signal and performs difference-frequency processing with the signal received by each sensor, so that the signal to be estimated becomes a single-frequency signal at each sensor. Then we transform the single-frequency signal into an isolated impulse through the Fourier transform (FFT) and construct a new array data model based on the prominent parts of the impulse. Finally, we respectively use the multiple signal classification (MUSIC) algorithm and the rotational invariance technique (ESPRIT) algorithm to realize 2-D DOA estimation of the LFM signals. The simulation results verify the effectiveness of the proposed algorithm.
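The key property behind dechirping is that mixing a delayed LFM signal with the conjugate of a reference chirp leaves a single tone whose frequency encodes the delay, which is what lets an FFT collapse it into an isolated impulse. A numerical sketch (all parameters are arbitrary):

```python
import cmath

def lfm(t, f0, k):
    """Unit-amplitude LFM (chirp) sample: instantaneous f(t) = f0 + k*t."""
    return cmath.exp(2j * cmath.pi * (f0 * t + 0.5 * k * t * t))

# Dechirping: a chirp delayed by tau, multiplied by the conjugate of the
# undelayed reference chirp, becomes a pure tone at frequency -k*tau.
f0, k, tau, dt = 100.0, 5000.0, 0.01, 1e-4
n = 64
product = [lfm(i * dt - tau, f0, k) * lfm(i * dt, f0, k).conjugate()
           for i in range(n)]

# Instantaneous frequency from successive phase differences -- constant
# for a pure tone, which is what makes the FFT peak sharp.
freqs = [cmath.phase(product[i + 1] / product[i]) / (2 * cmath.pi * dt)
         for i in range(n - 1)]
```

In the array setting, each sensor's tone additionally carries a phase that depends on the sensor position, and it is that phase structure that MUSIC or ESPRIT turns into a 2-D DOA.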
Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme
DEFF Research Database (Denmark)
Li, Changgang; Zhang, Yaping; Zhang, Hengxu
2017-01-01
Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter estimation model with an adaptive data selection scheme based on measured data. The data selection scheme, defined by a time window and the number of data points, is introduced into the estimation model as additional variables to optimize. The data selection scheme is adaptively adjusted to minimize the relative...... of the proposed model. Some 500kV transmission lines from a provincial power system of China are estimated to demonstrate the applicability of the presented model. The superiority of the proposed model over fixed data selection schemes is also verified.
Age estimation by facial analysis based on applications available for smartphones.
Rezende Machado, A L; Dezem, T U; Bruni, A T; Alves da Silva, R H
2017-12-01
Forensic Dentistry has an important role in human identification cases and, among the analyses that can be performed, age estimation has an important value in establishing an anthropological profile. Modern technology offers new mechanisms for age estimation, such as software apps based on special algorithms, which avoid the interference of personal knowledge and cultural and personal experience in facial recognition. This research evaluated the use of two different apps, "How Old Do I Look? - Age Camera" and "How Old Am I? - Age Camera, Do You Look Like in Selfie Face Pic?", for age estimation in a sample of 100 people (50 females and 50 males). Univariate and multivariate statistical methods were used to evaluate the data. Great reliability was seen when the apps were used for the male volunteers. However, for females, no equivalence was found between the real age and the estimated age. These applications presented satisfactory results as an auxiliary method for male images.
DOA Estimation Based on Real-Valued Cross Correlation Matrix of Coprime Arrays.
Li, Jianfeng; Wang, Feng; Jiang, Defu
2017-03-20
A fast direction of arrival (DOA) estimation method using a real-valued cross-correlation matrix (CCM) of coprime subarrays is proposed. Firstly, real-valued CCM with extended aperture is constructed to obtain the signal subspaces corresponding to the two subarrays. By analysing the relationship between the two subspaces, DOA estimations from the two subarrays are simultaneously obtained with automatic pairing. Finally, unique DOA is determined based on the common results from the two subarrays. Compared to partial spectral search (PSS) method and estimation of signal parameter via rotational invariance (ESPRIT) based method for coprime arrays, the proposed algorithm has lower complexity but achieves better DOA estimation performance and handles more sources. Simulation results verify the effectiveness of the approach.
Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol
2017-06-01
We introduce a sequential importance sampling particle filter (PF)-based multisensor multivariate nonlinear estimator for estimating the in-core neutron flux distribution of a pressurized heavy water reactor core. Many critical applications such as reactor protection and control rely upon neutron flux information, and thus their reliability is of utmost importance. The point kinetic model based on neutron transport conveniently explains the dynamics of a nuclear reactor. The neutron flux in a large, loosely coupled reactor core is sensed by multiple sensors measuring point fluxes at various locations inside the core. The flux values are coupled to each other through the diffusion equation, and this coupling provides redundancy in the information. It is shown that multiple independent data about the localized flux can be fused together to enhance the estimation accuracy to a great extent. We also propose a sensor anomaly handling feature in the multisensor PF to maintain the estimation process even when a sensor is faulty or generates anomalous data.
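A sequential importance resampling PF that fuses several point sensors can be sketched for a scalar state. This toy filter only illustrates the propagate-weight-resample cycle and multisensor weighting; the paper's estimator is multivariate, models reactor kinetics, and adds anomaly handling, and every number below is fabricated.

```python
import math, random

def pf_step(particles, process_noise, sensors, sensor_sigma, rng):
    """One sequential-importance-resampling step for a scalar state.

    Each particle is propagated with process noise, weighted by the
    joint Gaussian likelihood of all sensor readings (multisensor
    fusion), then resampled (stratified resampling).
    """
    moved = [p + rng.gauss(0.0, process_noise) for p in particles]
    weights = []
    for p in moved:
        logw = sum(-0.5 * ((z - p) / sensor_sigma) ** 2 for z in sensors)
        weights.append(math.exp(logw))
    total = sum(weights)
    weights = [w / total for w in weights]
    n = len(moved)
    positions = [(i + rng.random()) / n for i in range(n)]
    cum, j, out = weights[0], 0, []
    for pos in positions:
        while pos > cum and j < n - 1:
            j += 1
            cum += weights[j]
        out.append(moved[j])
    return out

rng = random.Random(7)
particles = [rng.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(30):              # three sensors all reading near 4.0
    particles = pf_step(particles, 0.1, [3.9, 4.0, 4.1], 0.5, rng)
estimate = sum(particles) / len(particles)
```

The joint likelihood over all sensors is where the redundancy from the diffusion coupling pays off: a single anomalous reading can be down-weighted or dropped without losing the estimate.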
Tumor size and elasticity estimation using Smartphone-based Compression-Induced scope.
Won, C-H; Goldstein, Jesse; Oleksyuk, Vira; Caroline, Dina; Pascarella, Suzanne
2017-07-01
A simple-to-use, noninvasive, and risk-free system that provides accurate identification of potentially life-threatening malignant tumors using tactile pressure is developed. The Smartphone-based Compression-Induced (SCI) Scope allows physicians to quickly capture the mechanical properties of a benign or malignant tumor with the convenience of a smartphone platform. The size and elasticity properties are estimated from the pressure-induced images of the SCI Scope. The device is based on the Apple iPhone 6. The image is captured through a waveguide, and the image information, in combination with the force sensor value, is transmitted wirelessly to a computer for processing. The size and elasticity estimation experiments with the SCI Scope showed a size estimation error of 2.31% and an estimated relative elastic modulus error of 23.9%.
Hybrid fuzzy charged system search algorithm based state estimation in distribution networks
Directory of Open Access Journals (Sweden)
Sachidananda Prasad
2017-06-01
Full Text Available This paper proposes a new hybrid charged system search (CSS) algorithm based state estimation for radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantities. The proposed method of state estimation considers bus voltage magnitude and phase angle as state variables, along with some equality and inequality constraints, for state estimation in distribution networks. A rule-based fuzzy inference system has been designed to control the parameters of the CSS algorithm to achieve a better balance between the exploration and exploitation capabilities of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and an Indian 85-bus practical radial distribution system. The obtained results have been compared with the conventional CSS algorithm, the weighted least square (WLS) algorithm and particle swarm optimization (PSO) to show the feasibility of the algorithm.
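The underlying objective is weighted least squares. For a linear measurement model the minimizer is available in closed form via the normal equations, which makes a useful reference point for what CSS/FACSS searches for in the nonlinear, constrained case. A two-state sketch with hypothetical measurements:

```python
def wls_estimate(h_rows, z, w):
    """Minimize sum_i w_i*(z_i - h_i . x)^2 for a two-element state x
    by solving the 2x2 normal equations (H^T W H) x = H^T W z."""
    a11 = sum(wi * h[0] * h[0] for h, wi in zip(h_rows, w))
    a12 = sum(wi * h[0] * h[1] for h, wi in zip(h_rows, w))
    a22 = sum(wi * h[1] * h[1] for h, wi in zip(h_rows, w))
    b1 = sum(wi * h[0] * zi for h, zi, wi in zip(h_rows, z, w))
    b2 = sum(wi * h[1] * zi for h, zi, wi in zip(h_rows, z, w))
    det = a11 * a22 - a12 * a12
    return [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

# Three measurements of a hypothetical 2-element state [v1, v2]:
# v1 directly, v2 directly, and the difference v1 - v2.
H = [[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]]   # measurement Jacobian rows
z = [1.02, 0.98, 0.04]                      # measured values (p.u., toy)
x = wls_estimate(H, z, w=[1.0, 1.0, 1.0])
```

Real distribution-network measurement functions are nonlinear in the voltage angles, which is why metaheuristics such as CSS are attractive despite the linear case having a direct solution.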
Systematic error mitigation in multi-GNSS positioning based on semiparametric estimation
Yu, Wenkun; Ding, Xiaoli; Dai, Wujiao; Chen, Wu
2017-12-01
Joint use of observations from multiple global navigation satellite systems (GNSS) is advantageous in high-accuracy positioning. However, systematic errors in the observations can significantly impact the positioning accuracy if such errors cannot be properly mitigated. The errors can distort least squares estimations and also affect the results of variance component estimation, which is frequently used to determine the stochastic model when observations from multiple GNSS are used. We present an approach based on the concept of semiparametric estimation for mitigating the effects of the systematic errors. Experimental results based on both simulated and real GNSS datasets show that the approach is effective, especially when applied before carrying out variance component estimation.
Inhyuck "Steve" Ha; Jessica Hollars Wisniewski
2011-01-01
This pedagogical method to estimate the unemployment rate is appropriate for an undergraduate course in macroeconomics. Class instructors can use the experiment to make many macroeconomic principles readily apparent to the unskilled reader. This experiment examines the dynamics of calculating the unemployment rate by means of an assessment instrument. Students can learn that the unemployment rate is calculated using estimates of the size of the labor force, which includes individuals who are ...
Impact of Base Functional Component Types on Software Functional Size based Effort Estimation
Gencel, Cigdem; Buglione, Luigi
2008-01-01
Software effort estimation is still a significant challenge for software management. Although Functional Size Measurement (FSM) methods have been standardized and have become widely used by the software organizations, the relationship between functional size and development effort still needs further investigation. Most of the studies focus on the project cost drivers and consider total software functional size as the primary input to estimation models. In this study, we investigate whether u...
Satellite-based ET estimation using Landsat 8 images and SEBAL model
Directory of Open Access Journals (Sweden)
Bruno Bonemberger da Silva
Full Text Available ABSTRACT Estimation of evapotranspiration is a key factor in achieving sustainable water management in irrigated agriculture because it represents the water use of crops. Satellite-based estimation provides advantages over direct methods such as lysimeters, especially when the objective is to calculate evapotranspiration at a regional scale. The present study aimed to estimate actual evapotranspiration (ET) at a regional scale, using Landsat 8 - OLI/TIRS images and complementary data collected from a weather station. The SEBAL model was used in South-West Paraná, a region composed of irrigated and dry agricultural areas, native vegetation and urban areas. Five Landsat 8 images, path 223 and row 78, DOY 336/2013, 19/2014, 35/2014, 131/2014 and 195/2014, were used, from which ET at a daily scale was estimated as a residual of the surface energy balance to produce ET maps. The steps for obtaining ET using SEBAL include radiometric calibration and calculation of reflectance, surface albedo, vegetation indexes (NDVI, SAVI and LAI) and emissivity. These parameters were obtained from the reflective bands of the orbital sensor, with surface temperature estimated from the thermal band. The ET values estimated in agricultural areas, native vegetation and urban areas using the SEBAL algorithm were compatible with those reported in the literature, and ET errors between the SEBAL estimates and the Penman-Monteith FAO 56 equation were less than or equal to 1.00 mm day-1.
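The vegetation-index step of the pipeline is straightforward to sketch. NDVI and SAVI below use the standard formulas on red and near-infrared surface reflectances; the soil-adjustment factor L = 0.5 is a common default, not a value stated in the abstract:

```python
import numpy as np

# Vegetation indices computed as intermediate SEBAL products.
# Inputs are surface reflectances in [0, 1] for the red and NIR bands.
def ndvi(red, nir):
    return (nir - red) / (nir + red)

def savi(red, nir, L=0.5):
    # L = 0.5 is the usual soil-adjustment factor (an assumption here)
    return (1.0 + L) * (nir - red) / (nir + red + L)

red = np.array([0.10, 0.25])
nir = np.array([0.50, 0.30])
print(ndvi(red, nir), savi(red, nir))
```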
Confidence interval based parameter estimation--a new SOCR applet and activity.
Directory of Open Access Journals (Sweden)
Nicolas Christou
Full Text Available Many scientific investigations depend on obtaining data-driven, accurate, robust and computationally-tractable parameter estimates. In the face of unavoidable intrinsic variability, there are different algorithmic approaches, prior assumptions and fundamental principles for computing point and interval estimates. Efficient and reliable parameter estimation is critical for making inference about observable experiments, summarizing process characteristics and predicting experimental behaviors. In this manuscript, we demonstrate simulation, construction, validation and interpretation of confidence intervals, under various assumptions, using the interactive web-based tools provided by the Statistics Online Computational Resource (http://www.SOCR.ucla.edu). Specifically, we present confidence interval examples for population means, with known or unknown population standard deviation; population variance; population proportion (exact and approximate), as well as confidence intervals based on bootstrapping or the asymptotic properties of the maximum likelihood estimates. Like all SOCR resources, these confidence interval resources may be openly accessed via an Internet-connected Java-enabled browser. The SOCR confidence interval applet enables the user to empirically explore and investigate the effects of the confidence level, the sample size and the parameter of interest on the corresponding confidence interval. Two applications of the new interval estimation computational library are presented. The first is a simulation of a confidence interval estimating the US unemployment rate, and the second demonstrates the computation of point and interval estimates of hippocampal surface complexity for Alzheimer's disease patients, mild cognitive impairment subjects and asymptomatic controls.
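As a minimal illustration of one case the applet covers, a 95% z-interval for a population mean with known standard deviation can be computed as follows (for unknown sigma a t-quantile would replace the z-quantile):

```python
import math
from statistics import NormalDist

# Two-sided z confidence interval for a population mean with known sigma.
def mean_ci(xbar, sigma, n, level=0.95):
    z = NormalDist().inv_cdf(0.5 + level / 2.0)   # e.g. 1.96 for 95%
    half = z * sigma / math.sqrt(n)
    return xbar - half, xbar + half

lo, hi = mean_ci(xbar=10.0, sigma=2.0, n=100, level=0.95)
print(lo, hi)
```

Increasing the confidence level widens the interval; increasing n narrows it, which is exactly the behavior the applet lets users explore empirically.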
Blood velocity estimation using spatio-temporal encoding based on frequency division approach
DEFF Research Database (Denmark)
Gran, Fredrik; Nikolov, Svetoslav; Jensen, Jørgen Arendt
2005-01-01
In this paper a feasibility study of using a spatial encoding technique based on frequency division for blood flow estimation is presented. The spatial encoding is carried out by dividing the available bandwidth of the transducer into a number of narrow frequency bands with approximately disjoint...... spectral support. By assigning one band to one virtual source, all virtual sources can be excited simultaneously. The received echoes are beamformed using Synthetic Transmit Aperture beamforming. The velocity of the moving blood is estimated using a cross-correlation estimator. The simulation tool Field...
Savaux, Vincent
2014-01-01
This book presents an algorithm for the detection of an orthogonal frequency division multiplexing (OFDM) signal in a cognitive radio context by means of a joint and iterative channel and noise estimation technique. Based on the minimum mean square criterion, it performs an accurate detection of a user in a frequency band, by achieving a quasi-optimal channel and noise variance estimation if the signal is present, and by estimating the noise level in the band if the signal is absent. Organized into three chapters, the first chapter provides the background against which the system model is pr
Robust estimation of autoregressive processes using a mixture-based filter-bank
Czech Academy of Sciences Publication Activity Database
Šmídl, V.; Anthony, Q.; Kárný, Miroslav; Guy, Tatiana Valentine
2005-01-01
Roč. 54, č. 4 (2005), s. 315-323 ISSN 0167-6911 R&D Projects: GA AV ČR IBS1075351; GA ČR GA102/03/0049; GA ČR GP102/03/P010; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian estimation * probabilistic mixtures * recursive estimation Subject RIV: BC - Control Systems Theory Impact factor: 1.239, year: 2005 http://library.utia.cas.cz/separaty/historie/karny-robust estimation of autoregressive processes using a mixture-based filter- bank .pdf
Estimation about Active Phase of Trajectory Based on the Satellite Passive Detection
Directory of Open Access Journals (Sweden)
ZHOU Juan; ZHONG Maosheng
2014-12-01
Full Text Available To estimate a spacecraft trajectory, the observation satellite position must be calculated in advance. The simplified equation of motion of the observation satellite is solved as a second-order differential equation. Through MATLAB simulation and interpolation of the existing tabulated data, the satellite's three-dimensional position at different moments is obtained. The trajectory of the spacecraft is then estimated by the point-wise intersection positioning method, yielding the position and velocity of the spacecraft at a given time. Finally, simulated annealing is applied to the mathematical model of the spacecraft trajectory, and the rationality of the estimate is analyzed.
DEFF Research Database (Denmark)
Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens
2016-01-01
This study presents new group contribution (GC) models for the prediction of Lower and Upper Flammability Limits (LFL and UFL), Flash Point (FP) and Auto Ignition Temperature (AIT) of organic chemicals applying the Marrero/Gani (MG) method. Advanced methods for parameter estimation using robust...... regression and outlier treatment have been applied to achieve high accuracy. Furthermore, linear error propagation based on covariance matrix of estimated parameters was performed. Therefore, every estimated property value of the flammability-related properties is reported together with its corresponding 95...
Combining the boundary shift integral and tensor-based morphometry for brain atrophy estimation
Michalkiewicz, Mateusz; Pai, Akshay; Leung, Kelvin K.; Sommer, Stefan; Darkner, Sune; Sørensen, Lauge; Sporring, Jon; Nielsen, Mads
2016-03-01
Brain atrophy from structural magnetic resonance images (MRIs) is widely used as an imaging surrogate marker for Alzheimer's disease. Its utility has been limited due to the large degree of variance and the subsequently high sample size estimates. The only consistent and reasonably powerful atrophy estimation method has been the boundary shift integral (BSI). In this paper, we first propose a tensor-based morphometry (TBM) method to measure voxel-wise atrophy, which we combine with BSI. The combined model decreases the sample size estimates significantly when compared to BSI and TBM alone.
Model-based parameter estimation using cardiovascular response to orthostatic stress
Heldt, T.; Shim, E. B.; Kamm, R. D.; Mark, R. G.
2001-01-01
This paper presents a cardiovascular model capable of simulating the short-term response to gravitational stress, together with a gradient-based optimization method that allows for the automated estimation of model parameters from simulated or experimental data. We perform a sensitivity analysis of the transient heart rate response to determine which parameters of the model impact the heart rate dynamics significantly. We subsequently include in the estimation routine only those parameters that impact the transient heart rate dynamics substantially. We apply the estimation algorithm to both simulated and real data and show that restriction to the 20 most important parameters does not impair our ability to match the data.
A theory of timing in scintillation counters based on maximum likelihood estimation
International Nuclear Information System (INIS)
Tomitani, Takehiro
1982-01-01
A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)
A Fast Algorithm for Maximum Likelihood-based Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2015-01-01
including a maximum likelihood (ML) approach. Unfortunately, the ML estimator has a very high computational complexity, and the more inaccurate, but faster correlation-based estimators are therefore often used instead. In this paper, we propose a fast algorithm for the evaluation of the ML cost function...... for complex-valued data over all frequencies on a Fourier grid and up to a maximum model order. The proposed algorithm significantly reduces the computational complexity to a level not far from the complexity of the popular harmonic summation method which is an approximate ML estimator....
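A bare-bones harmonic summation estimator, the approximate ML method mentioned above, can be sketched as follows; the candidate grid, zero-padding factor, and model order are illustrative choices, not the paper's:

```python
import numpy as np

# Harmonic summation: score each candidate f0 by summing periodogram
# values at its first `order` harmonics, then pick the best candidate.
def harmonic_summation(x, fs, f0_grid, order):
    N = len(x)
    spec = np.abs(np.fft.rfft(x, n=8 * N)) ** 2          # zero-padded periodogram
    freqs = np.fft.rfftfreq(8 * N, d=1.0 / fs)
    scores = []
    for f0 in f0_grid:
        bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, order + 1)]
        scores.append(spec[bins].sum())
    return f0_grid[int(np.argmax(scores))]

fs = 8000.0
t = np.arange(1024) / fs
# Synthetic harmonic signal with fundamental 200 Hz and three harmonics
x = sum(np.cos(2 * np.pi * 200.0 * k * t) for k in range(1, 4))
f0_hat = harmonic_summation(x, fs, np.arange(50.0, 400.0, 1.0), order=3)
```

The grid search over all candidates is what the fast algorithm in the paper accelerates; this direct version simply makes the cost function explicit.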
Taasti, Vicki T.; Michalak, Gregory J.; Hansen, David C.; Deisher, Amanda J.; Kruse, Jon J.; Krauss, Bernhard; Muren, Ludvig P.; Petersen, Jørgen B. B.; McCollough, Cynthia H.
2018-01-01
Dual energy CT (DECT) has been shown, in theoretical and phantom studies, to improve the stopping power ratio (SPR) determination used for proton treatment planning compared to the use of single energy CT (SECT). However, it has not been shown that this also extends to organic tissues. The purpose of this study was therefore to investigate the accuracy of SPR estimation for fresh pork and beef tissue samples used as surrogates of human tissues. The reference SPRs for fourteen tissue samples, which included fat, muscle and femur bone, were measured using proton pencil beams. The tissue samples were subsequently CT scanned using four different scanners with different dual energy acquisition modes, giving in total six DECT-based SPR estimations for each sample. The SPR was estimated using a proprietary algorithm (syngo.via DE Rho/Z Maps, Siemens Healthcare, Forchheim, Germany) for extracting the electron density and the effective atomic number. SECT images were also acquired and SECT-based SPR estimations were performed using a clinical Hounsfield look-up table. The mean and standard deviation of the SPR over large volumes-of-interest were calculated. For the six different DECT acquisition methods, the root-mean-square errors (RMSEs) for the SPR estimates over all tissue samples were between 0.9% and 1.5%. For the SECT-based SPR estimation the RMSE was 2.8%. For one DECT acquisition method, a positive bias was seen in the SPR estimates, with a mean error of 1.3%. The largest errors were found in the very dense cortical bone from a beef femur. This study confirms the advantages of DECT-based SPR estimation, although good results were also obtained using SECT for most tissues.
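A hedged sketch of how relative electron density and a mean excitation energy I (the latter derivable from the effective atomic number) map to an SPR via the Bethe equation. The water I-value of 75 eV and the 200 MeV proton energy are assumed values; the proprietary algorithm's actual formula may differ:

```python
import math

ME_C2 = 0.511e6      # electron rest energy [eV]
I_WATER = 75.0       # assumed mean excitation energy of water [eV]

def beta_sq(kinetic_mev, mp_c2=938.272):
    # Relativistic beta^2 of a proton with the given kinetic energy [MeV]
    return 1.0 - (mp_c2 / (kinetic_mev + mp_c2)) ** 2

def spr(rho_e, i_ev, kinetic_mev=200.0):
    # Bethe stopping power of the medium divided by that of water,
    # scaled by the relative electron density rho_e.
    b2 = beta_sq(kinetic_mev)
    num = math.log(2 * ME_C2 * b2 / (i_ev * (1 - b2))) - b2
    den = math.log(2 * ME_C2 * b2 / (I_WATER * (1 - b2))) - b2
    return rho_e * num / den

# Hypothetical muscle-like tissue: rho_e and I are illustrative numbers
muscle_spr = spr(rho_e=1.04, i_ev=72.0)
```

Water-equivalent inputs (rho_e = 1, I = 75 eV) return an SPR of exactly 1, which is a useful sanity check on any such implementation.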
Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model
Shulgin, A.
2015-01-01
Optimization of the coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule allow the volatility estimated on Russian data for 2001-2012 to be decreased by about 20%. The degree of the exchange rate flexibility parameter was found to be low...
Directory of Open Access Journals (Sweden)
Yunsick Sung
2016-11-01
Full Text Available Given that location information is the key to providing a variety of services in sustainable indoor computing environments, accurate locations are required. Locations can be estimated from three distances to three fixed points. Therefore, if the distance between two points can be measured or estimated accurately, the location in indoor environments can be estimated. To increase the accuracy of the measured distance, noise filtering, signal revision, and distance estimation processes are generally performed. This paper proposes a novel framework for estimating the distance between a beacon and an access point (AP) in a sustainable indoor computing environment. Diverse types of received signal strength indications (RSSIs) are used for WiFi, Bluetooth, and radio signals, and the proposed distance estimation framework is unique in that it is independent of the specific wireless signal involved, being based on the Bluetooth signal of the beacon. Generally, RSSI measurement, noise filtering, and revision are required for distance estimation using RSSIs. The employed RSSIs are first measured from an AP, with multiple APs sometimes used to increase the accuracy of the distance estimation. Owing to the inevitable presence of noise in the measured RSSIs, noise filtering is essential, and further revision is used to address the inaccuracy and instability that characterize RSSIs measured in an indoor environment. The revised RSSIs are then used to estimate the distance. The proposed distance estimation framework uses one AP to measure the RSSIs, a Kalman filter to eliminate noise, and a log-distance path loss model to revise the measured RSSIs. In the experimental implementation of the framework, an RSSI filter and a Kalman filter were each used for noise elimination to comparatively evaluate the performance of the latter for this application. The Kalman filter was found to reduce the accumulated errors by 8
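The framework's two numeric pieces, Kalman filtering of the RSSIs and the log-distance path loss model, can be sketched as below; the calibration constants (RSSI at 1 m, path-loss exponent) are assumed values, not taken from the paper:

```python
# Scalar Kalman filter (random-walk state model) to smooth noisy RSSIs,
# followed by the log-distance path-loss model mapping RSSI to distance.
def kalman_smooth(rssis, q=0.05, r=4.0):
    x, p = rssis[0], 1.0
    out = []
    for z in rssis:
        p += q                    # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain against measurement noise r
        x += k * (z - x)          # update with the new RSSI reading
        p *= (1 - k)
        out.append(x)
    return out

def rssi_to_distance(rssi, rssi_d0=-59.0, n=2.0):
    # rssi_d0: assumed RSSI at 1 m; n: assumed path-loss exponent
    return 10 ** ((rssi_d0 - rssi) / (10 * n))

smoothed = kalman_smooth([-69.1, -70.3, -68.7, -69.5])
d = rssi_to_distance(smoothed[-1])   # distance in metres
```

With these constants, an RSSI of -59 dBm maps to 1 m and every 20 dBm drop multiplies the distance by 10, which is the usual behavior of the log-distance model.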
Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.
Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong
2018-02-13
Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameter estimation based on the adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both the one-dimensional and multi-dimensional search and error propagation problems, which are widespread in the PPS field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by the S-transform (ST), which can preserve information on the signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by principal component analysis (PCA), which is robust to noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate the signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.
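A much-simplified stand-in for the IF-tracking idea: a fixed-window STFT peak picker applied to a linear chirp. The paper instead adapts the window length via the IF gradient and uses the S-transform; window and hop sizes here are illustrative:

```python
import numpy as np

# Estimate the instantaneous frequency per frame as the STFT peak bin.
def if_track(x, fs, win=256, hop=64):
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    est = []
    for start in range(0, len(x) - win + 1, hop):
        frame = x[start:start + win] * np.hanning(win)
        est.append(freqs[np.argmax(np.abs(np.fft.rfft(frame)))])
    return np.array(est)

fs = 8000.0
t = np.arange(4096) / fs
# Linear chirp: IF(t) = 500 + 2000 t  (a degree-2 polynomial phase signal)
x = np.cos(2 * np.pi * (500.0 * t + 0.5 * 2000.0 * t ** 2))
if_hat = if_track(x, fs)
```

The recovered IF ramps from roughly 530 Hz in the first frame to roughly 1490 Hz in the last, tracking the chirp to within the fixed window's frequency resolution; the adaptive window in the paper sharpens exactly this trade-off.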
Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao
2018-01-01
In a low-light scene, capturing color images requires a high-gain or long-exposure setting to avoid a visible flash. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. In one novel method, the luminance and chroma components of the improved color image are estimated from different image sources [1]: the luminance component mainly from the NIR image via spectral estimation, and the chroma component from the noisy color image by denoising. However, estimating the luminance component is challenging: this method needs to generate learning data pairs, and its processes and algorithm are complex, which makes practical application difficult. In order to reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that the same fusion quality in terms of color fidelity and texture is achieved compared with the previous method, while the algorithm is simpler and more practical.
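The abstract does not give the exact weighting formula; one plausible reading of a mean/std-based blend is sketched below, as an assumed form for illustration only:

```python
import numpy as np

# Assumed interpretation: normalize each source to zero mean / unit std,
# blend, then restore the denoised color image's mean and std so the
# fused luminance stays photometrically consistent with the color image.
def fuse_luminance(nir, y_denoised, alpha=0.5):
    zn = (nir - nir.mean()) / nir.std()
    zy = (y_denoised - y_denoised.mean()) / y_denoised.std()
    z = alpha * zn + (1 - alpha) * zy
    return z / z.std() * y_denoised.std() + y_denoised.mean()

nir = np.random.default_rng(0).uniform(0.0, 1.0, (8, 8))
y = 0.5 * nir + 0.1          # toy denoised luminance correlated with NIR
fused = fuse_luminance(nir, y)
```

By construction the fused result keeps the denoised image's mean and standard deviation while borrowing texture detail from the NIR source.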
Satellite angular velocity estimation based on star images and optical flow techniques.
Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele
2013-09-25
An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial-off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than the other two components.
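The least-squares step can be illustrated with the small-rotation model du ≈ (u × ω) dt for unit star direction vectors u in the sensor frame; sign conventions, calibration, and outlier handling are simplified assumptions here:

```python
import numpy as np

def skew(v):
    # [v]_x matrix so that skew(v) @ w == np.cross(v, w)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Stack du_i = (u_i x omega) * dt = skew(u_i) * dt @ omega over all stars
# and solve the overdetermined linear system for the angular rate.
def estimate_omega(u_before, u_after, dt):
    A = np.vstack([skew(u0) * dt for u0 in u_before])
    b = np.hstack([u1 - u0 for u0, u1 in zip(u_before, u_after)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

rng = np.random.default_rng(1)
stars = [v / np.linalg.norm(v) for v in rng.normal(size=(6, 3))]
omega_true = np.array([0.01, -0.02, 0.005])          # rad/s
dt = 0.1
# First-order propagation of the star vectors under the true rate
moved = [u + np.cross(u, omega_true) * dt for u in stars]
omega_hat = estimate_omega(stars, moved, dt)
```

Each star contributes a rank-2 constraint (motion along the boresight-parallel component of u is unobservable from that star alone), which is consistent with the paper's finding that the boresight rate component is the least accurate.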
PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS
Directory of Open Access Journals (Sweden)
Y. Dehbi
2017-09-01
Full Text Available This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
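The AIC/BIC selection step can be illustrated with a minimal example: Gaussian-residual BIC comparing two competing polynomial parameterizations on illustrative data, not the paper's floorplan model:

```python
import numpy as np

# BIC for a least-squares fit under Gaussian residuals:
# n * log(RSS / n) + k * log(n), where k is the parameter count.
def bic(residuals, k):
    n = len(residuals)
    rss = float(np.sum(np.square(residuals)))
    return n * np.log(rss / n) + k * np.log(n)

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.01 * np.sin(40 * x)   # nearly linear data
scores = {}
for degree in (1, 3):
    coef = np.polyfit(x, y, degree)
    scores[degree] = bic(y - np.polyval(coef, x), degree + 1)
best = min(scores, key=scores.get)           # model with the lowest BIC
```

The cubic fit barely reduces the residual sum of squares, so its extra parameters are penalized and the linear model is selected; the same trade-off governs selecting among competing floorplan hypotheses.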