WorldWideScience

Sample records for denominator problem estimating

  1. Measuring HPV vaccination coverage in Australia: comparing two alternative population-based denominators.

    Science.gov (United States)

    Barbaro, Bianca; Brotherton, Julia M L

    2015-08-01

    To compare the use of two alternative population-based denominators in calculating HPV vaccine coverage in Australia by age group, jurisdiction and remoteness area. Data from the National HPV Vaccination Program Register (NHVPR) were analysed at Local Government Area (LGA) level, by state/territory and by the Australian Standard Geographical Classification Remoteness Structure. The proportion of females vaccinated was calculated using both the Australian Bureau of Statistics (ABS) estimated resident population (ERP) and Medicare enrolments as the denominator. HPV vaccine coverage estimates were slightly higher using Medicare enrolments than using the ABS ERP nationally (70.8% versus 70.4% for 12 to 17-year-old females, and 33.3% versus 31.9% for 18 to 26-year-old females, respectively). The greatest differences in coverage were found in the remote areas of Australia. There is minimal difference between coverage estimates made using the two denominators, except in Remote and Very Remote areas, where small residential populations make interpretation more difficult. Adoption of Medicare enrolments as the denominator in the ongoing program would make minimal, if any, difference to routine coverage estimates.
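
    The comparison reduces to one numerator divided by two different population denominators. A minimal sketch with made-up counts (only the national percentages above come from the abstract, not the counts below):

```python
# Hypothetical counts illustrating the two-denominator coverage comparison.
def coverage(vaccinated: int, denominator: int) -> float:
    """Vaccination coverage as a percentage of the chosen denominator."""
    return 100.0 * vaccinated / denominator

vaccinated_12_17 = 620_000   # hypothetical NHVPR numerator
abs_erp_12_17 = 880_000      # hypothetical ABS estimated resident population
medicare_12_17 = 875_000     # hypothetical Medicare enrolment count

print(f"coverage vs ABS ERP:  {coverage(vaccinated_12_17, abs_erp_12_17):.1f}%")
print(f"coverage vs Medicare: {coverage(vaccinated_12_17, medicare_12_17):.1f}%")
```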

  2. Two denominators for one numerator: the example of neonatal mortality.

    Science.gov (United States)

    Harmon, Quaker E; Basso, Olga; Weinberg, Clarice R; Wilcox, Allen J

    2018-06-01

    Preterm delivery is one of the strongest predictors of neonatal mortality. A given exposure may increase neonatal mortality directly, or indirectly by increasing the risk of preterm birth. Efforts to assess these direct and indirect effects are complicated by the fact that neonatal mortality arises from two distinct denominators (i.e. two risk sets). One risk set comprises fetuses, susceptible to intrauterine pathologies (such as malformations or infection), which can result in neonatal death. The other risk set comprises live births, who (unlike fetuses) are susceptible to problems of immaturity and complications of delivery. In practice, fetal and neonatal sources of neonatal mortality cannot be separated: not only because of incomplete information, but because risks from both sources can act on the same newborn. We use simulations to assess the repercussions of this structural problem. We first construct a scenario in which fetal and neonatal factors contribute separately to neonatal mortality. We introduce an exposure that increases risk of preterm birth (and thus neonatal mortality) without affecting the two baseline sets of neonatal mortality risk. We then calculate the apparent gestational-age-specific mortality for exposed and unexposed newborns, using as the denominator either fetuses or live births at a given gestational age. If conditioning on gestational age successfully blocked the mediating effect of preterm delivery, then exposure would have no effect on gestational-age-specific risk. Instead, we find apparent exposure effects with either denominator. Except for prediction, neither denominator provides a meaningful way to define gestational-age-specific neonatal mortality.
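
    A toy version of that simulation, as a hedged sketch: all rates below are invented and the forward model is far simpler than the paper's, but it reproduces the structural point that the same deaths sit on two different denominators (live births at a gestational age versus fetuses still at risk):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def simulate(exposed: bool):
    # gestational age at birth (weeks); exposure only shifts deliveries earlier
    ga = rng.normal(39.0 - (1.5 if exposed else 0.0), 2.0, N).clip(24, 42).round()
    # risk set 1: fetal pathology, independent of exposure and gestational age
    fetal_death = rng.random(N) < 0.01
    # risk set 2: immaturity-related risk, falling steeply with age at birth
    immaturity_death = rng.random(N) < np.exp(-(ga - 22.0)) * 5.0
    return ga, fetal_death | immaturity_death

week = 34
for exposed in (False, True):
    ga, death = simulate(exposed)
    born_this_week = ga == week              # denominator 1: live births at this GA
    fetuses_at_risk = (ga >= week).sum()     # denominator 2: fetuses still in utero
    print(f"exposed={exposed}: "
          f"per live birth {death[born_this_week].mean():.4f}, "
          f"per fetus at risk {(death & born_this_week).sum() / fetuses_at_risk:.6f}")
```

    In this toy setup the per-live-birth rate at the chosen week barely moves, while the per-fetus-at-risk rate rises for the exposed group simply because more of its deliveries land at that week; the paper's fuller simulations find apparent exposure effects with either denominator.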

  3. 31 CFR 309.3 - Denominations and exchange.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Denominations and exchange. 309.3... Denominations and exchange. Treasury bills will be issued in denominations (maturity value) of $10,000, $15,000, $50,000, $100,000, $500,000, and $1,000,000. Exchanges from higher to lower and lower to higher...

  4. 31 CFR 360.48 - Restrictions on reissue; denominational exchange.

    Science.gov (United States)

    2010-07-01

    ...; denominational exchange. 360.48 Section 360.48 Money and Finance: Treasury Regulations Relating to Money and... GOVERNING DEFINITIVE UNITED STATES SAVINGS BONDS, SERIES I Reissue and Denominational Exchange § 360.48 Restrictions on reissue; denominational exchange. Reissue is not permitted solely to change denominations. ...

  5. Definition and denomination of occupations in libraries

    Directory of Open Access Journals (Sweden)

    Jelka Gazvoda

    1998-01-01

    In the first part of the article, the author presents the modern definition of occupation as defined in the ISCO-88 standard, and subsequently in the Slovenian Standard Classification of Occupations; occupations in the field of library and information science are then placed in the wider frame of information occupations, which are present in all spheres of activity. The following part of the article focuses on information occupations in libraries, especially on their content definitions and denominations. Based on an analysis of job descriptions in three Slovenian libraries (the National and University Library, the University Library of Maribor and the Central Technical Library), the author comes to the following conclusion: existing practice in libraries shows that the contents and denominations of library and information occupations are defined too loosely. In most cases, the content of an occupation is defined by the content of the job, while for its denomination the required educational title of the employee is often used. The author therefore proposes the establishment of a working group which would define the contents of, and design denominations for, library and information occupations according to the principles contained in the Standard Classification of Occupations.

  6. Prospect evaluation as a function of numeracy and probability denominator.

    Science.gov (United States)

    Millroth, Philip; Juslin, Peter

    2015-05-01

    This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value-based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used.

  7. Accounting and marketing: searching a common denominator

    Directory of Open Access Journals (Sweden)

    David S. Murphy

    2012-06-01

    Accounting and marketing are very different disciplines. The analysis of customer profitability is one concept that can unite accounting and marketing as a common denominator. In this article I search for common ground between accounting and marketing in the analysis of customer profitability to determine if a common denominator really exists between the two. This analysis focuses on accounting profitability, customer lifetime value, and customer equity. The article ends with a summary of what accountants can do to move the analysis of customer value forward, as an analytical tool, within companies.

  8. Information criteria to estimate hyperparameters in groundwater inverse problems

    Science.gov (United States)

    Zanini, A.; Tanda, M. G.; Woodbury, A. D.

    2017-12-01

    One of the main issues in groundwater modeling is the knowledge of the hydraulic parameters such as transmissivity and storativity. In the literature there are several efficacious inverse methods that are able to estimate these unknown properties. Most methods assume, as a priori knowledge, the form of the variogram (or covariance function) of the unknown parameters. The hyperparameters of the variogram (or covariance function) can be inferred from observations, assumed known or estimated. Information criteria are widely used in inverse problems in several disciplines (such as geophysics and hydrology) to estimate the hyperparameters. In this work, in order to estimate the hyperparameters, we consider the Akaike Information Criterion (AIC) and the Akaike Bayesian Information Criterion (ABIC). AIC is computed as AIC = -2 ln(maximized likelihood of the fitted model) + 2 (number of unknown parameters), and an iterative procedure identifies the hyperparameters that minimize it. The ABIC is similar to the AIC in form but is computed in terms of the Bayesian likelihood, ABIC = -2 ln(predictive distribution) + 2 (number of hyperparameters); it is appropriate when prior information is considered in the form of a prior probability. The predictive distribution is the normalizing constant in the denominator of Bayes' theorem and represents the pdf of observing the data with the uncertainty in the model parameters marginalized out of consideration. The correct hyperparameters are evaluated at the minimum value of the ABIC. In this work we compare the results obtained from AIC with those from ABIC, using a literature example, and we describe the pros and cons of the two approaches.
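
    A hedged sketch of criterion-based selection, assuming a simple Gaussian likelihood and an invented candidate grid (the paper's variogram hyperparameters and Bayesian marginal likelihood are not reproduced here):

```python
# Pick the hyperparameter pair minimizing AIC = -2*log(max likelihood) + 2*k.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.5, 200)          # stand-in for field observations

def aic(loglik: float, k: int) -> float:
    return -2.0 * loglik + 2.0 * k

candidates = [(mu, s) for mu in (1.5, 2.0, 2.5) for s in (1.0, 1.5, 2.0)]
best = min(candidates,
           key=lambda p: aic(norm.logpdf(data, p[0], p[1]).sum(), k=2))
print("hyperparameters minimizing AIC:", best)
```

    Swapping the maximized log-likelihood for the log of the marginal (predictive) likelihood gives the ABIC variant described above.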

  9. Wine quality, reputation, denominations: How cooperatives and private wineries compete?

    Directory of Open Access Journals (Sweden)

    Schamel Guenter H.

    2014-01-01

    We analyze how cooperatives in Northern Italy (Alto Adige and Trentino) compete with private wineries regarding product quality and reputation, i.e. whether firm organization affects wine quality and winery reputation. Moreover, we examine whether cooperatives with deep roots in their local economy specialize in specific regional denomination rules (i.e. DOC, IGT). Compared to private wineries, cooperatives face additional challenges in raising wine quality, among them devising appropriate incentives that induce individual growers to supply high-quality grapes (e.g. vineyard management and grape pricing schemes to lower yields). The quality reputation of a winery with consumers depends crucially on its winemaking skills. Wine regions differ with respect to climatic conditions and quality denomination rules. Assuming similar climatic conditions within wine regions as well as similar winemaking skills between firms, incentive schemes to induce individual growers to supply high-quality grapes and quality denomination rules remain crucial determinants of wine quality and winery reputation when comparing different regions and firm organizational forms. The data set analyzed allows differentiating local cooperatives from private wineries and records retail prices, wine quality evaluations, indicators of winery reputation, and distinct denomination rules. We employ a hedonic pricing model to test the following hypotheses: first, wines produced by cooperatives suffer a significant reputation and/or wine quality discount relative to wines from private producers; second, cooperatives and/or private wineries specialize in specific wine denominations for which they receive a price premium relative to the competing organizational form. Our results are mixed. However, we reject the hypothesis that cooperatives suffer a reputation/wine quality discount relative to private producers for the Alto Adige wine region. Moreover, we find that regional cooperatives and private
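
    The hypothesis test described is, at its core, a regression of log price on quality, organizational form and denomination. A minimal sketch with synthetic data (all variable names, coefficients and the simple specification are assumptions; the paper's actual model is richer):

```python
# Hedonic price regression: log price on quality, cooperative and DOC dummies.
import numpy as np

rng = np.random.default_rng(2)
n = 500
quality = rng.normal(88, 3, n)              # e.g. a 100-point quality rating
coop = rng.integers(0, 2, n)                # 1 = cooperative winery
doc = rng.integers(0, 2, n)                 # 1 = DOC denomination, else IGT
log_price = 0.04 * quality - 0.05 * coop + 0.10 * doc + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), quality, coop, doc])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
for name, b in zip(["const", "quality", "coop", "doc"], beta):
    print(f"{name:8s} {b:+.4f}")   # signs of 'coop'/'doc' carry the hypotheses
```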

  10. Are Improvements in Measured Performance Driven by Better Treatment or "Denominator Management"?

    Science.gov (United States)

    Harris, Alex H S; Chen, Cheng; Rubinsky, Anna D; Hoggatt, Katherine J; Neuman, Matthew; Vanneman, Megan E

    2016-04-01

    Process measures of healthcare quality are usually formulated as the number of patients who receive evidence-based treatment (numerator) divided by the number of patients in the target population (denominator). When the systems being evaluated can influence which patients are included in the denominator, it is reasonable to wonder if improvements in measured quality are driven by expanding numerators or contracting denominators. In 2003, the US Department of Veterans Affairs (VA) based executive compensation in part on performance on a substance use disorder (SUD) continuity-of-care quality measure. The first goal of this study was to evaluate whether implementing the measure in this way resulted in expected improvements in measured performance. The second goal was to examine whether the proportion of patients with SUD who qualified for the denominator contracted after the quality measure was implemented, and to describe the facility-level variation in and correlates of denominator contraction or expansion. Using 40 quarters of data straddling the implementation of the performance measure, an interrupted time series design was used to evaluate changes in two outcomes, for all veterans with an SUD diagnosis in all VA facilities from fiscal year 2000 to 2009. The two outcomes were 1) measured performance: patients retained/patients qualified, and 2) denominator prevalence: patients qualified/patients with SUD program contact. Measured performance improved over time, but the pattern of results also motivates attention to denominator management, as well as the exploration of "shadow measures" to monitor and reduce undesirable denominator management.
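
    The two ratios at stake, written out; the counts are placeholders:

```python
# Measured performance can rise either because the numerator grows or because
# the qualifying denominator shrinks; tracking both ratios separates the two.
def measured_performance(retained: int, qualified: int) -> float:
    return retained / qualified

def denominator_prevalence(qualified: int, sud_contacts: int) -> float:
    return qualified / sud_contacts

print(measured_performance(retained=450, qualified=900))         # 0.50
print(denominator_prevalence(qualified=900, sud_contacts=4000))  # 0.225
```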

  11. Using tactile features to help functionally blind individuals denominate banknotes.

    Science.gov (United States)

    Lederman, Susan J; Hamilton, Cheryl

    2002-01-01

    This study, which was conducted for the Bank of Canada, assessed the feasibility of presenting a raised texture feature together with a tactile denomination code on the next Canadian banknote series ($5, $10, $20, $50, and $100). Adding information accessible by hand would permit functionally blind individuals to independently denominate banknotes. In Experiment 1, 20 blindfolded, sighted university students denominated a set of 8 alternate tactile feature designs. Across the 8 design series, the proportion of correct responses never fell below .97; the mean response time per banknote ranged from 11.4 to 13.1 s. In Experiment 2, 27 functionally blind participants denominated 4 of the previous 8 candidate sets of banknotes. The proportion of correct responses never fell below .92; the corresponding mean response time per banknote ranged from 11.7 to 13.0 s. The Bank of Canada selected one of the four raised-texture designs for inclusion on its new banknote series. Other potential applications include designing haptic displays for teleoperation and virtual environment systems.

  12. Complex leadership as a way forward for transformational missional leadership in a denominational structure

    Directory of Open Access Journals (Sweden)

    C.J.P. (Nelus) Niemandt

    2015-08-01

    The research investigates the role of leadership in the transformation of denominational structures towards a missional ecclesiology, and focusses on the Highveld Synod of the Dutch Reformed Church. It describes the missional journey of the denomination, and interprets the transformation. The theory of 'complex leadership' in complex systems is applied to the investigation of the impact of leadership on a denominational structure. The theory identifies three mechanisms used by leaders as enablers in emergent, self-organising systems: (1) leaders disrupt existing patterns, (2) they encourage novelty, and (3) they act as sensemakers. These insights are applied as a tool to interpret the missional transformation of a denomination.

  13. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples, using spatial modelling based on regression tree forest models developed from subsets of the Uganda 2008 Livestock Census data and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and a direct estimate, was compared to direct estimates and "true" livestock figures based on the entire dataset. The new approach is shown to effectively increase the accuracy of livestock estimates (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes using the multimodel approach presented here, indicating a more efficient use of financial resources.
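
    A hedged sketch of the multimodel idea: predict area-level livestock counts with a regression forest on covariates, then ensemble the model prediction with the survey-based direct estimate. The synthetic data and the simple 50/50 ensemble weight are assumptions, not the paper's exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n = 300
covariates = rng.normal(size=(n, 5))                # e.g. rainfall, land cover, ...
true_density = np.exp(covariates @ np.array([0.4, -0.2, 0.3, 0.0, 0.1]) + 1.0)
direct = true_density * rng.lognormal(0.0, 0.4, n)  # noisy survey-based estimate

rf = RandomForestRegressor(n_estimators=200, random_state=0)
spatial = cross_val_predict(rf, covariates, direct, cv=5)  # out-of-fold predictions

ensemble = 0.5 * spatial + 0.5 * direct
for name, est in (("direct", direct), ("spatial", spatial), ("ensemble", ensemble)):
    rel_err = float(np.median(np.abs(est - true_density) / true_density))
    print(f"{name:9s} median relative error: {rel_err:.3f}")
```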

  14. Defining risk groups to yellow fever vaccine-associated viscerotropic disease in the absence of denominator data.

    Science.gov (United States)

    Seligman, Stephen J; Cohen, Joel E; Itan, Yuval; Casanova, Jean-Laurent; Pezzullo, John C

    2014-02-01

    Several risk groups are known for the rare but serious, frequently fatal, viscerotropic reactions following live yellow fever virus vaccine (YEL-AVD). Establishing additional risk groups is hampered by ignorance of the numbers of vaccinees in factor-specific risk groups thus preventing their use as denominators in odds ratios (ORs). Here, we use an equation to calculate ORs using the prevalence of the factor-specific risk group in the population who remain well. The 95% confidence limits and P values can also be calculated. Moreover, if the estimate of the prevalence is imprecise, discrimination analysis can indicate the prevalence at which the confidence interval results in an OR of ∼1 revealing if the prevalence might be higher without yielding a non-significant result. These methods confirm some potential risk groups for YEL-AVD and cast doubt on another. They should prove useful in situations in which factor-specific risk group denominator data are not available.
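
    The equation-based workaround can be sketched as a case-population odds ratio: compare the odds of the risk factor among cases with its prevalence among vaccinees who remained well. A hedged illustration with hypothetical counts; the confidence interval below treats the population prevalence as known without error, which is a simplifying assumption:

```python
import math

def case_population_or(cases_with: int, cases_without: int, prevalence: float):
    """OR of a factor among cases vs its prevalence in the well population."""
    or_ = (cases_with / cases_without) * ((1 - prevalence) / prevalence)
    se_log = math.sqrt(1 / cases_with + 1 / cases_without)  # prevalence error ignored
    lo, hi = (or_ * math.exp(s * 1.96 * se_log) for s in (-1, 1))
    return or_, lo, hi

# hypothetical: 6 of 20 cases carry the factor, population prevalence 5%
print(case_population_or(cases_with=6, cases_without=14, prevalence=0.05))
```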

  15. Denominator function for canonical SU(3) tensor operators

    International Nuclear Information System (INIS)

    Biedenharn, L.C.; Lohe, M.A.; Louck, J.D.

    1985-01-01

    The definition of a canonical unit SU(3) tensor operator is given in terms of its characteristic null space as determined by group-theoretic properties of the intertwining number. This definition is shown to imply the canonical splitting conditions used in earlier work for the explicit and unique (up to ± phases) construction of all SU(3) Wigner-Clebsch-Gordan (WCG) coefficients. Using this construction, an explicit SU(3)-invariant denominator function characterizing completely the canonically defined WCG coefficients is obtained. It is shown that this denominator function (squared) is a product of linear factors, which may be obtained explicitly from the characteristic null space, times a ratio of polynomials. These polynomials, denoted G^t_q, are defined over three (shift) parameters and three barycentric coordinates. The properties of these polynomials (hence, of the corresponding invariant denominator function) are developed in detail: these include a derivation of their degree, symmetries, and zeros. The symmetries are those induced on the shift parameters and barycentric coordinates by the transformations of a 3 x 3 array under row interchange, column interchange, and transposition (the group of 72 operations leaving a 3 x 3 determinant invariant). Remarkably, the zeros of the general G^t_q polynomial are in position and multiplicity exactly those of the SU(3) weight space associated with the irreducible representation [q-1,t-1,0]. The results obtained are an essential step in the derivation of a fully explicit and comprehensible algebraic expression for all SU(3) WCG coefficients

  16. Gaming and Religion: The Impact of Spirituality and Denomination.

    Science.gov (United States)

    Braun, Birgit; Kornhuber, Johannes; Lenz, Bernd

    2016-08-01

    A previous investigation from Korea indicated that religion might modulate gaming behavior (Kim and Kim in J Korean Acad Nurs 40:378-388, 2010). Our present study aimed to investigate whether a belief in God, practicing religious behavior and religious denomination affected gaming behavior. Data were derived from a Western cohort of young men (Cohort Study on Substance Use Risk Factors, n = 5990). The results showed that a stronger belief in God was associated with lower gaming frequency and smaller game addiction scale scores. In addition, practicing religiosity was related to less frequent online and offline gaming. Finally, Christians gamed less frequently and had lower game addiction scale scores than subjects without religious denomination. In the future, these results could prove useful in developing preventive and therapeutic strategies for the Internet gaming disorder.

  17. No common denominator: a review of outcome measures in IVF RCTs.

    Science.gov (United States)

    Wilkinson, Jack; Roberts, Stephen A; Showell, Marian; Brison, Daniel R; Vail, Andy

    2016-12-01

    Which outcome measures are reported in RCTs for IVF? Many combinations of numerator and denominator are in use, and are often employed in a manner that compromises the validity of the study. The choice of numerator and denominator governs the meaning, relevance and statistical integrity of a study's results. RCTs only provide reliable evidence when outcomes are assessed in the cohort of randomised participants, rather than in the subgroup of patients who completed treatment. Review of outcome measures reported in 142 IVF RCTs published in 2013 or 2014. Trials were identified by searching the Cochrane Gynaecology and Fertility Specialised Register. English-language publications of RCTs reporting clinical or preclinical outcomes in peer-reviewed journals in the period 1 January 2013 to 31 December 2014 were eligible. Reported numerators and denominators were extracted. Where they were reported, we checked to see if live birth rates were calculated correctly using the entire randomised cohort or a later denominator. Over 800 combinations of numerator and denominator were identified (613 in no more than one study). No single outcome measure appeared in the majority of trials. Only 22 (43%) studies reporting live birth presented a calculation including all randomised participants or only excluding protocol violators. A variety of definitions were used for key clinical numerators: for example, a consensus regarding what should constitute an ongoing pregnancy does not appear to exist at present. Several of the included articles may have been secondary publications. Our categorisation scheme was essentially arbitrary, so the frequencies we present should be interpreted with this in mind. The analysis of live birth denominators was post hoc. There is massive diversity in numerator and denominator selection in IVF trials due to its multistage nature, and this causes methodological frailty in the evidence base. The twin spectres of outcome reporting bias and analysis of non

  18. Audit of preventive activities in 16 inner London practices using a validated measure of patient population, the 'active patient' denominator. Healthy Eastenders Project.

    Science.gov (United States)

    Robson, J; Falshaw, M

    1995-01-01

    the practice computer. In contrast, 82% of recorded cervical smears were recorded on computer. Conclusion: the active patient denominator produces a more accurate estimate of population coverage and professional activity, both of which are underestimated by the complete, unexpurgated practice register. A standard definition of the denominator also allows comparisons to be made between practices and over time. As only half of the recordings of some preventive activities were recorded on computer, it is doubtful whether it is advisable to rely on computers for audit where paper records are also maintained. PMID:7546868

  19. An Alternative Route to Teaching Fraction Division: Abstraction of Common Denominator Algorithm

    Directory of Open Access Journals (Sweden)

    İsmail Özgür ZEMBAT

    2015-06-01

    From a curricular standpoint, the traditional invert-and-multiply algorithm for division of fractions provides few affordances for linking to a rich understanding of fractions. An alternative algorithm, the common denominator algorithm, has many such affordances. The current study serves as an argument for shifting the curriculum for fraction division from the invert-and-multiply algorithm to the common denominator algorithm as a basis. This is accomplished through an analysis of the learning of two prospective elementary teachers, as an illustration of how to realize those conceptual affordances. In doing so, the article proposes an instructional sequence and details it by referring to both the (mathematical and pedagogical) advantages and the disadvantages. The algorithm has a conceptual basis resting on the basic operations of partitioning, unitizing, and counting, which makes it accessible to learners. Also, when participants are encouraged to construct this algorithm based on their work with diagrams, the common denominator algorithm formalizes the work that they do with diagrams.
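
    For reference, the algorithm itself in executable form: rewrite both fractions over the common denominator b·d, after which the quotient is just the ratio of the new numerators (a minimal sketch, not the article's diagram-based instructional sequence). For example, 3/4 ÷ 2/5 becomes 15/20 ÷ 8/20 = 15/8.

```python
from fractions import Fraction

def divide_common_denominator(a: int, b: int, c: int, d: int) -> Fraction:
    """Compute (a/b) / (c/d) by first expressing both over the denominator b*d."""
    num1 = a * d   # a/b == (a*d)/(b*d)
    num2 = c * b   # c/d == (c*b)/(b*d)
    return Fraction(num1, num2)   # same-denominator fractions divide by numerators

assert divide_common_denominator(3, 4, 2, 5) == Fraction(3, 4) / Fraction(2, 5)
print(divide_common_denominator(3, 4, 2, 5))   # 15/8
```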

  1. The Contrastive Study of Igbo and English Denominal Nouns ...

    African Journals Online (AJOL)

    The teaching of nominalization has not been smooth for an Igbo second-language learner of English. That is why this study sets out to contrast English and Igbo denominal nouns. The objective is to find out the similarities and differences between the nominalization process in Igbo and that of the English ...

  2. Variable effects of prevalence correction of population denominators on differentials in myocardial infarction incidence: a record linkage study in Aboriginal and non-Aboriginal Western Australians.

    Science.gov (United States)

    Katzenellenbogen, Judith M; Sanfilippo, Frank M; Hobbs, Michael S T; Briffa, Tom G; Ridout, Steve C; Knuiman, Matthew W; Dimer, Lyn; Taylor, Kate P; Thompson, Peter L; Thompson, Sandra C

    2011-06-01

    To investigate the impact of prevalence correction of population denominators on myocardial infarction (MI) incidence rates, rate ratios, and rate differences in Aboriginal vs. non-Aboriginal Western Australians aged 25-74 years during the study period 2000-2004. Person-based linked hospital and mortality data sets were used to estimate the number of prevalent and first-ever MI cases each year from 2000 to 2004 using a 15-year look-back period. Age-specific and -standardized MI incidence rates were calculated using both prevalence-corrected and -uncorrected population denominators, by sex and Aboriginality. The impact of prevalence correction on rates increased with age, was higher for men than women, and substantially greater for Aboriginal than non-Aboriginal people. Despite the systematic underestimation of incidence, prevalence correction had little impact on the Aboriginal to non-Aboriginal age-standardized rate ratios (6% and 4% underestimate in men and women, respectively), although the impact on rate differences was more marked (12% and 6%, respectively). The percentage underestimate of differentials was greater at older ages. Prevalence correction of denominators, while more accurate, is difficult to apply and may add modestly to the quantification of relative disparities in MI incidence between populations. Absolute incidence disparities using uncorrected denominators may have an error >10%.
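
    Mechanically, prevalence correction removes prevalent cases from the population denominator before rates, ratios and differences are formed. A toy illustration; all counts are invented to show the mechanics only:

```python
def rate(cases: int, population: int, prevalent: int = 0) -> float:
    """First-ever incidence per 100,000, with optional prevalence correction."""
    return 1e5 * cases / (population - prevalent)

# hypothetical one-year counts for a single age-sex stratum
ab_raw,  ab_cor  = rate(60, 10_000),   rate(60, 10_000, prevalent=700)
non_raw, non_cor = rate(300, 200_000), rate(300, 200_000, prevalent=4_000)

print("rate ratio raw vs corrected:", ab_raw / non_raw, ab_cor / non_cor)
print("rate diff  raw vs corrected:", ab_raw - non_raw, ab_cor - non_cor)
```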

  3. The Effects of Denomination on Religious Socialization for Jewish Youth

    Science.gov (United States)

    James, Anthony G.; Lester, Ashlie M.; Brooks, Greg

    2014-01-01

    The transmission model of religious socialization was tested using a sample of American Jewish parents and adolescents. The authors expected that measures of religiousness among parents would be associated with those among their children. Interaction effects of denominational membership were also tested. Data were collected from a sample of 233…

  4. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  5. An Entropic Estimator for Linear Inverse Problems

    Directory of Open Access Journals (Sweden)

    Amos Golan

    2012-05-01

    In this paper we examine an information-theoretic method for solving noisy linear inverse estimation problems which encompasses under a single framework a whole class of estimation methods. Under this framework, the prior information about the unknown parameters (when such information exists) and constraints on the parameters can be incorporated in the statement of the problem. The method builds on the basics of the maximum entropy principle and consists of transforming the original problem into the estimation of a probability density on an appropriate space naturally associated with the statement of the problem. This estimation method is generic in the sense that it provides a framework for analyzing non-normal models, it is easy to implement and it is suitable for all types of inverse problems, such as small, ill-conditioned or noisy-data problems. First-order approximation, large-sample properties and convergence in distribution are developed as well. Analytical examples, statistics for model comparisons and evaluations, that are inherent to this method, are discussed and complemented with explicit examples.
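
    A bare-bones illustration of the maximum entropy principle the method builds on (a generic sketch, not the paper's estimator): maximize the entropy of a probability vector p subject to noisy moment constraints y = A p:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 8))                 # 3 noisy moments, 8 unknowns
p_true = np.full(8, 1 / 8)
y = A @ p_true + rng.normal(0, 1e-3, 3)

neg_entropy = lambda p: np.sum(p * np.log(p + 1e-12))   # minimize -H(p)
res = minimize(neg_entropy, np.full(8, 1 / 8),
               method="SLSQP",
               bounds=[(1e-9, 1)] * 8,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1},
                            {"type": "eq", "fun": lambda p: A @ p - y}])
print(res.x.round(3))   # maximum entropy density consistent with the data
```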

  6. An Alternative Route to Teaching Fraction Division: Abstraction of Common Denominator Algorithm

    Science.gov (United States)

    Zembat, Ismail Özgür

    2015-01-01

    From a curricular stand point, the traditional invert and multiply algorithm for division of fractions provides few affordances for linking to a rich understanding of fractions. On the other hand, an alternative algorithm, called common denominator algorithm, has many such affordances. The current study serves as an argument for shifting…

  7. Currency Denomination of Bank Loans : Evidence from Small Firms in Transition Countries

    NARCIS (Netherlands)

    Brown, M.; Ongena, S.; Yesin, P.

    2008-01-01

    We examine the firm-level and country-level determinants of the currency denomination of small business loans. We introduce an information asymmetry between banks and firms in a model that also features the trade-off between the cost of debt and firm-level distress costs. Banks in our model don’t

  8. The European Convention on Human Rights & Parental Rights in Relation to Denominational Schooling

    NARCIS (Netherlands)

    J.D. Temperman (Jeroen)

    2017-01-01

    textabstractWhereas the bulk of religious education cases concerns aspects of the public school framework and curriculum, this article explores Convention rights in the realm of denominational schooling. It is outlined that the jurisprudence of the Strasbourg Court generally strongly supports the

  9. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include fault diagnosis (fault estimation, FE) for systems with model uncertainties, FE for systems with parametric faults, and FE for a class of nonlinear systems.

  10. Canonical resolution of the multiplicity problem for U(3): an explicit and complete constructive solution

    International Nuclear Information System (INIS)

    Biedenharn, L.C.; Lohe, M.A.; Louck, J.D.

    1975-01-01

    The multiplicity problem for tensor operators in U(3) has a unique (canonical) resolution which is utilized to effect the explicit construction of all U(3) Wigner and Racah coefficients. Methods are employed which elucidate the structure of the results; in particular, the significance of the denominator functions entering the structure of these coefficients, and the relation of these denominator functions to the null space of the canonical tensor operators. An interesting feature of the denominator functions is the appearance of new, group theoretical, polynomials exhibiting several remarkable and quite unexpected properties. (U.S.)

  11. The solar neutrino problem

    Indian Academy of Sciences (India)

    to a research problem that now commands the attention of a large number of physicists ... the first comparison between theory and experiment was made ... prior probability assigned to hypothesis A. The integration in the denominator is ... The key feature of figure 5, which is well known, is the marked reduction in the Be ...

  12. The Professionalisation of Non-Denominational Religious Education in England: Politics, Organisation and Knowledge

    Science.gov (United States)

    Parker, Stephen G.; Freathy, Rob; Doney, Jonathan

    2016-01-01

    In response to contemporary concerns, and using neglected primary sources, this article explores the professionalisation of teachers of Religious Education (RI/RE) in non-denominational, state-maintained schools in England. It does so from the launch of "Religion in Education" (1934) and the Institute for Christian Education at Home and…

  13. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. The current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in the Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation makes it possible to study many properties of the infinite-dimensional MAP estimate that were earlier impossible to study. In a recent work by the authors (Burger and Lucka 2014 'Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators' preprint), the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior.

  14. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-06

    Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem that is very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.

  15. Parental Rights in Relation to Denominational Schooling under the European Convention on Human Rights

    NARCIS (Netherlands)

    J.D. Temperman (Jeroen)

    2017-01-01

    textabstractWhereas the bulk of Article 2 Protocol I cases concerns aspects of the public-school framework and curriculum, this article explores Convention rights in the realm of denominational schooling. It is outlined that the jurisprudence of the Strasbourg Court generally strongly supports the

  16. Price Setting Transactions and the Role of Denominating Currency in FX Markets

    OpenAIRE

    Friberg, Richard; Wilander, Fredrik

    2007-01-01

    This report, commissioned by Sveriges Riksbank, examines the role of currency denomination in international trade transactions. It is divided into two parts. The first part consists of a survey of the price setting and payment practices of a large sample of Swedish exporting firms. The second part analyzes payments data from the Swedish settlement reports from 1999-2002. We examine whether invoicing patterns of Swedish and European companies changed following the creation of the EMU and how the...

  17. And who is your neighbor? Explaining denominational differences in charitable giving and volunteering in the Netherlands

    NARCIS (Netherlands)

    Bekkers, René; Schuyt, Theo

    We study differences in contributions of time and money to churches and non-religious nonprofit organizations between members of different religious denominations in the Netherlands. We hypothesize that contributions to religious organizations are based on involvement in the religious community,

  18. Support for Homosexuals' Civil Liberties: The Influence of Familial Gender Role Attitudes across Religious Denominations

    Science.gov (United States)

    Kenneavy, Kristin

    2012-01-01

    Religious denominations vary in both their approach to the roles that men and women play in familial contexts, as well as their approach to homosexuality. This research investigates whether gender attitudes, informed by religious tradition, predict a person's support for civil liberties extended to gays and lesbians. Using data from the 1996 and…

  19. Solutions to estimation problems for scalar hamilton-jacobi equations using linear programming

    KAUST Repository

    Claudel, Christian G.; Chamoin, Timothee; Bayen, Alexandre M.

    2014-01-01

    This brief presents new convex formulations for solving estimation problems in systems modeled by scalar Hamilton-Jacobi (HJ) equations. Using a semi-analytic formula, we show that the constraints resulting from a HJ equation are convex, and can be written as a set of linear inequalities. We use this fact to pose various (and seemingly unrelated) estimation problems related to traffic flow-engineering as a set of linear programs. In particular, we solve data assimilation and data reconciliation problems for estimating the state of a system when the model and measurement constraints are incompatible. We also solve traffic estimation problems, such as travel time estimation or density estimation. For all these problems, a numerical implementation is performed using experimental data from the Mobile Century experiment. In the context of reproducible research, the code and data used to compute the results presented in this brief have been posted online and are accessible to regenerate the results. © 2013 IEEE.
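
    A generic miniature of the linear-programming formulation (not the brief's actual HJ-derived constraints, which come from its semi-analytic formula): estimate a state vector that satisfies a handful of linear model inequalities while staying close to the measurements in an l1 sense, using slack variables:

```python
import numpy as np
from scipy.optimize import linprog

# unknowns: x (3 states) and t (3 slacks bounding |x - measurement|)
meas = np.array([1.0, 2.0, 1.5])
c = np.concatenate([np.zeros(3), np.ones(3)])   # minimize the sum of slacks

# stand-in model constraints G x <= h (e.g. a monotonicity requirement)
G = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
h = np.array([0.0, 0.0])

A_ub = np.block([
    [G, np.zeros((2, 3))],
    [np.eye(3), -np.eye(3)],     #  x - t <= meas
    [-np.eye(3), -np.eye(3)],    # -x - t <= -meas
])
b_ub = np.concatenate([h, meas, -meas])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3 + [(0, None)] * 3)
print(res.x[:3])   # reconciled state satisfying the model constraints
```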

  20. Estimation of G-renewal process parameters as an ill-posed inverse problem

    International Nuclear Information System (INIS)

    Krivtsov, V.; Yevkin, O.

    2013-01-01

    Statistical estimation of G-renewal process parameters is an important problem, which has been considered by many authors. We view it from the standpoint of a mathematically ill-posed inverse problem (the solution is not unique and/or is sensitive to statistical error) and propose a regularization approach specifically suited to the G-renewal process. Regardless of the estimation method, the respective objective function usually involves parameters of the underlying lifetime distribution and, simultaneously, the restoration parameter. In this paper, we propose to regularize the problem by decoupling the estimation of the aforementioned parameters. Using a simulation study, we show that the resulting estimation/extrapolation accuracy of the proposed method is considerably higher than that of the existing methods

  1. 29 CFR 4211.4 - Contributions for purposes of the numerator and denominator of the allocation fractions.

    Science.gov (United States)

    2010-07-01

    ... of the allocation fractions. 4211.4 Section 4211.4 Labor Regulations Relating to Labor (Continued... denominator of the allocation fractions. Each of the allocation fractions used in the presumptive, modified... five-year period. (a) The numerator of the allocation fraction, with respect to a withdrawing employer...

  2. Estimating the Proportion of True Null Hypotheses in Multiple Testing Problems

    Directory of Open Access Journals (Sweden)

    Oluyemi Oyeniran

    2016-01-01

    The problem of estimating the proportion, π0, of true null hypotheses in a multiple testing problem is important in cases where large-scale parallel hypothesis tests are performed independently. While the proportion is a quantity of interest in its own right in applications, the estimate of π0 can be used for assessing or controlling an overall false discovery rate. In this article, we develop an innovative nonparametric maximum likelihood approach to estimate π0. The nonparametric likelihood is proposed to be restricted to multinomial models and an EM algorithm is also developed to approximate the estimate of π0. Simulation studies show that the proposed method outperforms other existing methods. Using experimental microarray datasets, we demonstrate that the new method provides a satisfactory estimate in practice.
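
    The article's multinomial-EM estimator is not spelled out in the abstract, so as a reference point here is the classical Storey-type estimator of π0, a different standard technique that exploits the near-uniformity of p-values from true nulls above a threshold λ (a hedged sketch):

```python
import numpy as np

def storey_pi0(pvalues, lam: float = 0.5) -> float:
    """pi0 estimate: share of p-values above lam, rescaled by (1 - lam)."""
    pvalues = np.asarray(pvalues)
    return min(1.0, float((pvalues > lam).mean()) / (1.0 - lam))

rng = np.random.default_rng(5)
# 800 true nulls (uniform p-values) + 200 alternatives (skewed toward 0)
p = np.concatenate([rng.uniform(size=800), rng.beta(0.2, 5.0, size=200)])
print(storey_pi0(p))   # should land near the true pi0 = 0.8
```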

  3. MAP estimators and their consistency in Bayesian nonparametric inverse problems

    KAUST Repository

    Dashti, M.

    2013-09-01

    We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map G applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μy. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager-Machlup functional defined on the Cameron-Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of G(u) can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier-Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics.

  4. MAP estimators and their consistency in Bayesian nonparametric inverse problems

    International Nuclear Information System (INIS)

    Dashti, M; Law, K J H; Stuart, A M; Voss, J

    2013-01-01

    We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map G applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μy. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager-Machlup functional defined on the Cameron-Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of G(u) can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier-Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics.

  5. Carleman estimates and applications to inverse problems for hyperbolic systems

    CERN Document Server

    Bellassoued, Mourad

    2017-01-01

    This book is a self-contained account of the method based on Carleman estimates for inverse problems of determining spatially varying functions of differential equations of the hyperbolic type by non-overdetermining data of solutions. The formulation is different from that of Dirichlet-to-Neumann maps and can often prove the global uniqueness and Lipschitz stability even with a single measurement. These types of inverse problems include coefficient inverse problems of determining physical parameters in inhomogeneous media that appear in many applications related to electromagnetism, elasticity, and related phenomena. Although the methodology was created in 1981 by Bukhgeim and Klibanov, its comprehensive development has been accomplished only recently. In spite of the wide applicability of the method, there are few monographs focusing on combined accounts of Carleman estimates and applications to inverse problems. The aim in this book is to fill that gap. The basic tool is Carleman estimates, the theory of wh...

  6. The impact of exchange rate EUR/USD on the rate of return of bond investments denominated in US dollar from the point of view of euro investor

    Directory of Open Access Journals (Sweden)

    Oldřich Šoba

    2009-01-01

    Investment opportunities in foreign-currency financial assets have been growing with the globalization and integration of financial markets and the evolution of modern information technologies. Currency risk arises in those cases where the investor converts cash from and into the domestic currency, and it is determined by unexpected changes of the exchange rate (currency of the financial asset's denomination / investor's domestic currency) over the duration of the investment. The objective of the paper is the quantification and analysis of the impact of the EUR/USD exchange rate on the rate of return of bond investments denominated in US dollars from the point of view of a euro investor, for investment horizons of different lengths. The analysis is carried out for the following investment horizons: 1, 2, 3, 5, 7, 10 and 12 years, with complementary horizons of one month and 15 years. Bond investments denominated in US dollars are represented by investments into an ING bond unit trust in the period December 1989–December 2007. The unit trust invests in highly rated bonds (for example, government bonds), denominated in USD only. The methodology of the analysis is based on quantifying the share of the EUR/USD exchange rate in the rate of return of a bond investment denominated in USD; the decomposition rests on the basic relation of uncovered interest rate parity.
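
    The decomposition the paper quantifies, in one line of arithmetic: the euro investor's return combines the USD asset return with the change in the exchange rate. Here fx is taken as euros per 1 US dollar, and the sample numbers are made up:

```python
def eur_return(r_usd: float, fx_start: float, fx_end: float) -> float:
    """Total EUR-denominated return on a USD asset (fx = EUR per 1 USD)."""
    return (1.0 + r_usd) * (fx_end / fx_start) - 1.0

# a 5% USD bond return wiped out by USD depreciation against the euro
print(f"{eur_return(r_usd=0.05, fx_start=0.90, fx_end=0.84):.4f}")  # -0.0200
```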

  7. Resolvent estimates in homogenisation of periodic problems of fractional elasticity

    Science.gov (United States)

    Cherednichenko, Kirill; Waurick, Marcus

    2018-03-01

    We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.

  8. Bounds and estimates for the linearly perturbed eigenvalue problem

    International Nuclear Information System (INIS)

    Raddatz, W.D.

    1983-01-01

    This thesis considers the problem of bounding and estimating the discrete portion of the spectrum of a linearly perturbed self-adjoint operator, M(x). It is supposed that one knows an incomplete set of data consisting of the first few coefficients of the Taylor series expansions of one or more of the eigenvalues of M(x) about x = 0. The foundations of the variational study of eigenvalues are first presented. These are then used to construct the best possible upper bounds and estimates using various sets of given information. Lower bounds are obtained by estimating the error in the upper bounds. The extension of these bounds and estimates to the eigenvalues of the doubly-perturbed operator M(x,y) is discussed. The results presented have numerous practical applications in the physical sciences, including problems in atomic physics and the theory of vibrations of acoustical and mechanical systems

  9. The joint estimation of term structures and credit spreads

    NARCIS (Netherlands)

    Houweling, P.; Hoek, J.; Kleibergen, F.R.

    1999-01-01

    We present a new framework for the joint estimation of the default-free government term structure and corporate credit spread curves. By using a data set of liquid, German mark denominated bonds, we show that this yields more realistic spreads than traditionally obtained spread curves that result

  10. Regularization and error estimates for nonhomogeneous backward heat problems

    Directory of Open Access Journals (Sweden)

    Duc Trong Dang

    2006-01-01

    In this article, we study the inverse time problem for the non-homogeneous heat equation, which is a severely ill-posed problem. We regularize this problem using the quasi-reversibility method and then obtain error estimates on the approximate solutions. Solutions are calculated by the contraction principle and shown in numerical experiments. We also obtain rates of convergence to the exact solution.

  11. Initiative for international cooperation of researchers and breeders related to determination and denomination of cucurbit powdery mildew races

    Science.gov (United States)

    Cucurbit powdery mildew (CPM) is caused most frequently by two obligate erysiphaceous ectoparasites, Golovinomyces orontii s.l. and Podosphaera xanthii, that are highly variable in virulence. Various independent systems of CPM race determination and denomination cause a chaotic situation in cucurbit...

  12. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2016-01-01

    Estimating the values of unknown parameters from corrupted measured data poses many challenges in ill-posed problems, where many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator, which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.

  14. Evaluating the accuracy of sampling to estimate central line-days: simplification of the National Healthcare Safety Network surveillance methods.

    Science.gov (United States)

    Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S

    2013-03-01

    To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data for estimating central line-days (CLDs), we obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants, along with CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN). The study covered a convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than single-day samples. For several day-pair samples, approximately 90% of locations had a CLD percentage error of no more than ±5%. A lower number of CLD per month was the factor most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
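
    A toy simulation of the sampling idea (assumed mechanics, not the NHSN protocol itself) illustrates how single-day and day-pair weekly samples scale up to a monthly CLD estimate:

```python
import numpy as np

# Hypothetical month of daily central-line counts; the "actual" CLD is the
# 30-day sum, and a weekly sample is scaled by days-in-month/days-sampled.
rng = np.random.default_rng(1)
days = 30
daily_cld = rng.poisson(lam=12, size=days)

def estimate(sample_days):
    return daily_cld[sample_days].mean() * days

actual = daily_cld.sum()
single = estimate(np.arange(2, days, 7))                    # one day per week
pairs = np.sort(np.r_[np.arange(1, days, 7), np.arange(2, days, 7)])
for name, est in [("single-day", single), ("day-pair", estimate(pairs))]:
    print(f"{name}: estimate={est:.0f}, actual={actual}, "
          f"pct error={(est - actual) / actual * 100:+.1f}%")
```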

  15. Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem

    Science.gov (United States)

    Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.

    2017-05-01

    In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.

  16. A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem

    KAUST Repository

    Delaigle, Aurore

    2009-03-01

    Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We not only solve a long-standing open problem, but also make methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions.

  17. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey

    2014-01-06

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.
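
    The MLMC machinery the talk builds on can be sketched for plain mean estimation; the level-wise approximation P and the per-level sample counts below are invented for illustration, and the paper's contribution concerns variance estimators layered on this structure.

```python
import numpy as np

# Toy MLMC sketch (mean estimation; the talk extends this machinery to
# variance estimators). P(l, u) is a level-l approximation of a quantity of
# interest; corrections Y_l = P_l - P_{l-1} shrink with l, so few samples
# are needed on the expensive fine levels.
rng = np.random.default_rng(2)

def P(level, u):
    # Hypothetical level-l approximation: midpoint quadrature on 2**level
    # cells of an integral depending on the random input u.
    n = 2 ** level
    t = (np.arange(n) + 0.5) / n
    return np.mean(np.sin(np.pi * t[None, :] * u[:, None]), axis=1)

L, N = 5, [4000, 2000, 1000, 500, 250, 125]   # samples per level
est = 0.0
for l in range(L + 1):
    u = rng.uniform(0.5, 1.5, size=N[l])
    Y = P(l, u) - (P(l - 1, u) if l > 0 else 0.0)
    est += Y.mean()
print("MLMC estimate of E[P]:", round(est, 4))
```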

  18. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey; Bierig, Claudio

    2014-01-01

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.

  19. Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators

    Energy Technology Data Exchange (ETDEWEB)

    Helmich-Paris, Benjamin, E-mail: b.helmichparis@vu.nl; Visscher, Lucas, E-mail: l.visscher@vu.nl

    2016-09-15

    We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum point search is one of the two key steps in finding the minimax approximation. If pre-tabulation of initial guesses is to be avoided, strategies for a sufficiently robust algorithm have not been discussed so far. We compare our non-heuristic approach with a bracketing and bisection algorithm and demonstrate that three times fewer function evaluations are required altogether when applying it to typical non-relativistic and relativistic quantum chemical systems.
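
    The object being optimized is a short exponential quadrature for the energy denominator, 1/x ~ sum_j w_j * exp(-a_j * x). The sketch below fits such a sum by ordinary least squares as a simpler stand-in for the minimax fit the paper computes; the interval, number of points and weighting are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit 1/x ~ sum_j w_j * exp(-a_j * x) on x in [1, 50] with K = 4 points.
# Plain least squares stands in for the minimax fit; log-parameters keep
# weights and exponents positive.
x = np.geomspace(1.0, 50.0, 400)
K = 4

def residual(p):
    w, a = np.exp(p[:K]), np.exp(p[K:])
    approx = np.sum(w[:, None] * np.exp(-np.outer(a, x)), axis=0)
    return (approx - 1.0 / x) * x            # x-weighted (relative) error

p0 = np.concatenate([np.zeros(K), np.linspace(-2, 1, K)])
fit = least_squares(residual, p0)
print(f"max weighted error with {K} points: {np.abs(residual(fit.x)).max():.2e}")
```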

  20. Modeling of the Maximum Entropy Problem as an Optimal Control Problem and its Application to Pdf Estimation of Electricity Price

    Directory of Open Access Journals (Sweden)

    M. E. Haji Abadi

    2013-09-01

    In this paper, continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain the least-biased probability density function (Pdf) estimate. To find a closed-form solution for the maximum entropy problem with any number of moment constraints, the entropy is considered as a functional measure and the moment constraints are considered as the state equations. Therefore, the Pdf estimation problem can be reformulated as an optimal control problem. Finally, the proposed method is applied to estimate the Pdf of the hourly electricity prices of the New England and Ontario electricity markets. The obtained results show the efficiency of the proposed method.
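
    A minimal sketch of the underlying maximum entropy problem, solved here by the standard convex dual rather than by optimal control: with moment constraints m0 = 1, m1 = 0, m2 = 1, the least-biased density has exponential form and comes out Gaussian.

```python
import numpy as np
from scipy.optimize import minimize

# Maxent density with moment constraints, via the standard convex dual:
# p(x) = exp(-1 - lam0 - lam1*x - lam2*x^2), with lam minimizing
# integral(p) + lam . m. Constraints m = (1, 0, 1) give the standard normal.
x = np.linspace(-4, 4, 801)
dx = x[1] - x[0]
m = np.array([1.0, 0.0, 1.0])                 # target E[x^0], E[x^1], E[x^2]

def density(lam):
    return np.exp(-1 - lam[0] - lam[1] * x - lam[2] * x**2)

def dual(lam):
    # Convex in lam; its minimizer makes the density match the moments.
    return np.sum(density(lam)) * dx + lam @ m

lam = minimize(dual, np.zeros(3), method="Nelder-Mead").x
p = density(lam)
print("recovered moments:",
      [round(float(np.sum(p * x**k) * dx), 3) for k in range(3)])
```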

  1. Estimation of the thermal properties in alloys as an inverse problem

    International Nuclear Information System (INIS)

    Zueco, J.; Alhama, F.

    2005-01-01

    This paper provides an efficient numerical method for estimating the thermal conductivity and heat capacity of alloys as a function of temperature, starting from temperature measurements (including errors) in heating and cooling processes. The proposed procedure is a modification of the known function estimation technique, typical of the inverse problem field, in conjunction with the network simulation method (already validated in many nonlinear problems) as the numerical tool. The estimation requires only a single measurement point. The methodology is applied to determining these thermal properties in alloys within temperature ranges where allotropic changes take place. These changes are characterized by sharp temperature dependencies. (Author) 13 refs

  2. A recursive Monte Carlo method for estimating importance functions in deep penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.

    1980-04-01

    A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems.

  3. Transport-constrained extensions of collision and track length estimators for solutions of radiative transport problems

    International Nuclear Information System (INIS)

    Kong, Rong; Spanier, Jerome

    2013-01-01

    In this paper we develop novel extensions of collision and track length estimators for the complete space-angle solutions of radiative transport problems. We derive the relevant equations, prove that our new estimators are unbiased, and compare their performance with that of more conventional estimators. Such comparisons, based on numerical solutions of simple one-dimensional slab problems, indicate the potential superiority of the new estimators for a wide variety of more general transport problems.

  4. Asymptotic Estimates and Qualitatives Properties of an Elliptic Problem in Dimension Two

    OpenAIRE

    Mehdi, Khalil El; Grossi, Massimo

    2003-01-01

    In this paper we study a semilinear elliptic problem on a bounded domain in $\R^2$ with large exponent in the nonlinear term. We consider positive solutions obtained by minimizing suitable functionals. We prove some asymptotic estimates which enable us to associate a "limit problem" to the initial one. Using these estimates we prove some quantitative properties of the solution, namely characterization of level sets and nondegeneracy.

  5. Denominative Variation in the Terminology of Fauna and Flora: Cultural and Linguistic (ASymmetries

    Directory of Open Access Journals (Sweden)

    Sabrina de Cássia Martins

    2018-05-01

    The present work addresses denominative variation in Terminology. Its object of study is the specialized lexical units in the Portuguese language formed by at least one of the following color names: black, white, yellow, blue, orange, gray, green, brown, red, pink, violet, purple and indigo. The comparative analysis of this vocabulary across the Portuguese, English and Italian languages was conducted considering two sub-areas of Biology: Botany, specifically Angiosperms (Monocotyledons and Eudicotyledons), and Zoology, exclusively Vertebrates (fish, amphibians, reptiles, birds and mammals). The following pages describe how common names are created in these three languages.

  6. Empirical Estimates in Economic and Financial Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Houda, Michal; Kaňková, Vlasta

    2012-01-01

    Roč. 19, č. 29 (2012), s. 50-69 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/1610; GA ČR GAP402/11/0150; GA ČR GAP402/10/0956 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic programming * empirical estimates * moment generating functions * stability * Wasserstein metric * L1-norm * Lipschitz property * consistence * convergence rate * normal distribution * Pareto distribution * Weibull distribution * distribution tails * simulation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2012/E/houda-empirical estimates in economic and financial optimization problems.pdf

  7. The Impact of Denominational Affiliation on Organizational Sense of Belonging and Commitment of Adjunct Faculty at Bible Colleges and Universities

    Science.gov (United States)

    Pilieci, Kimberly M.

    2016-01-01

    The majority of faculty in higher education, including secular and biblical institutions, are adjunct faculty. The literature suggests that adjunct faculty are less effective and satisfied, and have weaker organizational sense of belonging (OSB) and affective organizational commitment (AOC). Denominational affiliation (DA) and religious commitment…

  8. Caring for patients of Islamic denomination: Critical care nurses' experiences in Saudi Arabia.

    Science.gov (United States)

    Halligan, Phil

    2006-12-01

    To describe critical care nurses' experiences of caring for patients of Muslim denomination in Saudi Arabia. Caring is known to be the essence of nursing, but many health-care settings have become more culturally diverse. Caring has been examined mainly in the context of Western cultures. Muslims form one of the largest ethnic minority communities in Britain, but to date, empirical studies relating to caring from an Islamic perspective are not well documented. Research conducted within the home of Islam would provide essential truths about the reality of caring for Muslim patients. A descriptive phenomenological design was used: six critical care nurses from a hospital in Saudi Arabia were interviewed. The narratives were analysed using Colaizzi's framework. The meaning of the nurses' experiences emerged as three themes: family and kinship ties, cultural and religious influences, and the nurse-patient relationship. The results indicated the importance of the role of the family and religion in providing care. In the process of caring, the participants felt stressed and frustrated, and they all experienced emotional labour. Communicating with the patients and the families was a constant battle, and this acted as a further stressor in meeting the needs of their patients. The concept of the family and the importance and meaning of religion and culture were central in the provision of caring. The beliefs and practices of patients who follow Islam, as perceived by expatriate nurses, may affect the patient's health care in ways that are not apparent to many health-care professionals and policy makers internationally. Readers should be prompted to reflect on their clinical practice and to understand the impact of religious and cultural differences in their encounters with patients of Islamic denomination. Policy and all actions, decisions and judgments should be culturally derived.

  9. "Faith of Our Fathers" -- Lesbian, Gay and Bisexual Teachers' Attitudes towards the Teaching of Religion in Irish Denominational Primary Schools

    Science.gov (United States)

    Fahie, Declan

    2017-01-01

    Owing to a variety of complex historical and socio-cultural factors, the Irish education system remains heavily influenced by denominational mores and values [Ferriter, D. 2012. "Occasions of Sin: Sex & Society in Modern Ireland." London: Profile Books], particularly those of the Roman Catholic Church [O'Toole, B. 2015.…

  10. Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.

    Science.gov (United States)

    Smith, J E

    2012-01-01

    Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes

  11. Estimating incidence of problem drug use using the Horvitz-Thompson estimator - A new approach applied to people who inject drugs in Oslo 1985-2008.

    Science.gov (United States)

    Amundsen, Ellen J; Bretteville-Jensen, Anne L; Kraus, Ludwig

    2016-01-01

    The trend in the number of new problem drug users per year (incidence) is the most important measure for studying the diffusion of problem drug use. Due to sparse data sources and complicated statistical models, estimation of the incidence of problem drug use is challenging. The aim of this study is to widen the palette of available methods and data types for estimating the incidence of problem drug use over time, and for identifying the trends. This study presents a new method of incidence estimation, applied to people who inject drugs (PWID) in Oslo. The method took into account the transition between different phases of drug use progression - active use, temporary cessation, and permanent cessation. The Horvitz-Thompson estimator was applied. Data included 16 cross-sectional samples of problem drug users who reported their onset of injecting drug use. We explored variation in results for selected probable scenarios of parameter variation for disease progression, as well as the stability of the results based on fewer years of cross-sectional samples. The method yielded incidence estimates of problem drug use over time. When applied to people in Oslo who inject drugs, we found a significant reduction of incidence of 63% from 1985 to 2008. This downward trend was also present when the estimates were based on fewer surveys (five) and in the results of sensitivity analysis for likely scenarios of disease progression. This new method, which incorporates temporarily inactive problem drug users, may become a useful tool for estimating the incidence of problem drug use over time. The method may be less data intensive than other methods based on first entry to treatment and may be generalized to other groups of substance users. Further studies on drug use progression would improve the validity of the results. Copyright © 2015 Elsevier B.V. All rights reserved.
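
    A schematic of the Horvitz-Thompson weighting behind such an incidence estimate, with made-up onset years and inclusion probabilities (in the study these would come from a model of drug-use progression and survey coverage):

```python
import numpy as np

# Each respondent reporting onset in year t is weighted by the inverse of
# the probability that someone with that onset was available to be sampled
# (still active, not permanently ceased, alive). Onset years and inclusion
# probabilities here are placeholders, not values from the study.
onset_years = np.array([1990, 1990, 1995, 1995, 1995, 2000, 2000])
p_inclusion = {1990: 0.20, 1995: 0.35, 2000: 0.60}   # assumed
weights = np.array([1.0 / p_inclusion[y] for y in onset_years])
for year in sorted(p_inclusion):
    est = weights[onset_years == year].sum()
    print(f"estimated incidence for onset year {year}: {est:.1f}")
```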

  12. Estimates for mild solutions to semilinear Cauchy problems

    Directory of Open Access Journals (Sweden)

    Kresimir Burazin

    2014-09-01

    The existence (and uniqueness) results on mild solutions of abstract semilinear Cauchy problems in Banach spaces are well known. Following the results of Tartar (2008) and Burazin (2008) in the case of decoupled hyperbolic systems, we give an alternative proof, which enables us to derive an estimate on the mild solution and its time of existence. The nonlinear term in the equation is allowed to be time-dependent. We discuss the optimality of the derived estimate by testing it on three examples: the linear heat equation, the semilinear heat equation that models dynamic deflection of an elastic membrane, and the semilinear Schrödinger equation with time-dependent nonlinearity, which appear in the modelling of numerous physical phenomena.

  13. Using Supervised Deep Learning for Human Age Estimation Problem

    Science.gov (United States)

    Drobnyh, K. A.; Polovinkin, A. N.

    2017-05-01

    Automatic facial age estimation is a challenging task that has attracted much interest in recent years. In this paper, we propose using supervised deep learning features to improve the accuracy of existing age estimation algorithms. Many approaches address the problem; the active appearance model and bio-inspired features are two that have shown the best accuracy. For the experiments we chose the popular, publicly available FG-NET database, which contains 1002 images with a broad variety of light, pose, and expression. The LOPO (leave-one-person-out) method was used to estimate the accuracy. Experiments demonstrated that adding supervised deep learning features improved accuracy for some basic models. For example, adding the features to an active appearance model gave a 4% gain (the error decreased from 4.59 to 4.41).

  14. Global gradient estimates for divergence-type elliptic problems involving general nonlinear operators

    Science.gov (United States)

    Cho, Yumi

    2018-05-01

    We study nonlinear elliptic problems with nonstandard growth and ellipticity related to an N-function. We establish global Calderón-Zygmund estimates of the weak solutions in the framework of Orlicz spaces over bounded non-smooth domains. Moreover, we prove a global regularity result for asymptotically regular problems which are getting close to the regular problems considered, when the gradient variable goes to infinity.

  15. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    J. Lang (Jens); J.G. Verwer (Jan)

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  16. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    Lang, J.; Verwer, J.G.

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  17. Estimates for lower order eigenvalues of a clamped plate problem

    OpenAIRE

    Cheng, Qing-Ming; Huang, Guangyue; Wei, Guoxin

    2009-01-01

    For a bounded domain $\Omega$ in a complete Riemannian manifold $M^n$, we study estimates for lower order eigenvalues of a clamped plate problem. We obtain universal inequalities for lower order eigenvalues. We would like to remark that our results are sharp.

  18. Social Deprivation, Community Cohesion, Denominational Education and Freedom of Choice: A Marxist Perspective on Poverty and Exclusion in the District of Thanet

    Science.gov (United States)

    Welsh, Paul J.

    2008-01-01

    Thanet suffers from severe deprivation, mainly driven by socio-economic factors. Efforts to remediate this through economic regeneration plans have largely been unsuccessful, while a combination of selective and denominational education creates and maintains a gradient of disadvantage that mainly impacts upon already-deprived young people. Some of…

  19. EEG Estimates of Cognitive Workload and Engagement Predict Math Problem Solving Outcomes

    Science.gov (United States)

    Beal, Carole R.; Galan, Federico Cirett

    2012-01-01

    In the present study, the authors focused on the use of electroencephalography (EEG) data about cognitive workload and sustained attention to predict math problem solving outcomes. EEG data were recorded as students solved a series of easy and difficult math problems. Sequences of attention and cognitive workload estimates derived from the EEG…

  20. Inverse problem theory methods for data fitting and model parameter estimation

    CERN Document Server

    Tarantola, A

    2002-01-01

    Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world.The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi

  1. Upper estimates of complexity of algorithms for multi-peg Tower of Hanoi problem

    Directory of Open Access Journals (Sweden)

    Sergey Novikov

    2007-06-01

    We prove explicit upper estimates of the complexity of algorithms for the multi-peg Tower of Hanoi problem with a limited number of disks, for Reve's puzzle, and for the $5$-peg Tower of Hanoi problem with an unrestricted number of disks.
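
    The standard upper bound to which such estimates relate is the Frame-Stewart recursion, sketched below; with 3 pegs it collapses to the classical 2^n - 1, and with 4 pegs it reproduces the presumed-optimal Reve's puzzle counts.

```python
from functools import lru_cache

# Frame-Stewart recursion: move k disks aside using all p pegs, the
# remaining n-k with p-1 pegs, then the k disks back; minimize over k.
# With 3 pegs this reduces to the classical 2**n - 1 moves.
@lru_cache(maxsize=None)
def frame_stewart(n, pegs):
    if n <= 1:
        return n
    if pegs == 3:
        return 2 ** n - 1
    return min(2 * frame_stewart(k, pegs) + frame_stewart(n - k, pegs - 1)
               for k in range(1, n))

print(frame_stewart(10, 3))   # 1023
print(frame_stewart(10, 4))   # 49, the Reve's puzzle value for 10 disks
```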

  2. Genome size estimation: a new methodology

    Science.gov (United States)

    Álvarez-Borrego, Josué; Gallardo-Escárate, Crisitian; Kober, Vitaly; López-Bonilla, Oscar

    2007-03-01

    Recently, within cytogenetic analysis, the evolutionary relations implied in the content of nuclear DNA in plants and animals have received great attention. The first detailed measurements of nuclear DNA content were made in the early 1940s, several years before Watson and Crick proposed the molecular structure of DNA. In the following years Hewson Swift developed the concept of the "C-value" in reference to the haploid DNA content in plants. Later, Mirsky and Ris carried out the first systematic study of genome size in animals, including representatives of the five superclasses of vertebrates as well as some invertebrates. From these preliminary results it became evident that DNA content varies enormously between species and that this variation bears no relation to intuitive notions of organismal complexity. This observation was reaffirmed in later years as studies of genome size accumulated, and the phenomenon came to be called the "C-value paradox". A few years later, with the discovery of non-coding DNA, the paradox was resolved; nevertheless, numerous questions remain open to this day, and such studies are now referred to as the "C-value enigma". In this study, we report a new method for genome size estimation by quantification of fluorescence fading. We measured the fluorescence intensity every 1600 milliseconds in DAPI-stained nuclei. The area under the curve (integral fading) during the fading period was related to genome size.
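
    The quantification step reduces to integrating the sampled fading curve; a sketch with fabricated intensity data and an assumed calibration factor:

```python
import numpy as np

# Intensity sampled every 1600 ms during DAPI fading; the area under the
# curve ("integral fading") is the quantity related to genome size. The
# decay curve and calibration factor below are fabricated.
dt = 1.6                                     # sampling interval, seconds
t = np.arange(0, 60.0, dt)
intensity = 100.0 * np.exp(-t / 15.0)        # fake fading measurements
integral_fading = np.sum(intensity) * dt     # area under the curve
calibration = 0.002                          # assumed pg per unit of area
print(f"integral fading = {integral_fading:.1f} (a.u. * s)")
print(f"estimated genome size ~ {integral_fading * calibration:.2f} pg")
```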

  3. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    Science.gov (United States)

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
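
    The computational core of such solvers is the proximal operator of the mixed norm inside a first-order loop. Below is a plain ISTA sketch with the l21 prox (the paper uses accelerated schemes); the forward operator, problem sizes and regularization weight are synthetic.

```python
import numpy as np

# ISTA for (1/2)||G X - M||_F^2 + lam * sum_i ||X_i||_2, where rows of X
# are sources and columns are time points. The l21 prox zeroes whole rows,
# giving spatially focal, temporally smooth estimates.
rng = np.random.default_rng(3)
n_sensors, n_sources, n_times = 20, 60, 30
G = rng.standard_normal((n_sensors, n_sources))          # forward operator
X_true = np.zeros((n_sources, n_times))
X_true[5] = np.sin(np.linspace(0, np.pi, n_times))       # two active rows
X_true[17] = 0.5
M = G @ X_true + 0.01 * rng.standard_normal((n_sensors, n_times))

def prox_l21(X, tau):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1 - tau / np.maximum(norms, 1e-12), 0.0)

lam = 2.0
step = 1.0 / np.linalg.norm(G, 2) ** 2                   # 1/Lipschitz
X = np.zeros_like(X_true)
for _ in range(300):
    X = prox_l21(X - step * G.T @ (G @ X - M), lam * step)
print("active rows:", np.nonzero(np.linalg.norm(X, axis=1) > 1e-8)[0])
```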

  4. Boundary-value problems with integral conditions for a system of Lame equations in the space of almost periodic functions

    Directory of Open Access Journals (Sweden)

    Volodymyr S. Il'kiv

    2016-11-01

    We study a problem with integral boundary conditions in the time coordinate for a system of Lame equations of dynamic elasticity theory of arbitrary dimension. We find necessary and sufficient conditions for the existence and uniqueness of a solution in the class of almost periodic functions in the spatial variables. To overcome the problem of small denominators arising in the construction of solutions, we use the metric approach.

  5. Cost Estimates and Investment Decisions

    International Nuclear Information System (INIS)

    Emhjellen, Kjetil; Emhjellen Magne; Osmundsen, Petter

    2001-08-01

    When evaluating new investment projects, oil companies traditionally use the discounted cashflow method. This method requires expected cashflows in the numerator and a risk adjusted required rate of return in the denominator in order to calculate net present value. The capital expenditure (CAPEX) of a project is one of the major cashflows used to calculate net present value. Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. In the oil industry and many other industries, it is common practice to report a CAPEX that is the estimated 50/50 (median) CAPEX instead of the estimated expected (expected value) CAPEX. In this article we demonstrate how the practice of using a 50/50 (median) CAPEX, when the cost distributions are asymmetric, causes project valuation errors and therefore may lead to wrong investment decisions with acceptance of projects that have negative net present values. (author)
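
    A worked numerical example of the bias described above, assuming an illustrative lognormal CAPEX distribution and an invented present value of revenues:

```python
import numpy as np

# Right-skewed (lognormal) CAPEX: the 50/50 (median) figure understates the
# expected CAPEX, so NPV computed from it is biased upward. All figures are
# illustrative.
mu, sigma = np.log(100.0), 0.4
median_capex = np.exp(mu)                       # 100.0
expected_capex = np.exp(mu + sigma**2 / 2)      # ~108.3
pv_revenues = 105.0                             # assumed PV of net revenues
print(f"NPV using median CAPEX:   {pv_revenues - median_capex:+.1f}")
print(f"NPV using expected CAPEX: {pv_revenues - expected_capex:+.1f}")
# The project looks positive on the median figure but its true expected
# NPV is negative.
```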

  6. Between generational denominations: what the academic narratives teach us about digital children and youth

    Directory of Open Access Journals (Sweden)

    Sandro Faccin Bortolazzo

    2017-06-01

    From the emergence of many generational denominations and the intense relationship of children and youth with digital artifacts (tablets, smartphones, among others), this article - registered under the theoretical framework of Cultural Studies in Education - aims to investigate in what context and under which conditions the production of digital children and young people has become possible. The study makes three movements: an overview of the generation concept; a mapping of academic narratives; and an analysis of how these narratives are implicated in producing a type of generation and a "digital" education. The theoretical framework draws on authors such as Feixa and Leccardi, Tapscott, Prensky, Carr and Buckingham. The narratives also point out the benefits and dangers of technological immersion, which has pervaded calls for the use of technological devices in school spaces.

  7. Mean value estimates of the error terms of Lehmer problem

    Indian Academy of Sciences (India)

    Mean value estimates of the error terms of the Lehmer problem. DONGMEI REN and YAMING … For further properties of N(a, p) in [6], he studied the mean square value of the error term $E(a,p) = N(a,p) - \frac{1}{2}(p-1)$ …

  8. Composing problem solvers for simulation experimentation: a case study on steady state estimation.

    Science.gov (United States)

    Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M

    2014-01-01

    Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.

  9. A meta-regression analysis of 41 Australian problem gambling prevalence estimates and their relationship to total spending on electronic gaming machines

    Directory of Open Access Journals (Sweden)

    Francis Markham

    2017-05-01

    Background: Many jurisdictions regularly conduct surveys to estimate the prevalence of problem gambling in their adult populations. However, the comparison of such estimates is problematic due to methodological variations between studies. Total consumption theory suggests that an association between mean electronic gaming machine (EGM) and casino gambling losses and problem gambling prevalence estimates may exist. If this is the case, then changes in EGM losses may be used as a proxy indicator for changes in problem gambling prevalence. To test for this association this study examines the relationship between aggregated losses on electronic gaming machines (EGMs) and problem gambling prevalence estimates for Australian states and territories between 1994 and 2016. Methods: A Bayesian meta-regression analysis of 41 cross-sectional problem gambling prevalence estimates was undertaken using EGM gambling losses, year of survey and methodological variations as predictor variables. General population studies of adults in Australian states and territories published before 1 July 2016 were considered in scope. 41 studies were identified, with a total of 267,367 participants. Problem gambling prevalence, moderate-risk problem gambling prevalence, problem gambling screen, administration mode and frequency threshold were extracted from surveys. Administrative data on EGM and casino gambling losses were extracted from government reports and expressed as the proportion of household disposable income lost. Results: Money lost on EGMs is correlated with problem gambling prevalence. An increase of 1% of household disposable income lost on EGMs and in casinos was associated with problem gambling prevalence estimates that were 1.33 times higher [95% credible interval 1.04, 1.71]. There was no clear association between EGM losses and moderate-risk problem gambling prevalence estimates. Moderate-risk problem gambling prevalence estimates were not explained by

  10. A meta-regression analysis of 41 Australian problem gambling prevalence estimates and their relationship to total spending on electronic gaming machines.

    Science.gov (United States)

    Markham, Francis; Young, Martin; Doran, Bruce; Sugden, Mark

    2017-05-23

    Many jurisdictions regularly conduct surveys to estimate the prevalence of problem gambling in their adult populations. However, the comparison of such estimates is problematic due to methodological variations between studies. Total consumption theory suggests that an association between mean electronic gaming machine (EGM) and casino gambling losses and problem gambling prevalence estimates may exist. If this is the case, then changes in EGM losses may be used as a proxy indicator for changes in problem gambling prevalence. To test for this association this study examines the relationship between aggregated losses on electronic gaming machines (EGMs) and problem gambling prevalence estimates for Australian states and territories between 1994 and 2016. A Bayesian meta-regression analysis of 41 cross-sectional problem gambling prevalence estimates was undertaken using EGM gambling losses, year of survey and methodological variations as predictor variables. General population studies of adults in Australian states and territories published before 1 July 2016 were considered in scope. 41 studies were identified, with a total of 267,367 participants. Problem gambling prevalence, moderate-risk problem gambling prevalence, problem gambling screen, administration mode and frequency threshold were extracted from surveys. Administrative data on EGM and casino gambling losses were extracted from government reports and expressed as the proportion of household disposable income lost. Money lost on EGMs is correlated with problem gambling prevalence. An increase of 1% of household disposable income lost on EGMs and in casinos was associated with problem gambling prevalence estimates that were 1.33 times higher [95% credible interval 1.04, 1.71]. There was no clear association between EGM losses and moderate-risk problem gambling prevalence estimates. Moderate-risk problem gambling prevalence estimates were not explained by the models (I² ≥ 0.97; R² ≤ 0.01). The

  11. PN solutions for the slowing-down and the cell calculation problems in plane geometry

    International Nuclear Information System (INIS)

    Caldeira, Alexandre David

    1999-01-01

    In this work, PN solutions for the slowing-down and cell problems in slab geometry are developed. To highlight the main contributions of this development, one can mention: the new particular solution developed for the PN method applied to the slowing-down problem in the multigroup model, originating a new class of polynomials named generalized Chandrasekhar polynomials; the treatment of a specific situation, known as a degeneracy, arising from a particularity in the group constants; and the first application of the PN method, for arbitrary N, in criticality calculations at the cell level reported in the literature. (author)

  12. Communicating Treatment Risk Reduction to People With Low Numeracy Skills: A Cross-Cultural Comparison

    Science.gov (United States)

    2009-01-01

    Objectives. We sought to address denominator neglect (i.e. the focus on the number of treated and nontreated patients who died, without sufficiently considering the overall numbers of patients) in estimates of treatment risk reduction, and analyzed whether icon arrays aid comprehension. Methods. We performed a survey of probabilistic, national samples in the United States and Germany in July and August of 2008. Participants received scenarios involving equally effective treatments but differing in the overall number of treated and nontreated patients. In some conditions, the number who received a treatment equaled the number who did not; in others the number was smaller or larger. Some participants received icon arrays. Results. Participants—particularly those with low numeracy skills—showed denominator neglect in treatment risk reduction perceptions. Icon arrays were an effective method for eliminating denominator neglect. We found cross-cultural differences that are important in light of the countries' different medical systems. Conclusions. Problems understanding numerical information often reside not in the mind but in the problem's representation. These findings suggest suitable ways to communicate quantitative medical data. PMID:19833983
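
    A two-line worked example of denominator neglect with invented counts, where the raw numerators point one way and the rates the other:

```python
# Raw death counts (numerators) versus rates: numbers are illustrative.
groups = {
    "treated (n=1000)": (80, 1000),     # deaths, group size
    "untreated (n=100)": (10, 100),
}
for name, (deaths, total) in groups.items():
    print(f"{name}: {deaths} deaths -> mortality {deaths / total:.0%}")
# Comparing 80 vs 10 deaths suggests treatment is worse; the rates (8% vs
# 10%) show it actually cuts mortality by 20%.
```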

  13. Estimation of physical properties of laminated composites via the method of inverse vibration problem

    Energy Technology Data Exchange (ETDEWEB)

    Balci, Murat [Dept. of Mechanical Engineering, Bayburt University, Bayburt (Turkey)]; Gundogdu, Omer [Dept. of Mechanical Engineering, Ataturk University, Erzurum (Turkey)]

    2017-01-15

    In this study, estimation of some physical properties of a laminated composite plate was conducted via the inverse vibration problem. The laminated composite plate was modelled and simulated in ANSYS to obtain vibration responses for different length-to-thickness ratios. Furthermore, a numerical finite element model was developed for the laminated composite utilizing the Kirchhoff plate theory and programmed in MATLAB for simulations. By optimizing the difference between these two vibration responses, the inverse vibration problem was solved to obtain some of the physical properties of the laminated composite using genetic algorithms. The estimated parameters are compared with the theoretical results, and a very good correspondence was observed.

  14. Estimation of physical properties of laminated composites via the method of inverse vibration problem

    International Nuclear Information System (INIS)

    Balci, Murat; Gundogdu, Omer

    2017-01-01

    In this study, estimation of some physical properties of a laminated composite plate was conducted via the inverse vibration problem. The laminated composite plate was modelled and simulated in ANSYS to obtain vibration responses for different length-to-thickness ratios. Furthermore, a numerical finite element model was developed for the laminated composite utilizing the Kirchhoff plate theory and programmed in MATLAB for simulations. By optimizing the difference between these two vibration responses, the inverse vibration problem was solved to obtain some of the physical properties of the laminated composite using genetic algorithms. The estimated parameters are compared with the theoretical results, and a very good correspondence was observed.

  15. Journal Impact Factor: Do the Numerator and Denominator Need Correction?

    Science.gov (United States)

    Liu, Xue-Li; Gai, Shuang-Shuang; Zhou, Jing

    2016-01-01

    To correct the incongruence of document types between the numerator and denominator in the traditional impact factor (IF), we make a corresponding adjustment to its formula and present five corrective IFs: IFTotal/Total, IFTotal/AREL, IFAR/AR, IFAREL/AR, and IFAREL/AREL. Based on a survey of researchers in the fields of ophthalmology and mathematics, we obtained the real impact ranking of sample journals in the minds of peer experts. The correlations between various IFs and questionnaire score were analyzed to verify their journal evaluation effects. The results show that it is scientific and reasonable to use five corrective IFs for journal evaluation for both ophthalmology and mathematics. For ophthalmology, the journal evaluation effects of the five corrective IFs are superior than those of traditional IF: the corrective effect of IFAR/AR is the best, IFAREL/AR is better than IFTotal/Total, followed by IFTotal/AREL, and IFAREL/AREL. For mathematics, the journal evaluation effect of traditional IF is superior than those of the five corrective IFs: the corrective effect of IFTotal/Total is best, IFAREL/AR is better than IFTotal/AREL and IFAREL/AREL, and the corrective effect of IFAR/AR is the worst. In conclusion, not all disciplinary journal IF need correction. The results in the current paper show that to correct the IF of ophthalmologic journals may be valuable, but it seems to be meaningless for mathematic journals. PMID:26977697

  16. Estimation of photosynthesis in cyanobacteria by pulse-amplitude modulation chlorophyll fluorescence: problems and solutions.

    Science.gov (United States)

    Ogawa, Takako; Misumi, Masahiro; Sonoike, Kintake

    2017-09-01

    Cyanobacteria are photosynthetic prokaryotes and are widely used as model organisms for photosynthetic research. Partly due to their prokaryotic nature, however, estimation of photosynthesis by chlorophyll fluorescence measurements is sometimes problematic in cyanobacteria. For example, the plastoquinone pool is reduced in dark-acclimated samples of many cyanobacterial species, so the conventional protocol developed for land plants cannot be directly applied to cyanobacteria. Even for the estimation of the simplest chlorophyll fluorescence parameter, Fv/Fm, an additional protocol such as the addition of DCMU or illumination with weak blue light is necessary. In this review, those problems in the measurement of chlorophyll fluorescence in cyanobacteria are introduced, and solutions to those problems are given.

  17. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh

    2014-06-01

    We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
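
    In the spirit of the described global update (a smoothing step followed by a thresholding/labeling step), here is a loose Python reconstruction on a synthetic three-region image; it is not the authors' Matlab implementation, and all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Alternate between (i) smoothing each label's data-fidelity map and
# (ii) assigning each pixel the label with the smallest smoothed score,
# then refitting the per-region means. Synthetic three-band image.
rng = np.random.default_rng(4)
img = np.concatenate([np.full((32, 96), v) for v in (0.0, 0.5, 1.0)])
img += 0.1 * rng.standard_normal(img.shape)
means = np.array([0.1, 0.4, 0.9])                       # initial parameters
for _ in range(10):
    scores = (img[None] - means[:, None, None]) ** 2    # data term per label
    smoothed = np.stack([gaussian_filter(s, sigma=2.0) for s in scores])
    labels = smoothed.argmin(axis=0)                    # "threshold" step
    means = np.array([img[labels == k].mean() for k in range(3)])
print("recovered region means:", np.round(means, 2))
```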

  18. Predictors of Problem Gambling in the U.S.

    Science.gov (United States)

    Welte, John W; Barnes, Grace M; Tidwell, Marie-Cecile O; Wieczorek, William F

    2017-06-01

    In this article we examine data from a national U.S. adult survey of gambling to determine correlates of problem gambling and discuss them in light of theories of the etiology of problem gambling. These include theories that focus on personality traits, irrational beliefs, anti-social tendencies, neighborhood influences and availability of gambling. Results show that males, persons in the 31-40 age range, blacks, and the least educated had the highest average problem gambling symptoms. Adults who lived in disadvantaged neighborhoods also had the most problem gambling symptoms. Those who attended religious services most often had the fewest problem gambling symptoms, regardless of religious denomination. Respondents who reported that it was most convenient for them to gamble had the highest average problem gambling symptoms, compared to those for whom gambling was less convenient. Likewise, adults with the personality traits of impulsiveness and depression had more problem gambling symptoms than those less impulsive or depressed. Respondents who had friends who approve of gambling had more problem gambling symptoms than those whose friends did not approve of gambling. The results for the demographic variables as well as for impulsiveness and religious attendance are consistent with an anti-social/impulsivist pathway to problem gambling. The results for depression are consistent with an emotionally vulnerable pathway to problem gambling.

  19. Robust Wavelet Estimation to Eliminate Simultaneously the Effects of Boundary Problems, Outliers, and Correlated Noise

    Directory of Open Access Journals (Sweden)

    Alsaidi M. Altaher

    2012-01-01

    Classical wavelet thresholding methods suffer from boundary problems caused by the application of wavelet transformations to a finite signal. As a result, large bias at the edges and artificial wiggles occur when the classical boundary assumptions are not satisfied. Although polynomial wavelet regression and local polynomial wavelet regression effectively reduce the risk of this problem, the estimates from these two methods can be easily affected by the presence of correlated noise and outliers, giving inaccurate estimates. This paper introduces two robust methods in which the effects of boundary problems, outliers, and correlated noise are simultaneously taken into account. The proposed methods combine a thresholding estimator with either a local polynomial model or a polynomial model, using the generalized least squares method instead of the ordinary one. A preliminary step that removes outlying observations through a statistical function is considered as well. The practical performance of the proposed methods has been evaluated through simulation experiments and real data examples. The results are strong evidence that the proposed methods are extremely effective in correcting boundary bias and eliminating the effects of outliers and correlated noise.

  20. Cost estimation for solid waste management in industrialising regions – Precedents, problems and prospects

    International Nuclear Information System (INIS)

    Parthan, Shantha R.; Milke, Mark W.; Wilson, David C.; Cocks, John H.

    2012-01-01

    Highlights: ► We review cost estimation approaches for solid waste management. ► The unit cost method and benchmarking techniques are used in industrialising regions (IR). ► Variety in scope, quality and stakeholders makes cost estimation challenging in IR. ► Integrating waste flow and cost models using cost functions would improve cost planning. - Abstract: The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate SWM costs can broadly be classified into three categories – the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approaches to estimate their SWM costs. Cost models, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges one might face in doing so are a lack of cost data and the poor quality of the data that do exist. There are practical benefits to planners in IR, where solid waste problems are critical and budgets are limited.
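
    A minimal illustration of the unit cost method named above, with hypothetical unit costs and tonnage:

```python
# Total cost built up from quantity * unit-cost pairs; all figures are
# hypothetical.
unit_costs = {          # USD per tonne, assumed
    "collection": 25.0,
    "transport": 10.0,
    "landfill": 8.0,
}
tonnes_per_year = 50_000
for stage, c in unit_costs.items():
    print(f"{stage:>10}: ${c * tonnes_per_year:,.0f}/year")
total = sum(unit_costs.values()) * tonnes_per_year
print(f"{'total':>10}: ${total:,.0f}/year")
```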

  1. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    Science.gov (United States)

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    This paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization problem, with a relative mean error of 4% on cost function evaluation.

  2. On Several Fundamental Problems of Optimization, Estimation, and Scheduling in Wireless Communications

    Science.gov (United States)

    Gao, Qian

    compared with the conventional decoupled system with the same spectrum efficiency to demonstrate the power efficiency. Crucial lighting requirements are included as optimization constraints. To control non-linear distortion, the optical peak-to-average-power ratio (PAPR) of LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve lower BER than counterparts applying zero-forcing (ZF) or linear minimum-mean-squared-error (LMMSE) based post-equalizers. Besides, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into a problem of two-phase channel estimation in a relayed wireless network. The channel estimates in every phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 could corrupt the estimate of the source-to-relay (StR) channel in phase 2, making it erroneous. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the length of source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that estimation errors are balanced. Analysis shows how the ABMSE of the StD channel estimation varies with the lengths of relay training and source training slots, the relay amplification gain, and the channel prior information respectively. The last part deals with a transmission scheduling problem in an uplink multiple-input-multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as a multiple access scheme and pseudo-random codes are employed for different users. We consider a heavy traffic scenario, in which each user always has packets to transmit in the scheduled time slots. If the relay is scheduled for transmission together with users, then it operates in a full

  3. Multimodal Analysis of Estimated and Observed Social Competence in Preschoolers With/Without Behavior Problems

    Directory of Open Access Journals (Sweden)

    Talita Pereira Dias

    2013-05-01

    Social skills compete with behavior problems, and the combination of these aspects may cause differences in social competence. This study was aimed at assessing the differences and similarities in the social competence of 26 preschoolers resulting from: (1) the groups to which they belonged, one with social skills and three with behavior problems (internalizing, externalizing and mixed); (2) the types of assessment, considering the estimates of mothers and teachers, as well as direct observation in a structured situation; and (3) structured situations as demands for five categories of social skills. Children's performance in each situation was assessed by judges and estimated by mothers and teachers. There was a similarity between the social competence estimated by mothers and teachers and the performance observed. Only the teachers distinguished the groups (higher social competence in the group with social skills and lower in the internalizing and mixed groups). Assertiveness demands differentiated the groups. The methodological aspects were discussed, as well as the clinical and educational potential of the structured situations to promote social skills.

  4. An inverse hyperbolic heat conduction problem in estimating surface heat flux by the conjugate gradient method

    International Nuclear Information System (INIS)

    Huang, C.-H.; Wu, H.-H.

    2006-01-01

In the present study an inverse hyperbolic heat conduction problem is solved by the conjugate gradient method (CGM) to estimate the unknown boundary heat flux from boundary temperature measurements. The inverse solutions are justified through numerical experiments in which three different heat flux distributions are determined. Results show that the inverse solutions can always be obtained with arbitrary initial guesses of the boundary heat flux. Moreover, the drawbacks of a previous study of this similar inverse problem, namely that (1) the inverse solution has phase error and (2) the inverse solution is sensitive to measurement error, are avoided in the present algorithm. Finally, it is concluded that accurate boundary heat flux can be estimated in this study.

  5. Solution of axisymmetric transient inverse heat conduction problems using parameter estimation and multi block methods

    International Nuclear Information System (INIS)

    Azimi, A.; Hannani, S.K.; Farhanieh, B.

    2005-01-01

In this article, a comparison between two iterative inverse techniques to solve simultaneously two unknown functions of axisymmetric transient inverse heat conduction problems in semi-complex geometries is presented. A multi-block structured grid together with blocked-interface nodes is implemented for geometric decomposition of the physical domain. The numerical scheme for solution of the transient heat conduction equation is the finite element method with a frontal technique to solve the algebraic system of discrete equations. The inverse heat conduction problem involves simultaneous estimation of an unknown time-varying heat generation and a time-space varying boundary condition. Two parameter-estimation techniques are considered: the Levenberg-Marquardt scheme and the conjugate gradient method with an adjoint problem. Numerically computed exact and noisy data are used for the measured transient temperature data needed in the inverse solution. The results of the present study for a configuration including two joined disks with different heights are compared to those of the exact heat source and temperature boundary condition, and show good agreement. (author)

  6. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    DEFF Research Database (Denmark)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer Moesgaard

    2016-01-01

State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible… We demonstrate that estimation problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter…

  7. PMU Placement Based on Heuristic Methods, when Solving the Problem of EPS State Estimation

    OpenAIRE

    I. N. Kolosok; E. S. Korkina; A. M. Glazunova

    2014-01-01

    Creation of satellite communication systems gave rise to a new generation of measurement equipment – Phasor Measurement Unit (PMU). Integrated into the measurement system WAMS, the PMU sensors provide a real picture of state of energy power system (EPS). The issues of PMU placement when solving the problem of EPS state estimation (SE) are discussed in many papers. PMU placement is a complex combinatorial problem, and there is not any analytical function to optimize its variables. Therefore,...

  8. Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging

    Science.gov (United States)

    Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.

    2008-03-01

We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/(lipid + water). In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficients of variation of the denominator distribution (c.v.: 0–0.20) and correlation coefficients between the numerator and denominator (ρ: 0–0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027–0.063, ρ: 0.50–0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6%, vs. +43% for an approach that simply thresholds the uncorrected ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 ± 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.

9. The Analysis on the Small Denomination RMB Circulation under the Perspective of Cash Cycle Theory in Shaanxi Province

    Institute of Scientific and Technical Information of China (English)

    宋亮

    2014-01-01

The value of small denomination RMB is low, its time in circulation exceeds the stipulated period, and its cleanliness remains at a low level, which has long been the bottleneck restricting cash service quality. From the perspective of the cash cycle theory, this paper analyzes the circulation characteristics of small denomination RMB in Shaanxi Province, points out the weak links in its management, and puts forward countermeasures for raising the service management level of small denomination RMB circulation in Shaanxi Province.

  10. Efficacy of calf:cow ratios for estimating calf production of arctic caribou

    Science.gov (United States)

    Cameron, R.D.; Griffith, B.; Parrett, L.S.; White, R.G.

    2013-01-01

    Caribou (Rangifer tarandus granti) calf:cow ratios (CCR) computed from composition counts obtained on arctic calving grounds are biased estimators of net calf production (NCP, the product of parturition rate and early calf survival) for sexually-mature females. Sexually-immature 2-year-old females, which are indistinguishable from sexually-mature females without calves, are included in the denominator, thereby biasing the calculated ratio low. This underestimate increases with the proportion of 2-year-old females in the population. We estimated the magnitude of this error with deterministic simulations under three scenarios of calf and yearling annual survival (respectively: low, 60 and 70%; medium, 70 and 80%; high, 80 and 90%) for five levels of unbiased NCP: 20, 40, 60, 80, and 100%. We assumed a survival rate of 90% for both 2-year-old and mature females. For each NCP, we computed numbers of 2-year-old females surviving annually and increased the denominator of CCR accordingly. We then calculated a series of hypothetical “observed” CCRs, which stabilized during the last 6 years of the simulations, and documented the degree to which each 6-year mean CCR differed from the corresponding NCP. For the three calf and yearling survival scenarios, proportional underestimates of NCP by CCR ranged 0.046–0.156, 0.058–0.187, and 0.071–0.216, respectively. Unfortunately, because parturition and survival rates are typically variable (i.e., age distribution is unstable), the magnitude of the error is not predictable without substantial supporting information. We recommend maintaining a sufficient sample of known-age radiocollared females in each herd and implementing a regular relocation schedule during the calving period to obtain unbiased estimates of both parturition rate and NCP.
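
    The bias mechanism described above can be reproduced with a short deterministic simulation. The sketch below is our illustrative reading of it, in Python, assuming a 50:50 calf sex ratio, first breeding at age 3, and the survival rates quoted; it is not the authors' code, and the exact figures depend on these assumptions.

    def ccr_bias(ncp, s_calf, s_yearling, s_adult=0.90, years=40):
        """Proportional underestimate of NCP by the calf:cow ratio when
        2-year-old females are pooled into the denominator."""
        mature, two_yr, yearling_f, calf_f = 100.0, 0.0, 0.0, 0.0
        for _ in range(years):
            calf_f_new = 0.5 * ncp * mature        # female calves (50:50 sex ratio)
            yearling_new = calf_f * s_calf         # calves surviving to yearlings
            two_yr_new = yearling_f * s_yearling   # yearlings surviving to age 2
            mature = s_adult * (mature + two_yr)   # 2-year-olds recruit to mature
            calf_f, yearling_f, two_yr = calf_f_new, yearling_new, two_yr_new
        ccr = ncp * mature / (mature + two_yr)     # calves per "cow" counted
        return (ncp - ccr) / ncp

    for ncp in (0.2, 0.4, 0.6, 0.8, 1.0):          # medium survival scenario
        print(ncp, round(ccr_bias(ncp, s_calf=0.70, s_yearling=0.80), 3))

    Under these assumptions the medium scenario (calf 70%, yearling 80%) reproduces underestimates close to the 0.058-0.187 range quoted above.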

  11. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    Science.gov (United States)

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only

  12. Verification of functional a posteriori error estimates for obstacle problem in 1D

    Czech Academy of Sciences Publication Activity Database

    Harasim, P.; Valdman, Jan

    2013-01-01

Roč. 49, č. 5 (2013), s. 738-754 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords: obstacle problem * a posteriori error estimate * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2014/MTR/valdman-0424082.pdf

  13. Exact Quantization of the Even-Denominator Fractional Quantum Hall State at ν =5/2 Landau Level Filling Factor

    International Nuclear Information System (INIS)

    Pan, W.; Tsui, D.C.; Pan, W.; Xia, J.; Shvarts, V.; Adams, D.E.; Xia, J.; Shvarts, V.; Adams, D.E.; Stormer, H.L.; Stormer, H.L.; Pfeiffer, L.N.; Baldwin, K.W.; West, K.W.

    1999-01-01

We report ultralow temperature experiments on the obscure fractional quantum Hall effect at Landau level filling factor ν = 5/2 in a very high-mobility specimen of μ = 1.7×10⁷ cm²/V s. We achieve an electron temperature as low as ∼4 mK, where we observe vanishing R_xx and, for the first time, a quantized Hall resistance, R_xy = h/[(5/2)e²] to within 2 ppm. R_xy at the neighboring odd-denominator states ν = 7/3 and 8/3 is also quantized. The temperature dependences of the R_xx minima at these fractional fillings yield activation energy gaps Δ_5/2 = 0.11 K, Δ_7/3 = 0.10 K, and Δ_8/3 = 0.055 K. © 1999 The American Physical Society

  14. FUNDAMENTAL MATRIX OF LINEAR CONTINUOUS SYSTEM IN THE PROBLEM OF ESTIMATING ITS TRANSPORT DELAY

    Directory of Open Access Journals (Sweden)

    N. A. Dudarenko

    2014-09-01

The paper deals with the problem of quantitative estimation of the transport delay of linear continuous systems. The main result is obtained by means of the fundamental matrix of solutions of the linear differential equations, specified in normal Cauchy form, for the cases of SISO and MIMO systems. The fundamental matrix has a dual property: the weight function of the system can be formed as a free motion of the system, generated by a vector of initial conditions that coincides with the input matrix of the system being researched. Using this property of the fundamental matrix makes it possible to estimate the transport delay of a linear continuous system without the use of a derivation procedure in the hardware environment and without forming an exogenous Dirac delta function. The paper is illustrated by examples. The obtained results also make it possible to model pure delay links using a consecutive chain of first-order aperiodic links with equal time constants. Modeling results have proved the correctness of the obtained computations. Knowledge of the transport delay can be used when configuring multi-component technological complexes and in diagnosing their possible functional degeneration.
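
    The dual property invoked here, that the weight (impulse) function equals a free motion started from the input matrix, is easy to verify numerically. The sketch below is our Python/SciPy illustration on the plant the abstract itself suggests, a chain of identical first-order lags approximating a pure delay; the chain length and time constant are arbitrary choices.

    import numpy as np
    from scipy.linalg import expm

    # Chain of n identical first-order lags with time constant T = tau/n, the
    # structure suggested above for approximating a pure delay tau.
    n, tau = 8, 1.0
    T = tau / n
    A = np.diag([-1.0 / T] * n) + np.diag([1.0 / T] * (n - 1), k=-1)
    B = np.zeros((n, 1)); B[0, 0] = 1.0 / T
    C = np.zeros((1, n)); C[0, -1] = 1.0

    # Duality: the weight function h(t) = C expm(A t) B is exactly the output
    # of the free motion started from the initial state x(0) = B.
    for t in (0.5, 1.0, 1.5):
        h_impulse = (C @ expm(A * t) @ B).item()
        x_free = expm(A * t) @ B                   # free motion from x0 = B
        print(t, h_impulse, (C @ x_free).item())

    Both printed values coincide at every t, so the weight function is obtained from a free motion alone, with no hardware differentiation and no Dirac input.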

  15. Verification of functional a posteriori error estimates for obstacle problem in 2D

    Czech Academy of Sciences Publication Activity Database

    Harasim, P.; Valdman, Jan

    2014-01-01

Roč. 50, č. 6 (2014), s. 978-1002 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords: obstacle problem * a posteriori error estimate * finite element method * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/MTR/valdman-0441661.pdf

  16. On the problem of negative dissipation of fast waves at the fundamental ion cyclotron resonance and the accuracy of absorption estimates

    International Nuclear Information System (INIS)

    Castejon, F.; Pavlov, S.S.; Swanson, D. G.

    2002-01-01

Negative dissipation appears when ion cyclotron resonance (ICR) heating at the first harmonic in a thermal plasma is estimated using some numerical schemes. The causes of this problem are investigated analytically and numerically in this work, showing that it is connected with the accuracy with which the absorption coefficient at the first ICR harmonic is estimated. Corrections for the absorption estimate are presented for the case of quasi-perpendicular propagation of the fast wave in this frequency range. A method to solve the problem of negative dissipation is presented and, as a result, an enhancement of absorption is found for reactor-size plasmas.

  17. The Expected Loss in the Discretization of Multistage Stochastic Programming Problems - Estimation and Convergence Rate

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2009-01-01

Roč. 165, č. 1 (2009), s. 29-45 ISSN 0254-5330 R&D Projects: GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: multistage stochastic programming problems * approximation * discretization * Monte Carlo Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.961, year: 2009 http://library.utia.cas.cz/separaty/2008/E/smid-the expected loss in the discretization of multistage stochastic programming problems - estimation and convergence rate.pdf

  18. The problem of multicollinearity in horizontal solar radiation estimation models and a new model for Turkey

    International Nuclear Information System (INIS)

    Demirhan, Haydar

    2014-01-01

Highlights: • Impacts of multicollinearity on solar radiation estimation models are discussed. • Accuracy of existing empirical models for Turkey is evaluated. • A new non-linear model for the estimation of average daily horizontal global solar radiation is proposed. • Estimation and prediction performance of the proposed and existing models are compared. - Abstract: Due to the considerable decrease in energy resources and increasing energy demand, solar energy is an appealing field of investment and research. There are various modelling strategies and particular models for the estimation of the amount of solar radiation reaching a particular point on the Earth. In this article, global solar radiation estimation models are taken into account. To emphasize the severity of the multicollinearity problem in solar radiation estimation models, some of the models developed for Turkey are revisited. It is observed that these models have been identified as accurate under certain multicollinearity structures, and when the multicollinearity is eliminated, the accuracy of these models is controversial. Thus, a reliable model that does not suffer from multicollinearity and gives precise estimates of global solar radiation for the whole region of Turkey is necessary. A new nonlinear model for the estimation of average daily horizontal solar radiation is proposed, making use of the genetic programming technique. There is no multicollinearity problem in the new model, and its estimation accuracy is better than that of the revisited models in terms of numerous statistical performance measures. According to the proposed model, temperature, precipitation, altitude, longitude, and monthly average daily extraterrestrial horizontal solar radiation have a significant effect on the average daily global horizontal solar radiation. Relative humidity and soil temperature are not included in the model due to their high correlation with precipitation and temperature, respectively. While altitude has

  19. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.

  20. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.; Claudel, Christian G.

    2012-01-01

This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill-Whitham-Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time, given incomplete and inaccurate traffic data, as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.

  2. The Problems of Multiple Feedback Estimation.

    Science.gov (United States)

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…

  3. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    Science.gov (United States)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.

  4. Closed-form kinetic parameter estimation solution to the truncated data problem

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Kadrmas, Dan J; Gullberg, Grant T

    2010-01-01

    In a dedicated cardiac single photon emission computed tomography (SPECT) system, the detectors are focused on the heart and the background is truncated in the projections. Reconstruction using truncated data results in biased images, leading to inaccurate kinetic parameter estimates. This paper has developed a closed-form kinetic parameter estimation solution to the dynamic emission imaging problem. This solution is insensitive to the bias in the reconstructed images that is caused by the projection data truncation. This paper introduces two new ideas: (1) it includes background bias as an additional parameter to estimate, and (2) it presents a closed-form solution for compartment models. The method is based on the following two assumptions: (i) the amount of the bias is directly proportional to the truncated activities in the projection data, and (ii) the background concentration is directly proportional to the concentration in the myocardium. In other words, the method assumes that the image slice contains only the heart and the background, without other organs, that the heart is not truncated, and that the background radioactivity is directly proportional to the radioactivity in the blood pool. As long as the background activity can be modeled, the proposed method is applicable regardless of the number of compartments in the model. For simplicity, the proposed method is presented and verified using a single compartment model with computer simulations using both noiseless and noisy projections.
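
    The record does not reproduce the closed-form solution itself, so the sketch below only illustrates its first idea, treating the truncation-induced background bias as one more parameter estimated jointly with the kinetics. The one-compartment model, input function, background shape and grid-plus-least-squares procedure are all our assumptions, written in Python/NumPy.

    import numpy as np

    # Toy one-compartment fit where the truncation-induced background bias b
    # is estimated jointly with the kinetics (all shapes/values are assumptions):
    #   y(t) = K1 * (exp(-k2 t) conv Cp)(t) + b * bg(t) + noise
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 60.0, 121)
    dt = t[1] - t[0]
    Cp = t * np.exp(-t / 4.0)                                  # assumed input function
    bg = np.convolve(np.exp(-0.05 * t), Cp)[: t.size] * dt     # assumed background shape

    def basis(k2):
        """Tissue response for a given k2 (discrete convolution with Cp)."""
        return np.convolve(np.exp(-k2 * t), Cp)[: t.size] * dt

    K1_true, k2_true, b_true = 0.8, 0.15, 0.3
    y = K1_true * basis(k2_true) + b_true * bg + rng.normal(0.0, 0.02, t.size)

    # Grid over the nonlinear rate k2; K1 and b are solved in closed form at
    # each grid point by linear least squares, keeping the best residual.
    best = None
    for k2 in np.linspace(0.01, 0.5, 200):
        A = np.column_stack([basis(k2), bg])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((A @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, k2, coef)
    print("k2 ~", round(best[1], 3), "; K1, b ~", np.round(best[2], 3))

    The design choice mirrors the abstract: the bias enters linearly, so once the nonlinear rate is fixed, both the kinetic gain and the bias coefficient follow from a single linear solve.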

  5. Empirical Estimates in Optimization Problems: Survey with Special Regard to Heavy Tails and Dependent Data

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta

    2012-01-01

Roč. 19, č. 30 (2012), s. 92-111 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150; GA ČR GAP402/10/1610 Institutional support: RVO:67985556 Keywords: Stochastic optimization * empirical estimates * thin and heavy tails * independent and weak dependent random samples Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/kankova-empirical estimates in optimization problems survey with special regard to heavy tails and dependent data.pdf

  6. Effective hypernetted-chain study of even-denominator-filling state of the fractional quantum Hall effect

    International Nuclear Information System (INIS)

    Ciftja, O.

    1999-01-01

The microscopic approach for studying the half-filled state of the fractional quantum Hall effect is based on the idea of proposing a trial Fermi wave function of the Jastrow-Slater form, which is then fully projected onto the lowest Landau level. A simplified starting point is to drop the projection operator and to consider an unprojected wave function. A recent study claims that such a wave function approximated in a Jastrow form may still constitute a good starting point in the study of the half-filled state. In this paper we formalize the effective hypernetted-chain approximation and apply it to the unprojected Fermi wave function, which describes the even-denominator-filling states. We test the above approximation by using the Fermi hypernetted-chain theory, which constitutes the natural choice for the present case. Our results suggest that approximating the Slater determinant of plane waves as a Jastrow wave function may not be very accurate. We conclude that the lowest Landau-level projection operator cannot be neglected if one wants a better quantitative understanding of the phenomena. © 1999 The American Physical Society

  7. Research Problems Associated with Limiting the Applied Force in Vibration Tests and Conducting Base-Drive Modal Vibration Tests

    Science.gov (United States)

    Scharton, Terry D.

    1995-01-01

    The intent of this paper is to make a case for developing and conducting vibration tests which are both realistic and practical (a question of tailoring versus standards). Tests are essential for finding things overlooked in the analyses. The best test is often the most realistic test which can be conducted within the cost and budget constraints. Some standards are essential, but the author believes more in the individual's ingenuity to solve a specific problem than in the application of standards which reduce problems (and technology) to their lowest common denominator. Force limited vibration tests and base-drive modal tests are two examples of realistic, but practical testing approaches. Since both of these approaches are relatively new, a number of interesting research problems exist, and these are emphasized herein.

  8. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem

    OpenAIRE

    Muller , Antoine; Pontonnier , Charles; Dumont , Georges

    2018-01-01

The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists in interpolating a first estimation in a database generated offline thanks to a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions – two polynomial criteria and a min/max criterion – were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher compared to a classical optimization problem with a relative mean error of 4% on cost function evaluation.

  9. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-01

Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded…

  10. Ensemble Kalman Filtering with Residual Nudging: An Extension to State Estimation Problems with Nonlinear Observation Operators

    KAUST Repository

    Luo, Xiaodong

    2014-10-01

    The ensemble Kalman filter (EnKF) is an efficient algorithm for many data assimilation problems. In certain circumstances, however, divergence of the EnKF might be spotted. In previous studies, the authors proposed an observation-space-based strategy, called residual nudging, to improve the stability of the EnKF when dealing with linear observation operators. The main idea behind residual nudging is to monitor and, if necessary, adjust the distances (misfits) between the real observations and the simulated ones of the state estimates, in the hope that by doing so one may be able to obtain better estimation accuracy. In the present study, residual nudging is extended and modified in order to handle nonlinear observation operators. Such extension and modification result in an iterative filtering framework that, under suitable conditions, is able to achieve the objective of residual nudging for data assimilation problems with nonlinear observation operators. The 40-dimensional Lorenz-96 model is used to illustrate the performance of the iterative filter. Numerical results show that, while a normal EnKF may diverge with nonlinear observation operators, the proposed iterative filter remains stable and leads to reasonable estimation accuracy under various experimental settings.

  11. State and parameter estimation in nonlinear systems as an optimal tracking problem

    International Nuclear Information System (INIS)

    Creveling, Daniel R.; Gill, Philip E.; Abarbanel, Henry D.I.

    2008-01-01

    In verifying and validating models of nonlinear processes it is important to incorporate information from observations in an efficient manner. Using the idea of synchronization of nonlinear dynamical systems, we present a framework for connecting a data signal with a model in a way that minimizes the required coupling yet allows the estimation of unknown parameters in the model. The need to evaluate unknown parameters in models of nonlinear physical, biophysical, and engineering systems occurs throughout the development of phenomenological or reduced models of dynamics. Our approach builds on existing work that uses synchronization as a tool for parameter estimation. We address some of the critical issues in that work and provide a practical framework for finding an accurate solution. In particular, we show the equivalence of this problem to that of tracking within an optimal control framework. This equivalence allows the application of powerful numerical methods that provide robust practical tools for model development and validation

  12. The Problem With Estimating Public Health Spending.

    Science.gov (United States)

    Leider, Jonathon P

    2016-01-01

Accurate information on how much the United States spends on public health is critical. These estimates affect planning efforts; reflect the value society places on the public health enterprise; and allow for the demonstration of cost-effectiveness of programs, policies, and services aimed at increasing population health. Yet, at present, there are a limited number of sources of systematic public health finance data. Each of these sources is collected in different ways, for different reasons, and so yields strikingly different results. This article aims to compare and contrast all 4 current national public health finance data sets, including data compiled by Trust for America's Health, the Association of State and Territorial Health Officials (ASTHO), the National Association of County and City Health Officials (NACCHO), and the Census, which underlie the oft-cited National Health Expenditure Account estimates of public health activity. In FY2008, ASTHO estimates that state health agencies spent $24 billion ($94 per capita on average, median $79), while the Census estimated that all state governmental agencies, including state health agencies, spent $60 billion on public health ($200 per capita on average, median $166). Census public health data suggest that local governments spent an average of $87 per capita (median $57), whereas NACCHO estimates that reporting LHDs spent $64 per capita on average (median $36) in FY2008. We conclude that these estimates differ because the various organizations collect data using different means, data definitions, and inclusion/exclusion criteria, most notably around whether to include spending by all agencies versus a state/local health department, and whether behavioral health, disability, and some clinical care spending are included in estimates. Alongside deeper analysis of presently underutilized Census administrative data, we see harmonization efforts and the creation of a standardized expenditure reporting system as a way to

  13. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    Science.gov (United States)

    Janjic, Tijana

The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only

  14. Lamé Parameter Estimation from Static Displacement Field Measurements in the Framework of Nonlinear Inverse Problems

    DEFF Research Database (Denmark)

    Hubmer, Simon; Sherina, Ekaterina; Neubauer, Andreas

    2018-01-01

We consider a problem of quantitative static elastography, the estimation of the Lamé parameters from internal displacement field data. This problem is formulated as a nonlinear operator equation. To solve this equation, we investigate the Landweber iteration both analytically and numerically. The main result of this paper is the verification of a nonlinearity condition in an infinite-dimensional Hilbert space context. This condition guarantees convergence of iterative regularization methods. Furthermore, numerical examples for recovery of the Lamé parameters from displacement data simulating a static elastography experiment are presented.

  15. Self-Tuning Blind Identification and Equalization of IIR Channels

    Directory of Open Access Journals (Sweden)

    Bose Tamal

    2003-01-01

This paper considers self-tuning blind identification and equalization of fractionally spaced IIR channels. One recursive estimator generates parameter estimates of the numerators of the IIR system, while the other estimates the denominator of the IIR channel. Equalizer parameters are calculated by solving a Bezout-type equation. It is shown that the numerator parameter estimates converge (a.s.) toward a scalar multiple of the true coefficients, while the second algorithm provides consistent denominator estimates. It is proved that the equalizer output converges (a.s.) to a scaled version of the actual symbol sequence.

  16. Measuring self-control problems: a structural estimation

    NARCIS (Netherlands)

    Bucciol, A.

    2012-01-01

    We adopt a two-stage Method of Simulated Moments to estimate the preference parameters in a life-cycle consumption-saving model augmented with temptation disutility. Our approach estimates the parameters from the comparison between simulated moments with empirical moments observed in the US Survey

  17. Parameter Estimation as a Problem in Statistical Thermodynamics.

    Science.gov (United States)

    Earle, Keith A; Schneider, David J

    2011-03-14

In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  18. Estimation of hand hygiene opportunities on an adult medical ward using 24-hour camera surveillance: validation of the HOW2 Benchmark Study.

    Science.gov (United States)

    Diller, Thomas; Kelly, J William; Blackhurst, Dawn; Steed, Connie; Boeker, Sue; McElveen, Danielle C

    2014-06-01

    We previously published a formula to estimate the number of hand hygiene opportunities (HHOs) per patient-day using the World Health Organization's "Five Moments for Hand Hygiene" methodology (HOW2 Benchmark Study). HHOs can be used as a denominator for calculating hand hygiene compliance rates when product utilization data are available. This study validates the previously derived HHO estimate using 24-hour video surveillance of health care worker hand hygiene activity. The validation study utilized 24-hour video surveillance recordings of 26 patients' hospital stays to measure the actual number of HHOs per patient-day on a medicine ward in a large teaching hospital. Statistical methods were used to compare these results to those obtained by episodic observation of patient activity in the original derivation study. Total hours of data collection were 81.3 and 1,510.8, resulting in 1,740 and 4,522 HHOs in the derivation and validation studies, respectively. Comparisons of the mean and median HHOs per 24-hour period did not differ significantly. HHOs were 71.6 (95% confidence interval: 64.9-78.3) and 73.9 (95% confidence interval: 69.1-84.1), respectively. This study validates the HOW2 Benchmark Study and confirms that expected numbers of HHOs can be estimated from the unit's patient census and patient-to-nurse ratio. These data can be used as denominators in calculations of hand hygiene compliance rates from electronic monitoring using the "Five Moments for Hand Hygiene" methodology. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
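
    With the validated estimate of roughly 71.6 HHOs per patient-day, turning product-utilization counts into a compliance rate is a one-line calculation. The Python helper below is hypothetical (the study's own formula also uses the patient-to-nurse ratio, which the record does not spell out), and the ward numbers are invented for illustration.

    # Compliance from product-utilization counts, using ~71.6 HHOs per
    # patient-day (the validated estimate above) as the denominator.
    # Function name and ward numbers are hypothetical, for illustration only.
    def compliance_rate(dispenses: float, patient_days: float,
                        hho_per_patient_day: float = 71.6) -> float:
        """Observed hygiene events divided by expected opportunities."""
        return dispenses / (patient_days * hho_per_patient_day)

    # Example: a 20-bed ward at 90% occupancy over 30 days, 28,000 dispenses.
    patient_days = 20 * 0.90 * 30
    print(f"{compliance_rate(28_000, patient_days):.1%}")      # -> 72.4%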

  19. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly-used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
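
    A toy enumeration makes the stated discrepancy concrete. The posterior below is our own two-bit example, written in Python: under bitwise (Hamming-style) accuracy, the centroid-type estimator that thresholds the marginals at 1/2 achieves higher expected accuracy than the MAP point, even though the MAP state is the single most probable configuration.

    # Toy posterior over {0,1}^2 (our numbers) where the MAP point differs from
    # the centroid-style estimator that thresholds the marginals at 1/2.
    posterior = {(0, 0): 0.3, (0, 1): 0.0, (1, 0): 0.3, (1, 1): 0.4}

    map_est = max(posterior, key=posterior.get)                # joint mode: (1, 1)
    marg1 = sum(p for s, p in posterior.items() if s[0] == 1)  # P(x1 = 1) = 0.7
    marg2 = sum(p for s, p in posterior.items() if s[1] == 1)  # P(x2 = 1) = 0.4
    centroid = (int(marg1 > 0.5), int(marg2 > 0.5))            # -> (1, 0)

    def expected_accuracy(est):
        """Expected fraction of correctly predicted bits under the posterior."""
        return sum(p * sum(e == s_i for e, s_i in zip(est, s)) / 2
                   for s, p in posterior.items())

    print("MAP     ", map_est, expected_accuracy(map_est))     # 0.55
    print("centroid", centroid, expected_accuracy(centroid))   # 0.65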

  20. Solving problems in social-ecological systems: definition, practice and barriers of transdisciplinary research.

    Science.gov (United States)

    Angelstam, Per; Andersson, Kjell; Annerstedt, Matilda; Axelsson, Robert; Elbakidze, Marine; Garrido, Pablo; Grahn, Patrik; Jönsson, K Ingemar; Pedersen, Simen; Schlyter, Peter; Skärbäck, Erik; Smith, Mike; Stjernquist, Ingrid

    2013-03-01

    Translating policies about sustainable development as a social process and sustainability outcomes into the real world of social-ecological systems involves several challenges. Hence, research policies advocate improved innovative problem-solving capacity. One approach is transdisciplinary research that integrates research disciplines, as well as researchers and practitioners. Drawing upon 14 experiences of problem-solving, we used group modeling to map perceived barriers and bridges for researchers' and practitioners' joint knowledge production and learning towards transdisciplinary research. The analysis indicated that the transdisciplinary research process is influenced by (1) the amount of traditional disciplinary formal and informal control, (2) adaptation of project applications to fill the transdisciplinary research agenda, (3) stakeholder participation, and (4) functional team building/development based on self-reflection and experienced leadership. Focusing on implementation of green infrastructure policy as a common denominator for the delivery of ecosystem services and human well-being, we discuss how to diagnose social-ecological systems, and use knowledge production and collaborative learning as treatments.

  1. In Lands of Foreign Currency Credit, Bank Lending Channels Run Through? The Effects of Monetary Policy at Home and Abroad on the Currency Denomination of the Supply of Credit

    OpenAIRE

    Steven Ongena; Ibolya Schindele; Dzsamila Vonnak

    2014-01-01

    We analyze the differential impact of domestic and foreign monetary policy on the local supply of bank credit in domestic and foreign currencies. We analyze a novel, supervisory dataset from Hungary that records all bank lending to firms including its currency denomination. Accounting for time-varying firm-specific heterogeneity in loan demand, we find that a lower domestic interest rate expands the supply of credit in the domestic but not in the foreign currency. A lower foreign interest rat...

  2. The common denominator between DSM and power quality

    International Nuclear Information System (INIS)

    Porter, G.

    1993-01-01

    As utilities implement programs to push for energy efficiency, one of the results may be an increased population of end-uses that have a propensity to be sensitive to power delivery abnormalities. Some may view this as the price we have to pay for increasing the functionality of bulk 60 Hz power. Others may view this as a good reason to stay away from DSM. The utility industry must view this situation as an important element of their DSM planning and reflect the costs of mitigating potential power quality problems in the overall program. Power quality mitigation costs will not drastically add much to the total DSM bill, but the costs of poor power quality could definitely negate the positive benefits of new technologies. Failure to properly plan for the sensitivities of energy efficient equipment will be a major mistake considering the solutions are fairly well known. Proper understanding, education, design, and protection, using a systems approach to problem solving, will ensure that power quality problems won't force us to abandon beneficial efficiency improvement programs

  3. Some Notes About Medical Vocabulary in 18th Century New Spain: Technical and Colloquial Words for the Denomination of Illnesses

    Directory of Open Access Journals (Sweden)

    José Luis RAMÍREZ LUENGO

    2016-06-01

Whereas 18th-century medical vocabulary has been studied in recent years in Spain, the situation is very different in Latin America, where papers on this subject are very limited. This paper aims to study the denominations for illnesses found in an 18th-century New Spain document corpus: to do so, the corpus is described and then the vocabulary used in the documents is analysed; the paper pays special attention to questions such as neologisms, fluctuating words and the presence of colloquial vocabulary. Thus, the purposes of the paper are three: (1) to demonstrate the importance of official documents for the study of medical vocabulary; (2) to provide some data for writing the history of this vocabulary; and (3) to note some analyses that should be done in the future.

  4. Correction to h-zr.27t: the H in Zirc Hydride S(α,β) Data at 1200K in the ENDF71SaB Library

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-31

    A problem was reported in the S(α,β) data for H in Zirc Hydride at 1200 K. The particular S(α,β) data in question is denominated as h-zr.27t (or alternatively, h/zr.27t). Plots of the data and an explanation of the problem are given. A new S(α,β) file denominated as h-zr.28t is now the official data at 1200 K.

  5. Bayesian Simultaneous Estimation for Means in k Sample Problems

    OpenAIRE

    Imai, Ryo; Kubokawa, Tatsuya; Ghosh, Malay

    2017-01-01

    This paper is concerned with the simultaneous estimation of k population means when one suspects that the k means are nearly equal. As an alternative to the preliminary test estimator based on the test statistics for testing hypothesis of equal means, we derive Bayesian and minimax estimators which shrink individual sample means toward a pooled mean estimator given under the hypothesis. Interestingly, it is shown that both the preliminary test estimator and the Bayesian minimax shrinkage esti...

  6. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  7. Estimated incidence of influenza-associated severe acute respiratory infections in Indonesia, 2013-2016.

    Science.gov (United States)

    Susilarini, Ni K; Haryanto, Edy; Praptiningsih, Catharina Y; Mangiri, Amalya; Kipuw, Natalie; Tarya, Irmawati; Rusli, Roselinda; Sumardi, Gestafiana; Widuri, Endang; Sembiring, Masri M; Noviyanti, Widya; Widaningrum, Christina; Lafond, Kathryn E; Samaan, Gina; Setiawaty, Vivi

    2018-01-01

    Indonesia's hospital-based Severe Acute Respiratory Infection (SARI) surveillance system, Surveilans Infeksi Saluran Pernafasan Akut Berat Indonesia (SIBI), was established in 2013. While respiratory illnesses such as SARI pose a significant problem, there are limited incidence-based data on influenza disease burden in Indonesia. This study aimed to estimate the incidence of influenza-associated SARI in Indonesia during 2013-2016 at three existing SIBI surveillance sites. From May 2013 to April 2016, inpatients from sentinel hospitals in three districts of Indonesia (Gunung Kidul, Balikpapan, Deli Serdang) were screened for SARI. Respiratory specimens were collected from eligible inpatients and screened for influenza viruses. Annual incidence rates were calculated using these SIBI-enrolled influenza-positive SARI cases as a numerator, with a denominator catchment population defined through hospital admission survey (HAS) to identify respiratory-coded admissions by age to hospitals in the sentinel site districts. From May 2013 to April 2016, there were 1527 SARI cases enrolled, of whom 1392 (91%) had specimens tested and 199 (14%) were influenza-positive. The overall estimated annual incidence of influenza-associated SARI ranged from 13 to 19 per 100 000 population. Incidence was highest in children aged 0-4 years (82-114 per 100 000 population), followed by children 5-14 years (22-36 per 100 000 population). Incidence rates of influenza-associated SARI in these districts indicate a substantial burden of influenza hospitalizations in young children in Indonesia. Further studies are needed to examine the influenza burden in other potential risk groups such as pregnant women and the elderly. © 2017 The Authors. Influenza and Other Respiratory Viruses. Published by John Wiley & Sons Ltd.

  8. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  10. The statistics of Pearce element diagrams and the Chayes closure problem

    Science.gov (United States)

    Nicholls, J.

    1988-05-01

Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones. They are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array. Alternatively, the population of
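
    The significance test sketched above can be illustrated with a small Monte Carlo in Python. This is a simplification of the procedure described: the paper's null population is matched to the data's means and variances through the Chayes array, whereas the sketch simply draws i.i.d. lognormal open arrays to show that ratios sharing a common denominator correlate even when the underlying elements do not.

    import numpy as np

    # Null distribution of the correlation between two ratios X/Z and Y/Z that
    # share a conserved-element denominator, built from random arrays. This is
    # a simplified stand-in for the Chayes-array construction described above.
    rng = np.random.default_rng(42)
    n_samples, n_arrays = 30, 5000
    r_null = np.empty(n_arrays)
    for i in range(n_arrays):
        X, Y, Z = rng.lognormal(0.0, 0.3, size=(3, n_samples))  # uncorrelated elements
        r_null[i] = np.corrcoef(X / Z, Y / Z)[0, 1]
    print("null mean r :", r_null.mean().round(3))              # well above zero
    print("95% interval:", np.quantile(r_null, [0.025, 0.975]).round(3))
    # An observed correlation between Pearce ratios is judged against an
    # interval like this one, not against the usual null of r = 0.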

  11. Estimation of surface temperature by using inverse problem. Part 1. Steady state analyses of two-dimensional cylindrical system

    International Nuclear Information System (INIS)

    Takahashi, Toshio; Terada, Atsuhiko

    2006-03-01

    In the corrosive process environment of a thermochemical hydrogen production Iodine-Sulfur process plant, direct measurement of the surface temperature of the structural materials is difficult. An inverse problem method can effectively be applied to this problem, enabling estimation of the surface temperature from temperature data taken inside the structural materials. This paper presents analytical results for steady state temperature distributions in a two-dimensional cylindrical system cooled by an impinging jet flow, and clarifies the order of the multiple-valued function needed to achieve satisfactory precision from an engineering viewpoint. (author)

  12. Design method for low order two-degree-of-freedom controller based on Pade approximation of the denominator series expansion

    International Nuclear Information System (INIS)

    Ishikawa, Nobuyuki; Suzuki, Katsuo

    1999-01-01

    Because it allows feedback characteristics, such as the disturbance rejection specification, and reference response characteristics to be set independently, two-degree-of-freedom (2DOF) control is widely used to improve control performance. Ordinary design methods such as model matching usually derive a high-order feedforward element for the 2DOF controller. In this paper, we propose a new design method for a low order feedforward element that is based on Padé approximation of the denominator series expansion. The features of the proposed method are as follows: (1) it is suited to realizing the reference response characteristics in the low frequency region; (2) the order of the feedforward element can be selected independently of the feedback element. These are essential to 2DOF controller design. With this method, a 2DOF reactor power controller is designed and its control performance is evaluated by numerical simulation with a reactor dynamics model. The evaluation confirms that the controller designed by the proposed method possesses control characteristics equivalent to those of a controller designed by the ordinary model matching method. (author)

  13. Variational Multiscale error estimator for anisotropic adaptive fluid mechanic simulations: application to convection-diffusion problems

    OpenAIRE

    Bazile, Alban; Hachem, Elie; Larroya-Huguet, Juan-Carlos; Mesri, Youssef

    2018-01-01

    In this work, we present a new a posteriori error estimator based on the Variational Multiscale method for anisotropic adaptive fluid mechanics problems. The general idea is to combine the large scale error, based on the solved part of the solution, with the sub-mesh scale error, based on the unresolved part of the solution. We compute the latter with two different methods: one using the stabilizing parameters and the other using bubble functions. We propose two different…

  14. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
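
    As a hedged illustration of the decoding step mentioned above, the largest order value rule can be implemented by ranking the components of a continuous vector to obtain a job permutation (the function name and details below are assumptions, not the authors' code):

```python
# A minimal sketch of the "largest order value" (LOV) rule: a continuous
# vector is decoded into a discrete job permutation by sorting its
# components in descending order.
import numpy as np

def lov_decode(x: np.ndarray) -> np.ndarray:
    """Return a job permutation: the job with the largest value comes first."""
    return np.argsort(-x)

x = np.array([0.31, 1.25, -0.40, 0.88])
print(lov_decode(x))  # [1 3 0 2]: job 1 has the largest order value
```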

  15. Stochastic differential equations as a tool to regularize the parameter estimation problem for continuous time dynamical systems given discrete time measurements.

    Science.gov (United States)

    Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats

    2014-05-01

    In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, which allows the prediction to stay close to the data even when the parameters in the model are incorrect. The extended Kalman filter is used as a state estimator and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered, and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima, which can be solved by efficient gradient-based methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Comparison Between Two Methods for Estimating the Vertical Scale of Fluctuation for Modeling Random Geotechnical Problems

    Science.gov (United States)

    Pieczyńska-Kozłowska, Joanna M.

    2015-12-01

    The design process in geotechnical engineering requires the most accurate mapping of soil possible. The difficulty lies in the spatial variability of soil parameters, which has been investigated by many researchers for many years. This study analyses the soil-modeling problem by suggesting two effective methods of acquiring the variability information needed for modeling from cone penetration tests (CPT). The first method has been used in geotechnical engineering before, but the second one has not previously been associated with geotechnics. Both methods are applied to a case study in which the variability parameters are estimated. Knowledge of the variability of parameters allows, in the long term, more effective estimation of, for example, the probability of bearing capacity failure.

  17. Comprehensive Study of Honey with Protected Denomination of Origin and Contribution to the Enhancement of Legal Specifications

    Directory of Open Access Journals (Sweden)

    Leticia M. Estevinho

    2012-07-01

    In this study the characterization of a total of 60 honey samples with Protected Denomination of Origin (PDO), collected over three harvests (2009–2011, inclusive) from the Northeast of Portugal, was carried out based on pollen profile, physicochemical and microbiological characteristics. All samples were found to meet the European legislation, but some did not meet the requirements of the PDO specifications. Concerning the floral origin of honey, our results showed the prevalence of rosemary (Lavandula pedunculata) pollen. The microbiological quality of all the analyzed samples was satisfactory, since fecal coliforms, sulfite-reducing clostridia and Salmonella were absent, and molds and yeasts were detected in low counts. Significant differences between the results were studied using one-way analysis of variance (ANOVA), followed by Tukey's HSD test. The samples were submitted to discriminant function analysis in order to determine which variables differentiate between two or more naturally occurring groups (forward stepwise analysis). The variables selected were, in this order: diastase activity, pH, reducing sugars, free acidity and HMF. The pollen spectrum has perfect discriminatory power. This is the first study in which a honey with PDO was tested in order to assess its compliance with the PDO book of specifications.

  18. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with dimensions similar to those of a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.

  19. Estimation of the Thermophysical Properties of the Soil together with Sensors' Positions by Inverse Problem

    OpenAIRE

    Mansour, Salwa; Canot, Edouard; Delannay, Renaud; March, Ramiro J.; Cordero, José Agustin; Carlos Ferreri, Juan

    2015-01-01

    The report is basically divided into two main parts. In the first part, we introduce a numerical strategy in both 1D and 3D axisymmetric coordinate systems to estimate the thermophysical properties of the soil (volumetric heat capacity (ρC)_s, thermal conductivity λ_s and porosity φ) of a saturated porous medium where a phase change problem (liquid/vapor) appears due to intense heating from above. Usually φ is the true porosity; however, when the soil is not saturated (which should concern most…

  20. Estimation of the non-dimensional Budyko relationship in Colombia

    International Nuclear Information System (INIS)

    Arias Gomez, Paula Andrea; Poveda Jaramillo, German

    2007-01-01

    Water and energy budgets in river basins condition the development of landforms and the spatial distribution and productivity of vegetation. The Budyko non-dimensional number is defined as the relationship between mean annual precipitation (P) and mean annual potential evapotranspiration (PET) in river basins, B = P/PET. This non-dimensional number is thus the ratio between available water (P) and available energy (PET), and has been employed to identify water storage in vegetation, dryness, and net primary production in ecosystems. The literature reports that at the condition B = 1, denominated the Budyko critical condition, B_c, there exist particular climate, geomorphologic and biodiversity conditions which make this number of particular interest in the fields of hydroclimatology and ecology. Budyko number maps with a spatial resolution of 5 arcmin cell size for the extent of Colombia, and another with 30 arcsec cell size for the extent of Antioquia, are presented. Several indirect methods for potential evapotranspiration estimation have been employed. It is concluded that Colombia is characterized by energy-limited vegetation.
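
    A one-line worked example of the number defined above, with hypothetical values:

```python
# Minimal sketch: the Budyko-type non-dimensional number B = P / PET as
# defined in the abstract, with a simple label around the critical
# condition B = 1. The input values are hypothetical.
def budyko_number(P_mm: float, PET_mm: float) -> float:
    """B = mean annual precipitation / mean annual potential evapotranspiration."""
    return P_mm / PET_mm

B = budyko_number(P_mm=2800.0, PET_mm=1400.0)  # e.g., a wet Andean basin
label = "energy-limited" if B > 1 else "water-limited"
print(f"B = {B:.2f} ({label})")
```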

  1. Multicollinearity and the maximum entropy Leuven estimator

    OpenAIRE

    Sudhanshu Mishra

    2004-01-01

    Multicollinearity is a serious problem in applied regression analysis. Q. Paris (2001) introduced the MEL estimator to resolve the multicollinearity problem. This paper improves the MEL estimator to the Modular MEL (MMEL) estimator and shows by Monte Carlo experiments that the MMEL estimator performs significantly better than both the OLS and MEL estimators.

  2. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible when evaluating economic efficiency, so as to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in a nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced because the generation quantity, which appears as the denominator in the cost equation, has been discounted. Namely, if the discount rate decreases in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole.

  3. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible when evaluating economic efficiency, so as to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in a nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced because the generation quantity, which appears as the denominator in the cost equation, has been discounted. Namely, if the discount rate decreases in the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole.

  4. Stochastic Fractional Programming Approach to a Mean and Variance Model of a Transportation Problem

    Directory of Open Access Journals (Sweden)

    V. Charles

    2011-01-01

    In this paper, we propose a stochastic programming model which considers a ratio of two nonlinear functions and probabilistic constraints. Earlier work proposed only the expected-value model, without considering variability in the model; in the variance model, on the other hand, variability plays the vital role without concern for its counterpart, the expected value. Further, the expected-value model optimizes the ratio of two linear cost functions, whereas the variance model optimizes the ratio of two non-linear functions; that is, the stochastic nature of the numerator and denominator, together with consideration of both expectation and variability, leads to a non-linear fractional program. In this paper, a transportation model with a stochastic fractional programming (SFP) approach is proposed, which strikes a balance between the previous models available in the literature.

  5. MAP estimators and their consistency in Bayesian nonparametric inverse problems

    KAUST Repository

    Dashti, M.; Law, K. J. H.; Stuart, A. M.; Voss, J.

    2013-01-01

    with examples from an inverse problem for the Navier-Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics. © 2013 IOP Publishing Ltd.

  6. Lagged life cycle structures for food products: Their role in global marketing, their determinants and some problems in their estimation

    DEFF Research Database (Denmark)

    Baadsgaard, Allan; Gede, Mads Peter; Grunert, Klaus G.

    cycles for different product categories may be lagged (type II lag) because changes in economic and other factors will result in demands for different products. Identifying lagged life cycle structures is of major importance in global marketing of food products. The problems in arriving at such estimates…

  7. A combined ANN-GA and experimental based technique for the estimation of the unknown heat flux for a conjugate heat transfer problem

    Science.gov (United States)

    M K, Harsha Kumar; P S, Vishweshwara; N, Gnanasekaran; C, Balaji

    2018-05-01

    The major objectives in the design of thermal systems are to obtain information about thermophysical, transport and boundary properties. The main purpose of this paper is to estimate the unknown heat flux at the surface of a solid body. A constant area mild steel fin is considered and the base is subjected to a constant heat flux. During heating, natural convection heat transfer occurs from the fin to the ambient. The direct solution, which is the forward problem, is developed as a conjugate heat transfer problem from the fin, and the steady state temperature distribution is recorded for any assumed heat flux. In order to model the natural convection heat transfer from the fin, an extended domain is created near the fin geometry, air is specified as the fluid medium, and the Navier–Stokes equations are solved incorporating the Boussinesq approximation. The computational time involved in executing the forward model is then reduced by developing a neural network (NN) between heat flux values and temperatures based on the back-propagation algorithm. The conjugate heat transfer NN model is then coupled with a genetic algorithm (GA) for the solution of the inverse problem. Initially, GA is applied to the pure surrogate data; the results are then used as input to the Levenberg–Marquardt method, and such hybridization is shown to result in accurate estimation of the unknown heat flux. The hybrid method is then applied to the experimental temperatures to estimate the unknown heat flux. A satisfactory agreement between the estimated and actual heat flux is achieved by incorporating the hybrid method.
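
    A hedged sketch of the two-stage inverse strategy described above, with a made-up one-parameter surrogate standing in for the trained NN forward model and scipy's differential evolution standing in for the paper's GA:

```python
# Global search over a fast surrogate of the forward model, followed by a
# Levenberg-Marquardt refinement. The surrogate is a hypothetical stand-in
# for the trained NN mapping heat flux -> fin temperatures.
import numpy as np
from scipy.optimize import differential_evolution, least_squares

def surrogate_temperatures(q: float) -> np.ndarray:
    # Hypothetical NN surrogate: steady temperatures (K) at 3 thermocouples.
    return np.array([300.0, 295.0, 292.0]) + q * np.array([0.080, 0.055, 0.040])

T_measured = surrogate_temperatures(550.0) + np.random.default_rng(1).normal(0, 0.2, 3)

def misfit(q_vec):
    # Residuals between surrogate prediction and measured temperatures.
    return surrogate_temperatures(q_vec[0]) - T_measured

# Stage 1: global search for the unknown heat flux q (W/m^2).
res_ga = differential_evolution(lambda q: np.sum(misfit(q) ** 2), bounds=[(0.0, 2000.0)])
# Stage 2: local Levenberg-Marquardt refinement from the global result.
res_lm = least_squares(misfit, x0=res_ga.x, method="lm")
print(f"estimated heat flux: {res_lm.x[0]:.1f} W/m^2")
```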

  8. Measuring self-control problems: a structural estimation

    NARCIS (Netherlands)

    Bucciol, A.

    2009-01-01

    We perform a structural estimation of the preference parameters in a buffer-stock consumption model augmented with temptation disutility. We adopt a two-stage Method of Simulated Moments methodology to match our simulated moments with those observed in the US Survey of Consumer Finances. To identify…

  9. A State-of-the-Art Review of the Sensor Location, Flow Observability, Estimation, and Prediction Problems in Traffic Networks

    Directory of Open Access Journals (Sweden)

    Enrique Castillo

    2015-01-01

    A state-of-the-art review of flow observability, estimation, and prediction problems in traffic networks is performed. Since mathematical optimization provides a general framework for all of them, an integrated approach is used to perform the analysis of these problems and consider them as different optimization problems whose data, variables, constraints, and objective functions are the main elements that characterize the problems proposed by different authors. For example, counted, scanned or “a priori” data are the most common data sources; conservation laws, flow nonnegativity, link capacity, flow definition, observation, flow propagation, and specific model requirements form the most common constraints; and least squares, likelihood, possible relative error, mean absolute relative error, and so forth constitute the bases for the objective functions or metrics. The high number of possible combinations of these elements justifies the existence of a wide collection of methods for analyzing static and dynamic situations.

  10. A theoretical approach to the problem of dose-volume constraint estimation and their impact on the dose-volume histogram selection

    International Nuclear Information System (INIS)

    Schinkel, Colleen; Stavrev, Pavel; Stavreva, Nadia; Fallone, B. Gino

    2006-01-01

    This paper outlines a theoretical approach to the problem of estimating and choosing dose-volume constraints. Following this approach, a method of choosing dose-volume constraints based on biological criteria is proposed. This method is called "reverse normal tissue complication probability (NTCP) mapping into dose-volume space" and may be used as general guidance for the problem of dose-volume constraint estimation. Dose-volume histograms (DVHs) are randomly simulated, and those resulting in clinically acceptable levels of complication, such as an NTCP of 5±0.5%, are selected and averaged, producing a mean DVH that is proven to result in the same level of NTCP. The points from the averaged DVH are proposed to serve as physical dose-volume constraints. The population-based critical volume and Lyman NTCP models, with parameter sets taken from literature sources, were used for the NTCP estimation. The impact of the prescribed value of the maximum dose to the organ, D_max, on the averaged DVH and the dose-volume constraint points is investigated. Constraint points for 16 organs are calculated. The impact of the number of constraints to be fulfilled, based on the likelihood that a DVH satisfying them will result in an acceptable NTCP, is also investigated. It is theoretically proven that radiation treatment optimization based on physical objective functions can restrict the dose to the organs at risk sufficiently well, resulting in sufficiently low NTCP values, through the employment of several appropriate dose-volume constraints. At the same time, the purely physical approach to optimization is self-restrictive due to the preassignment of acceptable NTCP levels, thus excluding possibly better solutions to the problem.

  11. Assessing the performance of dynamical trajectory estimates

    Science.gov (United States)

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  12. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    …successful and applied methodology beyond PID control for the control of industrial processes. The main contribution of this thesis is the introduction and definition of the extended linear quadratic optimal control problem for the solution of numerical problems arising in moving horizon estimation and control problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution… It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discusses prediction error methods for identification of linear models tailored for model predictive control.

  13. Costs and benefits of proliferation of Christian denominations in ...

    African Journals Online (AJOL)

    The unbridled proliferation of Churches in Nigeria has stirred up concerns among adherents of religious faiths, onlookers and academics alike. Nigerian society today is undergoing significant, constant proliferation of Churches, which has brought not only changing values but also new sources of solutions to people's problems.

  14. Mathematical solution of multilevel fractional programming problem with fuzzy goal programming approach

    Science.gov (United States)

    Lachhwani, Kailash; Poonia, Mahaveer Prasad

    2012-08-01

    In this paper, we present a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using a fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator parts of the objective functions of all levels, as well as for the control vectors of the higher level decision makers, are defined by determining the individual optimal solutions of each of the level decision makers. A possible relaxation of the higher level decision is considered to avoid decision deadlock due to the conflicting nature of the objective functions. Then, the fuzzy goal programming approach is used to achieve the highest degree of each of the membership goals by minimizing negative deviational variables. We also provide a sensitivity analysis with variation of tolerance values on the decision vectors, showing with the help of a numerical example how sensitive the solution is to the change of tolerance values.
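
    A minimal sketch of a linear tolerance membership function of the kind fuzzy goal programming relies on (a standard textbook form, assumed here; the authors' exact construction may differ):

```python
# Linear tolerance membership for a maximization-type fuzzy goal:
# membership is 1 at or above the aspired level g, 0 at or below g - t,
# and linear in between, where t is the decision maker's tolerance.
def membership(f: float, g: float, t: float) -> float:
    if f >= g:
        return 1.0
    if f <= g - t:
        return 0.0
    return (f - (g - t)) / t

# FGP then maximizes achievement by minimizing negative deviations d_k >= 0
# in constraints of the form  membership_k + d_k >= 1  for each goal k.
print(membership(f=7.5, g=10.0, t=5.0))  # 0.5
```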

  15. Using the virtual-abstract instructional sequence to teach addition of fractions.

    Science.gov (United States)

    Bouck, Emily C; Park, Jiyoon; Sprick, Jessica; Shurr, Jordan; Bassette, Laura; Whorley, Abbie

    2017-11-01

    Limited literature examines mathematics education for students with mild intellectual disability. This study investigated the effects of using the Virtual-Abstract instructional sequence to teach middle school students, predominantly with mild intellectual disability, to add fractions with unlike denominators. Researchers used a multiple probe across participants design to determine if a functional relation existed between the Virtual-Abstract instructional sequence strategy and students' ability to add fractions with unlike denominators. The study consisted of three to nine baseline sessions, six to eleven intervention sessions, and two maintenance sessions for each student. Data were collected on accuracy across five addition of fractions with unlike denominators problems. The VA instructional strategy was effective in teaching the students to add fractions with unlike denominators; a functional relation existed between the VA instructional sequence and adding fractions with unlike denominators for three of the four students. The Virtual-Abstract instructional sequence may be appropriate to support students with mild intellectual disability in learning mathematics, especially when drawing or representing the mathematical concepts may prove challenging. Copyright © 2017 Elsevier Ltd. All rights reserved.
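
    For reference, the target skill itself can be stated as a short worked example (a generic illustration of the mathematics, not the study's materials): fractions with unlike denominators are added over a common denominator, here the least common multiple.

```python
# Adding fractions with unlike denominators via a common denominator.
from math import lcm

def add_fractions(n1, d1, n2, d2):
    common = lcm(d1, d2)                   # e.g., lcm(4, 6) = 12
    num = n1 * (common // d1) + n2 * (common // d2)
    return num, common                     # left unreduced, mirroring the steps

print(add_fractions(1, 4, 1, 6))  # 1/4 + 1/6 = 3/12 + 2/12 = (5, 12)
```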

  16. On parameterization of the inverse problem for estimating aquifer properties using tracer data

    International Nuclear Information System (INIS)

    Kowalsky, M. B.; Finsterle, Stefan A.; Williams, Kenneth H.; Murray, Christopher J.; Commer, Michael; Newcomer, Darrell R.; Englert, Andreas L.; Steefel, Carl I.; Hubbard, Susan

    2012-01-01

    We consider a field-scale tracer experiment conducted in 2007 in a shallow uranium-contaminated aquifer at Rifle, Colorado. In developing a reliable approach for inferring hydrological properties at the site through inverse modeling of the tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance. We present an approach for hydrological inversion of the tracer data and explore, using a 2D synthetic example at first, how parameterization affects the solution, and how additional characterization data could be incorporated to reduce uncertainty. Specifically, we examine sensitivity of the results to the configuration of pilot points used in a geostatistical parameterization, and to the sampling frequency and measurement error of the concentration data. A reliable solution of the inverse problem is found when the pilot point configuration is carefully implemented. In addition, we examine the use of a zonation parameterization, in which the geometry of the geological facies is known (e.g., from geophysical data or core data), to reduce the non-uniqueness of the solution and the number of unknown parameters to be estimated. When zonation information is only available for a limited region, special treatment in the remainder of the model is necessary, such as using a geostatistical parameterization. Finally, inversion of the actual field data is performed using 2D and 3D models, and results are compared with slug test data.

  17. A Gaussian IV estimator of cointegrating relations

    DEFF Research Database (Denmark)

    Bårdsen, Gunnar; Haldrup, Niels

    2006-01-01

    In static single equation cointegration regression models the OLS estimator will have a non-standard distribution unless regressors are strictly exogenous. In the literature a number of estimators have been suggested to deal with this problem, especially by the use of semi-nonparametric estimators. … in cointegrating regressions. These instruments are almost ideal and simulations show that the IV estimator using such instruments alleviates the endogeneity problem extremely well in both finite and large samples.

  18. Sparse DOA estimation with polynomial rooting

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren

    2015-01-01

    Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root-CS). Polynomial rooting is known to improve the resolution in several other DOA estimation methods…

  19. A Carleman estimate and the balancing principle in the quasi-reversibility method for solving the Cauchy problem for the Laplace equation

    International Nuclear Information System (INIS)

    Cao Hui; Pereverzev, Sergei V; Klibanov, Michael V

    2009-01-01

    The quasi-reversibility method of solving the Cauchy problem for the Laplace equation in a bounded domain Ω is considered. With the help of the Carleman estimation technique, improved error and stability bounds in a subdomain Ω_σ ⊂ Ω are obtained. This paves the way for the use of the balancing principle for an a posteriori choice of the regularization parameter ε in the quasi-reversibility method. As an adaptive regularization parameter choice strategy, the balancing principle does not require a priori knowledge of either the solution smoothness or the constant K appearing in the stability bound estimation. Nevertheless, this principle allows an a posteriori parameter choice that, up to a controllable constant, achieves the best accuracy guaranteed by the Carleman estimate.

  20. Fall in hematocrit per 1000 parasites cleared from peripheral blood: a simple method for estimating drug-related fall in hematocrit after treatment of malaria infections.

    Science.gov (United States)

    Gbotosho, Grace Olusola; Okuboyejo, Titilope; Happi, Christian Tientcha; Sowunmi, Akintunde

    2014-01-01

    A simple method to estimate the antimalarial drug-related fall in hematocrit (FIH) after treatment of Plasmodium falciparum infections in the field is described. The method involves numeric estimation of the relative difference in hematocrit between baseline (pretreatment) and the first 1 or 2 days after treatment began as the numerator, and the corresponding relative difference in parasitemia as the denominator, expressing the result per 1000 parasites cleared from peripheral blood (cpb). Using the method showed that FIH/1000 parasites cpb at 24 or 48 hours were similar in artemether-lumefantrine and artesunate-amodiaquine-treated children (0.09; 95% confidence interval, 0.052–0.138 vs 0.10; 95% confidence interval, 0.069–0.139%; P = 0.75). FIH/1000 parasites cpb in patients with higher parasitemias differed significantly (P < …); FIH/1000 parasites cpb were similar in anemic and nonanemic children. Estimation of FIH/1000 parasites cpb is simple, allows estimation of the relatively conserved hematocrit during treatment, and can be used in both observational studies and clinical trials involving antimalarial drugs.
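
    A hedged sketch of the index as described above; the exact scaling is one plausible reading of the abstract's definition, not the authors' published formula, and the patient values are hypothetical:

```python
# Relative fall in hematocrit over the first day(s) divided by the relative
# fall in parasitemia, expressed per 1000 parasites cleared from peripheral
# blood (cpb). This is an interpretation of the abstract, not verified code.
def fih_per_1000_cpb(hct0, hct_t, para0, para_t):
    rel_fall_hct = (hct0 - hct_t) / hct0        # numerator
    rel_fall_para = (para0 - para_t) / para0    # denominator
    return 1000.0 * rel_fall_hct / rel_fall_para

# Hypothetical child: hematocrit 33% -> 30%, parasitemia 80,000 -> 2,000/uL.
print(f"{fih_per_1000_cpb(33.0, 30.0, 80_000, 2_000):.3f}")
```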

  1. Problems of Assessment in Religious and Moral Education: The Scottish Case

    Science.gov (United States)

    Grant, Lynne; Matemba, Yonah H.

    2013-01-01

    This article is concerned with assessment issues in Religious and Moral Education (RME) offered in Scottish non-denominational schools. The analysis of the findings in this article is weighed against the framework of the new "3-18" Scottish curriculum called "Curriculum for Excellence" (CfE). CfE was introduced in primary…

  2. Problems of estimation of water content history of loesses

    International Nuclear Information System (INIS)

    Rendell, H.M.

    1983-01-01

    The estimation of 'mean water content' is a major source of error in the TL dating of many sediments. The engineering behaviour of loesses can be used, under certain circumstances, to infer their water content history. The construction of a 'stress history' for particular loesses is therefore proposed in order to establish the critical conditions of moisture and applied stress (overburden) at which irreversible structural change occurs. A programme of field and laboratory tests should enable more precise estimates of water content history to be made. (author)

  3. An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems

    International Nuclear Information System (INIS)

    Fazli, Roohallah; Nakhkash, Mansor

    2012-01-01

    This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. This method is motivated by the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations and, in the next stage, it employs an analytical formula that works as a spatial filter to determine which target locations are associated with the actual ones. The ability of the method is examined for both the Born and multiple scattering cases and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions. (paper)
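
    For background, a compact sketch of the first stage named above: a MUSIC pseudospectrum computed from the sample covariance of array snapshots (a standard 1D narrowband version for a uniform linear array, not the paper's 2D imaging formulation):

```python
# Generic MUSIC: project candidate steering vectors onto the noise subspace
# of the sample covariance; the pseudospectrum peaks at source directions.
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(R, n_sources, A):
    """R: sample covariance; A: matrix of candidate steering vectors."""
    _, vecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = vecs[:, : R.shape[0] - n_sources]  # noise subspace
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

rng = np.random.default_rng(0)
M, snapshots = 8, 200
grid = np.deg2rad(np.linspace(-90, 90, 721))
steer = lambda th: np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(th)))
X = steer(np.deg2rad([-20.0, 25.0])) @ rng.normal(size=(2, snapshots)) \
    + 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
R = X @ X.conj().T / snapshots
P = music_spectrum(R, n_sources=2, A=steer(grid))
idx, _ = find_peaks(P)
top = idx[np.argsort(P[idx])[-2:]]
print(np.sort(np.rad2deg(grid[top])))  # two largest peaks, near -20 and 25
```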

  4. Recognition of Action as a Bayesian Parameter Estimation Problem over Time

    DEFF Research Database (Denmark)

    Krüger, Volker

    2007-01-01

    In this paper we discuss two problems related to action recognition. The first is that of identifying, in a surveillance scenario, whether a person is walking or running and in what rough direction. The second is concerned with the recovery of action primitives from observed complex actions. Both problems are discussed within a statistical framework. Bayesian propagation over time offers a framework to treat likelihood observations at each time step and the dynamics between the time steps in a unified manner. The first problem will be approached as a pattern recognition… of the Bayesian framework for action recognition and round up our discussion.

  5. Electrochemical estimation on the applicability of nickel plating to EAC problems in CRDM nozzle

    International Nuclear Information System (INIS)

    Oh, Si Hyoung; Hwang, Il Soon

    2002-01-01

    The applicability of nickel plating to EAC problems in the CRDM nozzle was estimated from an electrochemical viewpoint. The passive film growth law for nickel was improved to include the oxide dissolution rate, extending the conventional point defect model to explain the retarded passivation of plated nickel in the PWR primary-side water environment, and was compared with experimental data. According to this model, oxide growth and the passivation current are closely related to the oxide dissolution rate, because a steady state is reached only if the oxide formation and oxide destruction rates are the same; from this, the oxide dissolution rate constant, k_s, was quantitatively obtained using experimental data. The commonly observed current-time behavior for passive film formation, i ∝ t^m, where m differs from 1 or 0.5, can be accounted for by enhanced oxide dissolution in the high temperature aqueous environment.

  6. Analysis of parameter estimation and optimization application of ant colony algorithm in vehicle routing problem

    Science.gov (United States)

    Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun

    2018-03-01

    Ant Colony Optimization (ACO) is among the most widely used artificial intelligence algorithms at present. This study introduced the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designed a vehicle routing optimization model based on ACO; a vehicle routing optimization simulation system was then developed in the C++ programming language, and sensitivity analyses, estimations and improvements of the three key parameters of ACO were carried out. The results indicated that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of the VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and has good robustness.
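
    For orientation, a hedged sketch of the two ACO ingredients governed by the usual tuned parameters (α and β in the transition rule, ρ in the evaporation update); names and values are illustrative, not taken from the paper:

```python
# Core ACO mechanics: pheromone/heuristic transition probabilities and the
# evaporation-based global pheromone update.
import numpy as np

def transition_probs(tau, eta, alpha=1.0, beta=2.0):
    """tau: pheromone on candidate edges; eta: heuristic (e.g., 1/distance)."""
    w = (tau ** alpha) * (eta ** beta)
    return w / w.sum()

def evaporate_and_deposit(tau, deposits, rho=0.1):
    """Global update: tau <- (1 - rho) * tau + sum of ant deposits."""
    return (1.0 - rho) * tau + deposits

probs = transition_probs(np.array([1.0, 0.5, 2.0]), np.array([0.2, 0.5, 0.1]))
print(probs.round(3))  # next-city selection probabilities for one ant
```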

  7. Inverse problems of geophysics

    International Nuclear Information System (INIS)

    Yanovskaya, T.B.

    2003-07-01

    This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least-squares fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given

  8. A Curious Problem with Using the Colour Checker Dataset for Illuminant Estimation

    OpenAIRE

    Finlayson, Graham; Hemrit, Ghalia; Gijsenij, Arjan; Gehler, Peter

    2017-01-01

    In illuminant estimation, we attempt to estimate the RGB of the light. We then use this estimate on an image to correct for the light's colour bias. Illuminant estimation is an essential component of all camera reproduction pipelines. How well an illuminant estimation algorithm works is determined by how well it predicts the ground truth illuminant colour. Typically, the ground truth is the RGB of a white surface placed in a scene. Over a large set of images an estimation error is calculated ...

  9. Data assimilation and uncertainty analysis of environmental assessment problems--an application of Stochastic Transfer Function and Generalised Likelihood Uncertainty Estimation techniques

    International Nuclear Information System (INIS)

    Romanowicz, Renata; Young, Peter C.

    2003-01-01

    Stochastic Transfer Function (STF) and Generalised Likelihood Uncertainty Estimation (GLUE) techniques are outlined and applied to an environmental problem concerned with marine dose assessment. The goal of both methods in this application is the estimation and prediction of the environmental variables, together with their associated probability distributions. In particular, they are used to estimate the amount of radionuclides transferred to marine biota from a given source: the British Nuclear Fuel Ltd (BNFL) repository plant in Sellafield, UK. The complexity of the processes involved, together with the large dispersion and scarcity of observations regarding radionuclide concentrations in the marine environment, require efficient data assimilation techniques. In this regard, the basic STF methods search for identifiable, linear model structures that capture the maximum amount of information contained in the data with a minimal parameterisation. They can be extended for on-line use, based on recursively updated Bayesian estimation and, although applicable to only constant or time-variable parameter (non-stationary) linear systems in the form used in this paper, they have the potential for application to non-linear systems using recently developed State Dependent Parameter (SDP) non-linear STF models. The GLUE based-methods, on the other hand, formulate the problem of estimation using a more general Bayesian approach, usually without prior statistical identification of the model structure. As a result, they are applicable to almost any linear or non-linear stochastic model, although they are much less efficient both computationally and in their use of the information contained in the observations. As expected in this particular environmental application, it is shown that the STF methods give much narrower confidence limits for the estimates due to their more efficient use of the information contained in the data. Exploiting Monte Carlo Simulation (MCS) analysis

  10. Currency features for visually impaired people

    Science.gov (United States)

    Hyland, Sandra L.; Legge, Gordon E.; Shannon, Robert R.; Baer, Norbert S.

    1996-03-01

    The estimated 3.7 million Americans with low vision experience a uniquely difficult task in identifying the denominations of U.S. banknotes because the notes are remarkably uniform in size, color, and general design. The National Research Council's Committee on Currency Features Usable by the Visually Impaired assessed features that could be used by people who are visually disabled to distinguish currency from other documents and to denominate and authenticate banknotes using available technology. Variation of length and height, introduction of large numerals on a uniform, high-contrast background, use of different colors for each of the six denominations printed, and the introduction of overt denomination codes that could lead to development of effective, low-cost devices for examining banknotes were all deemed features available now. Issues affecting performance, including the science of visual and tactile perception, were addressed for these features, as well as for those features requiring additional research and development. In this group the committee included durable tactile features such as those printed with transparent ink, and the production of currency with holes to indicate denomination. Among long-range approaches considered were the development of technologically advanced devices and smart money.

  11. Dominant color and texture feature extraction for banknote discrimination

    Science.gov (United States)

    Wang, Junmin; Fan, Yangyu; Li, Ning

    2017-07-01

    Banknote discrimination with image recognition technology is significant in many applications. Traditional methods based on image recognition only recognize the banknote denomination without discriminating counterfeit banknotes. To solve this problem, we propose a systematic banknote discrimination approach using dominant color and texture features. After capturing the visible and infrared images of the test banknote, we first implement tilt correction based on the principal component analysis (PCA) algorithm. Second, we extract the dominant color feature of the visible banknote image to recognize the denomination. Third, we propose an adaptively weighted local binary pattern with "delta" tolerance algorithm to extract the texture features of the infrared banknote image. Finally, we discriminate genuine from counterfeit banknotes by comparing the texture features of the test banknote and a benchmark banknote. The proposed approach is tested using 14,000 banknotes of six different denominations of the Chinese yuan (CNY). The experimental results show 100% accuracy for denomination recognition and 99.92% accuracy for counterfeit banknote discrimination.

  12. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    Science.gov (United States)

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, open for over a decade, we provide a unified theory showing explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
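
    A hedged sketch of the one-step local linear approximation (LLA) recipe the theory analyzes, under assumed details: the folded concave (here SCAD) penalty is linearized at an initial lasso estimate, and the resulting weighted lasso is solved by proximal gradient. This is an illustration, not the authors' code:

```python
# One-step LLA: weights from the SCAD penalty derivative at an initial
# estimate turn the folded concave problem into a weighted lasso.
import numpy as np

def scad_deriv(b, lam, a=3.7):
    """SCAD derivative: lam for |b| <= lam, (a*lam - |b|)_+/(a-1) beyond."""
    b = np.abs(b)
    return np.where(b <= lam, lam, np.maximum(a * lam - b, 0.0) / (a - 1.0))

def weighted_lasso(X, y, w, n_iter=500):
    """Proximal gradient (ISTA) for 0.5/n * ||y - Xb||^2 + sum w_j |b_j|."""
    n, p = X.shape
    L = np.linalg.eigvalsh(X.T @ X / n).max()   # Lipschitz constant
    beta = np.zeros(p)
    for _ in range(n_iter):
        g = X.T @ (X @ beta - y) / n
        z = beta - g / L
        beta = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
beta_true = np.zeros(20); beta_true[:3] = 2.0
y = X @ beta_true + 0.5 * rng.normal(size=100)
lam = 0.3
beta0 = weighted_lasso(X, y, np.full(20, lam))        # lasso initializer
beta1 = weighted_lasso(X, y, scad_deriv(beta0, lam))  # one LLA step
print(np.round(beta1[:5], 2))  # first three near 2.0, rest near 0
```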

  13. Some problems on Monte Carlo method development

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

    This is a short paper on some problems of Monte Carlo method development. The content covers deep-penetration problems, unbounded estimate problems, limitations of Metropolis' method, the dependency problem in Metropolis' method, random error interference problems and random equations, and intellectualisation and vectorization problems of general software

  14. Covariance expressions for eigenvalue and eigenvector problems

    Science.gov (United States)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.

  15. Estimation of distribution algorithm with path relinking for the blocking flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2018-05-01

    This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.

  16. ROBUST ALGORITHMS OF PARAMETRIC ESTIMATION IN SOME STABILIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A.A. Vedyakov

    2016-07-01

    Subject of Research. The problem of keeping dynamic systems in a stable state, by ensuring the stability of the trivial solution for various dynamic systems in the learning regime with the aid of parameter tuning, is considered. Method. The problems are solved by applying the ideology of constructing robust, finitely convergent algorithms. Main Results. The concepts of parametric algorithmization of stability and of steady asymptotic stability are introduced, and results are presented on the synthesis of coarse gradient algorithms that solve the proposed tasks in a finite number of iterations. Practical Relevance. The results may be used to solve practical stabilization tasks arising in the operation of various engineering constructions and devices.

  17. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems

  18. Theory of the Anderson impurity model: The Schrieffer–Wolff transformation reexamined

    International Nuclear Information System (INIS)

    Kehrein, S.K.; Mielke, A.

    1996-01-01

    We test the method of infinitesimal unitary transformations recently introduced by Wegner on the Anderson single impurity model. It is demonstrated that infinitesimal unitary transformations, in contrast to the Schrieffer–Wolff transformation, allow the construction of an effective Kondo Hamiltonian consistent with the established results in this well understood model. The main reason for this is the intrinsic energy scale separation of Wegner's approach with respect to arbitrary energy differences coupled by matrix elements. This allows the construction of an effective Hamiltonian without facing a vanishing energy denominator problem. Similar energy denominator problems are troublesome in many models. Infinitesimal unitary transformations have the potential to provide a general framework for the systematic derivation of effective Hamiltonians without such problems. Copyright © 1996 Academic Press, Inc.

  19. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods

  20. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is cast as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of the CBF.
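    The abstract is truncated, so the following is only a hedged sketch of the general modulating-functions idea on a much simpler model (a first-order ODE, not the damped wave equation of the paper; all signals and parameters are invented): multiplying the model by functions that vanish at the window endpoints and integrating by parts moves derivatives off the noisy data, leaving a small linear system for the unknown parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    T, n = 10.0, 2001
    t = np.linspace(0.0, T, n)

    # Simulate y' = -a*y + b*u with invented truth, then add measurement noise.
    a_true, b_true = 0.8, 2.0
    u = np.sin(1.3 * t) + 0.5
    y = np.zeros(n)
    for i in range(n - 1):                      # simple Euler simulation
        y[i + 1] = y[i] + (t[1] - t[0]) * (-a_true * y[i] + b_true * u[i])
    y_noisy = y + rng.normal(scale=0.02, size=n)

    # Modulating functions phi_k vanish at both endpoints, so integrating the
    # model against phi_k moves the derivative onto the known phi_k:
    #   int(phi_k' y) = a int(phi_k y) - b int(phi_k u)
    rows, rhs = [], []
    for k in range(1, 5):
        phi = np.sin(k * np.pi * t / T)
        dphi = (k * np.pi / T) * np.cos(k * np.pi * t / T)
        rows.append([np.trapz(phi * y_noisy, t), -np.trapz(phi * u, t)])
        rhs.append(np.trapz(dphi * y_noisy, t))

    a_est, b_est = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    print(f"a ~ {a_est:.3f} (true {a_true}), b ~ {b_est:.3f} (true {b_true})")
    ```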

  1. ACTUAL PROBLEMS OF THE ESTIMATION OF COMPETITIVENESS OF THE BRAND

    Directory of Open Access Journals (Sweden)

    Sevostyanova O. G.

    2016-03-01

    The growing share of branded goods in market sales widens the range of applications for brand valuation, the brand being among a company's major assets. The article analyzes the advantages and shortcomings of two proprietary brand-valuation techniques, Interbrand and V-RATIO. It is shown that estimating the monetary value of a brand is a valuable source of information for strategic management of the company and is hence a major component of the competitiveness of a trading enterprise.

  2. Estimating the Effect and Economic Impact of Absenteeism, Presenteeism, and Work Environment-Related Problems on Reductions in Productivity from a Managerial Perspective.

    Science.gov (United States)

    Strömberg, Carl; Aboagye, Emmanuel; Hagberg, Jan; Bergström, Gunnar; Lohela-Karlsson, Malin

    2017-09-01

    The aim of this study was to propose wage multipliers that can be used to estimate the costs of productivity loss for employers in economic evaluations, using detailed information from managers. Data were collected in a survey panel of 758 managers from different sectors of the labor market. Based on assumed scenarios of a period of absenteeism due to sickness, presenteeism and work environment-related problem episodes, and specified job characteristics (i.e., explanatory variables), managers assessed their impact on group productivity and cost (i.e., the dependent variable). In an ordered probit model, the extent of productivity loss resulting from job characteristics is predicted. The predicted values are used to derive wage multipliers based on the cost of productivity estimates provided by the managers. The results indicate that job characteristics (i.e., degree of time sensitivity of output, teamwork, or difficulty in replacing a worker) are linked to productivity loss as a result of health-related and work environment-related problems. The impact of impaired performance on productivity differs among various occupations. The mean wage multiplier is 1.97 for absenteeism, 1.70 for acute presenteeism, 1.54 for chronic presenteeism, and 1.72 for problems related to the work environment. This implies that the costs of health-related and work environment-related problems to organizations can exceed the worker's wage. The use of wage multipliers is recommended for calculating the cost of health-related and work environment-related productivity loss to properly account for actual costs. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
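    Given the mean multipliers reported above, the employer-side cost calculation is direct: multiply the wage cost of the affected time by the relevant multiplier. A small sketch (the daily wage and days lost are invented inputs, not study values):

    ```python
    # Mean wage multipliers reported in the study.
    MULTIPLIERS = {
        "absenteeism": 1.97,
        "acute_presenteeism": 1.70,
        "chronic_presenteeism": 1.54,
        "work_environment": 1.72,
    }

    def productivity_loss_cost(daily_wage, days_lost, kind):
        """Employer cost of lost productivity = wage cost x wage multiplier."""
        return daily_wage * days_lost * MULTIPLIERS[kind]

    # Example with invented figures: 5 sick days at a 200 EUR daily wage.
    print(productivity_loss_cost(200.0, 5, "absenteeism"))  # 1970.0 EUR
    ```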

  3. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Two-dimensional (2D) direction-of-arrival (DOA) estimation of elevation and azimuth angles for noncoherent, mixed coherent/noncoherent, and coherent sources using three extended parallel uniform linear arrays (ULAs) is proposed. Most existing schemes have drawbacks in estimating the 2D DOAs of multiple narrowband incident sources: they require a large number of snapshots, they suffer from estimation failure for elevation and azimuth angles in the range typical of mobile communication, and they cannot handle coherent sources. Moreover, DOA estimation for multiple sources usually requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem by using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices based on a single snapshot or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.

  4. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems: the "forward problem" is that of using the fully specified model to predict the behavior of the system, while the inverse or parameter estimation problem is, given the form of the model and some observed data from the system being modeled, to determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
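    As a hedged toy instance of the forward/inverse structure described here (a direct least-squares fit, not the large-scale iterative techniques the authors develop), one can estimate a scalar diffusion coefficient in the 1D heat equation by nesting a finite-difference forward solver inside a misfit objective:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    nx, nt, dx, dt = 51, 400, 1.0 / 50, 1e-4   # grid; dt chosen for stability

    def forward(kappa):
        """Explicit finite-difference solve of u_t = kappa*u_xx, u=0 on boundary."""
        u = np.sin(np.pi * np.linspace(0, 1, nx))     # initial condition
        for _ in range(nt):
            u[1:-1] += kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        return u

    # Synthetic observed data from an invented "true" kappa, plus noise.
    rng = np.random.default_rng(3)
    data = forward(0.7) + rng.normal(scale=1e-3, size=nx)

    # Inverse problem: minimize the misfit between model output and data.
    res = minimize_scalar(lambda k: np.sum((forward(k) - data) ** 2),
                          bounds=(0.1, 1.5), method="bounded")
    print(f"estimated kappa = {res.x:.4f} (true 0.7)")
    ```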

  5. Inverse problems in vision and 3D tomography

    CERN Document Server

    Mohamad-Djafari, Ali

    2013-01-01

    The concept of an inverse problem is a familiar one to most scientists and engineers, particularly in the fields of signal and image processing, imaging systems (medical, geophysical, industrial non-destructive testing, etc.) and computer vision. In imaging systems, the aim is not just to estimate unobserved images, but also their geometric characteristics, from observed quantities that are linked to these unobserved quantities through the forward problem. This book focuses on imagery and vision problems that can be clearly written in terms of an inverse problem in which an estimate of the image and its geometric characteristics is sought.

  6. Perturbed asymptotically linear problems

    OpenAIRE

    Bartolo, R.; Candela, A. M.; Salvatore, A.

    2012-01-01

    The aim of this paper is to investigate the existence of solutions of some semilinear elliptic problems on open bounded domains when the nonlinearity is subcritical and asymptotically linear at infinity and there is a perturbation term which is merely continuous. Even in the case when the problem does not have a variational structure, suitable procedures and estimates allow us to prove that the number of distinct critical levels of the functional associated with the unperturbed problem is "stable" under perturbation.

  7. Characterization of grape seed oil from wines with protected denomination of origin (PDO) from Spain

    Directory of Open Access Journals (Sweden)

    Bada, J. C.

    2015-09-01

    The aim of this study was to determine the composition and characteristics of red grape seed oils (Vitis vinifera L.) from wines with protected denomination of origin (PDO) from Spain. Eight representative grape seed oil varieties from the Spanish wines Ribera del Duero (Tempranillo), Toro (Tempranillo), Rioja (Garnacha), Valencia (Tempranillo) and Cangas (Mencía, Carrasquín, Albarín and Verdejo) were studied. The oil content of the seeds ranged from 10.18 to 13.89%, and the moisture was similar for all the seeds. Linoleic acid was the most abundant fatty acid in all samples, representing around 78%, followed by oleic acid with a concentration close to 16%; the degree of unsaturation in the grape seed oil was over 90%. β-sitosterol and α-tocopherol were the main sterol and tocopherol, reaching values of 77.31% and 3.82 mg·100 g−1 of oil, respectively. Among the tocotrienols, α-tocotrienol was the main one, accounting for 13.18 mg·100 g−1 of oil.

  8. ITOUGH2 sample problems

    International Nuclear Information System (INIS)

    Finsterle, S.

    1997-11-01

    This report contains a collection of ITOUGH2 sample problems. It complements the ITOUGH2 User's Guide [Finsterle, 1997a] and the ITOUGH2 Command Reference [Finsterle, 1997b]. ITOUGH2 is a program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis. It is based on the TOUGH2 simulator for non-isothermal multiphase flow in fractured and porous media [Pruess, 1987, 1991a]. The ITOUGH2 User's Guide [Finsterle, 1997a] describes the inverse modeling framework and provides the theoretical background; the ITOUGH2 Command Reference [Finsterle, 1997b] contains the syntax of all ITOUGH2 commands. This report describes a variety of sample problems solved by ITOUGH2. Table 1.1 contains a short description of the seven sample problems discussed in this report, along with the TOUGH2 equation-of-state (EOS) module that needs to be linked to ITOUGH2. Each sample problem focuses on a few selected issues shown in Table 1.2; ITOUGH2 input features and the usage of program options are described, and interpretations of selected inverse modeling results are given. Problem 1 is a multipart tutorial describing basic ITOUGH2 input files for the main ITOUGH2 application modes; no interpretation of results is given. Problem 2 focuses on non-uniqueness, residual analysis, and correlation structure. Problem 3 illustrates a variety of parameter and observation types and describes parameter selection strategies. Problem 4 compares the performance of minimization algorithms and discusses model identification. Problem 5 explains how to set up a combined inversion of steady-state and transient data. Problem 6 provides a detailed residual and error analysis. Finally, Problem 7 illustrates how the estimation of model-related parameters may help compensate for errors in that model.

  9. A posteriori error estimates for axisymmetric and nonlinear problems

    Czech Academy of Sciences Publication Activity Database

    Křížek, Michal; Němec, J.; Vejchodský, Tomáš

    2001-01-01

    Roč. 15, - (2001), s. 219-236 ISSN 1019-7168 R&D Projects: GA ČR GA201/01/1200; GA MŠk ME 148 Keywords: weighted Sobolev spaces; a posteriori error estimates; finite elements Subject RIV: BA - General Mathematics Impact factor: 0.886, year: 2001

  10. Inverse problems with Poisson data: statistical regularization theory, applications and algorithms

    International Nuclear Information System (INIS)

    Hohage, Thorsten; Werner, Frank

    2016-01-01

    Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results, we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As the most prominent applications we briefly introduce positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years. (topical review)

  11. Anisotropic Density Estimation in Global Illumination

    DEFF Research Database (Denmark)

    Schjøth, Lars

    2009-01-01

    Density estimation employed in multi-pass global illumination algorithms gives rise to a trade-off problem between bias and noise. The problem is seen most evidently as blurring of strong illumination features. This thesis addresses the problem, presenting four methods that reduce both noise and bias.

  12. State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications

    Science.gov (United States)

    Phanomchoeng, Gridsada

    A variety of driver assistance systems, such as traction control, electronic stability control (ESC), rollover prevention, and lane departure avoidance systems, are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely the tire-road friction coefficient, slip angle, roll angle, and rollover index, are known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, unknown and changing plant parameters, and the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz-assumption-based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs. An approach to estimating unknown disturbance inputs in nonlinear systems, using dynamic model inversion and a modified version of the mean value theorem, is also presented.

  13. $L^2$ estimates for the $\\bar \\partial$ operator

    OpenAIRE

    McNeal, Jeffery D.; Varolin, Dror

    2015-01-01

    This is a survey article about $L^2$ estimates for the $\\bar \\partial$ operator. After a review of the basic approach that has come to be called the "Bochner-Kodaira Technique", the focus is on twisted techniques and their applications to estimates for $\\bar \\partial$, to $L^2$ extension theorems, and to other problems in complex analysis and geometry, including invariant metric estimates and the $\\bar \\partial$-Neumann Problem.

  14. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  15. Developmental and individual differences in pure numerical estimation.

    Science.gov (United States)

    Booth, Julie L; Siegler, Robert S

    2006-01-01

    The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1, kindergartners and 1st, 2nd, and 3rd graders were presented problems involving the numbers 0-100; in Experiment 2, 2nd and 4th graders were presented problems involving the numbers 0-1,000. Parallel developmental trends, involving increasing reliance on linear representations of numbers and decreasing reliance on logarithmic ones, emerged across different types of estimation. Consistent individual differences across tasks were also apparent, and all types of estimation skill were positively related to math achievement test scores. Implications for understanding of mathematics learning in general are discussed. Copyright 2006 APA, all rights reserved.

  16. Precision Parameter Estimation and Machine Learning

    Science.gov (United States)

    Wandelt, Benjamin D.

    2008-12-01

    I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-chain Monte Carlo techniques to explore a likelihood function, posterior distribution or χ²-surface efficiently. It is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data, using PICo; and the detailed calculation of cosmological helium and hydrogen recombination, with RICO. Since the APPLe approach is designed to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.

  17. Estimating state-contingent production functions

    DEFF Research Database (Denmark)

    Rasmussen, Svend; Karantininis, Kostas

    The paper reviews the empirical problem of estimating state-contingent production functions. The major problem is that states of nature may not be registered and/or that the number of observations per state is low. Monte Carlo simulation is used to generate an artificial, uncertain production environment based on Cobb-Douglas production functions with state-contingent parameters. The parameters are subsequently estimated from samples of different sizes using Generalized Least Squares and Generalized Maximum Entropy, and the results are compared. It is concluded that Maximum Entropy may...

  18. The Colonial Situation: Complicities and Distinctions from the Surrealist Image

    Directory of Open Access Journals (Sweden)

    Pedro Pablo Gómez

    2011-05-01

    In this work, taking as a baseline the thought of Aimé Césaire and Frantz Fanon (keeping in mind the closeness of the Négritude movement to surrealism), we approach the modernity/coloniality problem by appealing to the so-called surrealist image of beauty. The first part addresses the colonial situation; the second examines the colonial situation through the logic of the surrealist image; and the third raises the possibility of a decolonial universal, or pluriversal. In general terms, by exploring the link between the "surrealist image" and the colonial structure of modernity, which generates the so-called colonial situation, we aim to sketch what a decolonial aesthetics could be, a general problem to be tackled in later works.

  19. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  20. Work related injuries: estimating the incidence among illegally employed immigrants

    Directory of Open Access Journals (Sweden)

    Fadda Emanuela

    2010-12-01

    Background: Statistics on occupational accidents are based on data from registered employees. With the increasing number of immigrants employed illegally and/or without regular working visas in many developed countries, it is of interest to estimate the injury rate among such unregistered workers. Findings: The study was conducted in an area of North-Eastern Italy. The sources of information employed were the Accidents and Emergencies records of a hospital; the population data on foreign-born residents in the hospital catchment area (Health Care District 4, Primary Care Trust 20, Province of Verona, Veneto Region, North-Eastern Italy); and the estimated proportion of illegally employed workers in representative samples from the Province of Verona and the Veneto Region. Of the 419 A&E records collected between January and December 2004 among non-European Union (non-EU) immigrants, 146 aroused suspicion by reporting the home, rather than the workplace, as the site of the accident. These cases were the numerator of the rate. The number of illegally employed non-EU workers, the denominator of the rate, was estimated according to different assumptions and ranged from 537 to 1,338 individuals. The corresponding rates varied from 109.1 to 271.8 per 1,000 non-EU illegal employees, against 65 per 1,000 reported in Italy in 2004. Conclusions: The results of this study suggest that there is an unrecorded burden of illegally employed immigrants suffering from work-related injuries. Additional efforts for the prevention of injuries in the workplace are required to decrease this number. It can be concluded that the Italian National Institute for the Insurance of Work Related Injuries (INAIL) probably underestimates the incidence of these accidents in Italy.
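    The rate computation in this record is simple enough to verify directly; the sketch below reproduces the reported range from the stated numerator and the two denominator bounds:

    ```python
    cases = 146                       # suspect A&E records (numerator)
    for denom in (537, 1338):         # estimated illegal workers (denominator bounds)
        rate = 1000 * cases / denom
        print(f"denominator {denom}: {rate:.1f} injuries per 1,000 workers")
    # 146/537  -> 271.9 per 1,000; 146/1338 -> 109.1 per 1,000
    # (the record reports 271.8 and 109.1), vs. 65 per 1,000 reported nationally
    ```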

  1. Work related injuries: estimating the incidence among illegally employed immigrants.

    Science.gov (United States)

    Mastrangelo, Giuseppe; Rylander, Ragnar; Buja, Alessandra; Marangi, Gianluca; Fadda, Emanuela; Fedeli, Ugo; Cegolon, Luca

    2010-12-08

    Statistics on occupational accidents are based on data from registered employees. With the increasing number of immigrants employed illegally and/or without regular working visas in many developed countries, it is of interest to estimate the injury rate among such unregistered workers. The current study was conducted in an area of North-Eastern Italy. The sources of information employed in the present study were the Accidents and Emergencies records of a hospital; the population data on foreign-born residents in the hospital catchment area (Health Care District 4, Primary Care Trust 20, Province of Verona, Veneto Region, North-Eastern Italy); and the estimated proportion of illegally employed workers in representative samples from the Province of Verona and the Veneto Region. Of the 419 A&E records collected between January and December 2004 among non-European Union (non-EU) immigrants, 146 aroused suspicion by reporting the home, rather than the workplace, as the site of the accident. These cases were the numerator of the rate. The number of illegally employed non-EU workers, the denominator of the rate, was estimated according to different assumptions and ranged from 537 to 1,338 individuals. The corresponding rates varied from 109.1 to 271.8 per 1,000 non-EU illegal employees, against 65 per 1,000 reported in Italy in 2004. The results of this study suggest that there is an unrecorded burden of illegally employed immigrants suffering from work-related injuries. Additional efforts for the prevention of injuries in the workplace are required to decrease this number. It can be concluded that the Italian National Institute for the Insurance of Work Related Injuries (INAIL) probably underestimates the incidence of these accidents in Italy.

  2. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  3. A posteriori error estimates in voice source recovery

    Science.gov (United States)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is considered, using a special mathematical model that relates these quantities. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, piecewise-linear sources (PWL-sources), together with a technique for a posteriori numerical error estimation of the obtained solutions. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, along with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in speaker recognition applications.

  4. Combining Facial Dynamics With Appearance for Age Estimation

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Alnajar, F.; Salah, A.A.; Gevers, T.

    2015-01-01

    Estimating the age of a human from captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation.

  5. Estimating Loan-to-value Distributions

    DEFF Research Database (Denmark)

    Korteweg, Arthur; Sørensen, Morten

    2016-01-01

    We estimate a model of house prices, combined loan-to-value ratios (CLTVs) and trade and foreclosure behavior. House prices are only observed for traded properties and trades are endogenous, creating sample-selection problems for existing approaches to estimating CLTVs. We use a Bayesian filtering...

  6. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  7. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required.
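    The paper's estimator operates in the sample eigenbasis, shrinking eigenvalues so the condition number stays below a chosen bound. The sketch below is a hedged simplification that floors small eigenvalues at lambda_max / kappa_max; the actual method chooses the truncation level by maximum likelihood rather than this crude rule.

    ```python
    import numpy as np

    def condreg(sample_cov, kappa_max):
        """Cap the condition number of a covariance estimate at kappa_max
        by flooring small eigenvalues (a crude variant; the paper picks
        the truncation level by maximum likelihood)."""
        vals, vecs = np.linalg.eigh(sample_cov)
        floor = vals.max() / kappa_max
        vals = np.clip(vals, floor, None)
        return (vecs * vals) @ vecs.T

    # "Large p, small n": p = 30 variables, n = 20 observations.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(20, 30))
    S = np.cov(X, rowvar=False)                  # singular: condition number inf
    S_reg = condreg(S, kappa_max=50.0)
    print(np.linalg.cond(S_reg))                 # <= 50, and invertible
    ```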

  8. Calculation of statistic estimates of kinetic parameters from substrate uncompetitive inhibition equation using the median method

    Directory of Open Access Journals (Sweden)

    Pedro L. Valencia

    2017-04-01

    We provide initial-rate data from enzymatic reaction experiments and its processing to estimate the kinetic parameters of the substrate uncompetitive inhibition equation, using the median method published by Eisenthal and Cornish-Bowden (Cornish-Bowden and Eisenthal, 1974; Eisenthal and Cornish-Bowden, 1974). The method, called the direct linear plot, consists in calculating the median of a dataset of kinetic parameters Vmax and Km from the Michaelis-Menten equation. Here we present the procedure for applying the direct linear plot to the substrate uncompetitive inhibition equation, a three-parameter equation. The median method is characterized by its robustness and its insensitivity to outliers. The calculations are presented in an Excel datasheet, and a computational algorithm was developed in the free software Python. The kinetic parameters of the substrate uncompetitive inhibition equation, Vmax, Km and Ks, were calculated using three experimental points at a time from a dataset of 13 experimental points. All 286 combinations were calculated, and the resulting dataset of kinetic parameters was used to calculate the median, which is the statistical estimator of the real kinetic parameters. A comparative statistical analysis between the median method and least squares was published in Valencia et al. [3].
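    A hedged sketch of the combinatorial median procedure described above: for v = Vmax*S/(Km + S + S^2/Ks), each triple of data points yields a 3x3 linear system in (Vmax, Km, 1/Ks), and the final estimates are the medians over all 286 triples of 13 points. The data below are invented, not the article's.

    ```python
    import numpy as np
    from itertools import combinations

    # Invented initial-rate data (S: substrate conc., v: initial rate), roughly
    # following v = Vmax*S / (Km + S + S**2/Ks) with Vmax=10, Km=2, Ks=40.
    S = np.array([0.5, 1, 2, 3, 4, 6, 8, 10, 15, 20, 30, 40, 60], float)
    rng = np.random.default_rng(5)
    v = 10.0 * S / (2.0 + S + S**2 / 40.0) * (1 + rng.normal(scale=0.02, size=S.size))

    # Rearranged model, linear in (Vmax, Km, 1/Ks):
    #   Vmax*S_i - v_i*Km - v_i*S_i**2*(1/Ks) = v_i*S_i
    ests = []
    for idx in combinations(range(S.size), 3):   # all 286 triples of 13 points
        idx = list(idx)
        A = np.column_stack([S[idx], -v[idx], -v[idx] * S[idx]**2])
        try:
            Vm, Km, invKs = np.linalg.solve(A, v[idx] * S[idx])
        except np.linalg.LinAlgError:
            continue                             # skip singular triples
        if abs(invKs) > 1e-12:
            ests.append((Vm, Km, 1.0 / invKs))
    Vm_med, Km_med, Ks_med = np.median(np.array(ests), axis=0)
    print(f"Vmax~{Vm_med:.2f}  Km~{Km_med:.2f}  Ks~{Ks_med:.1f}")
    ```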

  9. Some statistical problems inherent in radioactive-source detection

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1978-01-01

    Some of the statistical questions associated with problems of detecting random-point-process signals embedded in random-point-process noise are examined. An example of such a problem is that of searching for a lost radioactive source with a moving detection system. The emphasis is on theoretical questions, but some experimental and Monte Carlo results are used to test the theoretical results. Several idealized binary decision problems are treated by starting with simple, specific situations and progressing toward more general problems. This sequence of decision problems culminates in the minimum-cost-expectation rule for deciding between two Poisson processes with arbitrary intensity functions. As an example, this rule is then specialized to the detector-passing-a-point-source decision problem. Finally, Monte Carlo techniques are used to develop and test one estimation procedure: the maximum-likelihood estimation of a parameter in the intensity function of a Poisson process. For the Monte Carlo test this estimation procedure is specialized to the detector-passing-a-point-source case. Introductory material from probability theory is included so as to make the report accessible to those not especially conversant with probabilistic concepts and methods. 16 figures
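    As a hedged illustration of the binary decision setting treated in the report (not its minimum-cost-expectation rule, which would shift the threshold by costs and priors), the sketch below decides between two binned Poisson intensity functions by thresholding the log-likelihood ratio; the background and source shapes are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Binned intensities (expected counts per bin) under "noise only" (H0) and
    # "source present" (H1): an invented detector-passing-a-point-source shape.
    t = np.linspace(-5, 5, 101)
    lam0 = np.full(t.size, 2.0)                    # flat background
    lam1 = lam0 + 8.0 * np.exp(-t**2 / 2.0)        # background + source peak

    def log_lr(counts):
        """Log-likelihood ratio for independent Poisson bins."""
        return np.sum(counts * np.log(lam1 / lam0)) - np.sum(lam1 - lam0)

    # Decide H1 if the log LR exceeds a threshold (0 = equal priors and costs).
    counts_h1 = rng.poisson(lam1)
    counts_h0 = rng.poisson(lam0)
    print(log_lr(counts_h1) > 0.0, log_lr(counts_h0) > 0.0)  # typically True, False
    ```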

  10. Essays on variational approximation techniques for stochastic optimization problems

    Science.gov (United States)

    Deride Silva, Julio A.

    This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence

  11. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models (linear regression); Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynamic models.

  12. Optimal estimations of random fields using kriging

    International Nuclear Information System (INIS)

    Barua, G.

    2004-01-01

    Kriging is a statistical procedure for estimating the best weights of a linear estimator. Suppose there is a point, an area, or a volume of ground over which we do not know the value of a hydrological variable and wish to estimate it. To produce an estimator, we need some information to work on, usually available in the form of samples. There can be an infinite number of linear unbiased estimators whose weights sum to one; the problem is how to determine the weights for which the estimation variance is least. The resulting system of equations is generally known as the kriging system, and the estimator it produces is the kriging estimator. The variance of the kriging estimator is found by substituting the weights into the general estimation-variance equation. We assume here a linear model for the semi-variogram; applying the model to the kriging system, we obtain a set of kriging equations, and solving them yields the kriging variance. Thus, for the one-dimensional problem considered, kriging definitely gives a better estimation variance than the extension variance.
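    A hedged sketch of the ordinary-kriging system referred to above: the weights solve a linear system built from the semi-variogram, with a Lagrange multiplier enforcing that the weights sum to one. The linear variogram and the 1-D sample data are invented for illustration.

    ```python
    import numpy as np

    def ordinary_kriging(xs, zs, x0, gamma=lambda h: 0.5 * h):
        """Solve the ordinary-kriging system for a given semi-variogram gamma."""
        n = len(xs)
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = gamma(np.abs(xs[:, None] - xs[None, :]))
        A[:n, n] = A[n, :n] = 1.0                 # unbiasedness constraint
        b = np.append(gamma(np.abs(xs - x0)), 1.0)
        sol = np.linalg.solve(A, b)
        w, mu = sol[:n], sol[n]                   # weights, Lagrange multiplier
        estimate = w @ zs
        variance = w @ b[:n] + mu                 # kriging (estimation) variance
        return estimate, variance

    xs = np.array([0.0, 1.0, 3.0, 4.0])           # sample locations (invented)
    zs = np.array([1.2, 1.5, 2.1, 2.0])           # measured values (invented)
    print(ordinary_kriging(xs, zs, x0=2.0))
    ```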

  13. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  14. Aboveground Forest Biomass Estimation with Landsat and LiDAR Data and Uncertainty Analysis of the Estimates

    Directory of Open Access Journals (Sweden)

    Dengsheng Lu

    2012-01-01

    Landsat Thematic Mapper (TM) imagery has long been the dominant data source for forest biomass estimation, and LiDAR has recently offered an important new stream of structural data. Research on uncertainty analysis of forest biomass estimates, on the other hand, has only recently received sufficient attention, owing to the difficulty of collecting reference data. This paper provides a brief overview of current forest biomass estimation methods using both TM and LiDAR data. A case study is then presented that demonstrates these estimation methods and their uncertainty analysis. Results indicate that Landsat TM data can provide adequate biomass estimates for secondary succession but are not suitable for mature forest biomass estimates due to data saturation problems. LiDAR can overcome TM's shortcoming and provides better biomass estimation performance, but it has not been extensively applied in practice due to data availability constraints. The uncertainty analysis indicates that various sources affect the performance of forest biomass/carbon estimation; the clearly dominant sources are the variation of input sample plot data and the data saturation problem of optical sensors. A possible solution for increasing confidence in forest biomass estimates is to integrate the strengths of multisensor data.

  15. Number Line Estimation Predicts Mathematical Skills: Difference in Grades 2 and 4.

    Science.gov (United States)

    Zhu, Meixia; Cai, Dan; Leung, Ada W S

    2017-01-01

    Studies have shown that number line estimation is important for learning. However, it is yet unclear if number line estimation predicts different mathematical skills in different grades after controlling for age, non-verbal cognitive ability, attention, and working memory. The purpose of this study was to examine the role of number line estimation on two mathematical skills (calculation fluency and math problem-solving) in grade 2 and grade 4. One hundred and forty-eight children from Shanghai, China were assessed on measures of number line estimation, non-verbal cognitive ability (non-verbal matrices), working memory (N-back), attention (expressive attention), and mathematical skills (calculation fluency and math problem-solving). The results showed that in grade 2, number line estimation correlated significantly with calculation fluency (r = -0.27, p < 0.05) and math problem-solving (r = -0.52, p < 0.01). In grade 4, number line estimation correlated significantly with math problem-solving (r = -0.38, p < 0.01), but not with calculation fluency. Regression analyses indicated that in grade 2, number line estimation accounted for unique variance in math problem-solving (12.0%) and calculation fluency (4.0%) after controlling for the effects of age, non-verbal cognitive ability, attention, and working memory. In grade 4, number line estimation accounted for unique variance in math problem-solving (9.0%) but not in calculation fluency. These findings suggested that number line estimation had an important role in math problem-solving for both grades 2 and 4 children and in calculation fluency for grade 2 children. We concluded that number line estimation could be a useful indicator for teachers to identify and improve children's mathematical skills.

  16. State estimation for large-scale wastewater treatment plants.

    Science.gov (United States)

    Busch, Jan; Elixmann, David; Kühl, Peter; Gerkens, Carine; Schlöder, Johannes P; Bock, Hans G; Marquardt, Wolfgang

    2013-09-01

    Many relevant process states in wastewater treatment are not measurable, or their measurements are subject to considerable uncertainty. This poses a serious problem for process monitoring and control. Model-based state estimation can provide estimates of the unknown states and increase the reliability of measurements. In this paper, an integrated approach is presented for optimization-based sensor network design and the estimation problem. Using the ASM1 model in the reference scenario BSM1, a cost-optimal sensor network is designed and the prominent estimators EKF and MHE are evaluated. Very good estimation results are found for the system comprising 78 states, requiring sensor networks of only moderate complexity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Number Line Estimation Predicts Mathematical Skills: Difference in Grades 2 and 4

    Directory of Open Access Journals (Sweden)

    Meixia Zhu

    2017-09-01

    Studies have shown that number line estimation is important for learning. However, it is yet unclear if number line estimation predicts different mathematical skills in different grades after controlling for age, non-verbal cognitive ability, attention, and working memory. The purpose of this study was to examine the role of number line estimation on two mathematical skills (calculation fluency and math problem-solving) in grade 2 and grade 4. One hundred and forty-eight children from Shanghai, China were assessed on measures of number line estimation, non-verbal cognitive ability (non-verbal matrices), working memory (N-back), attention (expressive attention), and mathematical skills (calculation fluency and math problem-solving). The results showed that in grade 2, number line estimation correlated significantly with calculation fluency (r = -0.27, p < 0.05) and math problem-solving (r = -0.52, p < 0.01). In grade 4, number line estimation correlated significantly with math problem-solving (r = -0.38, p < 0.01), but not with calculation fluency. Regression analyses indicated that in grade 2, number line estimation accounted for unique variance in math problem-solving (12.0%) and calculation fluency (4.0%) after controlling for the effects of age, non-verbal cognitive ability, attention, and working memory. In grade 4, number line estimation accounted for unique variance in math problem-solving (9.0%) but not in calculation fluency. These findings suggested that number line estimation had an important role in math problem-solving for both grades 2 and 4 children and in calculation fluency for grade 2 children. We concluded that number line estimation could be a useful indicator for teachers to identify and improve children's mathematical skills.
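    The linear-versus-logarithmic comparison used in these number-line studies is easy to reproduce: fit both forms to a set of number-line estimates and compare R². A hedged sketch with invented response data (not the study's):

    ```python
    import numpy as np

    # Invented number-line estimates: numbers presented and positions marked
    # by a child on a 0-100 line (compressed at the high end, log-like).
    x = np.array([2, 5, 10, 18, 25, 42, 61, 79, 96], float)
    y = np.array([10, 29, 45, 57, 64, 75, 83, 89, 94], float)

    def r_squared(y, yhat):
        ss_res = np.sum((y - yhat) ** 2)
        return 1.0 - ss_res / np.sum((y - y.mean()) ** 2)

    # Linear model y = a*x + b and logarithmic model y = a*ln(x) + b.
    lin = np.polyfit(x, y, 1)
    log = np.polyfit(np.log(x), y, 1)
    print("linear R^2:", round(r_squared(y, np.polyval(lin, x)), 3))
    print("log    R^2:", round(r_squared(y, np.polyval(log, np.log(x))), 3))
    ```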

  18. A Note on optimal estimation in the presence of outliers

    Directory of Open Access Journals (Sweden)

    John N. Haddad

    2017-06-01

    Haddad, J. 2017. A note on optimal estimation in the presence of outliers. Lebanese Science Journal, 18(1): 136-141. The basic problem of estimating the mean and standard deviation of a random normal process in the presence of an outlying observation is considered. The value of the outlier is taken as a constraint imposed on the maximization of the log-likelihood. It is shown that the optimal solution of the maximization problem exists, and expressions for the estimates are given. Applications to estimation in the presence of outliers and to outlier detection are discussed and illustrated through a simulation study and an analysis of trade data.

  19. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    Science.gov (United States)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g., group sparsity) and/or non-Gaussian measurement models (e.g., the point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, the approach can be applied to a broad class of problem settings and requires only modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. The framework can thus capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, in the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, in the context of dynamic analyses of learning with neural spiking and behavioral observations.

  20. Radon: A health problem and a communication problem

    International Nuclear Information System (INIS)

    Johnson, R.H.

    1992-01-01

    The US Environmental Protection Agency (USEPA) is making great efforts to alert the American public to the potential health risks of radon in homes. The news media have widely publicized radon as a problem; state and local governments are responding to public alarm; and hundreds of radon "experts" are now offering radon detection and mitigation services. Apparently, USEPA's communication program is working, and the public is becoming increasingly concerned with radon. But are they concerned with radon as a "health" problem in the way USEPA intended? The answer is yes, partly. More and more, however, the concerns are about home resale values. Many homebuyers now decide whether to buy on the basis of a single radon screening measurement, comparing it with USEPA's action guide of 4 pCi/L. They often conclude that 3.9 is OK, but 4.1 is not. Here is where the communication problems begin. The public largely misunderstands the significance of USEPA's guidelines and the meaning of screening measurements. Seldom does anyone inquire about the quality of the measurements, or about the results of USEPA performance testing. Who asks about the uncertainty of lifetime exposure assessments based on a 1-hour, 1-day, 3-day, or even 30-day measurement? Who asks about the uncertainty of USEPA's risk estimates? Fortunately, an increasing number of radiation protection professionals are asking such questions. They find that USEPA's risk projections are based on many assumptions which warrant further evaluation, particularly with regard to the combined risks of radon and cigarette smoking. This is the next communication problem. What are these radiation professionals doing to understand the bases for radon health-risk projections? Who is willing to communicate a balanced perspective to the public? Who is willing to communicate the uncertainty and conservatism in radon measurements and risk estimates?

  1. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chao [FuZhou University, FuZhou (China); Vu, Quoc Dong; Li, Pu [Ilmenau University of Technology, Ilmenau (Germany)

    2013-02-15

    A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated by integrating the model equations. Since second-order derivatives are not required in this computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  2. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    International Nuclear Information System (INIS)

    Zhao, Chao; Vu, Quoc Dong; Li, Pu

    2013-01-01

    A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated by integrating the model equations. Since second-order derivatives are not required in this computation framework, the proposed method is efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  3. Applications of elliptic Carleman inequalities to Cauchy and inverse problems

    CERN Document Server

    Choulli, Mourad

    2016-01-01

    This book presents a unified approach to studying the stability of both elliptic Cauchy problems and selected inverse problems. Based on elementary Carleman inequalities, it establishes three-ball inequalities, which are the key to deriving logarithmic stability estimates for elliptic Cauchy problems and are also useful in proving stability estimates for certain elliptic inverse problems. The book presents three inverse problems, the first of which consists in determining the surface impedance of an obstacle from the far field pattern. The second problem investigates the detection of corrosion by electric measurement, while the third concerns the determination of an attenuation coefficient from internal data, which is motivated by a problem encountered in biomedical imaging.

  4. Endocrine disrupting chemicals: harmful substances and how to test them

    Directory of Open Access Journals (Sweden)

    Olea-Serrano Nicolás

    2002-01-01

    This paper presents an analysis of the opinions of different groups with an interest in the problem of identifying chemical substances with endocrine-disrupting activity: scientists, international regulatory bodies, non-governmental organizations, and industry. The consequences that exposure to endocrine disruptors may have for human health are also discussed, considering concrete issues such as the estimation of risk; the tests that must be used to detect endocrine disruption; the difficulty of establishing an association between dose, time of exposure, individual susceptibility, and effect; and the attempts to create a census of endocrine disruptors. Finally, it is proposed that not all hormonal mimics should be grouped under the single generic label of endocrine disruptors.

  5. PROBLEMAS DE ESTIMACIÓN DE MAGNITUDES NO ALCANZABLES: ESTRATEGIAS Y ÉXITO EN LA RESOLUCIÓN (Unreachable Magnitude Estimation Problems: Strategies and Solving Success

    Directory of Open Access Journals (Sweden)

    Núria Gorgorió

    2013-03-01

    Fermi problems are problems that, while difficult to solve exactly, admit an approximate solution by breaking the problem into smaller parts and solving these separately. In this article we present unreachable magnitude estimation problems (PEMNA, from the Spanish "problemas de estimación de magnitudes no alcanzables") as a subset of Fermi problems. Based on data collected in a study carried out with 12 to 16 year old students, we characterize the different solving strategies proposed by the students and discuss the potential of these strategies to successfully solve the problems.

  6. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts, where Part I gives a theoretical background for a collection of research papers written by the candidate in collaboration with others. These constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  7. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods, such as Approximate Bayesian Computation (ABC), can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
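
    A hedged sketch of the core idea: ascend a simulated objective with a simultaneous-perturbation (SPSA-style) gradient estimate. The toy simulator, summary statistics, and gain sequences below are stand-ins, not the authors' tuned procedure.

```python
# SPSA-style stochastic approximation toward a maximum likelihood proxy:
# the objective is the (noisy) negative distance between simulated and
# observed summary statistics. All model details are invented.
import numpy as np

rng = np.random.default_rng(1)

def simulate_summaries(theta, n=200):
    # Toy simulator: summaries = (mean, std) of Normal(theta[0], exp(theta[1])).
    x = rng.normal(theta[0], np.exp(theta[1]), size=n)
    return np.array([x.mean(), x.std()])

s_obs = np.array([2.0, 1.5])              # observed summary statistics

def objective(theta):
    return -np.sum((simulate_summaries(theta) - s_obs) ** 2)

theta = np.array([0.0, 0.0])
for k in range(1, 2001):
    a_k = 0.05 / k ** 0.602               # standard SPSA gain sequences
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.size)
    g_hat = (objective(theta + c_k * delta) -
             objective(theta - c_k * delta)) / (2 * c_k * delta)
    theta = theta + a_k * g_hat           # move along the simulated gradient
print("estimate:", theta[0], np.exp(theta[1]))   # roughly approaches (2.0, 1.5)
```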

  8. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
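
    A minimal sketch of directly bounding the condition number: shrink the sample eigenvalues into an interval [tau, kappa*tau] and pick tau by a crude one-dimensional likelihood search. The paper derives the exact solution path; the grid search here is a simplification.

```python
# Condition-number-regularized covariance sketch: truncate eigenvalues to
# [tau, kappa*tau], choosing tau by maximizing a profile Gaussian
# log-likelihood over a grid (a simplification of the paper's algorithm).
import numpy as np

def cond_reg_cov(X, kappa=10.0, n_grid=200):
    S = np.cov(X, rowvar=False)
    lam, V = np.linalg.eigh(S)                 # sample eigenvalues (ascending)
    lam = np.maximum(lam, 1e-12)
    best_d, best_ll = None, -np.inf
    for tau in np.geomspace(lam.min(), lam.max(), n_grid):
        d = np.clip(lam, tau, kappa * tau)     # truncated eigenvalues
        ll = -np.sum(np.log(d) + lam / d)      # Gaussian log-likelihood (up to const.)
        if ll > best_ll:
            best_d, best_ll = d, ll
    return (V * best_d) @ V.T                  # reassemble the estimator

rng = np.random.default_rng(2)
X = rng.standard_normal((25, 50))              # "large p, small n" data
w = np.linalg.eigvalsh(cond_reg_cov(X))
print("condition number:", w.max() / w.min())  # bounded by kappa
```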

  9. Iterative algorithm for the volume integral method for magnetostatics problems

    International Nuclear Information System (INIS)

    Pasciak, J.E.

    1980-11-01

    Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparison of iterative algorithms with the elimination method of GFUN3D shows that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double layer dipole magnet are given. Error estimates for the linearized problem are also derived.

  10. On robust parameter estimation in brain-computer interfacing

    Science.gov (United States)

    Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert

    2017-12-01

    Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.

  11. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-19

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations of blood mass density. The method is described and its performance is assessed through some numerical simulations. The robustness of the method in the presence of noise is also studied.

  12. Spectral nodal methodology for multigroup slab-geometry discrete ordinates neutron transport problems with linearly anisotropic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Oliva, Amaury M.; Filho, Hermes A.; Silva, Davi M.; Garcia, Carlos R., E-mail: aoliva@iprj.uerj.br, E-mail: halves@iprj.uerj.br, E-mail: davijmsilva@yahoo.com.br, E-mail: cgh@instec.cu [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico. Departamento de Modelagem Computacional; Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba)

    2017-07-01

    In this paper, we propose a numerical methodology for the development of a method of the spectral nodal class that generates numerical solutions free from spatial truncation errors. This method, denominated Spectral Deterministic Method (SDM), is tested as an initial study of the solutions (spectral analysis) of neutron transport equations in the discrete ordinates (S{sub N}) formulation, in one-dimensional slab geometry, multigroup approximation, with linearly anisotropic scattering, considering homogeneous and heterogeneous domains with fixed source. The unknowns in the methodology are the cell-edge and cell-average angular fluxes; the numerical values calculated for these quantities coincide with the analytic solution of the equations. These numerical results are shown and compared with the traditional fine-mesh Diamond Difference (DD) method and the coarse-mesh spectral Green's function (SGF) method to illustrate the method's accuracy and stability. The solution algorithms are implemented in a computer simulator written in the C++ language, the same simulator used to generate the results of the reference work. (author)
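
    For contrast with the spectral nodal method, here is a sketch of the traditional fine-mesh Diamond Difference scheme it is compared against: a one-group, isotropic-scattering S_N source-iteration sweep in slab geometry, assuming vacuum boundaries and a uniform fixed source (all parameters invented).

```python
# One-group slab-geometry S_N transport with Diamond Difference (DD)
# spatial differencing and source iteration (a sketch, not the SDM).
import numpy as np

def sn_slab_dd(sigma_t=1.0, sigma_s=0.5, q_ext=1.0, width=10.0, nx=200, n_ang=8):
    mu, w = np.polynomial.legendre.leggauss(n_ang)   # S_N quadrature on [-1, 1]
    h = width / nx
    phi = np.zeros(nx)
    for _ in range(500):                             # source iteration
        q = 0.5 * (sigma_s * phi + q_ext)            # isotropic emission density
        phi_new = np.zeros(nx)
        for m in range(n_ang):
            am = abs(mu[m])
            psi_in = 0.0                             # vacuum boundary
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:
                psi_c = (q[i] + 2 * am / h * psi_in) / (sigma_t + 2 * am / h)
                psi_in = 2 * psi_c - psi_in          # DD closure for outflow
                phi_new[i] += w[m] * psi_c
        if np.max(np.abs(phi_new - phi)) < 1e-8:
            phi = phi_new
            break
        phi = phi_new
    return phi

phi = sn_slab_dd()
# Deep inside a thick slab the flux approaches q_ext/(sigma_t - sigma_s) = 2.
print("midplane scalar flux:", phi[len(phi) // 2])
```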

  13. Event-based state estimation a stochastic perspective

    CERN Document Server

    Shi, Dawei; Chen, Tongwen

    2016-01-01

    This book explores event-based estimation problems. It shows how several stochastic approaches are developed to maintain estimation performance when sensors perform their updates at slower rates only when needed. The self-contained presentation makes this book suitable for readers with no more than a basic knowledge of probability analysis, matrix algebra and linear systems. The introduction and literature review provide information, while the main content deals with estimation problems from four distinct angles in a stochastic setting, using numerous illustrative examples and comparisons. The text elucidates both theoretical developments and their applications, and is rounded out by a review of open problems. This book is a valuable resource for researchers and students who wish to expand their knowledge and work in the area of event-triggered systems. At the same time, engineers and practitioners in industrial process control will benefit from the event-triggering technique that reduces communication costs ...

  14. Waring's Problem and the Circle Method

    Indian Academy of Sciences (India)

    Keywords: Waring's problem, circle method. Their proof of a slightly weaker form of Ramanujan's original formula was published. The major arc estimation is fairly simple, while it is the minor arc estimation that accounts for the 'major' amount of work involved!

  15. A multidimensional continued fraction and some of its statistical properties

    International Nuclear Information System (INIS)

    Baldwin, P.R.

    1992-01-01

    The problem of simultaneously approximating a vector of irrational numbers with rationals is analyzed in a geometrical setting using notions of dynamical systems theory. The author discusses a (vectorial) multidimensional continued-fraction algorithm (MCFA) of additive type, the generalized mediant algorithm (GMA), and gives a geometrical interpretation of it. He calculates the invariant measure of the GMA shift as well as its Kolmogorov-Sinai (KS) entropy for an arbitrary number of irrationals. The KS entropy is related to the growth rate of the denominators of the Euclidean algorithm. This is the first analytical calculation of the growth rate of denominators for any MCFA.
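
    In the one-dimensional case, the growth rate of the continued-fraction denominators is the classical Lévy constant; the following sketch estimates it empirically from the Euclidean algorithm applied to a random high-precision rational. It illustrates the quantity being generalized, not the GMA itself.

```python
# Empirical growth rate (1/k) * log(q_k) of continued-fraction denominators
# for a "typical" number, compared with Levy's constant pi^2 / (12 ln 2).
import math
import random

random.seed(3)
N = 10 ** 120
p = random.randrange(1, N)            # x = p/N, a typical number to 120 digits

num, den = p, N
quotients = []
while den:                            # Euclidean algorithm: partial quotients
    quotients.append(num // den)
    num, den = den, num % den

q_prev, q_k = 0, 1
for a in quotients[1:]:               # skip a_0; q_k = a_k q_{k-1} + q_{k-2}
    q_prev, q_k = q_k, a * q_k + q_prev
k = len(quotients) - 1

print("empirical (1/k) log q_k      :", math.log(q_k) / k)   # roughly 1.19
print("Levy's constant pi^2/(12 ln2):", math.pi ** 2 / (12 * math.log(2)))
```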

  16. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators.
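
    A sketch of an exact nonlinear least-squares (NLS) fundamental-frequency estimator for real-valued data: for each candidate frequency, project the signal onto the span of its cosine and sine harmonics and keep the frequency capturing the most energy. Unlike the asymptotic approximations criticized above, the projection retains the cross-terms between closely spaced harmonics. The signal and grid are invented; this is not the authors' exact algorithm.

```python
# Grid-search NLS pitch estimation for a real-valued harmonic signal.
import numpy as np

def nls_pitch(x, n_harm=3, grid=np.linspace(0.01, 0.25, 2000)):
    n = np.arange(x.size)
    best_w, best_e = None, -np.inf
    for w0 in grid:                      # w0 in cycles/sample
        Z = np.column_stack([f(2 * np.pi * w0 * l * n)
                             for l in range(1, n_harm + 1)
                             for f in (np.cos, np.sin)])
        amp, *_ = np.linalg.lstsq(Z, x, rcond=None)
        e = x @ Z @ amp                  # energy of the projection onto span(Z)
        if e > best_e:
            best_w, best_e = w0, e
    return best_w

n = np.arange(400)
x = sum(np.cos(2 * np.pi * 0.03 * l * n + l) for l in (1, 2, 3))
x += 0.1 * np.random.default_rng(4).standard_normal(n.size)
print("estimated f0:", nls_pitch(x))     # close to 0.03 cycles/sample
```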

  17. Pollution Problem in River Kabul: Accumulation Estimates of Heavy Metals in Native Fish Species.

    Science.gov (United States)

    Ahmad, Habib; Yousafzai, Ali Muhammad; Siraj, Muhammad; Ahmad, Rashid; Ahmad, Israr; Nadeem, Muhammad Shahid; Ahmad, Waqar; Akbar, Nazia; Muhammad, Khushi

    2015-01-01

    The contamination of aquatic systems with heavy metals is affecting the fish population and hence results in a decline of the productivity rate. River Kabul is a transcountry river originating in Paghman province in Afghanistan and entering the Khyber Pakhtunkhwa province of Pakistan; it is a major source of irrigation, and more than 54 fish species have been reported in the river. The present study aimed at the estimation of the heavy metals load in the fish living in River Kabul. Heavy metals including chromium, nickel, copper, zinc, cadmium, and lead were determined through atomic absorption spectrophotometry after tissue digestion, by adopting standard procedures. Concentrations of these metals were recorded in muscles and liver of five native fish species, namely, Wallago attu, Aorichthys seenghala, Cyprinus carpio, Labeo dyocheilus, and Ompok bimaculatus. The concentrations of chromium, nickel, copper, zinc, and lead were higher in both of the tissues, whereas the concentration of cadmium was comparatively low. However, the concentrations of the metals exceeded the RDA (Recommended Dietary Allowance of the USA) limits. Hence, continuous fish consumption may create health problems for the consumers. The results of the present study are alarming and suggest implementing environmental laws and initiating a biomonitoring program for the river.

  18. Iterative observer based method for source localization problem for Poisson equation in 3D

    KAUST Repository

    Majeed, Muhammad Usman

    2017-07-10

    A state-observer based method is developed to solve the point source localization problem for the Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of solutions of multiple boundary data estimation problems for the Laplace equation over the 3D domain. The solution of each of these boundary estimation problems involves writing down the mathematical problem in a state-space-like representation, using one of the space variables as time-like. First, a system observability result for the 3D boundary estimation problem is recalled in an infinite-dimensional setting. Then, based on the observability result, the boundary estimation problem is decomposed into a set of independent 2D sub-problems. These 2D problems are then solved using an iterative observer to obtain the solution. Theoretical results are provided. The method is implemented numerically using finite difference discretization schemes. Numerical illustrations along with simulation results are provided.

  19. Coefficient Estimate Problem for a New Subclass of Biunivalent Functions

    OpenAIRE

    N. Magesh; T. Rosy; S. Varma

    2013-01-01

    We introduce a unified subclass of the function class Σ of biunivalent functions defined in the open unit disc. Furthermore, we find estimates on the coefficients |a2| and |a3| for functions in this subclass. In addition, many relevant connections with known or new results are pointed out.

  20. Estimates of excess medically attended acute respiratory infections in periods of seasonal and pandemic influenza in Germany from 2001/02 to 2010/11.

    Directory of Open Access Journals (Sweden)

    Matthias An der Heiden

    Full Text Available BACKGROUND: The number of patients seeking health care is a central indicator that may serve several different purposes: (1) as a proxy for the burden on the primary care system; (2) as a starting point to estimate the number of persons ill with influenza; (3) as the denominator data for the calculation of the case fatality rate and the proportion hospitalized (severity indicators); (4) for economic calculations. In addition, reliable estimates of the burden of disease and on the health care system are essential to communicate the impact of influenza to health care professionals, public health professionals and the public. METHODOLOGY/PRINCIPAL FINDINGS: Using German syndromic surveillance data, we have developed a novel approach to describe the seasonal variation of medically attended acute respiratory infections (MAARI) and estimate the excess MAARI attributable to influenza. The weekly excess inside a period of influenza circulation is estimated as the difference between the actual MAARI and a MAARI baseline, which is established using a cyclic regression model for counts. As a result, we estimated the highest ARI burden within the last 10 years for the influenza season 2004/05, with an excess of 7.5 million outpatient visits (CI95% 6.8-8.0). In contrast, the pandemic wave of 2009 accounted for one third of this burden, with an excess of 2.4 million (CI95% 1.9-2.8). Estimates can be produced for different age groups, different geographic regions in Germany and also in real time during influenza waves.
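
    A toy Serfling-style version of the baseline idea: fit a trend plus annual harmonics on off-season weeks, and read the excess as observed minus baseline inside the influenza period. The paper uses a cyclic regression model for counts; the Gaussian least-squares fit and all numbers below are stand-ins.

```python
# Excess MAARI from a cyclic-regression baseline (toy data).
import numpy as np

rng = np.random.default_rng(5)
weeks = np.arange(5 * 52)
season = 1000 + 200 * np.cos(2 * np.pi * weeks / 52)      # seasonal background
flu = np.where((weeks % 52) < 8, 600.0, 0.0)              # epidemic in weeks 0-7
maari = rng.poisson(season + flu).astype(float)

t = weeks
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 52), np.cos(2 * np.pi * t / 52)])
off = (weeks % 52) >= 10                                  # off-season training weeks
beta, *_ = np.linalg.lstsq(X[off], maari[off], rcond=None)
baseline = X @ beta                                       # fitted cyclic baseline

excess = np.where(~off, maari - baseline, 0.0).sum()
print("estimated excess MAARI:", excess)                  # ~600 * 8 weeks * 5 years
```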

  1. Local heat transfer estimation in microchannels during convective boiling under microgravity conditions: 3D inverse heat conduction problem using BEM techniques

    Science.gov (United States)

    Luciani, S.; LeNiliot, C.

    2008-11-01

    Two-phase and boiling flow instabilities are complex, due to phase change and the existence of several interfaces. To fully understand the high heat transfer potential of boiling flows in microscale geometries, it is vital to quantify these transfers. To perform this task, an experimental device has been designed to observe flow patterns. The analysis is carried out using an inverse method which allows us to estimate the local heat transfers while boiling occurs inside a microchannel. In our configuration, a direct measurement would impair the accuracy of the heat transfer coefficient being sought, because thermocouples implanted on the minichannel surface would disturb the established flow. In this communication, we solve a 3D IHCP which consists in estimating, from experimental temperature measurements, the surface temperature and the surface heat flux in a minichannel during convective boiling under several gravity levels (μg, 1g, 1.8g). The considered IHCP is formulated as a mathematical optimization problem and solved using the boundary element method (BEM).

  2. The estimation of small probabilities and risk assessment

    International Nuclear Information System (INIS)

    Kalbfleisch, J.D.; Lawless, J.F.; MacKay, R.J.

    1982-01-01

    The primary contribution of statistics to risk assessment is in the estimation of probabilities. Frequently the probabilities in question are small, and their estimation is particularly difficult. The authors consider three examples illustrating some problems inherent in the estimation of small probabilities

  3. New population and life expectancy estimates for the Indigenous population of Australia's Northern Territory, 1966-2011.

    Directory of Open Access Journals (Sweden)

    Tom Wilson

    Full Text Available The Indigenous population of Australia suffers considerable disadvantage across a wide range of socio-economic indicators, and is therefore the focus of many policy initiatives attempting to 'close the gap' between Indigenous and non-Indigenous Australians. Unfortunately, past population estimates have proved unreliable as denominators for these indicators. The aim of the paper is to contribute more robust estimates for the Northern Territory Indigenous population for the period 1966-2011, and hence to estimate one of the most important socio-economic indicators, life expectancy at birth. A consistent time series of population estimates from 1966 to 2011, based on the more reliable 2011 official population estimates, was created by a mix of reverse and forward cohort survival. Adjustments were made to ensure sensible sex ratios and consistency with recent birth registrations. Standard life table methods were employed to estimate life expectancy. Drawing on an approach from probabilistic forecasting, confidence intervals surrounding population numbers and life expectancies were estimated. The Northern Territory Indigenous population in 1966 numbered between 23,800 and 26,100, compared with between 66,100 and 73,200 in 2011. In 1966-71 Indigenous life expectancy at birth lay between 49.1 and 56.9 years for males and between 49.7 and 57.9 years for females, whilst by 2006-11 it had increased to between 60.5 and 66.2 years for males and between 65.4 and 70.8 years for females. Over the last 40 years the gap with all-Australian life expectancy has not narrowed, fluctuating at about 17 years for both males and females. Whilst considerable progress has been made in closing the gap in under-five mortality, at most other ages the mortality rate differential has increased. A huge public health challenge remains. Efforts need to be redoubled to reduce the large gap in life expectancy between Indigenous and non-Indigenous Australians.
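
    A minimal abridged life-table sketch showing how age-specific death rates are turned into a life expectancy at birth (the final step mentioned above); the 5-year rates below are invented for illustration.

```python
# Abridged life table: death rates m_x (5-year groups, last group open-ended)
# to life expectancy at birth e0. Rates are invented, not NT data.
import numpy as np

m = np.array([0.008, 0.0005, 0.0004, 0.0008, 0.0015, 0.002, 0.0025, 0.003,
              0.004, 0.006, 0.009, 0.014, 0.022, 0.035, 0.055, 0.085,
              0.130, 0.200])                     # ages 0-4, 5-9, ..., 80-84, 85+
n = 5.0
q = n * m[:-1] / (1 + 0.5 * n * m[:-1])          # death probability per interval

l = np.ones(m.size)                              # survivors at interval starts
for i, qi in enumerate(q):
    l[i + 1] = l[i] * (1 - qi)
L = n * 0.5 * (l[:-1] + l[1:])                   # person-years in closed intervals
T0 = L.sum() + l[-1] / m[-1]                     # open 85+ group contributes l/m

print("life expectancy at birth e0:", T0 / l[0])
```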

  4. Portfolio optimization and the random magnet problem

    Science.gov (United States)

    Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.

    2002-08-01

    Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated, and therefore knowledge of the cross-correlations among asset price movements is of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk.
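
    A sketch of the random-matrix-theory recipe implied here: flatten the eigenvalue bulk of the sample correlation matrix below the Marchenko-Pastur edge, then form the minimum-variance portfolio from the cleaned matrix. Returns are synthetic.

```python
# RMT cleaning of a correlation matrix and a minimum-risk portfolio w ~ C^-1 1.
import numpy as np

rng = np.random.default_rng(6)
N, T = 50, 500                                  # assets, observations
R = rng.standard_normal((T, N))                 # toy standardized returns
C = np.corrcoef(R, rowvar=False)

lam, V = np.linalg.eigh(C)
lam_plus = (1 + np.sqrt(N / T)) ** 2            # Marchenko-Pastur upper edge
noise = lam < lam_plus
lam_clean = lam.copy()
lam_clean[noise] = lam[noise].mean()            # flatten the noise bulk
C_clean = (V * lam_clean) @ V.T

w = np.linalg.solve(C_clean, np.ones(N))        # minimum-variance weights
w /= w.sum()
print("predicted portfolio variance:", w @ C_clean @ w)
```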

  5. Practical global oceanic state estimation

    Science.gov (United States)

    Wunsch, Carl; Heimbach, Patrick

    2007-06-01

    The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers is used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS-1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.

  6. Mobile robot motion estimation using Hough transform

    Science.gov (United States)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

    This paper proposes an algorithm for the estimation of mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry from any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or the map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from the measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking the problem of estimating mobile robot localization down into three smaller independent problems. A specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform.
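
    A hedged sketch of the rotation step only: vote in the straight-line parameter space (theta, rho), reduce the accumulator to a per-direction signature, and recover the rotation as the circular shift that best aligns the two signatures (translation and scale affect only rho, not theta). The scan data are invented; this is not the authors' full three-stage algorithm.

```python
# Hough-based rotation estimation between two 2D point scans.
import numpy as np

def hough_theta_signature(pts, n_theta=180, n_rho=128):
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho = pts @ np.vstack([np.cos(thetas), np.sin(thetas)])  # (n_pts, n_theta)
    r_max = np.abs(rho).max() + 1e-9
    bins = ((rho / r_max + 1.0) / 2.0 * (n_rho - 1)).astype(int)
    acc = np.zeros((n_theta, n_rho))                         # Hough accumulator
    for j in range(n_theta):
        np.add.at(acc[j], bins[:, j], 1.0)
    return acc.max(axis=1)               # best-line strength per direction

def estimate_rotation(pts_ref, pts_cur, n_theta=180):
    h_ref = hough_theta_signature(pts_ref, n_theta)
    h_cur = hough_theta_signature(pts_cur, n_theta)
    corr = [np.dot(h_cur, np.roll(h_ref, s)) for s in range(n_theta)]
    return int(np.argmax(corr)) * np.pi / n_theta            # rotation modulo pi

rng = np.random.default_rng(7)
wall_x = np.column_stack([np.linspace(0, 5, 200), np.zeros(200)])
wall_y = np.column_stack([np.zeros(100), np.linspace(0, 2.5, 100)])
pts_ref = np.vstack([wall_x, wall_y]) + 0.01 * rng.standard_normal((300, 2))
a = 0.3
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
pts_cur = pts_ref @ R.T + np.array([1.0, -2.0])  # rotated + translated scan
print("estimated rotation:", estimate_rotation(pts_ref, pts_cur))  # ~0.3
```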

  7. Observer variability in estimating numbers: An experiment

    Science.gov (United States)

    Erwin, R.M.

    1982-01-01

    Census estimates of bird populations provide an essential framework for a host of research and management questions. However, with some exceptions, the reliability of numerical estimates and the factors influencing them have received insufficient attention. Independent of the problems associated with habitat type, weather conditions, cryptic coloration, etc., estimates may vary widely due only to intrinsic differences in observers' abilities to estimate numbers. Lessons learned in the field of perceptual psychology may be usefully applied to 'real world' problems in field ornithology. Based largely on dot discrimination tests in the laboratory, it was found that numerical abundance, density of objects, spatial configuration, color, background, and other variables influence individual accuracy in estimating numbers. The primary purpose of the present experiment was to assess the effects of observer, prior experience, and numerical range on accuracy in estimating numbers of waterfowl from black-and-white photographs. By using photographs of animals rather than black dots, I felt the results could be applied more meaningfully to field situations. Further, reinforcement was provided throughout some experiments to examine the influence of training on accuracy.

  8. Interdisciplinary collaboration in gerontology and geriatrics in Latin America: conceptual approaches and health care teams.

    Science.gov (United States)

    Gomez, Fernando; Curcio, Carmen Lucia

    2013-01-01

    The underlying rationale to support interdisciplinary collaboration in geriatrics and gerontology is based on the complexity of elderly care. The most important characteristic of interdisciplinary health care teams for older people in Latin America is their subjective-basis framework. Whereas in other regions teams are organized according to a theoretical knowledge base, with well-justified priorities, functions, and long-term goals, in Latin America teams are arranged according to subjective interests in solving their problems. Three distinct approaches to interdisciplinary collaboration in gerontology are proposed. The first approach is grounded in the scientific rationalism of European origin. Denominated the "logical-rational approach," its core is to identify the significance of knowledge. The second approach is grounded in pragmatism and is more associated with a North American tradition; denominated the "logical-instrumental approach," its core consists in enhancing the skills and competences of each participant. The third approach, denominated the "logical-subjective approach," has a Latin American origin. Its core consists in taking into account the internal and emotional dimensions of the team. These conceptual frameworks based on geographical contexts will permit establishing the differences and shared characteristics of interdisciplinary collaboration in geriatrics and gerontology, in order to look for operational answers to solve the "complex problems" of older adults.

  9. Nonlinear estimation and control of automotive drivetrains

    CERN Document Server

    Chen, Hong

    2014-01-01

    Nonlinear Estimation and Control of Automotive Drivetrains discusses the control problems involved in automotive drivetrains, particularly in hydraulic Automatic Transmission (AT), Dual Clutch Transmission (DCT) and Automated Manual Transmission (AMT). Challenging estimation and control problems, such as driveline torque estimation and gear shift control, are addressed by applying the latest nonlinear control theories, including constructive nonlinear control (Backstepping, Input-to-State Stable) and Model Predictive Control (MPC). The estimation and control performance is improved while the calibration effort is reduced significantly. The book presents many detailed examples of design processes and thus enables the readers to understand how to successfully combine purely theoretical methodologies with actual applications in vehicles. The book is intended for researchers, PhD students, control engineers and automotive engineers. Hong Chen is a professor at the State Key Laboratory of Automotive Simulation and...

  10. The Guderley problem revisited

    International Nuclear Information System (INIS)

    Ramsey, Scott D.; Kamm, James R.; Bolstad, John H.

    2009-01-01

    The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.

  11. The influence of different error estimates in the detection of postoperative cognitive dysfunction using reliable change indices with correction for practice effects.

    Science.gov (United States)

    Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A

    2007-02-01

    The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that has no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively, using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients; however, the within-subject standard deviation (WSD), which expresses the effects of random error, is a theoretically appropriate denominator when a constant correction for practice, which removes the effects of systematic error, is deducted from the numerator of an RCI.
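
    A sketch of one such rule under the convention described above: a constant practice correction (the mean control change) in the numerator and the control-group WSD in the denominator. The exact scaling of the error term and the cutoffs differ between the four rules compared in the paper.

```python
# RCI with constant practice correction and WSD denominator (one convention).
import numpy as np

def rci(pre, post, pre_ctrl, post_ctrl):
    d_ctrl = np.asarray(post_ctrl) - np.asarray(pre_ctrl)
    practice = d_ctrl.mean()                 # systematic error: practice effect
    wsd = d_ctrl.std(ddof=1) / np.sqrt(2)    # within-subject SD: random error
    return (post - pre - practice) / wsd     # |z| beyond a cutoff flags change

rng = np.random.default_rng(8)
pre_c = rng.normal(50, 10, 40)
post_c = pre_c + 2 + rng.normal(0, 3, 40)    # controls improve ~2 via practice
print(rci(np.array([50.0]), np.array([44.0]), pre_c, post_c))  # clear decline
```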

  12. Simultaneous Robust Fault and State Estimation for Linear Discrete-Time Uncertain Systems

    Directory of Open Access Journals (Sweden)

    Feten Gannouni

    2017-01-01

    Full Text Available We consider the problem of robust simultaneous fault and state estimation for linear uncertain discrete-time systems with unknown faults which affect both the state and the observation matrices. Using a transformation of the original system, a new robust proportional integral filter (RPIF), having an error variance with an optimized guaranteed upper bound for any allowed uncertainty, is proposed to improve the robust estimation of unknown time-varying faults and to improve robustness against uncertainties. In this study, the minimization problem for the upper bound of the estimation error variance is formulated as a convex optimization problem subject to linear matrix inequalities (LMIs) for all admissible uncertainties. The proportional and integral gains are optimally chosen by solving the convex optimization problem. Simulation results are given in order to illustrate the performance of the proposed filter, in particular to solve the problem of joint fault and state estimation.

  13. Solving Math Problems Approximately: A Developmental Perspective.

    Directory of Open Access Journals (Sweden)

    Dana Ganor-Stern

    Full Text Available Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate, as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders', 6th graders' and adults' ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense-of-magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy, which involves rounding and multiplication procedures and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far from (vs. close to) it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow children to use their estimation skills in an effective manner.

  14. Estimation of Poverty in Small Areas

    Directory of Open Access Journals (Sweden)

    Agne Bikauskaite

    2014-12-01

    Full Text Available Qualitative techniques for poverty estimation are needed to better implement, monitor and determine national areas where support is most required. The problem of small area estimation (SAE) is the production of reliable estimates in areas with small samples. The precision of estimates in strata deteriorates (i.e. the precision decreases as the standard deviation increases) when the sample size is smaller. In these cases traditional direct estimators may be imprecise and therefore pointless. Currently there are many indirect methods for SAE. The purpose of this paper is to analyze several different types of techniques which produce small area estimates of poverty.
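
    As one concrete indirect SAE method (not necessarily among those analyzed in the paper), here is a sketch of the classical Fay-Herriot area-level model: direct estimates are shrunk toward a covariate-based synthetic estimate, with weights driven by the known sampling variances. All data are simulated.

```python
# Fay-Herriot EBLUP sketch for area-level small area estimation.
import numpy as np

def fay_herriot(y, X, D, n_iter=100):
    # y: direct area estimates; X: covariates; D: known sampling variances.
    m, p = X.shape
    sig2 = max(np.var(y) - D.mean(), 0.01)      # starting between-area variance
    for _ in range(n_iter):
        w = 1.0 / (sig2 + D)
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        r = y - X @ beta
        f = np.sum(w * r ** 2) - (m - p)        # Fay-Herriot moment equation
        sig2 = max(1e-8, sig2 + f / np.sum(w ** 2 * r ** 2))
    gamma = sig2 / (sig2 + D)                   # shrinkage weight per area
    return gamma * y + (1 - gamma) * (X @ beta)

rng = np.random.default_rng(9)
m = 30
X = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])
D = rng.uniform(0.05, 0.5, m)                   # small samples -> large variance
theta = X @ np.array([0.2, 0.3]) + rng.normal(0, 0.1, m)   # true area rates
y = theta + rng.normal(0, np.sqrt(D))           # noisy direct estimates
print(np.round(fay_herriot(y, X, D)[:5], 3))
```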

  15. YouTube Fridays: Student Led Development of Engineering Estimate Problems

    Science.gov (United States)

    Liberatore, Matthew W.; Vestal, Charles R.; Herring, Andrew M.

    2012-01-01

    YouTube Fridays devotes a small fraction of class time to student-selected videos related to the course topic, e.g., thermodynamics. The students then write and solve a homework-like problem based on the events in the video. Three recent pilots involving over 300 students have developed a database of videos and questions that reinforce important…

  16. Regularization parameter estimation for underdetermined problems by the χ 2 principle with application to 2D focusing gravity inversion

    International Nuclear Information System (INIS)

    Vatankhah, Saeed; Ardestani, Vahid E; Renaut, Rosemary A

    2014-01-01

    The χ² principle generalizes the Morozov discrepancy principle to the augmented residual of the Tikhonov regularized least squares problem. For weighting of the data fidelity by a known Gaussian noise distribution on the measured data, when the stabilizing, or regularization, term is considered to be weighted by unknown inverse covariance information on the model parameters, the minimum of the Tikhonov functional becomes a random variable that follows a χ²-distribution with m+p−n degrees of freedom for the model matrix G of size m×n, m⩾n, and regularizer L of size p×n. Then, a Newton root-finding algorithm, employing the generalized singular value decomposition, or singular value decomposition when L = I, can be used to find the regularization parameter α. Here the result and algorithm are extended to the underdetermined case, m < n, permitting use of the χ² algorithms when m < n. The χ² principle and the unbiased predictive risk estimator of the regularization parameter are used for the first time in this context. For a simulated underdetermined data set with noise, these regularization parameter estimation methods, as well as the generalized cross validation method, are contrasted with the use of the L-curve and the Morozov discrepancy principle. Experiments demonstrate the efficiency and robustness of the χ² principle and unbiased predictive risk estimator, moreover showing that the L-curve and Morozov discrepancy principle are outperformed in general by the other three techniques. Furthermore, the minimum support stabilizer is of general use for the χ² principle when implemented without the desirable knowledge of the mean value of the model. (paper)
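
    A sketch of the χ² principle in its simplest setting (overdetermined, L = I, known noise level): choose α so that the minimum of the whitened Tikhonov functional equals its expected m degrees of freedom. A log-space bisection stands in for the Newton/GSVD machinery, and the paper's extension to m < n is not reproduced.

```python
# chi-square principle for Tikhonov regularization with L = I (sketch).
import numpy as np

def chi2_alpha(A, b, sigma_noise, dof=None):
    m, n = A.shape
    U, s, _ = np.linalg.svd(A, full_matrices=True)
    beta = (U.T @ b) / sigma_noise                  # whitened data coefficients
    dof = m if dof is None else dof                 # m + p - n with p = n

    def J(alpha):                                   # Tikhonov functional minimum
        head = np.sum(beta[:s.size] ** 2 * alpha ** 2 / (s ** 2 + alpha ** 2))
        return head + np.sum(beta[s.size:] ** 2)

    lo, hi = 1e-8, 1e8                              # J is increasing in alpha
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        lo, hi = (lo, mid) if J(mid) > dof else (mid, hi)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(10)
m, n = 80, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
b = A @ x_true + 0.1 * rng.normal(size=m)
alpha = chi2_alpha(A, b, 0.1)
x_hat = np.linalg.solve(A.T @ A + alpha ** 2 * np.eye(n), A.T @ b)
print("alpha:", alpha,
      "rel. error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```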

  17. Estimating Rates of Psychosocial Problems in Urban and Poor Children with Sickle Cell Anemia.

    Science.gov (United States)

    Barbarin, Oscar A.; And Others

    1994-01-01

    Examined adjustment problems for children and adolescents with sickle cell anemia (SCA). Parents provided information on the social, emotional, academic, and family adjustment of 327 children with SCA. Over 25% of the children had emotional adjustment problems in the form of internalizing symptoms (anxiety and depression); at least 20% had problems related to…

  18. Statistical significant change versus relevant or important change in (quasi) experimental design : some conceptual and methodological problems in estimating magnitude of intervention-related change in health services research

    NARCIS (Netherlands)

    Middel, Berrie; van Sonderen, Eric

    2002-01-01

    This paper aims to identify problems in estimating, and in the interpretation of, the magnitude of intervention-related change over time, or responsiveness, assessed with health outcome measures. Responsiveness is a problematic construct and there is no consensus on how to quantify the appropriate index to

  19. Fractional kalman filter to estimate the concentration of air pollution

    Science.gov (United States)

    Vita Oktaviana, Yessy; Apriliani, Erna; Khusnul Arif, Didik

    2018-04-01

    The air pollution problem has an important effect on environmental quality and on human quality of life. Air pollution can be caused by natural sources or by human activities. Pollutants include, for example, ozone, a harmful gas formed by NOx and volatile organic compounds (VOCs) emitted from various sources. The air pollution problem can be modeled by TAPM-CTM (The Air Pollution Model with Chemical Transport Model). The model gives the concentration of pollutants in the air; it is therefore important to estimate this concentration, since estimation methods can be used to forecast future pollutant concentrations and to keep air quality stable. In this research, an algorithm based on the Fractional Kalman Filter is developed to solve the air pollution model. The model is first discretized and then estimated by the method. The results show that the Fractional Kalman Filter estimate is more accurate than the standard Kalman Filter estimate. The accuracy was tested by applying the RMSE (Root Mean Square Error).
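
    For orientation, a standard (integer-order) scalar Kalman filter with an RMSE score on synthetic concentration data; the fractional variant of the paper generalizes the prediction step with fractional-order differences, which this sketch does not implement.

```python
# Scalar Kalman filter on a toy concentration model, scored with RMSE.
import numpy as np

rng = np.random.default_rng(11)
T, a, q, r = 200, 0.95, 0.05, 0.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + 1.0 + rng.normal(0, np.sqrt(q))   # pollutant dynamics
y = x + rng.normal(0, np.sqrt(r), T)                        # noisy sensor

xh, P = 0.0, 1.0
est = np.zeros(T)
for t in range(T):
    if t > 0:
        xh, P = a * xh + 1.0, a * a * P + q                 # predict
    K = P / (P + r)                                         # Kalman gain
    xh, P = xh + K * (y[t] - xh), (1 - K) * P               # update
    est[t] = xh

rmse = lambda e: np.sqrt(np.mean((e - x) ** 2))
print("RMSE raw measurements:", rmse(y), " RMSE Kalman filter:", rmse(est))
```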

  20. Generalized shrunken type-GM estimator and its application

    International Nuclear Information System (INIS)

    Ma, C Z; Du, Y L

    2014-01-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, Generalized Shrunken Type-GM Estimation, together with methods for computing them, is established by combining GM estimators with biased estimators, including the ridge estimate, the principal components estimate and the Liu estimate. A numerical example shows that the most attractive advantage of these new estimators is that they can not only overcome the multicollinearity of the coefficient matrix and the outliers but also control the influence of leverage points.

  2. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh; Hong, Byungwoo

    2014-01-01

    that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper.

  3. Geodynamic Effects of Ocean Tides: Progress and Problems

    Science.gov (United States)

    Richard, Ray

    1999-01-01

    Satellite altimetry, particularly Topex/Poseidon, has markedly improved our knowledge of global tides, thereby allowing significant progress on some longstanding problems in geodynamics. This paper reviews some of that progress. Emphasis is given to global-scale problems, particularly those falling within the mandate of the new IERS Special Bureau for Tides: angular momentum, gravitational field, geocenter motion. For this discussion I use primarily the new ocean tide solutions GOT99.2, CSR4.0, and TPXO.4 (for which G. Egbert has computed inverse-theoretic error estimates), and I concentrate on new results in angular momentum and gravity and their solid-earth implications. One example is a new estimate of the effective tidal Q at the M_2 frequency, based on combining these ocean models with tidal estimates from satellite laser ranging. Three especially intractable problems are also addressed: (1) determining long-period tides in the Arctic [large unknown effect on the inertia tensor, particularly for Mf]; (2) determining the global psi_1 tide [large unknown effect on interpretations of gravimetry for the near-diurnal free wobble]; and (3) determining radiational tides [large unknown temporal variations at important frequencies]. Problems (2) and (3) are related.

  4. Developmental and Individual Differences in Pure Numerical Estimation

    Science.gov (United States)

    Booth, Julie L.; Siegler, Robert S.

    2006-01-01

    The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1,…

  5. Bayesian estimation applied to multiple species

    International Nuclear Information System (INIS)

    Kunz, Martin; Bassett, Bruce A.; Hlozek, Renee A.

    2007-01-01

    Observed data are often contaminated by undiscovered interlopers, leading to biased parameter estimation. Here we present BEAMS (Bayesian estimation applied to multiple species) which significantly improves on the standard maximum likelihood approach in the case where the probability for each data point being "pure" is known. We discuss the application of BEAMS to future type-Ia supernovae (SNIa) surveys, such as LSST, which are projected to deliver over a million supernovae light curves without spectra. The multiband light curves for each candidate will provide a probability of being Ia (pure) but the full sample will be significantly contaminated with other types of supernovae and transients. Given a sample of N supernovae with mean probability, ⟨P⟩, of being Ia, BEAMS delivers parameter constraints equal to N⟨P⟩ spectroscopically confirmed SNIa. In addition BEAMS can be simultaneously used to tease apart different families of data and to recover properties of the underlying distributions of those families (e.g. the type-Ibc and II distributions). Hence BEAMS provides a unified classification and parameter estimation methodology which may be useful in a diverse range of problems such as photometric redshift estimation or, indeed, any parameter estimation problem where contamination is an issue.
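
    The heart of BEAMS is a mixture likelihood in which each object is weighted by its probability of being pure; the sketch below estimates one parameter of the pure population this way, assuming (for simplicity) a known contaminant distribution and invented data.

```python
# Mixture likelihood with per-object purity probabilities (BEAMS-style sketch).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(12)
n = 1000
is_ia = rng.random(n) < 0.7
d = np.where(is_ia, rng.normal(0.0, 0.1, n), rng.normal(1.0, 0.5, n))
p = np.where(is_ia, 0.9, 0.2)          # imperfect per-object Ia probabilities

def neg_log_like(mu_ia):
    like = p * norm.pdf(d, mu_ia, 0.1) + (1 - p) * norm.pdf(d, 1.0, 0.5)
    return -np.sum(np.log(like))

res = minimize_scalar(neg_log_like, bounds=(-1, 1), method="bounded")
print("BEAMS-style estimate of the Ia parameter:", res.x)   # near the true 0.0
```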

  6. Estimation of DSGE Models under Diffuse Priors and Data-Driven Identification Constraints

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a sequential Monte Carlo (SMC) method augmented with an importance sampling step for estimation of DSGE models. In addition to being theoretically well motivated, the new method facilitates the assessment of estimation accuracy. Furthermore, in order to alleviate the problem of multimodal posterior distributions caused by parameter redundancy, data-driven identification constraints are imposed. An application illustrates the properties of the estimation method, and shows how the multimodality caused by parameter redundancy is eliminated by the identification constraints. Out-of-sample forecast comparisons as well as Bayes factors lend support to the constrained model.

  7. Estimation of presampling MTF in CR systems by using direct fluorescence and its problems

    International Nuclear Information System (INIS)

    Ono, Kitahei; Inatsu, Hiroshi; Harao, Mototsugu; Itonaga, Haruo; Miyamoto, Hideyuki

    2001-01-01

    We proposed a method for the practical estimation of the presampling modulation transfer function (MTF) of a computed radiography (CR) system by using the MTFs of an imaging plate and the sampling aperture. The MTFs of three imaging plates (GP-25, ST-VN, and RP-1S) with different photostimulable phosphors were measured by using direct fluorescence (the light emitted instantaneously by x-ray exposure), and the presampling MTFs were estimated from these imaging plate MTFs and the sampling aperture MTF. Our results indicated that for imaging plate RP-1S the measured presampling MTF was significantly superior to the estimated presampling MTF at any spatial frequency. This was because the estimated presampling MTF was degraded by the diffusion of direct fluorescence in the protective layer at the imaging plate's surface. Therefore, when the presampling MTF of an imaging plate with a thick protective layer is estimated, a correction for the thickness of the protective layer should be carried out. However, the estimated presampling MTFs of imaging plates with a thin protective layer were almost the same as the measured presampling MTFs, except in the high spatial frequency range. Therefore, we consider this estimation method to be useful and practical, because the spatial resolution property of a CR system can be obtained simply from the imaging plate MTF measured with direct fluorescence. (author)

  8. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.

  9. Teaching problem solving: Don't forget the problem solver(s)

    Science.gov (United States)

    Ranade, Saidas M.; Corrales, Angela

    2013-05-01

    The importance of intrapersonal and interpersonal intelligences has long been known but educators have debated whether to and how to incorporate those topics in an already crowded engineering curriculum. In 2010, the authors used the classroom as a laboratory to observe the usefulness of including selected case studies and exercises from the fields of neurology, artificial intelligence, cognitive sciences and social psychology in a new problem-solving course. To further validate their initial findings, in 2012, the authors conducted an online survey of engineering students and engineers. The main conclusion is that engineering students will benefit from learning more about the impact of emotions, culture, diversity and cognitive biases when solving problems. Specifically, the work shows that an augmented problem-solving curriculum needs to include lessons on labelling emotions and cognitive biases, 'evidence-based' data on the importance of culture and diversity and additional practice on estimating conditional probability.

  10. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
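
    A sketch of a sample-point variable-bandwidth estimator, one common variant: each data point gets its own Gaussian bandwidth from its k-th nearest-neighbour distance, adapting resolution to the local density. The data and the choice of k are invented; the paper's own bandwidth estimation scheme is not reproduced.

```python
# Sample-point (variable-bandwidth) Gaussian KDE with k-NN bandwidths.
import numpy as np

def variable_kde(data, query, k=10):
    d2 = np.sum((data[:, None, :] - data[None, :, :]) ** 2, axis=-1)
    h = np.sqrt(np.sort(d2, axis=1)[:, k])            # per-point bandwidths
    dim = data.shape[1]
    out = np.zeros(len(query))
    for xi, hi in zip(data, h):
        u2 = np.sum((query - xi) ** 2, axis=1) / hi ** 2
        out += np.exp(-0.5 * u2) / ((2 * np.pi) ** (dim / 2) * hi ** dim)
    return out / len(data)

rng = np.random.default_rng(13)
data = np.vstack([rng.normal(0, 1.0, (200, 2)),       # broad cluster at (0, 0)
                  rng.normal(5, 0.3, (100, 2))])      # tight cluster at (5, 5)
q = np.array([[0.0, 0.0], [5.0, 5.0], [2.5, 2.5]])
print(variable_kde(data, q))   # high, high, low density respectively
```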

  11. Estimation of quasi-critical reactivity

    International Nuclear Information System (INIS)

    Racz, A.

    1992-02-01

    The bank-of-Kalman-filters method for reactivity and neutron density estimation originally suggested by D'Attellis and Cortina is critically reviewed. It is pointed out that the procedure cannot be applied reliably in the form the authors proposed, due to filter divergence. An improved method, which is free from divergence problems, is presented as well. A new estimation technique is proposed and tested using computer simulation results. The procedure is applied to the estimation of small reactivity changes. (R.P.) 9 refs.; 2 figs.; 2 tabs

  12. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
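
    For reference, symplectic Euler applied to a separable Hamiltonian (a pendulum): the momentum update uses the current position and the position update uses the new momentum, which keeps the energy error bounded over long horizons. This is a generic integrator sketch, not the paper's control discretization.

```python
# Symplectic Euler for H = p^2/2 + V(q) with V(q) = -cos(q) (pendulum).
import math

def symplectic_euler(q, p, h, steps):
    for _ in range(steps):
        p -= h * math.sin(q)      # p_{n+1} = p_n - h * dV/dq(q_n)
        q += h * p                # q_{n+1} = q_n + h * p_{n+1}
    return q, p

q, p = 1.0, 0.0
E0 = 0.5 * p * p - math.cos(q)
q, p = symplectic_euler(q, p, h=0.01, steps=100_000)
E1 = 0.5 * p * p - math.cos(q)
print("energy drift:", abs(E1 - E0))   # stays small; no secular growth
```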

  13. Estimation of Faults in DC Electrical Power System

    Science.gov (United States)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

    This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
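
    A sketch of the sparse-recovery step, with plain ISTA (iterative shrinkage-thresholding) standing in for whatever solver the authors used: ℓ1-regularized least squares drives most fault components to exactly zero. The circuit matrix and fault pattern are random stand-ins.

```python
# l1-regularized least squares via ISTA for sparse fault estimation.
import numpy as np

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(14)
A = rng.standard_normal((60, 120))           # stand-in linear circuit model
f_true = np.zeros(120)
f_true[[7, 42, 99]] = [1.5, -2.0, 1.0]       # a few active faults
b = A @ f_true + 0.01 * rng.standard_normal(60)
f_hat = ista(A, b, lam=0.2)
print("recovered fault indices:", np.nonzero(np.abs(f_hat) > 0.5)[0])
```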

  14. Identification of the Thermophysical Properties of the Soil by Inverse Problem

    OpenAIRE

    Mansour, Salwa; Canot, Édouard; Muhieddine, Mohamad

    2016-01-01

    International audience; This paper introduces a numerical strategy to estimate the thermophysical properties of a saturated porous medium (volumetric heat capacity (ρC)s, thermal conductivity λs and porosity φ) where a phase change problem (liquid/vapor) appears due to strong heating. The estimation of these properties is done by an inverse problem, knowing the heating curves at selected points of the medium. To solve the inverse problem, we use both the Damped Gauss Newton and the Levenberg Marqua...

  15. Partial correlation matrix estimation using ridge penalty followed by thresholding and re-estimation.

    Science.gov (United States)

    Ha, Min Jin; Sun, Wei

    2014-09-01

    Motivated by the problem of constructing gene co-expression networks, we propose a statistical framework for estimating a high-dimensional partial correlation matrix by a three-step approach. We first obtain a penalized estimate of the partial correlation matrix using a ridge penalty. Next we select the non-zero entries of the partial correlation matrix by hypothesis testing. Finally we re-estimate the partial correlation coefficients at these non-zero entries. In the second step, the null distribution of the test statistics derived from penalized partial correlation estimates has not been established. We address this challenge by estimating the null distribution from the empirical distribution of the test statistics of all the penalized partial correlation estimates. Extensive simulation studies demonstrate the good performance of our method. Application to a yeast cell cycle gene expression data set shows that our method delivers better predictions of the protein-protein interactions than the Graphical Lasso. © 2014, The International Biometric Society.
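
    A sketch of the first steps under simplifying assumptions: a ridge-type inverse of the covariance gives a penalized precision matrix, which is converted to partial correlations; a plain hard threshold stands in for the paper's empirical-null hypothesis testing, and the re-estimation step is omitted.

```python
# Ridge-penalized partial correlations with a naive thresholding step.
import numpy as np

def ridge_partial_corr(X, lam=0.5, thresh=0.1):
    S = np.cov(X, rowvar=False)
    Omega = np.linalg.inv(S + lam * np.eye(S.shape[0]))   # ridge-type inverse
    d = np.sqrt(np.diag(Omega))
    P = -Omega / np.outer(d, d)                           # partial correlations
    np.fill_diagonal(P, 1.0)
    P[np.abs(P) < thresh] = 0.0                           # selection step (naive)
    return P

rng = np.random.default_rng(15)
X = rng.standard_normal((100, 20))
P = ridge_partial_corr(X)
print(np.count_nonzero(P) - 20, "off-diagonal entries kept")
```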

  16. Optimisation of information influences on problems of consequences of Chernobyl accident and quantitative criteria for estimation of information actions

    International Nuclear Information System (INIS)

    Sobaleu, A.

    2004-01-01

    Consequences of the Chernobyl NPP accident are still very important for Belarus. About 2 million Belarusians live in districts polluted by Chernobyl radionuclides. Modern approaches to the solution of post-Chernobyl problems in Belarus assume more active use of information and educational actions to foster a new radiological culture. This would allow the internal radiation dose to be reduced without spending a lot of money and other resources. Experience of information work with the population affected by Chernobyl from 1986 to 2004 has shown that information and educational influences do not always reach the final aim - application of the received knowledge on radiation safety in practice and a change in lifestyle. Taking into account limited funds and facilities, information work should be optimized. The optimization can be achieved on the basis of quantitative estimates of the effectiveness of information actions. Two parameters can be used for these quantitative estimates: 1) the increase in knowledge of the population and experts on radiation safety, calculated by a new method based on the applied theory of information (Mathematical Theory of Communication) by Claude E. Shannon, and 2) the reduction of the internal radiation dose, calculated on the basis of measurements with a human irradiation counter (HIC) before and after an information or educational influence. (author)

  17. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a better, more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
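
    While COPRA itself is more involved, the regularized least-squares solution it selects a parameter for has a standard closed form through the SVD; the sketch below evaluates that solution on a grid of candidate regularizers (toy data, generic Tikhonov filtering - not the thesis algorithm).

        import numpy as np

        def rls_solve(A, b, gammas):
            """Ridge solutions argmin ||A x - b||^2 + gamma ||x||^2 via the SVD."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            Ub = U.T @ b
            return [Vt.T @ ((s / (s**2 + g)) * Ub) for g in gammas]  # filter factors

        rng = np.random.default_rng(3)
        A = rng.standard_normal((40, 40))
        x_true = rng.standard_normal(40)
        b = A @ x_true + 0.1 * rng.standard_normal(40)
        for g, x in zip([1e-3, 1e-1, 1e1], rls_solve(A, b, [1e-3, 1e-1, 1e1])):
            print(g, np.linalg.norm(x - x_true))  # error varies with the regularizer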

  18. Assessment of psychosocial problems in children with type 1 diabetes and their families: the added value of using standardised questionnaires in addition to clinical estimations of nurses and paediatricians

    NARCIS (Netherlands)

    Boogerd, E.A.; Damhuis, A.M.A.; Velden, J.A.M. van der; Steeghs, M.C.C.H.; Noordam, C.; Verhaak, C.M.; Vermaes, I.P.

    2015-01-01

    AIMS AND OBJECTIVES: To investigate the assessment of psychosocial problems in children with type 1 diabetes by means of clinical estimations made by nurses and paediatricians and by using standardised questionnaires. BACKGROUND: Although children with type 1 diabetes and their parents show

  19. Assessment of psychosocial problems in children with type 1 diabetes and their families: The added value of using standardised questionnaires in addition to clinical estimations of nurses and paediatricians

    NARCIS (Netherlands)

    Boogerd, E.A.; Damhuis, A.M.A.; Alfen-van der Velden, A.A.E.M. van; Steeghs, M.C.C.H.; Noordam, C.; Verhaak, C.M.; Vermaes, I.P.R.

    2015-01-01

    Aims and objectives: To investigate the assessment of psychosocial problems in children with type 1 diabetes by means of clinical estimations made by nurses and paediatricians and by using standardised questionnaires. Background Although children with type 1 diabetes and their parents show increased

  20. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    Science.gov (United States)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound videos (US), where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
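
    For orientation, the classical regularized optical-flow baseline that such parameter tuning targets is Horn-Schunck; the sketch below is that generic method on synthetic frames (not the paper's global optimization framework), with alpha playing the role of the smoothness weight.

        import numpy as np

        def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
            """Classical Horn-Schunck optical flow between two grayscale frames."""
            Ix = np.gradient(I1, axis=1)
            Iy = np.gradient(I1, axis=0)
            It = I2 - I1
            u = np.zeros_like(I1)
            v = np.zeros_like(I1)
            avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                             + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
            for _ in range(n_iter):
                u_bar, v_bar = avg(u), avg(v)
                t = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
                u = u_bar - Ix * t  # enforce brightness constancy, keep flow smooth
                v = v_bar - Iy * t
            return u, v

        X, Y = np.meshgrid(np.arange(64), np.arange(64))
        frame1 = np.sin(X / 6.0) * np.cos(Y / 7.0)
        frame2 = np.roll(frame1, 1, axis=1)     # 1-pixel horizontal shift
        u, v = horn_schunck(frame1, frame2)
        print(u.mean(), v.mean())               # u roughly 1, v roughly 0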

  1. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    International Nuclear Information System (INIS)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming the choice of the fiducial model is independent of data). Next we apply the approach to the estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in the true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2.

  2. Joint Sparsity and Frequency Estimation for Spectral Compressive Sensing

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    various interpolation techniques to estimate the continuous frequency parameters. In this paper, we show that solving the problem in a probabilistic framework instead produces an asymptotically efficient estimator which outperforms existing methods in terms of estimation accuracy while still having a low...

  3. Problem-solving intervention for caregivers of children with mental health problems.

    Science.gov (United States)

    Gerkensmeyer, Janis E; Johnson, Cynthia S; Scott, Eric L; Oruche, Ukamaka M; Lindsey, Laura M; Austin, Joan K; Perkins, Susan M

    2013-06-01

    Building Our Solutions and Connections (BOSC) focused on enhancing problem-solving skills (PSS) of primary caregivers of children with mental health problems. Aims were to determine feasibility, acceptability, and effect size (ES) estimates for depression, burden, personal control, and PSS. Caregivers were randomized to BOSC (n=30) or wait-list control (WLC) groups (n=31). Data were collected at baseline, post-intervention, and 3 and 6 months post-intervention. Three months post-intervention, ES for burden and personal control were 0.07 and 0.08, respectively. ES for depressed caregivers for burden and personal control were 0.14 and 0.19, respectively. Evidence indicates that the intervention had desired effects. Published by Elsevier Inc.

  4. Frequency-Domain Joint Motion and Disparity Estimation Using Steerable Filters

    Directory of Open Access Journals (Sweden)

    Dimitrios Alexiadis

    2018-02-01

    In this paper, the problem of joint disparity and motion estimation from stereo image sequences is formulated in the spatiotemporal frequency domain, and a novel steerable filter-based approach is proposed. Our rationale behind coupling the two problems is that, according to experimental evidence in the literature, the biological visual mechanisms for depth and motion are not independent of each other. Furthermore, our motivation to study the problem in the frequency domain and search for a filter-based solution is based on the fact that, according to early experimental studies, the biological visual mechanisms can be modelled based on frequency-domain or filter-based considerations, for both the perception of depth and the perception of motion. The proposed framework constitutes the first attempt to solve the joint estimation problem through a filter-based solution, based on frequency-domain considerations. Thus, the presented ideas provide a new direction of work and could be the basis for further developments. From an algorithmic point of view, we additionally extend state-of-the-art ideas from the disparity estimation literature to handle the joint disparity-motion estimation problem and formulate an algorithm that is evaluated through a number of experimental results. Comparisons with state-of-the-art methods demonstrate the accuracy of the proposed approach.
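
    A common entry point to frequency-domain motion estimation, far simpler than the steerable-filter framework proposed in the paper, is phase correlation, which reads a global translation off the cross-power spectrum; the sketch below is this textbook method only.

        import numpy as np

        def phase_correlation(I1, I2):
            """Integer translation between two frames from the cross-power spectrum."""
            R = np.fft.fft2(I1) * np.conj(np.fft.fft2(I2))
            R /= np.abs(R) + 1e-12                     # keep phase information only
            corr = np.fft.ifft2(R).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > I1.shape[0] // 2:                  # wrap peaks to negative shifts
                dy -= I1.shape[0]
            if dx > I1.shape[1] // 2:
                dx -= I1.shape[1]
            return dy, dx

        rng = np.random.default_rng(5)
        a = rng.random((128, 128))
        b = np.roll(a, shift=(3, -7), axis=(0, 1))     # known displacement
        print(phase_correlation(b, a))                 # expect (3, -7)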

  5. Stability Analysis of Discontinuous Galerkin Approximations to the Elastodynamics Problem

    KAUST Repository

    Antonietti, Paola F.

    2015-11-21

    We consider semi-discrete discontinuous Galerkin approximations of both displacement and displacement-stress formulations of the elastodynamics problem. We prove the stability analysis in the natural energy norm and derive optimal a-priori error estimates. For the displacement-stress formulation, schemes preserving the total energy of the system are introduced and discussed. We verify our theoretical estimates on two and three dimensions test problems.

  6. Stability Analysis of Discontinuous Galerkin Approximations to the Elastodynamics Problem

    KAUST Repository

    Antonietti, Paola F.; Ayuso de Dios, Blanca; Mazzieri, Ilario; Quarteroni, Alfio

    2015-01-01

    We consider semi-discrete discontinuous Galerkin approximations of both displacement and displacement-stress formulations of the elastodynamics problem. We prove the stability analysis in the natural energy norm and derive optimal a-priori error estimates. For the displacement-stress formulation, schemes preserving the total energy of the system are introduced and discussed. We verify our theoretical estimates on two and three dimensions test problems.

  7. A Comparative Study of Some Robust Ridge and Liu Estimators

    African Journals Online (AJOL)

    Dr A.B.Ahmed

    estimation techniques such as Ridge and Liu estimators are preferable to Ordinary Least Squares. On the other hand, when outliers exist in the data, robust estimators like the M, MM, LTS and S estimators are preferred. To handle these two problems jointly, the study combines the Ridge and Liu estimators with Robust.

  8. Obtaining sparse distributions in 2D inverse problems

    OpenAIRE

    Reci, A; Sederman, Andrew John; Gladden, Lynn Faith

    2017-01-01

    The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems; relaxat...

  9. Maximum entropy estimation via Gauss-LP quadratures

    NARCIS (Netherlands)

    Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    We present an approximation method to a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds up on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a

  10. Vision Problems in Homeless Children.

    Science.gov (United States)

    Smith, Natalie L; Smith, Thomas J; DeSantis, Diana; Suhocki, Marissa; Fenske, Danielle

    2015-08-01

    Vision problems in homeless children can decrease educational achievement and quality of life. To estimate the prevalence and specific diagnoses of vision problems in children in an urban homeless shelter, a prospective series of 107 homeless children and teenagers underwent screening with a vision questionnaire, eye chart screening (if mature enough) and, if a vision problem was suspected, evaluation by a pediatric ophthalmologist. Glasses and other therapeutic interventions were provided if necessary. The prevalence of vision problems in this population was 25%. Common diagnoses included astigmatism, amblyopia, anisometropia, myopia, and hyperopia. Glasses were required and provided for 24 children (22%). Vision problems in homeless children are common and frequently correctable with ophthalmic intervention. Evaluation by a pediatric ophthalmologist is crucial for accurate diagnoses and treatment. Our system of screening and evaluation is feasible, efficacious, and reproducible in other homeless care situations.

  11. Efficient AM Algorithms for Stochastic ML Estimation of DOA

    Directory of Open Access Journals (Sweden)

    Haihua Chen

    2016-01-01

    The estimation of the direction-of-arrival (DOA) of signals is a basic and important problem in sensor array signal processing. To solve this problem, many algorithms have been proposed, among which the Stochastic Maximum Likelihood (SML) is one of the most studied because of its high DOA estimation accuracy. However, SML estimation generally involves a multidimensional nonlinear optimization problem. As a result, its computational complexity is rather high. This paper addresses the issue of reducing the computational complexity of SML estimation of DOA based on the Alternating Minimization (AM) algorithm. We make the following two contributions. First, using matrix transformations and properties of spatial projection, we propose an efficient AM (EAM) algorithm by dividing the SML criterion into two components, one of which depends on a single variable parameter while the other does not. Second, when the array is a uniform linear array, we obtain the irreducible form of the EAM criterion (IAM) using polynomial forms. Simulation results show that both EAM and IAM can greatly reduce the computational complexity of SML estimation, with IAM being the best. Another advantage of IAM is that it can avoid the numerical instability problem which may occur in the AM and EAM algorithms when more than one parameter converges to an identical value.

  12. Monte Carlo next-event point flux estimation for RCP01

    International Nuclear Information System (INIS)

    Martz, R.L.; Gast, R.C.; Tyburski, L.J.

    1991-01-01

    Two next event point estimators have been developed and programmed into the RCP01 Monte Carlo program for solving neutron transport problems in three-dimensional geometry with detailed energy description. These estimators use a simplified but accurate flux-at-a-point tallying technique. Anisotropic scattering in the lab system at the collision site is accounted for by determining the exit energy that corresponds to the angle between the location of the collision and the point detector. Elastic, inelastic, and thermal kernel scattering events are included in this formulation. An averaging technique is used in both estimators to eliminate the well-known problem of infinite variance due to collisions close to the point detector. In a novel approach to improve the estimator's efficiency, a Russian roulette scheme based on anticipated flux fall off is employed where averaging is not appropriate. A second estimator successfully uses a simple rejection technique in conjunction with detailed tracking where averaging isn't needed. Test results show good agreement with known numeric solutions. Efficiencies are examined as a function of input parameter selection and problem difficulty

  13. A literature review of expert problem solving using analogy

    OpenAIRE

    Mair, C; Martincova, M; Shepperd, MJ

    2009-01-01

    We consider software project cost estimation from a problem solving perspective. Taking a cognitive psychological approach, we argue that the algorithmic basis for CBR tools is not representative of human problem solving and this mismatch could account for inconsistent results. We describe the fundamentals of problem solving, focusing on experts solving ill-defined problems. This is supplemented by a systematic literature review of empirical studies of expert problem solving of non-trivial pr...

  14. Stability estimates for solution of IBVP to fractional parabolic differential and difference equations

    Science.gov (United States)

    Ashyralyev, Allaberen; Cakir, Zafer

    2016-08-01

    In this work, we investigate initial-boundary value problems for fractional parabolic equations with the Neumann boundary condition. Stability estimates for the solution of this problem are established. Difference schemes for the approximate solution of the initial-boundary value problem are constructed. Furthermore, we give a theorem on coercive stability estimates for the solution of the difference schemes.

  15. Iterative Observer-based Estimation Algorithms for Steady-State Elliptic Partial Differential Equation Systems

    KAUST Repository

    Majeed, Muhammad Usman

    2017-07-19

    Steady-state elliptic partial differential equations (PDEs) are frequently used to model a diverse range of physical phenomena. The source and boundary data estimation problems for such PDE systems are of prime interest in various engineering disciplines including biomedical engineering, mechanics of materials and earth sciences. Almost all existing solution strategies for such problems can be broadly classified as optimization-based techniques, which are computationally heavy especially when the problems are formulated on higher dimensional space domains. However, in this dissertation, feedback based state estimation algorithms, known as state observers, are developed to solve such steady-state problems using one of the space variables as time-like. In this regard, first, an iterative observer algorithm is developed that sweeps over regular-shaped domains and solves boundary estimation problems for steady-state Laplace equation. It is well-known that source and boundary estimation problems for the elliptic PDEs are highly sensitive to noise in the data. For this, an optimal iterative observer algorithm, which is a robust counterpart of the iterative observer, is presented to tackle the ill-posedness due to noise. The iterative observer algorithm and the optimal iterative algorithm are then used to solve source localization and estimation problems for Poisson equation for noise-free and noisy data cases respectively. Next, a divide and conquer approach is developed for three-dimensional domains with two congruent parallel surfaces to solve the boundary and the source data estimation problems for the steady-state Laplace and Poisson kind of systems respectively. Theoretical results are shown using a functional analysis framework, and consistent numerical simulation results are presented for several test cases using finite difference discretization schemes.

  16. Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods

    Directory of Open Access Journals (Sweden)

    Humberto Muñoz

    2009-06-01

    The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results will compare the least squares, l2, and the least absolute value, l1, estimates using interval arithmetic in a chemical engineering application.
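
    To see why the least-absolute-value (l1) criterion is attractive when outliers are present, compare l2 and l1 line fits on contaminated data; this sketch uses a generic solver, not the interval-arithmetic machinery of the paper, and all data are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)
        x = np.linspace(0, 10, 30)
        y = 2.0 * x + 1.0 + rng.normal(0, 0.2, x.size)
        y[::7] += 15.0                                  # inject gross outliers

        A = np.column_stack([x, np.ones_like(x)])
        beta_l2, *_ = np.linalg.lstsq(A, y, rcond=None)            # l2 estimate
        l1_loss = lambda beta: np.abs(A @ beta - y).sum()
        beta_l1 = minimize(l1_loss, x0=beta_l2, method='Nelder-Mead').x  # l1 estimate

        print("l2 fit:", beta_l2)  # pulled toward the outliers
        print("l1 fit:", beta_l1)  # close to the true slope 2 and intercept 1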

  17. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.

  18. Estimating the Number of Heterosexual Persons in the United States to Calculate National Rates of HIV Infection.

    Directory of Open Access Journals (Sweden)

    Amy Lansky

    This study estimated the proportions and numbers of heterosexuals in the United States (U.S.) to calculate rates of heterosexually acquired human immunodeficiency virus (HIV) infection. Quantifying the burden of disease can inform effective prevention planning and resource allocation. Heterosexuals were defined as males and females who ever had sex with an opposite-sex partner, excluding those with other HIV risks: persons who ever injected drugs and males who ever had sex with another man. We conducted a meta-analysis using data from 3 national probability surveys that measured lifetime (ever) sexual activity and injection drug use among persons aged 15 years and older to estimate the proportion of heterosexuals in the United States population. We then applied the proportion of heterosexual persons to census data to produce population size estimates. National HIV infection rates among heterosexuals were calculated using surveillance data (cases attributable to heterosexual contact) in the numerators and the heterosexual population size estimates in the denominators. Adult and adolescent heterosexuals comprised an estimated 86.7% (95% confidence interval: 84.1%-89.3%) of the U.S. population. The estimate for males was 84.1% (CI: 81.2%-86.9%) and for females was 89.4% (95% CI: 86.9%-91.8%). The HIV diagnosis rate for 2013 was 5.2 per 100,000 heterosexuals and the rate of persons living with diagnosed HIV infection in 2012 was 104 per 100,000 heterosexuals aged 13 years or older. Rates of HIV infection were >20 times as high among black heterosexuals compared to white heterosexuals, indicating considerable disparity. Rates among heterosexual men demonstrated higher disparities than overall population rates for men. The best available data must be used to guide decision-making for HIV prevention. HIV rates among heterosexuals in the U.S. are important additions to cost effectiveness and other data used to make critical decisions about resources for
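
    The numerator/denominator arithmetic behind such rates is simple but worth making explicit; the sketch below reproduces the style of calculation described, with made-up numbers rather than the study's data.

        def rate_per_100k(cases, population):
            """Rate per 100,000: numerator over denominator, scaled."""
            return 100_000 * cases / population

        # Hypothetical inputs: census count, surveyed heterosexual share, case count.
        census_adults = 250_000_000
        heterosexual_share = 0.867
        cases = 11_000                 # diagnoses attributed to heterosexual contact

        denominator = census_adults * heterosexual_share
        print(round(rate_per_100k(cases, denominator), 1))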

  19. Asymptotic eigenvalue estimates for a Robin problem with a large parameter

    Czech Academy of Sciences Publication Activity Database

    Exner, Pavel; Minakov, A.; Parnovski, L.

    2014-01-01

    Vol. 71, No. 2 (2014), pp. 141-156. ISSN 0032-5155. R&D Projects: GA ČR(CZ) GA14-06818S. Institutional support: RVO:61389005. Keywords: Laplacian * Robin problem * eigenvalue asymptotics. Subject RIV: BE - Theoretical Physics. Impact factor: 0.250, year: 2014

  20. Estimating Focus and Radial Distances, and Fault Residuals from CD Player Sensor Signals by use of a Kalman Estimator

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Andersen, Palle

    2003-01-01

    Cross coupling between the focus and radial loops in Compact Disc players is a problem both in nominal operation and in the detection of defects such as scratches and fingerprints. Using a Kalman estimator with an internal reference, the actual focus and radial distances are estimated. The sensor
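
    For context, a Kalman estimator of this kind fuses a model prediction with noisy sensor readings; below is a generic one-dimensional constant-velocity sketch (the CD-player loop model itself is not given in the record, so the matrices are illustrative assumptions).

        import numpy as np

        F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition: [position, velocity]
        H = np.array([[1.0, 0.0]])              # we measure position only
        Q = 1e-4 * np.eye(2)                    # process noise covariance
        R = np.array([[0.05]])                  # measurement noise covariance

        x, P = np.zeros(2), np.eye(2)           # state estimate and its covariance
        rng = np.random.default_rng(7)
        true_pos = 0.0
        for _ in range(50):
            true_pos += 0.1                                   # true motion
            z = true_pos + rng.normal(0, 0.05)                # noisy sensor signal
            x, P = F @ x, F @ P @ F.T + Q                     # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
            x = x + K @ (np.array([z]) - H @ x)               # correct with measurement
            P = (np.eye(2) - K @ H) @ P
        print(x)  # estimated [position, velocity]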

  1. HIV Incidence Estimates Using the Limiting Antigen Avidity EIA Assay at Testing Sites in Kiev City, Ukraine: 2013-2014.

    Directory of Open Access Journals (Sweden)

    Ruth Simmons

    To estimate HIV incidence and highlight the characteristics of persons at greatest risk of HIV in the Ukraine capital, Kiev. Residual samples from newly-diagnosed persons attending the Kiev City AIDS Centre were tested for evidence of recent HIV infection using an avidity assay. Questions on possible risk factors for HIV acquisition and testing history were introduced. All persons (≥16yrs) presenting for an HIV test April'13-March'14 were included. Rates per 100,000 population were calculated using region-specific denominators. During the study period 6370 individuals tested for HIV. Of the 467 individuals newly-diagnosed with HIV, 21 had insufficient samples for LAg testing. Of the remaining 446, 39 (8.7%) were classified as recent with an avidity index <1.5ODn, 10 were reclassified as long-standing as their viral load was <1000 copies/mL, resulting in 29 (6.5%) recent HIV infections. The only independent predictor for a recent infection was probable route of exposure, with MSM more likely to present with a recent infection compared with heterosexual contact [Odds Ratio 8.86; 95%CI 2.65-29.60]. We estimated HIV incidence at 21.5 per 100,000 population, corresponding to 466 new infections. Using population estimates for MSM and PWID, incidence was estimated to be between 2289.6 and 6868.7/100,000 MSM, and 350.4 for PWID. A high proportion of persons newly-infected remain undiagnosed, with MSM disproportionally affected: one in four newly-HIV-diagnosed and one in three recently-HIV-infected. Our findings should be used for targeted public health interventions and health promotion.

  2. HIV Incidence Estimates Using the Limiting Antigen Avidity EIA Assay at Testing Sites in Kiev City, Ukraine: 2013-2014.

    Science.gov (United States)

    Simmons, Ruth; Malyuta, Ruslan; Chentsova, Nelli; Karnets, Iryna; Murphy, Gary; Medoeva, Antonia; Kruglov, Yuri; Yurchenko, Alexander; Copas, Andrew; Porter, Kholoud

    2016-01-01

    To estimate HIV incidence and highlight the characteristics of persons at greatest risk of HIV in the Ukraine capital, Kiev. Residual samples from newly-diagnosed persons attending the Kiev City AIDS Centre were tested for evidence of recent HIV infection using an avidity assay. Questions on possible risk factors for HIV acquisition and testing history were introduced. All persons (≥16yrs) presenting for an HIV test April'13-March'14 were included. Rates per 100,000 population were calculated using region-specific denominators. During the study period 6370 individuals tested for HIV. Of the 467 individuals newly-diagnosed with HIV, 21 had insufficient samples for LAg testing. Of the remaining 446, 39 (8.7%) were classified as recent with an avidity index <1.5ODn, 10 were reclassified as long-standing as their viral load was <1000 copies/mL, resulting in 29 (6.5%) recent HIV infections. The only independent predictor for a recent infection was probable route of exposure, with MSM more likely to present with a recent infection compared with heterosexual contact [Odds Ratio 8.86; 95%CI 2.65-29.60]. We estimated HIV incidence at 21.5 per 100,000 population, corresponding to 466 new infections. Using population estimates for MSM and PWID, incidence was estimated to be between 2289.6 and 6868.7/100,000 MSM, and 350.4 for PWID. A high proportion of persons newly-infected remain undiagnosed, with MSM disproportionally affected with one in four newly-HIV-diagnosed and one in three recently-HIV-infected. Our findings should be used for targeted public health interventions and health promotion.

  3. HIV Incidence Estimates Using the Limiting Antigen Avidity EIA Assay at Testing Sites in Kiev City, Ukraine: 2013-2014

    Science.gov (United States)

    Kruglov, Yuri; Yurchenko, Alexander

    2016-01-01

    Objective: To estimate HIV incidence and highlight the characteristics of persons at greatest risk of HIV in the Ukraine capital, Kiev. Method: Residual samples from newly-diagnosed persons attending the Kiev City AIDS Centre were tested for evidence of recent HIV infection using an avidity assay. Questions on possible risk factors for HIV acquisition and testing history were introduced. All persons (≥16yrs) presenting for an HIV test April’13–March’14 were included. Rates per 100,000 population were calculated using region-specific denominators. Results: During the study period 6370 individuals tested for HIV. Of the 467 individuals newly-diagnosed with HIV, 21 had insufficient samples for LAg testing. Of the remaining 446, 39 (8.7%) were classified as recent with an avidity index <1.5ODn, 10 were reclassified as long-standing as their viral load was <1000 copies/mL, resulting in 29 (6.5%) recent HIV infections. The only independent predictor for a recent infection was probable route of exposure, with MSM more likely to present with a recent infection compared with heterosexual contact [Odds Ratio 8.86; 95%CI 2.65–29.60]. We estimated HIV incidence at 21.5 per 100,000 population, corresponding to 466 new infections. Using population estimates for MSM and PWID, incidence was estimated to be between 2289.6 and 6868.7/100,000 MSM, and 350.4 for PWID. Conclusion: A high proportion of persons newly-infected remain undiagnosed, with MSM disproportionally affected with one in four newly-HIV-diagnosed and one in three recently-HIV-infected. Our findings should be used for targeted public health interventions and health promotion. PMID:27276170

  4. Gradient-type methods in inverse parabolic problems

    International Nuclear Information System (INIS)

    Kabanikhin, Sergey; Penenko, Aleksey

    2008-01-01

    This article is devoted to gradient-based methods for inverse parabolic problems. In the first part, we present a priori convergence theorems based on the conditional stability estimates for linear inverse problems. These theorems are applied to the backwards parabolic problem and the sideways parabolic problem. The convergence conditions obtained coincide with sourcewise representability in the self-adjoint backwards parabolic case, but they differ in the sideways case. In the second part, a variational approach is formulated for a coefficient identification problem. Using adjoint equations, a formal gradient of an objective functional is constructed. A numerical test illustrates the performance of the conjugate gradient algorithm with the formal gradient.
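
    The key ingredient of such gradient methods is the adjoint: for a linear problem A u = f, the gradient of J(u) = 0.5 * ||A u - f||^2 is A*(A u - f). A minimal Landweber-type gradient iteration (a generic sketch, not the paper's formal-gradient construction) looks like this:

        import numpy as np

        def landweber(A, f, step, n_iter=500):
            """Gradient (Landweber) iteration for the linear inverse problem A u = f."""
            u = np.zeros(A.shape[1])
            for _ in range(n_iter):
                u -= step * (A.T @ (A @ u - f))  # A.T plays the role of the adjoint
            return u

        rng = np.random.default_rng(8)
        A = rng.standard_normal((80, 50))
        u_true = rng.standard_normal(50)
        f = A @ u_true
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # small enough to ensure convergence
        print(np.linalg.norm(landweber(A, f, step) - u_true))  # error shrinks toward zero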

  5. Limitations and problems in deriving risk estimates for low-level radiation exposure

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1981-01-01

    Some of the problems in determining the cancer risk of low-level radiation from studies of exposed groups are reviewed and applied to the study of Hanford workers by Mancuso, Stewart, and Kneale. Problems considered are statistical limitations, variation of cancer rates with geography and race, the "healthy worker effect," calendar year and age variation of cancer mortality, choosing from long lists, use of proportional mortality rates, cigarette smoking-cancer correlations, use of averages to represent data distributions, ignoring other data, and correlations between radiation exposure and other factors that may cause cancer. The current status of studies of the Hanford workers is reviewed.

  6. Statistically Efficient Methods for Pitch and DOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods.

  7. Error estimation for variational nodal calculations

    International Nuclear Information System (INIS)

    Zhang, H.; Lewis, E.E.

    1998-01-01

    Adaptive grid methods are widely employed in finite element solutions to both solid and fluid mechanics problems. Either the size of the element is reduced (h refinement) or the order of the trial function is increased (p refinement) locally to improve the accuracy of the solution without a commensurate increase in computational effort. Success of these methods requires effective local error estimates to determine those parts of the problem domain where the solution should be refined. Adaptive methods have recently been applied to the spatial variables of the discrete ordinates equations. As a first step in the development of adaptive methods that are compatible with the variational nodal method, the authors examine error estimates for use in conjunction with spatial variables. The variational nodal method lends itself well to p refinement because the space-angle trial functions are hierarchical. Here they examine an error estimator for use with spatial p refinement for the diffusion approximation. Eventually, angular refinement will also be considered using spherical harmonics approximations

  8. Chain segmentation for the Monte Carlo solution of particle transport problems

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.

    1984-01-01

    A Monte Carlo approach is proposed where the random walk chains generated in particle transport simulations are segmented. Forward and adjoint-mode estimators are then used in conjunction with the first-event source density on the segmented chains to obtain multiple estimates of the individual terms of the Neumann series solution at each collision point. The solution is then constructed by summation of the series. The approach is compared to the exact analytical results and to the Monte Carlo nonabsorption weighting method results for two representative slowing down and deep penetration problems. Application of the proposed approach leads to unbiased estimates for limited numbers of particle simulations and is useful in suppressing an effective bias problem observed in some cases of deep penetration particle transport problems.

  9. Problems over Information Systems

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The problems of estimating the minimum average time complexity of decision trees and designing efficient algorithms are complex in the general case. The upper bounds described in Chap. 2.4.3 cannot be applied directly due to the large computational complexity of the parameter M(z). Under reasonable assumptions about the relation of P and NP, there are no polynomial time algorithms with good approximation ratio [12, 32]. One of the possible solutions is to consider particular classes of problems and improve the existing results using characteristics of the considered classes. © Springer-Verlag Berlin Heidelberg 2011.

  10. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    Energy Technology Data Exchange (ETDEWEB)

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases, that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the more accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.

  11. Interface depolarization field as common denominator of fatigue and size effect in Pb(Zr0.54Ti0.46)O3 ferroelectric thin film capacitors

    Science.gov (United States)

    Bouregba, R.; Sama, N.; Soyer, C.; Poullain, G.; Remiens, D.

    2010-05-01

    Dielectric, hysteresis and fatigue measurements are performed on Pb(Zr0.54Ti0.46)O3 (PZT) thin film capacitors with different thicknesses and different electrode configurations, using platinum and LaNiO3 conducting oxide. The data are compared with those collected in a previous work devoted to the study of size effect by R. Bouregba et al. [J. Appl. Phys. 106, 044101 (2009)]. Deterioration of the ferroelectric properties, consecutive to fatigue cycling and thickness downscaling, presents very similar characteristics and allows drawing a direct correlation between the two phenomena. Namely, the interface depolarization field (Edep) resulting from interface chemistry is found to be the common denominator; the fatigue phenomenon is a manifestation of the strengthening of Edep over time. Changes in dielectric permittivity, in remnant and coercive values, as well as in the shape of hysteresis loops, are mediated by competition between degradation of the dielectric properties of the interfaces and possible accumulation of interface space charge. It is proposed that the presence in the band gap of trap energy levels with large time constant, due to defects in small nonferroelectric regions at the electrode-PZT film interfaces, ultimately governs the aging process. Size effect and aging process may be seen as two facets of the same underlying mechanism; the only difference lies in the observation time of the phenomena.

  12. Probabilistic formulation of estimation problems for a class of Hamilton-Jacobi equations

    KAUST Repository

    Hofleitner, Aude; Claudel, Christian G.; Bayen, Alexandre M.

    2012-01-01

    This article presents a method for deriving the probability distribution of the solution to a Hamilton-Jacobi partial differential equation for which the value conditions are random. The derivations lead to analytical or semi-analytical expressions of the probability distribution function at any point in the domain in which the solution is defined. The characterization of the distribution of the solution at any point is a first step towards the estimation of the parameters defining the random value conditions. This work has important applications for estimation in flow networks in which value conditions are noisy. In particular, we illustrate our derivations on a road segment with random capacity reductions. © 2012 IEEE.

  13. Probabilistic formulation of estimation problems for a class of Hamilton-Jacobi equations

    KAUST Repository

    Hofleitner, Aude

    2012-12-01

    This article presents a method for deriving the probability distribution of the solution to a Hamilton-Jacobi partial differential equation for which the value conditions are random. The derivations lead to analytical or semi-analytical expressions of the probability distribution function at any point in the domain in which the solution is defined. The characterization of the distribution of the solution at any point is a first step towards the estimation of the parameters defining the random value conditions. This work has important applications for estimation in flow networks in which value conditions are noisy. In particular, we illustrate our derivations on a road segment with random capacity reductions. © 2012 IEEE.

  14. Bayesian estimates of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Abad-Grau María M

    2007-06-01

    Background: The maximum likelihood estimator of D', a standard measure of linkage disequilibrium, is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results: This paper proposes a Bayesian estimation of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distances between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion: Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view about the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome wide association studies.

  15. A fast and automatically paired 2-D direction-of-arrival estimation with and without estimating the mutual coupling coefficients

    Science.gov (United States)

    Filik, Tansu; Tuncer, T. Engin

    2010-06-01

    A new technique is proposed for the solution of the pairing problem which is observed when fast algorithms are used for two-dimensional (2-D) direction-of-arrival (DOA) estimation. The proposed method is integrated with array interpolation for efficient use of antenna elements. Two virtual arrays are generated which are positioned accordingly with respect to the real array. The ESPRIT algorithm is used by employing both the real and virtual arrays. The eigenvalues of the rotational transformation matrix carry the angle information in both magnitude and phase, which allows the estimation of azimuth and elevation angles by using closed-form expressions. This idea is used to obtain the paired interpolated ESPRIT algorithm, which can be applied to arbitrary arrays when there is no mutual coupling. When there is mutual coupling, two approaches are proposed in order to obtain 2-D paired DOA estimates. These blind methods can be applied to array geometries which have mutual coupling matrices with a Toeplitz structure. The first approach finds the 2-D paired DOA angles without estimating the mutual coupling coefficients. The second approach estimates the coupling coefficients and iteratively improves both the coupling coefficients and the DOA estimates. It is shown that the proposed techniques solve the pairing problem for uniform circular arrays and effectively estimate the DOA angles in case of unknown mutual coupling.

  16. Problems and solutions in the estimation of genetic risks from radiation and chemicals

    International Nuclear Information System (INIS)

    Russell, W.L.

    1980-01-01

    Extensive investigations with mice on the effects of various physical and biological factors, such as dose rate, sex and cell stage, on radiation-induced mutation have provided an evaluation of the genetic hazards of radiation in man. The mutational results obtained in both sexes with progressive lowering of the radiation dose rate have permitted estimation of the mutation frequency expected under the low-level radiation conditions of most human exposure. Supplementing the studies on mutation frequency are investigations on the phenotypic effects of mutations in mice, particularly anatomical disorders of the skeleton, which allow an estimation of the degree of human handicap associated with the occurrence of parallel defects in man. Estimation of the genetic risk from chemical mutagens is much more difficult, and the research is much less advanced. Results on transmitted mutations in mice indicate a poor correlation with mutation induction in non-mammalian organisms.

  17. Stability Estimates for h-p Spectral Element Methods for Elliptic Problems

    NARCIS (Netherlands)

    Dutt, Pravir; Tomar, S.K.; Kumar, B.V. Rathish

    2002-01-01

    In a series of papers of which this is the first we study how to solve elliptic problems on polygonal domains using spectral methods on parallel computers. To overcome the singularities that arise in a neighborhood of the corners we use a geometrical mesh. With this mesh we seek a solution which

  18. Numerical methods for hyperbolic differential functional problems

    Directory of Open Access Journals (Sweden)

    Roman Ciarski

    2008-01-01

    The paper deals with the initial boundary value problem for quasilinear first order partial differential functional systems. A general class of difference methods for the problem is constructed. Theorems on the error estimate of approximate solutions for difference functional systems are presented. The convergence results are proved by means of consistency and stability arguments. A numerical example is given.

  19. Inverse problems for the Boussinesq system

    International Nuclear Information System (INIS)

    Fan, Jishan; Jiang, Yu; Nakamura, Gen

    2009-01-01

    We obtain two results on inverse problems for a 2D Boussinesq system. One is that we prove the Lipschitz stability for the inverse source problem of identifying a time-independent external force in the system, with observation data in an arbitrary sub-domain over a time interval of the velocity and the data of velocity and temperature at a fixed positive time t0 > 0 over the whole spatial domain. The other is that we prove a conditional stability estimate for an inverse problem of identifying the two initial conditions with a single observation on a sub-domain

  20. The high burden of cholera in children: comparison of incidence from endemic areas in Asia and Africa.

    Directory of Open Access Journals (Sweden)

    Jacqueline L Deen

    BACKGROUND: Cholera remains an important public health problem. Yet there are few reliable population-based estimates of laboratory-confirmed cholera incidence in endemic areas around the world. METHODS: We established treatment facility-based cholera surveillance in three sites in Jakarta (Indonesia), Kolkata (India), and Beira (Mozambique). The annual incidence of cholera was estimated using the population census as the denominator and the age-specific number of cholera cases among the study cohort as the numerator. FINDINGS: The lowest overall rate was found in Jakarta, where the estimated incidence was 0.5/1000 population/year. The incidence was three times higher in Kolkata (1.6/1000/year) and eight times higher in Beira (4.0/1000/year). In all study sites, the greatest burden was in children under 5 years of age. CONCLUSION: There are considerable differences in cholera incidence across these endemic areas but in all sites, children are the most affected. The study site in Africa had the highest cholera incidence, consistent with a growing impression of the large cholera burden in Africa. Burden estimates are useful when considering where and among whom interventions such as vaccination would be most needed.

  1. The application of mean field theory to image motion estimation.

    Science.gov (United States)

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  2. Estimation of the Mean of a Univariate Normal Distribution When the Variance is not Known

    NARCIS (Netherlands)

    Danilov, D.L.; Magnus, J.R.

    2002-01-01

    We consider the problem of estimating the first k coefficients in a regression equation with k + 1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We investigate properties of this estimator in

  3. Estimation of the mean of a univariate normal distribution when the variance is not known

    NARCIS (Netherlands)

    Danilov, Dmitri

    2005-01-01

    We consider the problem of estimating the first k coefficients in a regression equation with k+1 variables. For this problem with known variance of innovations, the neutral Laplace weighted-average least-squares estimator was introduced in Magnus (2002). We generalize this estimator to the case

  4. State estimation of spatio-temporal phenomena

    Science.gov (United States)

    Yu, Dan

    This dissertation addresses the state estimation problem of spatio-temporal phenomena which can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using the Kalman Filter (KF) is computationally intractable, and hence, a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of the KF could be improved by placing sensors at feasible locations. Therefore, the sensor scheduling problem to place multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM to capture the input-output behaviour of the full order model, while reducing the computational cost of BPOD by orders of magnitude. It is demonstrated that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is shown on different advection-diffusion equations. Next, we consider the state estimation problem of linear discrete-time systems with unknown inputs which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known. We propose an autoregressive (AR) model based unknown input realization technique which allows us to recover the input
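
    The randomized flavor of model reduction mentioned above rests on randomized low-rank factorization of a snapshot matrix; here is the standard randomized SVD sketch (a textbook construction, not the dissertation's RPOD* algorithm).

        import numpy as np

        def randomized_svd(X, rank, n_oversample=10):
            """Randomized SVD of a snapshot matrix X (rows: states, cols: snapshots)."""
            rng = np.random.default_rng(9)
            Omega = rng.standard_normal((X.shape[1], rank + n_oversample))
            Q, _ = np.linalg.qr(X @ Omega)       # orthonormal basis for the range of X
            Ub, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
            return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

        # Synthetic rank-8 snapshot matrix standing in for PDE snapshots.
        rng = np.random.default_rng(10)
        X = rng.standard_normal((5000, 8)) @ rng.standard_normal((8, 300))
        U, s, Vt = randomized_svd(X, rank=8)
        print(s)  # the 8 dominant singular values of X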

  5. Identification of the Diffusion Parameter in Nonlocal Steady Diffusion Problems

    Energy Technology Data Exchange (ETDEWEB)

    D’Elia, M., E-mail: mdelia@fsu.edu, E-mail: mdelia@sandia.gov [Sandia National Laboratories (United States); Gunzburger, M. [Florida State University (United States)

    2016-04-15

    The problem of identifying the diffusion parameter appearing in a nonlocal steady diffusion equation is considered. The identification problem is formulated as an optimal control problem having a matching functional as the objective of the control and the parameter function as the control variable. The analysis makes use of a nonlocal vector calculus that allows one to define a variational formulation of the nonlocal problem. In a manner analogous to the local partial differential equations counterpart, we demonstrate, for certain kernel functions, the existence of at least one optimal solution in the space of admissible parameters. We introduce a Galerkin finite element discretization of the optimal control problem and derive a priori error estimates for the approximate state and control variables. Using one-dimensional numerical experiments, we illustrate the theoretical results and show that by using nonlocal models it is possible to estimate non-smooth and discontinuous diffusion parameters.

  6. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-07

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.
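
    The simplest member of the family of sensitivity estimators alluded to above is a finite-difference estimator driven by common random numbers. The sketch below applies it to a one-species birth-death network (a hypothetical example chosen for its known answer, not one taken from the talk): for the target dE[X(T)]/dk with birth rate k and unit death rate, the exact value is 1 - exp(-T).

      import numpy as np

      def ssa_birth_death(k_birth, k_death, x0, t_end, rng):
          """Gillespie SSA for 0 -> X (rate k_birth) and X -> 0 (rate k_death*x);
          returns the copy number at time t_end."""
          t, x = 0.0, x0
          while True:
              total = k_birth + k_death * x
              t += rng.exponential(1.0 / total)
              if t > t_end:
                  return x
              if rng.random() < k_birth / total:
                  x += 1
              else:
                  x -= 1

      def sensitivity_crn(k, h=0.05, n_samples=5000, t_end=5.0, seed=1):
          """Finite-difference estimate of dE[X(T)]/dk with common random
          numbers: the same seed drives the nominal and perturbed paths."""
          acc = 0.0
          for i in range(n_samples):
              xp = ssa_birth_death(k + h, 1.0, 0, t_end, np.random.default_rng((seed, i)))
              xm = ssa_birth_death(k, 1.0, 0, t_end, np.random.default_rng((seed, i)))
              acc += (xp - xm) / h
          return acc / n_samples

      print(sensitivity_crn(2.0))   # exact value: 1 - exp(-5) = 0.9933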

  7. Invisibility problem in acoustics, electromagnetism and heat transfer. Inverse design method

    Science.gov (United States)

    Alekseev, G.; Tokhtina, A.; Soboleva, O.

    2017-10-01

    Two approaches (direct design and inverse design methods) for solving problems of designing devices that render material bodies invisible to detection by different physical fields - electromagnetic, acoustic and static - are discussed. The second method is applied to designing cloaking devices for the 3D stationary thermal scattering model. Based on this method, the design problems under study are reduced to respective control problems. The material parameters (radial and tangential heat conductivities) of the inhomogeneous anisotropic medium filling the thermal cloak and the density of auxiliary heat sources play the role of controls. The unique solvability of the direct thermal scattering problem in the Sobolev space is proved and new estimates of solutions are established. Using these results, the solvability of the control problem is proved and the optimality system is derived. Based on an analysis of the optimality system, stability estimates of optimal solutions are established and numerical algorithms for solving a particular thermal cloaking problem are proposed.

  8. Solution to the inversely stated transient source-receptor problem

    International Nuclear Information System (INIS)

    Sajo, E.; Sheff, J.R.

    1995-01-01

    Transient source-receptor problems are traditionally handled via the Boltzmann equation or through one of its variants. In the atmospheric transport of pollutants, meteorological uncertainties in the planetary boundary layer render only a few approximations to the Boltzmann equation useful. Often, due to the high number of unknowns, the atmospheric source-receptor problem is ill-posed. Moreover, models that estimate downwind concentration invariably assume that the source term is known. In this paper, an inverse methodology is developed, based on downwind measurements of concentration and of meteorological parameters, to estimate the source term

  9. On estimation of the intensity function of a point process

    NARCIS (Netherlands)

    Lieshout, van M.N.M.

    2010-01-01

    Abstract. Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and
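
    The kernel interpretation can be made concrete with a fixed-bandwidth version: an intensity estimate sums (rather than averages) a kernel over the observed points. The adaptive, tessellation-based bandwidth is the paper's contribution; the fixed-bandwidth sketch below only illustrates the kernel viewpoint.

      import numpy as np

      def kernel_intensity(points, x_grid, h):
          """Fixed-bandwidth Gaussian kernel estimate of a 1-D intensity:
          lambda_hat(x) = sum_i K_h(x - x_i). Summing, not averaging,
          is what distinguishes intensity from density estimation."""
          u = (x_grid[:, None] - points[None, :]) / h
          return np.exp(-0.5 * u**2).sum(axis=1) / (np.sqrt(2 * np.pi) * h)

      pts = np.sort(np.random.rand(200)) ** 2          # inhomogeneous toy pattern
      grid = np.linspace(0.0, 1.0, 100)
      lam = kernel_intensity(pts, grid, h=0.05)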

  10. Assuring Software Cost Estimates: Is it an Oxymoron?

    Science.gov (United States)

    Hihn, Jarius; Tregre, Grant

    2013-01-01

    The software industry repeatedly observes cost growth of well over 100% even after decades of cost estimation research and well-known best practices, so "What's the problem?" In this paper we provide an overview of the current state of software cost estimation best practice. We then explore whether applying some of the methods used in software assurance might improve the quality of software cost estimates. This paper focuses especially on issues associated with model calibration, estimate review, and the development and documentation of estimates as part of an integrated plan.

  11. Support Vector Regression-Based Adaptive Divided Difference Filter for Nonlinear State Estimation Problems

    Directory of Open Access Journals (Sweden)

    Hongjian Wang

    2014-01-01

    Full Text Available We present a support vector regression-based adaptive divided difference filter (SVRADDF) algorithm for improving the low state estimation accuracy of nonlinear systems, which are typically affected by large initial estimation errors and imprecise prior knowledge of process and measurement noises. The derivative-free SVRADDF algorithm is significantly simpler to compute than other methods and is implemented using only functional evaluations. The SVRADDF algorithm involves the use of the theoretical and actual covariance of the innovation sequence. Support vector regression (SVR) is employed to generate the adaptive factor to tune the noise covariance at each sampling instant when the measurement update step executes, which improves the algorithm's robustness. The performance of the proposed algorithm is evaluated by estimating states for (i) an underwater nonmaneuvering target bearing-only tracking system and (ii) maneuvering target bearing-only tracking in an air-traffic control system. The simulation results show that the proposed SVRADDF algorithm exhibits better performance when compared with a traditional DDF algorithm.

  12. An Adaptive Approach to Variational Nodal Diffusion Problems

    International Nuclear Information System (INIS)

    Zhang Hui; Lewis, E.E.

    2001-01-01

    An adaptive grid method is presented for the solution of neutron diffusion problems in two dimensions. The primal hybrid finite elements employed in the variational nodal method are used to reduce the diffusion equation to a coupled set of elemental response matrices. An a posteriori error estimator is developed to indicate the magnitude of local errors stemming from the low-order elemental interface approximations. An iterative procedure is implemented in which p refinement is applied locally by increasing the polynomial order of the interface approximations. The automated algorithm utilizes the a posteriori estimator to achieve local error reductions until an acceptable level of accuracy is reached throughout the problem domain. Application to a series of X-Y benchmark problems indicates the reduction of computational effort achievable by replacing uniform with adaptive refinement of the spatial approximations

  13. Dual and primal mixed Petrov-Galerkin finite element methods in heat transfer problems

    International Nuclear Information System (INIS)

    Loula, A.F.D.; Toledo, E.M.

    1988-12-01

    New mixed finite element formulations for the steady state heat transfer problem are presented with no limitation in the choice of conforming finite element spaces. Adding least square residual forms of the governing equations of the classical Galerkin formulation the original saddle point problem is transformed into a minimization problem. Stability analysis, error estimates and numerical results are presented, confirming the error estimates and the good performance of this new formulation. (author) [pt

  14. A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection

    KAUST Repository

    Park, Chiwoo; Huang, Jianhua Z.; Ding, Yu

    2010-01-01

    A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications-a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.
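
    A minimal sketch of the modified plug-in idea attributed to Hyndman above: fit a kernel density estimate, evaluate it at the sample itself, and use the empirical alpha-quantile of those values as the level, so the estimated set {x : f_hat(x) >= level} covers roughly mass 1 - alpha. This illustrates the general recipe under stated assumptions; it is not the paper's exact estimator.

      import numpy as np
      from scipy.stats import gaussian_kde

      def mvs_level(sample, alpha=0.05):
          """Plug-in level for a minimum-volume-set estimate with
          coverage of about 1 - alpha."""
          kde = gaussian_kde(sample)
          return kde, np.quantile(kde(sample), alpha)

      train = np.random.randn(1, 500)        # gaussian_kde expects (dim, n)
      kde, level = mvs_level(train)
      test = np.array([[0.1, 4.0]])
      novel = kde(test) < level              # flags 4.0 as a novelty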

  16. Linear regression techniques for state-space models with application to biomedical/biochemical example

    NARCIS (Netherlands)

    Khairudin, N.; Keesman, K.J.

    2009-01-01

    In this paper a novel approach to estimate parameters in an LTI continuous-time state-space model is proposed. Essentially, the approach is based on a so-called pqR-decomposition of the numerator and denominator polynomials of the system's transfer function. This approach allows the physical

  17. Global Optimization of Nonlinear Blend-Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Pedro A. Castillo Castillo

    2017-04-01

    Full Text Available The scheduling of gasoline-blending operations is an important problem in the oil refining industry. This problem not only exhibits the combinatorial nature that is intrinsic to scheduling problems, but also non-convex nonlinear behavior, due to the blending of various materials with different quality properties. In this work, a global optimization algorithm is proposed to solve a previously published continuous-time mixed-integer nonlinear scheduling model for gasoline blending. The model includes blend recipe optimization, the distribution problem, and several important operational features and constraints. The algorithm employs a piecewise McCormick relaxation (PMCR) and a normalized multiparametric disaggregation technique (NMDT) to compute estimates of the global optimum. These techniques partition the domain of one of the variables in a bilinear term and generate convex relaxations for each partition. By increasing the number of partitions and reducing the domain of the variables, the algorithm is able to refine the estimates of the global solution. The algorithm is compared to two commercial global solvers and two heuristic methods by solving four examples from the literature. Results show that the proposed global optimization algorithm performs on par with commercial solvers but is not as fast as the heuristic approaches.
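
    For reference, the relaxation underlying PMCR replaces each bilinear term w = xy by its McCormick envelope on the box x in [x^L, x^U], y in [y^L, y^U]; piecewise McCormick imposes the same four inequalities on each partition of the x-domain. The standard textbook form (not quoted from the paper itself) is:

      \begin{aligned}
      w &\ge x^L y + x y^L - x^L y^L, \qquad & w &\ge x^U y + x y^U - x^U y^U,\\
      w &\le x^U y + x y^L - x^U y^L, \qquad & w &\le x^L y + x y^U - x^L y^U.
      \end{aligned}

    Tightening the bounds on the partitioned variable shrinks the gap between the relaxation and the true bilinear term, which is what drives the refinement loop described above.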

  18. Dense Descriptors for Optical Flow Estimation: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Ahmadreza Baghaie

    2017-02-01

    Full Text Available Estimating the displacements of intensity patterns between sequential frames is a very well-studied problem, which is usually referred to as optical flow estimation. The first assumption among many of the methods in the field is the brightness constancy during movements of pixels between frames. This assumption is proven to be not true in general, and therefore, the use of photometric invariant constraints has been studied in the past. One other solution can be sought by use of structural descriptors rather than pixels for estimating the optical flow. Unlike sparse feature detection/description techniques and since the problem of optical flow estimation tries to find a dense flow field, a dense structural representation of individual pixels and their neighbors is computed and then used for matching and optical flow estimation. Here, a comparative study is carried out by extending the framework of SIFT-flow to include more dense descriptors, and comprehensive comparisons are given. Overall, the work can be considered as a baseline for stimulating more interest in the use of dense descriptors for optical flow estimation.

  19. Is the Rational Addiction model inherently impossible to estimate?

    Science.gov (United States)

    Laporte, Audrey; Dass, Adrian Rohit; Ferguson, Brian S

    2017-07-01

    The Rational Addiction (RA) model is increasingly often estimated using individual level panel data with mixed results; in particular, with regard to the implied rate of time discount. This paper suggests that the odd values of the rate of discount frequently found in the literature may in fact be a consequence of the saddle-point dynamics associated with individual level inter-temporal optimization problems. We report the results of Monte Carlo experiments estimating RA-type difference equations that seem to suggest the possibility that the presence of both a stable and an unstable root in the dynamic process may create serious problems for the estimation of RA equations. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A software for parameter estimation in dynamic models

    Directory of Open Access Journals (Sweden)

    M. Yuceer

    2008-12-01

    Full Text Available A common problem in dynamic systems is to determine parameters in an equation used to represent experimental data. The goal is to determine the values of model parameters that provide the best fit to measured data, generally based on some type of least squares or maximum likelihood criterion. In the most general case, this requires the solution of a nonlinear and frequently non-convex optimization problem. Some of the available software lacks generality, while other packages do not provide ease of use. A user-interactive parameter estimation software was needed for identifying kinetic parameters. In this work we developed an integration-based optimization approach to provide a solution to such problems. For easy implementation of the technique, a parameter estimation software (PARES) has been developed in the MATLAB environment. When tested on extensive example problems from the literature, the suggested approach proved to provide good agreement between predicted and observed data with relatively little computing time and few iterations.
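
    The integration-based approach described above can be sketched in a few lines: the ODE model is integrated inside the residual function, and a least-squares solver adjusts the kinetic parameters. This is a generic illustration of the technique (it is not the PARES code, and the two-step kinetics below is a made-up example).

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def model(t, y, k1, k2):
          a, b = y                       # A -> B -> C kinetics
          return [-k1 * a, k1 * a - k2 * b]

      def residuals(theta, t_data, b_data):
          # Integrate the model for the candidate parameters, then return
          # the misfit in the measured species B.
          sol = solve_ivp(model, (0.0, t_data[-1]), [1.0, 0.0],
                          t_eval=t_data, args=tuple(theta))
          return sol.y[1] - b_data

      t_data = np.linspace(0.0, 10.0, 25)
      b_data = solve_ivp(model, (0.0, 10.0), [1.0, 0.0], t_eval=t_data,
                         args=(0.8, 0.3)).y[1] + 0.01 * np.random.randn(25)
      fit = least_squares(residuals, x0=[0.5, 0.5], args=(t_data, b_data))
      print(fit.x)                       # close to the true values (0.8, 0.3)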

  1. Asynchronous machine rotor speed estimation using a tabulated numerical approach

    Science.gov (United States)

    Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane

    2017-12-01

    This paper proposes a new method to estimate the rotor speed of the asynchronous machine by looking at the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected satisfying the dynamic fitting problem between the plant output and the predicted output, leading the system to adopt its dynamical behavior. Thanks to the limitation of the prediction horizon to a single time-step, the execution time of the algorithm can be completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.

  2. Sinusoidal Order Estimation Using Angles between Subspaces

    Directory of Open Access Journals (Sweden)

    Søren Holdt Jensen

    2009-01-01

    Full Text Available We consider the problem of determining the order of a parametric model from a noisy signal based on the geometry of the space. More specifically, we do this using the nontrivial angles between the candidate signal subspace model and the noise subspace. The proposed principle is closely related to the subspace orthogonality property known from the MUSIC algorithm, and we study its properties and compare it to other related measures. For the problem of estimating the number of complex sinusoids in white noise, a computationally efficient implementation exists, and this problem is therefore considered in detail. In computer simulations, we compare the proposed method to various well-known methods for order estimation. These show that the proposed method outperforms the other previously published subspace methods and that it is more robust to the noise being colored than the previously published methods.

  3. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations

  4. From Platform to Partnership

    NARCIS (Netherlands)

    R.J.M. van Tulder (Rob)

    2011-01-01

    Increasingly, multi-stakeholder processes are being used in response to complex, 'tough' or 'wicked' problems such as responding to climate change, hunger or poverty. This development is also denominated as 'engaging stakeholders for change' (The Broker, blog January 2011). But there is

  5. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
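
    For completeness, the stopping rule mentioned above is presumably the usual Akaike criterion (the standard definition is assumed here): for a candidate model with k free parameters and maximized likelihood \hat{L}_k,

      \mathrm{AIC}(k) = 2k - 2\ln \hat{L}_k,

    and the mode count minimizing AIC is selected.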

  6. Minimax estimation of qubit states with Bures risk

    Science.gov (United States)

    Acharya, Anirudh; Guţă, Mădălin

    2018-04-01

    The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/\sqrt{n} for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies based on separate adaptive measurements, and collective measurements, that achieve 1/n scalings for the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimator eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques which allow us to derive upper and lower bounds on its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n^{-1}\log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates; in particular, the 'standard' rate n^{-1} cannot be improved upon.

  7. Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2007-07-01

    Full Text Available For the problem of estimating the money demand model of Pakistan, the money supply (M1) shows heteroscedasticity of an unknown form. For estimating such a model we compare two adaptive estimators with the ordinary least squares estimator and show the attractive performance of the adaptive estimators, namely, a nonparametric kernel estimator and a nearest neighbour regression estimator. These comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of regression, the Akaike Information Criterion (AIC) value, and the Durbin-Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better when compared with the other, nonparametric kernel, estimator.

  8. Benchmarking energy use and greenhouse gas emissions in Singapore's hotel industry

    International Nuclear Information System (INIS)

    Wu Xuchao; Priyadarsini, Rajagopalan; Eang, Lee Siew

    2010-01-01

    Hotel buildings are reported in many countries to be one of the most energy-intensive building sectors. Besides the pressure posed on energy supply, they also have an adverse impact on the environment through greenhouse gas emissions, wastewater discharge and so on. This study was intended to shed some light on the energy- and environment-related issues in the hotel industry. Energy consumption data and relevant information collected from hotels were subjected to rigorous statistical analysis. A regression-based benchmarking model was established, which takes into account the difference in functional and operational features when hotels are compared with regard to their energy performance. In addition, CO2 emissions from the surveyed hotels were estimated based on a standard procedure for corporate GHG emission accounting. It was found that a hotel's carbon intensity ranking is rather sensitive to the normalizing denominator chosen. Therefore, carbon intensity estimates for hotels must not be interpreted arbitrarily, and industry-specific normalizing denominators should be sought in future studies.

  9. Explaining behavior change after genetic testing: the problem of collinearity between test results and risk estimates.

    Science.gov (United States)

    Fanshawe, Thomas R; Prevost, A Toby; Roberts, J Scott; Green, Robert C; Armstrong, David; Marteau, Theresa M

    2008-09-01

    This paper explores whether and how the behavioral impact of genotype disclosure can be disentangled from the impact of numerical risk estimates generated by genetic tests. Secondary data analyses are presented from a randomized controlled trial of 162 first-degree relatives of Alzheimer's disease (AD) patients. Each participant received a lifetime risk estimate of AD. Control group estimates were based on age, gender, family history, and assumed epsilon4-negative apolipoprotein E (APOE) genotype; intervention group estimates were based upon the first three variables plus true APOE genotype, which was also disclosed. AD-specific self-reported behavior change (diet, exercise, and medication use) was assessed at 12 months. Behavior change was significantly more likely with increasing risk estimates, and also more likely, but not significantly so, in epsilon4-positive intervention group participants (53% changed behavior) than in control group participants (31%). Intervention group participants receiving epsilon4-negative genotype feedback (24% changed behavior) and control group participants had similar rates of behavior change and risk estimates, the latter allowing assessment of the independent effects of genotype disclosure. However, collinearity between risk estimates and epsilon4-positive genotypes, which engender high-risk estimates, prevented assessment of the independent effect of the disclosure of an epsilon4 genotype. Novel study designs are proposed to determine whether genotype disclosure has an impact upon behavior beyond that of numerical risk estimates.

  10. The efficiency of modified jackknife and ridge type regression estimators: a comparison

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2008-09-01

    Full Text Available A common problem in multiple regression models is multicollinearity, which produces undesirable effects on the least squares estimator. To circumvent this problem, two well known estimation procedures are often suggested in the literature. They are Generalized Ridge Regression (GRR) estimation, suggested by Hoerl and Kennard, and Jackknifed Ridge Regression (JRR) estimation, suggested by Singh et al. GRR estimation leads to a reduction in the sampling variance, whereas JRR leads to a reduction in the bias. In this paper, we propose a new estimator, namely the Modified Jackknife Ridge Regression (MJR) estimator. It is based on a criterion that combines the ideas underlying both the GRR and JRR estimators. We have investigated standard properties of this new estimator. From a simulation study, we find that the new estimator often outperforms the LASSO, and it is superior to both the GRR and JRR estimators under the mean squared error criterion. The conditions under which the MJR estimator is better than the other two competing estimators have been investigated.
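
    For context, the two classical estimators being combined have standard closed forms; writing the linear model as y = X\beta + \varepsilon, the textbook expressions (the MJR estimator itself is defined in the paper and is not reproduced here) are

      \hat{\beta}_{\mathrm{OLS}} = (X^{\top}X)^{-1}X^{\top}y, \qquad
      \hat{\beta}_{\mathrm{GRR}}(K) = (X^{\top}X + K)^{-1}X^{\top}y,

    with K = \mathrm{diag}(k_1, \ldots, k_p), k_j \ge 0. Ridge shrinkage reduces variance at the price of bias; jackknifing the ridge estimator then trims that bias, and the MJR estimator combines the two ideas.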

  11. The Game of Contacts: Estimating the Social Visibility of Groups.

    Science.gov (United States)

    Salganik, Matthew J; Mello, Maeve B; Abdo, Alexandre H; Bertoni, Neilane; Fazito, Dimitri; Bastos, Francisco I

    2011-01-01

    Estimating the sizes of hard-to-count populations is a challenging and important problem that occurs frequently in social science, public health, and public policy. This problem is particularly pressing in HIV/AIDS research because estimates of the sizes of the most at-risk populations-illicit drug users, men who have sex with men, and sex workers-are needed for designing, evaluating, and funding programs to curb the spread of the disease. A promising new approach in this area is the network scale-up method, which uses information about the personal networks of respondents to make population size estimates. However, if the target population has low social visibility, as is likely to be the case in HIV/AIDS research, scale-up estimates will be too low. In this paper we develop a game-like activity that we call the game of contacts in order to estimate the social visibility of groups, and report results from a study of heavy drug users in Curitiba, Brazil (n = 294). The game produced estimates of social visibility that were consistent with qualitative expectations but of surprising magnitude. Further, a number of checks suggest that the data are high-quality. While motivated by the specific problem of population size estimation, our method could be used by researchers more broadly and adds to long-standing efforts to combine the richness of social network analysis with the power and scale of sample surveys.
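
    The basic network scale-up estimator referenced above is simple to state. With y_i the number of members of the hidden population known to respondent i, \hat{d}_i respondent i's estimated personal network size, and N the total population, the standard form (not specific to this paper) is

      \hat{N}_H = N \cdot \frac{\sum_i y_i}{\sum_i \hat{d}_i}.

    Low social visibility deflates the y_i and hence \hat{N}_H, which is precisely the bias the game of contacts is designed to quantify.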

  12. Correlation-based decimation in constraint satisfaction problems

    International Nuclear Information System (INIS)

    Higuchi, Saburo; Mezard, Marc

    2010-01-01

    We study hard constraint satisfaction problems using decimation algorithms based on mean-field approximations. The message-passing approach is used to estimate, besides the usual one-variable marginals, the pair correlation functions. The identification of strongly correlated pairs allows one to use a new decimation procedure, in which the relative orientation of a pair of variables is fixed. We apply this novel decimation to locked occupation problems, a class of hard constraint satisfaction problems where the usual belief-propagation-guided decimation performs poorly. The pair-decimation approach provides a significant improvement.

  13. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency under a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
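
    For background, conditional quantile estimation rests on the check (pinball) loss, which is standard and presumably underlies the construction above: the conditional tau-quantile of Y given X = x minimizes E[\rho_\tau(Y - q) \mid X = x] over q, where

      \rho_\tau(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr).

    Training a network with this loss therefore yields an estimate of the conditional tau-quantile rather than the conditional mean.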

  14. Topics on the problem of genetic risk estimates

    International Nuclear Information System (INIS)

    Nakai, Sayaka

    1995-01-01

    Reanalysis of the data on untoward pregnancy outcome (UPO) for atomic bomb survivors was undertaken based on the following current results of cytogenetic studies obtained in Japan: 1) Human gametes are very sensitive to the production of chromosome aberrations, whether of spontaneous or radiation-induced origin. 2) The dose-response relation to radiation shows a humped curve in the relatively low dose range below 3 Gy. 3) There is very severe selection against embryos carrying chromosome aberrations during fetal development before birth. It was concluded that 1) the humped dose-response model fits better than the linear dose model; 2) the regression coefficient for the slope of UPO at low doses derived from the humped dose model is about 6 times higher than the previous value based on the linear model; 3) the risk factor for genetic detriment in terms of UPO is estimated as 0.015/Gy for radiation exposures below 1 Gy; 4) it was difficult to find positive evidence supporting the view of Neel et al. that present estimates of the doubling dose based on mouse data are underestimates. (author)

  15. A comparative study of some robust ridge and Liu estimators ...

    African Journals Online (AJOL)

    In multiple linear regression analysis, multicollinearity and outliers are two main problems. When multicollinearity exists, biased estimation techniques such as Ridge and Liu Estimators are preferable to Ordinary Least Square. On the other hand, when outliers exist in the data, robust estimators like M, MM, LTS and S ...

  16. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.

  18. Risk of injury in basketball, football, and soccer players, ages 15 years and older, 2003-2007.

    Science.gov (United States)

    Carter, Elizabeth A; Westerman, Beverly J; Hunting, Katherine L

    2011-01-01

    A major challenge in the field of sports injury epidemiology is identifying the appropriate denominators for injury rates. To characterize risk of injury from participation in basketball, football, and soccer in the United States, using hours of participation as the measure of exposure, and to compare these rates with those derived using population estimates in the denominator. Descriptive epidemiology study. United States, 2003-2007. People ages 15 years and older who experienced an emergency department-treated injury while playing basketball, football, or soccer. Rates of emergency department-treated injuries resulting from participation in basketball, football, or soccer. Injury rates were calculated for people ages 15 and older for the years 2003-2007 using the U.S. population and hours of participation as the denominators. The risk of injury associated with each of these sports was compared for all participants and by sex. From 2003 through 2007, annual injury rates per 1000 U.S. population were as follows: 1.49 (95% confidence interval [CI] = 1.30, 1.67) in basketball, 0.93 (95% CI = 0.82, 1.04) in football, and 0.43 (95% CI = 0.33, 0.53) in soccer. When the denominator was hours of participation, the injury rate in football (5.08 [95% CI = 4.46, 5.69]/10 000 hours) was almost twice as high as that for basketball (2.69 [95% CI = 2.35, 3.02]/10 000 hours) and soccer (2.69 [95% CI = 2.07, 3.30]/10 000 hours). Depending on the choice of denominator, interpretation of the risk of an emergency department-treated injury in basketball, football, or soccer varies greatly. Using the U.S. population as the denominator produced rates that were highest in basketball and lowest in soccer. However, using hours of participation as a more accurate measure of exposure demonstrated that football had a higher rate of injury than basketball or soccer for both males and females.
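
    The arithmetic behind the two denominators is worth making explicit. The counts in the snippet below are invented for illustration (the study's actual rates are quoted in the abstract above); only the formulas matter.

      # Same numerator, two denominators: per-capita versus per-exposure rates.
      injuries = 250_000                 # hypothetical ED-treated injuries
      population = 240_000_000           # hypothetical population aged 15+
      hours_played = 600_000_000         # hypothetical participation hours

      rate_per_1000_pop = injuries / population * 1_000
      rate_per_10k_hours = injuries / hours_played * 10_000
      print(f"{rate_per_1000_pop:.2f} per 1000 population")
      print(f"{rate_per_10k_hours:.2f} per 10,000 participation hours")

    A sport with few but intensely exposed participants can look safe on the first measure and risky on the second, which is exactly the reversal the study reports for football.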

  19. Data Reduction with Quantization Constraints for Decentralized Estimation in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Weng

    2014-01-01

    Full Text Available The unknown vector estimation problem for a bandwidth-constrained wireless sensor network is considered. In such networks, sensor nodes make distributed observations on the unknown vector and collaborate with a fusion center to generate a final estimate. Due to power and communication bandwidth limitations, each sensor node must compress its data before transmitting to the fusion center. In this paper, both centralized and decentralized estimation frameworks are developed. A closed-form solution for the centralized estimation framework is proposed. The decentralized estimation problem is proven to be NP-hard, and a Gauss-Seidel algorithm to search for an optimal solution is also proposed. Simulation results show the good performance of the proposed algorithms.

  20. Toward a mathematical theory of environmental monitoring: the infrequent sampling problem

    International Nuclear Information System (INIS)

    Pimentel, K.D.

    1975-06-01

    Optimal monitoring of pollutants in diffusive environmental media was studied in the context of two subproblems: the optimal design and the optimal management of environmental monitors, subject to bounds on the maximum allowable error in the estimate of the monitor state or output variables. Concise problem statements were made. Continuous-time finite-dimensional normal mode models for distributed stochastic diffusive pollutant transport were developed. The resultant set of state equations was discretized in time for implementation in the Kalman Filter for the problem of optimal state estimation. The main results of this thesis concern the special class of optimal monitoring problems called the infrequent sampling problem. Extensions were made to systems including pollutant scavenging and systems with emission or radiation boundary conditions. (U.S.)

  1. A predictor-corrector algorithm to estimate the fractional flow in oil-water models

    International Nuclear Information System (INIS)

    Savioli, Gabriela B; Berdaguer, Elena M Fernandez

    2008-01-01

    We introduce a predictor-corrector algorithm to estimate parameters in a nonlinear hyperbolic problem. It can be used to estimate the oil fractional-flow function from the Buckley-Leverett equation. The forward model is non-linear: the sought-for parameter is a function of the solution of the equation. Traditionally, the estimation of functions requires the selection of a fitting parametric model. The algorithm that we develop does not require a predetermined parametric model; therefore, the estimation problem is carried out over a set of parameters which are functions. The algorithm is based on the linearization of the parameter-to-output mapping. This technique is new in the field of nonlinear estimation and has the advantage of laying aside parametric models. The algorithm is iterative and of predictor-corrector type. We present theoretical results on the inverse problem and use synthetic data to test the new algorithm.

  2. Wave Velocity Estimation in Heterogeneous Media

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, modulating functions-based method is proposed for estimating space-time dependent unknown velocity in the wave equation. The proposed method simplifies the identification problem into a system of linear algebraic equations. Numerical

  3. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  4. The complexity of computing the MCD-estimator

    DEFF Research Database (Denmark)

    Bernholt, T.; Fischer, Paul

    2004-01-01

    In modern statistics the robust estimation of parameters is a central problem, i.e., an estimation that is not or only slightly affected by outliers in the data. The minimum covariance determinant (MCD) estimator (J. Amer. Statist. Assoc. 79 (1984) 871) is probably one of the most important robust estimators of location and scatter. The complexity of computing the MCD, however, was unknown and generally thought to be exponential even if the dimensionality of the data is fixed. Here we present a polynomial time algorithm for MCD for fixed dimension of the data. In contrast, we show that computing the MCD-estimator is NP-hard if the dimension varies. (C) 2004 Elsevier B.V. All rights reserved.

  5. Initial and final estimates of the Bilinear seasonal time series model ...

    African Journals Online (AJOL)

    In getting the estimates of the parameters of this model, special attention was paid to the problem of having good initial estimates, as it is proposed that with good initial values of the parameters the estimates obtained by the Newton-Raphson iterative technique usually not only converge but are also good estimates.

  6. Fiber Orientation Estimation Guided by a Deep Network.

    Science.gov (United States)

    Ye, Chuyang; Prince, Jerry L

    2017-09-01

    Diffusion magnetic resonance imaging (dMRI) is currently the only tool for noninvasively imaging the brain's white matter tracts. The fiber orientation (FO) is a key feature computed from dMRI for tract reconstruction. Because the number of FOs in a voxel is usually small, dictionary-based sparse reconstruction has been used to estimate FOs. However, accurate estimation of complex FO configurations in the presence of noise can still be challenging. In this work we explore the use of a deep network for FO estimation in a dictionary-based framework and propose an algorithm named Fiber Orientation Reconstruction guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a smaller dictionary encoding coarse basis FOs to represent diffusion signals. To estimate the mixture fractions of the dictionary atoms, a deep network is designed to solve the sparse reconstruction problem. Second, the coarse FOs inform the final FO estimation, where a larger dictionary encoding a dense basis of FOs is used and a weighted ℓ1-norm regularized least squares problem is solved to encourage FOs that are consistent with the network output. FORDN was evaluated and compared with state-of-the-art algorithms that estimate FOs using sparse reconstruction on simulated and typical clinical dMRI data. The results demonstrate the benefit of using a deep network for FO estimation.

  7. Estimating costs in the economic evaluation of medical technologies.

    Science.gov (United States)

    Luce, B R; Elixhauser, A

    1990-01-01

    The complexities and nuances of evaluating the costs associated with providing medical technologies are often underestimated by analysts engaged in economic evaluations. This article describes the theoretical underpinnings of cost estimation, emphasizing the importance of accounting for opportunity costs and marginal costs. The various types of costs that should be considered in an analysis are described; a listing of specific cost elements may provide a helpful guide to analysis. The process of identifying and estimating costs is detailed, and practical recommendations for handling the challenges of cost estimation are provided. The roles of sensitivity analysis and discounting are characterized, as are determinants of the types of costs to include in an analysis. Finally, common problems facing the analyst are enumerated with suggestions for managing these problems.

  8. Asteroid mass estimation using Markov-chain Monte Carlo

    Science.gov (United States)

    Siltala, Lauri; Granvik, Mikael

    2017-11-01

    Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to an inverse problem in at least 13 dimensions where the aim is to derive the mass of the perturbing asteroid(s) and six orbital elements for both the perturbing asteroid(s) and the test asteroid(s) based on astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching' approximation, in which the asteroids' orbital elements are not fitted, thereby reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms with particular focus on the MCMC algorithm, and present example results using both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
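
    The MCMC machinery can be illustrated with a random-walk Metropolis sampler. The real problem is at least 13-dimensional with an orbit-integration likelihood; the sketch below is a generic sampler on a made-up one-dimensional "mass" posterior, showing only the accept/reject mechanics.

      import numpy as np

      def metropolis(log_post, theta0, n_steps, step, seed=0):
          """Random-walk Metropolis: propose a Gaussian step, accept with
          probability min(1, posterior ratio)."""
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          lp = log_post(theta)
          chain = np.empty((n_steps, theta.size))
          for i in range(n_steps):
              prop = theta + step * rng.standard_normal(theta.shape)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
              chain[i] = theta
          return chain

      # Toy posterior: Gaussian in the perturber's mass (arbitrary units).
      chain = metropolis(lambda m: -0.5 * ((m[0] - 4.7) / 0.3) ** 2,
                         [1.0], 5000, step=0.2)
      print(chain[1000:].mean(), chain[1000:].std())   # about 4.7 and 0.3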

  9. Parameter estimation of Lorenz chaotic system using a hybrid swarm intelligence algorithm

    International Nuclear Information System (INIS)

    Lazzús, Juan A.; Rivera, Marco; López-Caraballo, Carlos H.

    2016-01-01

    A novel hybrid swarm intelligence algorithm for chaotic system parameter estimation is presented. For this purpose, parameter estimation on Lorenz systems is formulated as a multidimensional problem, and a hybrid approach based on particle swarm optimization with ant colony optimization (PSO–ACO) is implemented to solve this problem. Firstly, the performance of the proposed PSO–ACO algorithm is tested on a set of three representative benchmark functions, and the impact of the parameter settings on PSO–ACO efficiency is studied. Secondly, the parameter estimation is converted into an optimization problem on a three-dimensional Lorenz system. Numerical simulations on the Lorenz model and comparisons with results obtained by other algorithms showed that PSO–ACO is a very powerful tool for parameter estimation with high accuracy and low deviations. - Highlights: • PSO–ACO combined particle swarm optimization with ant colony optimization. • This study is the first research of PSO–ACO to estimate parameters of chaotic systems. • The PSO–ACO algorithm can identify the parameters of the three-dimensional Lorenz system with low deviations. • PSO–ACO is a very powerful tool for parameter estimation on other chaotic systems.
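
    A stripped-down version of the estimation loop makes the formulation concrete: simulate the Lorenz system for candidate parameters, score the misfit against data, and let a swarm search the parameter box. The toy sketch below uses plain PSO only (no ACO hybridization) with forward-Euler integration, so it is a simplified stand-in for the PSO–ACO method, not a reproduction of it.

      import numpy as np

      def lorenz_traj(theta, x0=(1.0, 1.0, 1.0), dt=0.01, n=200):
          s, r, b = theta                       # sigma, rho, beta
          x = np.array(x0)
          out = np.empty((n, 3))
          for i in range(n):                    # forward Euler integration
              x = x + dt * np.array([s * (x[1] - x[0]),
                                     x[0] * (r - x[2]) - x[1],
                                     x[0] * x[1] - b * x[2]])
              out[i] = x
          return out

      data = lorenz_traj((10.0, 28.0, 8.0 / 3.0))        # synthetic "measurements"
      cost = lambda th: np.mean((lorenz_traj(th) - data) ** 2)

      rng = np.random.default_rng(0)
      lo, hi = np.array([0.0, 0.0, 0.0]), np.array([20.0, 50.0, 10.0])
      pos = rng.uniform(lo, hi, (30, 3))                 # 30 particles
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[pbest_f.argmin()].copy()
      for _ in range(200):
          r1, r2 = rng.random((2, 30, 3))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          f = np.array([cost(p) for p in pos])
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = pos[improved], f[improved]
          gbest = pbest[pbest_f.argmin()].copy()
      print(gbest)                                       # best parameters found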

  11. Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2005-12-01

    Full Text Available In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms are proved to be very suitable for the optimization of highly non-linear problems with many variables. Genetic algorithms can guarantee global optimality and robustness. These facts make them advantageous for the parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged very closely to the same cost value, but the modified algorithm is several times faster than the other two.

  12. Global existence and decay of solutions of the Cauchy problem in thermoelasticity with second sound

    KAUST Repository

    Kasimov, Aslan R.; Racke, Reinhard; Said-Houari, Belkacem

    2013-01-01

    We consider the one-dimensional Cauchy problem in non-linear thermoelasticity with second sound, where the heat conduction is modelled by Cattaneo's law. After presenting decay estimates for solutions to the linearized problem, including refined estimates for data in weighted Lebesgue-spaces, we prove a global existence theorem for small data together with improved decay estimates, in particular for derivatives of the solutions. © 2013 Taylor & Francis.

  13. Considering a non-polynomial basis for local kernel regression problem

    Science.gov (United States)

    Silalahi, Divo Dharma; Midi, Habshah

    2017-01-01

    A commonly used solution for the local kernel nonparametric regression problem is polynomial regression. In this study, we demonstrate the estimator and its properties using the maximum likelihood estimator for a non-polynomial basis, such as B-splines, replacing the polynomial basis. This estimator allows flexibility in the selection of a bandwidth and a knot. The best estimator is selected by finding an optimal bandwidth and knot through minimizing the well-known generalized validation function.

  14. Analog fault diagnosis by inverse problem technique

    KAUST Repository

    Ahmed, Rania F.

    2011-12-01

    A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and thereby to detect and diagnose a single fault in analog circuits. The validation of the algorithm is illustrated by applying it to a Sallen-Key second-order band-pass filter; the results show that the detection efficiency was 100% and the maximum percentage error in estimating the parameter values was 0.7%. This technique can be applied to any other linear circuit, and it can also be extended to non-linear circuits. © 2011 IEEE.

  15. Nonclassical Problem for Ultraparabolic Equation in Abstract Spaces

    Directory of Open Access Journals (Sweden)

    Gia Avalishvili

    2016-01-01

    Full Text Available A nonclassical problem for an ultraparabolic equation with a nonlocal initial condition with respect to one time variable is studied in abstract Hilbert spaces. We define the space of square integrable vector-functions with values in Hilbert spaces corresponding to the variational formulation of the nonlocal problem for the ultraparabolic equation and prove a trace theorem, which allows one to interpret the initial conditions of the nonlocal problem. We obtain suitable a priori estimates and prove the existence and uniqueness of a solution of the nonclassical problem, as well as continuous dependence of the solution to the nonlocal problem upon the data. We consider an application of the obtained abstract results to a nonlocal problem for an ultraparabolic partial differential equation with a second-order elliptic operator and obtain a well-posedness result in Sobolev spaces.

  16. Marginal estimator for the aberrations of a space telescope by phase diversity

    Science.gov (United States)

    Blanc, Amandine; Mugnier, Laurent; Idier, Jérôme

    2017-11-01

    In this communication, we propose a novel method for estimating the aberrations of a space telescope from phase diversity data. The images recorded by such a telescope can be degraded by optical aberrations due to design, fabrication or misalignments. Phase diversity is a technique that allows the estimation of these aberrations. The only estimator found in the relevant literature is based on a joint estimation of the aberrated phase and the observed object. We recall this approach and study the behavior of this joint estimator by means of simulations. We then propose a novel marginal estimator of the phase alone. It is obtained by integrating the observed object out of the problem; indeed, this object is a nuisance parameter in our problem. This drastically reduces the number of unknowns and provides better asymptotic properties. This estimator is implemented and its properties are validated by simulation. Its performance is equal to or even better than that of the joint estimator, for the same computing cost.

  17. Estimation of electricity demand of Iran using two heuristic algorithms

    International Nuclear Information System (INIS)

    Amjadi, M.H.; Nezamabadi-pour, H.; Farsangi, M.M.

    2010-01-01

    This paper deals with estimation of the electricity demand of Iran based on economic indicators, using the Particle Swarm Optimization (PSO) algorithm. The estimation is based on Gross Domestic Product (GDP), population, number of customers and the average electricity price, with two different estimation models: a linear model and a non-linear model. The proposed models are fitted to actual data for the 21 years 1980-2000. The fitted models are then used to estimate the electricity demand for the target period 2001-2006, and the results are compared with the actual demand over this period. Furthermore, to validate the results obtained by PSO, a genetic algorithm (GA) is applied to solve the same problem. The results show that PSO is a useful optimization tool for solving the problem with the two developed models and can be used as an alternative for estimating future electricity demand.
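
    A minimal global-best PSO for a linear demand model of the kind described, fitted to synthetic stand-in data; the indicator values, the coefficients and the PSO constants below are invented, and the paper's actual models use more indicators and real data.

```python
# Toy PSO fit of a linear model y = w0 + w1*GDP + w2*population.
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(21),
                     rng.uniform(1.0, 10.0, 21),     # "GDP"
                     rng.uniform(30.0, 70.0, 21)])   # "population"
y = X @ np.array([5.0, 2.0, 0.5]) + rng.normal(0.0, 0.5, 21)

def sse(w):
    r = X @ w - y
    return r @ r                     # sum of squared errors as fitness

n_particles, dim, iters = 30, 3, 200
pos = rng.uniform(-10.0, 10.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_f = np.array([sse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([sse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("PSO estimate of (w0, w1, w2):", gbest)
```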

  18. Reducing Inventory System Costs by Using Robust Demand Estimators

    OpenAIRE

    Raymond A. Jacobs; Harvey M. Wagner

    1989-01-01

    Applications of inventory theory typically use historical data to estimate demand distribution parameters. Imprecise knowledge of the demand distribution adds to the usual replenishment costs associated with stochastic demands. Only limited research has been directed at the problem of choosing cost-effective statistical procedures for estimating these parameters. Available theoretical findings on estimating the demand parameters for (s, S) inventory replenishment policies are limited by their...

  19. Variational multi-valued velocity field estimation for transparent sequences

    DEFF Research Database (Denmark)

    Ramírez-Manzanares, Alonso; Rivera, Mariano; Kornprobst, Pierre

    2011-01-01

    Motion estimation in sequences with transparencies is an important problem in robotics and medical imaging applications. In this work we propose a variational approach for estimating multi-valued velocity fields in transparent sequences. Starting from existing local motion estimators, we derive a variational model for integrating such local information in space and time in order to obtain a robust estimation of the multi-valued velocity field. With this approach, we can indeed estimate multi-valued velocity fields which are not necessarily piecewise constant on a layer – each layer can evolve...

  20. The Numerical Solution of the Equilibrium Problem for a Stretchable Elastic Beam

    Science.gov (United States)

    Mehdiyeva, G. Y.; Aliyev, A. Y.

    2017-08-01

    The boundary value problem under consideration describes the equilibrium of an elastic beam that is stretched or contracted by specified forces. The left end of the beam is free of load, and the right end is rigidly clamped. To solve the problem numerically, an appropriate difference problem is constructed; solving it yields an approximate solution, and we estimate the accuracy of this approximate solution.
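
    Since the abstract does not reproduce the beam equation, the sketch below discretizes a model problem of the same shape, -u'' + c*u = f on (0, 1) with a load-free (Neumann) left end and a clamped (Dirichlet) right end, by standard central differences; it only illustrates how such a difference problem is assembled and solved.

```python
# Finite-difference solution of a stand-in two-point boundary value
# problem: -u'' + c*u = f, u'(0) = 0 (free end), u(1) = 0 (clamped end).
import numpy as np

n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
c = 1.0
f = np.sin(np.pi * x)

A = np.zeros((n + 1, n + 1))
b = f.copy()
# Neumann condition u'(0) = 0 via a ghost point u_{-1} = u_1.
A[0, 0] = 2.0 / h**2 + c
A[0, 1] = -2.0 / h**2
for i in range(1, n):
    A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2 + c
# Dirichlet condition u(1) = 0.
A[n, n] = 1.0
b[n] = 0.0

u = np.linalg.solve(A, b)
print("approximate u(0):", u[0])
```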

  1. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
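
    The computational payoff of the "diagonal plus low-rank" assumption can be made concrete: if Omega = D + U U^T with U of rank r much smaller than n, then Omega^{-1} v follows from the Woodbury identity at the cost of an r-by-r solve. A small numerical check, with made-up D and U rather than the COP estimator itself:

```python
# Woodbury identity: (D + U U^T)^{-1} v computed without any n-by-n inverse.
import numpy as np

rng = np.random.default_rng(3)
n, r = 2000, 5
d = rng.uniform(1.0, 2.0, n)        # diagonal part D (stored as a vector)
U = rng.normal(0.0, 1.0, (n, r))    # low-rank factor

def woodbury_solve(v):
    # (D + U U^T)^{-1} v = D^{-1}v - D^{-1}U (I + U^T D^{-1} U)^{-1} U^T D^{-1}v
    Dinv_v = v / d
    Dinv_U = U / d[:, None]
    small = np.linalg.solve(np.eye(r) + U.T @ Dinv_U, U.T @ Dinv_v)
    return Dinv_v - Dinv_U @ small

v = rng.normal(0.0, 1.0, n)
direct = np.linalg.solve(np.diag(d) + U @ U.T, v)
print("agrees with direct solve:", np.allclose(direct, woodbury_solve(v)))
```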

  2. ADHD and math - The differential effect on calculation and estimation.

    Science.gov (United States)

    Ganor-Stern, Dana; Steinhorn, Ofir

    2018-05-31

    Adults with ADHD were compared to controls when solving multiplication problems exactly and when estimating the results of multidigit multiplication problems relative to reference numbers. The ADHD participants were slower than controls in the exact calculation and in the estimation tasks, but not less accurate. The ADHD participants were similar to controls in showing enhanced accuracy and speed for smaller problem sizes, for trials in which the reference numbers were smaller (vs. larger) than the exact answers and for reference numbers that were far (vs. close) from the exact answer. The two groups similarly used the approximated calculation and the sense of magnitude strategies. They differed however in strategy execution, mainly of the approximated calculation strategy, which requires working memory resources. The increase in reaction time associated with using the approximated calculation strategy was larger for the ADHD compared to the control participants. Thus, ADHD seems to selectively impair calculation processes in estimation tasks that rely on working memory, but it does not hamper estimation skills that are based on sense of magnitude. The educational implications of these findings are discussed. Copyright © 2018. Published by Elsevier B.V.

  3. Study on Posture Estimation Using Delayed Measurements for Mobile Robots

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    When fusing data from various sensors to estimate the posture of a mobile robot, a crucial problem is that some measurements may arrive with a delay, while the standard multi-sensor data fusion algorithm, the Kalman filter, does not account for such delays. To handle delayed measurements, this paper investigates a Kalman filter modified to account for the delays. Based on interpolating the measurements, a fusion system is applied to estimate the posture of a mobile robot, fusing data from the encoder and a laser global positioning system with the extended Kalman filter algorithm. Finally, a posture estimation experiment with the mobile robot verifies the feasibility and efficiency of the algorithm.
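
    One simple baseline for the delayed-measurement issue, sketched for a scalar random-walk model: buffer the measurements by timestamp and, when a late one arrives, re-run the filter over the affected window. The paper's modified filter based on interpolating measurements is more refined than this reprocessing strategy; the sketch only frames the problem.

```python
# Scalar Kalman filter with a late measurement handled by reprocessing.
import numpy as np

q, r = 0.01, 0.1      # assumed process / measurement noise variances

def kf_run(z_seq, x0=0.0, p0=1.0):
    x, p = x0, p0
    for z in z_seq:
        p += q                       # predict step (random-walk model)
        if z is not None:            # update only where data exists
            k = p / (p + r)
            x += k * (z - x)
            p *= 1.0 - k
    return x, p

# Measurements for t = 0..4; the one for t = 2 is delayed.
buffer = [0.10, 0.05, None, 0.20, 0.15]
print("estimate before late data:", kf_run(buffer))
buffer[2] = 0.12                     # delayed measurement finally arrives
print("estimate after re-filtering:", kf_run(buffer))
```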

  4. NEWBOX: A computer program for parameter estimation in diffusion problems

    International Nuclear Information System (INIS)

    Nestor, C.W. Jr.; Godbee, H.W.; Joy, D.S.

    1989-01-01

    In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least 3 important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, the authors calculate the fraction of material transferred as a function of time from expressions obtained by the inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the 3 considerations listed at the outset, they developed a computer program NEWBOX, usable on a personal computer, to calculate the fractional release of material from 4 different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions.
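
    A toy version of the estimation step: fitting a diffusion coefficient D to fractional-release data with a derivative-free simplex search, keeping D positive by searching over log D. This uses the Nelder-Mead simplex as a stand-in for Box's constrained variant, and the slab geometry, early-time release formula and all numbers are assumptions for the illustration.

```python
# Derivative-free fit of a diffusion coefficient with a positivity
# constraint enforced through a log-parameterization.
import numpy as np
from scipy.optimize import minimize

L = 0.01                                   # assumed slab thickness (m)
t = np.linspace(100.0, 5000.0, 25)         # measurement times (s)

def frac_release(D, t):
    # Early-time approximation for fractional release from a slab.
    return 2.0 * np.sqrt(D * t / np.pi) / L

rng = np.random.default_rng(4)
data = frac_release(1e-11, t) * (1.0 + rng.normal(0.0, 0.05, t.size))

res = minimize(lambda logD: np.sum((frac_release(np.exp(logD), t) - data) ** 2),
               x0=np.log(1e-10), method="Nelder-Mead")
print("estimated D:", np.exp(res.x[0]), "m^2/s")
```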

  5. Adaptive nonparametric estimation for Lévy processes observed at low frequency

    OpenAIRE

    Kappus, Johanna

    2013-01-01

    This article deals with adaptive nonparametric estimation for Lévy processes observed at low frequency. For general linear functionals of the Lévy measure, we construct kernel estimators, provide upper risk bounds and derive rates of convergence under regularity assumptions. Our focus lies on the adaptive choice of the bandwidth, using model selection techniques. We face here a non-standard problem of model selection with unknown variance. A new approach towards this problem is proposed, ...

  6. Young Christians in Norway, national socialism, and the German ...

    African Journals Online (AJOL)

    The German occupation of Norway during the Second World War caused unprecedented problems for the Evangelical Lutheran Church of Norway and other Christian denominations. The subordination of the church to the de facto Nazi state eventually led its bishops and most of its pastors to sever their ties to the ...

  7. Prevalence and detection of psychosocial problems in cancer genetic counseling

    NARCIS (Netherlands)

    Eijzenga, W.; Bleiker, E.M.A.; Hahn, D.E.E.; van der Kolk, L.E.; Sidharta, G.N.; Aaronson, N.K.

    2015-01-01

    Only a minority of individuals who undergo cancer genetic counseling experience heightened levels of psychological distress, but many more experience a range of cancer genetic-specific psychosocial problems. The aim of this study was to estimate the prevalence of such psychosocial problems, and to

  8. Direction-of-Arrival Estimation with Coarray ESPRIT for Coprime Array.

    Science.gov (United States)

    Zhou, Chengwei; Zhou, Jinfang

    2017-08-03

    A coprime array is capable of achieving more degrees-of-freedom (DOF) for direction-of-arrival (DOA) estimation than a uniform linear array when utilizing the same number of sensors. However, existing algorithms exploiting coprime arrays usually adopt predefined spatial sampling grids for the optimization problem design or include a spectrum peak search process for DOA estimation, resulting in a trade-off between estimation performance and computational complexity. To address this problem, we introduce the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) to the coprime coarray domain, and propose a novel coarray ESPRIT-based DOA estimation algorithm to efficiently retrieve the off-grid DOAs. Specifically, the coprime coarray statistics are derived according to the received signals from a coprime array to ensure the DOF superiority, where a pair of shift-invariant uniform linear subarrays is extracted. The rotational invariance of the signal subspaces corresponding to the underlying subarrays is then investigated based on the coprime coarray covariance matrix, and the incorporation of ESPRIT in the coarray domain makes it feasible to formulate a closed-form solution for DOA estimation. Theoretical analyses and simulation results verify the efficiency and the effectiveness of the proposed DOA estimation algorithm.
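
    The rotational-invariance step at the heart of ESPRIT can be sketched for a plain half-wavelength ULA; the coarray construction that yields the DOF gain is omitted, and the scenario below (8 sensors, 2 sources, made-up angles and noise level) is purely illustrative.

```python
# Standard ESPRIT on a uniform linear array (not the coarray variant).
import numpy as np

rng = np.random.default_rng(5)
M, K, T = 8, 2, 500                         # sensors, sources, snapshots
doas = np.deg2rad([-10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(doas)))
S = (rng.normal(size=(K, T)) + 1j * rng.normal(size=(K, T))) / np.sqrt(2)
N = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
X = A @ S + N

R = X @ X.conj().T / T                      # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)
Es = eigvecs[:, -K:]                        # signal subspace
# Shift invariance of the two overlapping subarrays: Es[1:] = Es[:-1] @ Phi.
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
est = np.rad2deg(np.arcsin(np.angle(np.linalg.eigvals(Phi)) / np.pi))
print("estimated DOAs (deg):", np.sort(est))
```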

  9. Global existence and decay of solutions of the Cauchy problem in thermoelasticity with second sound

    KAUST Repository

    Kasimov, Aslan R.

    2013-06-04

    We consider the one-dimensional Cauchy problem in non-linear thermoelasticity with second sound, where the heat conduction is modelled by Cattaneo's law. After presenting decay estimates for solutions to the linearized problem, including refined estimates for data in weighted Lebesgue spaces, we prove a global existence theorem for small data together with improved decay estimates, in particular for derivatives of the solutions. © 2013 Taylor & Francis.

  10. Inverse radiative transfer problems in two-dimensional heterogeneous media

    International Nuclear Information System (INIS)

    Tito, Mariella Janette Berrocal

    2001-01-01

    The analysis of inverse problems in participating media where emission, absorption and scattering take place has several relevant applications in engineering and medicine. Some of the techniques developed for the solution of inverse problems have as a first step the solution of the direct problem. In this work the discrete ordinates method has been used for the solution of the linearized Boltzmann equation in two dimensional cartesian geometry. The Levenberg - Marquardt method has been used for the solution of the inverse problem of internal source and absorption and scattering coefficient estimation. (author)

  11. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.

    2012-01-01

    We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.

  12. Steroid radioimmunoassays. The problems of blanks

    Energy Technology Data Exchange (ETDEWEB)

    Shinde, Y; Hacker, R R; Ntunde, B; Smith, V G [Guelph Univ., Ontario (Canada). Dept. of Animal and Poultry Science

    1981-06-01

    An estrogen radioimmunoassay was used to study the problem of blanks in steroid assays. Negligible binding (1.5 percent) in the non-antibody tubes prevailed throughout the study. The assay was validated using accepted procedures. Both water and solvent blanks had estrogen concentrations of 7-9 pg/tube. However, neither water nor solvent blanks showed a dose-related response, indicating that they were 'real' blanks. Exogenous estradiol, when added to water and solvent in quantities less than the estimated blank, was not quantitatively recovered. However, exogenous estradiol added to the water solvent in quantities greater than the blank estimate was quantitatively recovered. The sensitivity of the reference standard curve was 6-10 pg/tube, approximately the same as the blank estimate. These results indicated that the estimates of water and solvent blanks were measures of the assay sensitivity. In such circumstances, it is suggested that blank estimates should not be subtracted from sample values. If the blank estimates are high, attention should be directed towards improving the sensitivity of the assay.

  13. The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators

    Science.gov (United States)

    Ahmedov, Anvarjon

    2018-03-01

    In the present research we investigate problems concerning the almost everywhere convergence of multiple Fourier series summed over the elliptic levels in the classes of Liouville. Sufficient conditions for almost everywhere convergence, among the most difficult problems in harmonic analysis, are obtained. The methods of approximation by multiple Fourier series summed over elliptic levels are applied to obtain suitable estimates for the maximal operator of the spectral decompositions. Obtaining such estimates involves complicated calculations that depend on the functional structure of the classes of functions. The main idea in proving the almost everywhere convergence of the eigenfunction expansions in the interpolation spaces is to estimate the maximal operator of the partial sums in the boundary classes and to apply the interpolation theorem for a family of linear operators. In the present work the maximal operator of the elliptic partial sums is estimated in the interpolation classes of Liouville, and the almost everywhere convergence of multiple Fourier series under elliptic summation methods is established. Viewing multiple Fourier series as eigenfunction expansions of differential operators helps to translate functional properties (for example, smoothness) of the Liouville classes into properties of the Fourier coefficients of the functions being expanded. Sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of smoothness and dimension. Such results are highly effective in solving boundary problems with periodic boundary conditions occurring in the spectral theory of differential operators. The investigation of multiple Fourier series by modern methods of harmonic analysis incorporates the wide use of methods from functional analysis, mathematical physics, modern operator theory and spectral

  14. Quasihomogeneous function method and Fock's problem

    International Nuclear Information System (INIS)

    Smyshlyaev, V.P.

    1987-01-01

    The diffraction of a high-frequency wave by a smooth convex body near the tangency point of the limiting ray to the surface is restated as a scattering problem for the Schrödinger equation with a linear potential on a half-axis. Various a priori estimates for the scattering problem are used in order to prove existence, uniqueness, and smoothness theorems. The corresponding solution satisfies the principle of limiting absorption. The formal solution of the corresponding Schrödinger equation in the form of quasihomogeneous functions is used in an essential way in these constructions.

  15. La Lectoescritura Y Su Incidencia En El Desarrollo De Habilidades Y Destrezas Cognoscitivas En Los Estudiantes De La Escuela “Zoila Ugarte De Landívar”, Recinto La Carmela, Cantón Baba, Provincia De Los Ríos.

    OpenAIRE

    Arcentales Vinces Odalia Francisca

    2015-01-01

    This research addresses literacy and its incidence on the development of cognitive skills and abilities in students of the "Zoila Ugarte de Landívar" School, with language as the main vehicle by which thought is transmitted and the human need to communicate with others is satisfied. Among the problems observed is that the students have not successfully completed the integration of visual, auditory and motor functions; the study therefore examines how this affects reading and writing skills...

  16. Nondestructive, stereological estimation of canopy surface area

    DEFF Research Database (Denmark)

    Wulfsohn, Dvora-Laio; Sciortino, Marco; Aaslyng, Jesper M.

    2010-01-01

    We describe a stereological procedure to estimate the total leaf surface area of a plant canopy in vivo, and address the problem of how to predict the variance of the corresponding estimator. The procedure involves three nested systematic uniform random sampling stages: (i) selection of plants from a canopy using the smooth fractionator, (ii) sampling of leaves from the selected plants using the fractionator, and (iii) area estimation of the sampled leaves using point counting. We apply this procedure to estimate the total area of a chrysanthemum (Chrysanthemum morifolium L.) canopy and evaluate both the time required and the precision of the estimator. Furthermore, we compare the precision of point counting for three different grid intensities with that of several standard leaf area measurement techniques. Results showed that the precision of the plant leaf area estimator based on point counting...

  17. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

    After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...

  18. Closed-Loop Surface Related Multiple Estimation

    NARCIS (Netherlands)

    Lopez Angarita, G.A.

    2016-01-01

    Surface-related multiple elimination (SRME) is one of the most commonly used methods for suppressing surface multiples. However, in order to obtain an accurate surface multiple estimation, dense source and receiver sampling is required. The traditional approach to this problem is performing data

  19. Multi-pitch Estimation using Semidefinite Programming

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Vandenberghe, Lieven

    2017-01-01

    Multi-pitch estimation concerns the problem of estimating the fundamental frequencies (pitches) and amplitudes/phases of multiple superimposed harmonic signals, with applications in music, speech, vibration analysis, etc. In this paper we formulate a complex-valued multi-pitch estimator via a semidefinite programming representation of an atomic decomposition over a continuous dictionary of complex exponentials, and extend this to real-valued data via a real semidefinite program with the same dimensions (i.e. half the size). We further impose a continuous frequency constraint, naturally occurring from assuming a Nyquist-sampled signal, by adding an additional semidefinite constraint. We show that the proposed estimator has superior performance compared to state-of-the-art methods for separating two closely spaced fundamentals and approximately achieves the asymptotic Cramér-Rao lower bound.

  20. Inverse problem of estimating transient heat transfer rate on external wall of forced convection pipe

    International Nuclear Information System (INIS)

    Chen, W.-L.; Yang, Y.-C.; Chang, W.-J.; Lee, H.-L.

    2008-01-01

    In this study, a conjugate gradient method based inverse algorithm is applied to estimate the unknown space and time dependent heat transfer rate on the external wall of a pipe system using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown heat transfer rate; hence, the procedure is classified as function estimation in the inverse calculation. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the space and time dependent heat transfer rate can be obtained for the test case considered in this study

  1. Propagation of uncertainties in problems of structural reliability

    International Nuclear Information System (INIS)

    Mazumdar, M.; Marshall, J.A.; Chay, S.C.

    1978-01-01

    The problem of controlling a variable Y such that the probability of its exceeding a specified design limit L is very small is treated. This variable is related to a set of random variables X_i by means of a known function Y = f(X_i). The following approximate methods are considered for estimating the propagation of error in the X_i through the function f(·): linearization; the method of moments; Monte Carlo methods; numerical integration. Response surface and associated design-of-experiments problems, as well as statistical inference problems, are discussed. (Auth.)
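
    Two of the four listed approaches, sketched side by side for an invented function f and independent normal inputs: first-order linearization with finite-difference derivatives, and plain Monte Carlo sampling.

```python
# Variance of Y = f(X1, X2): linearization versus Monte Carlo.
import numpy as np

def f(x1, x2):
    return x1 * np.exp(0.1 * x2)

mu = np.array([10.0, 2.0])
sigma = np.array([0.5, 0.3])       # assumed independent input std. deviations

# Linearization: Var(Y) ~ sum_i (df/dx_i)^2 Var(X_i) at the mean point.
h = 1e-6
g = np.array([(f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2 * h),
              (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2 * h)])
var_lin = np.sum((g * sigma) ** 2)

# Monte Carlo: sample the inputs and take the sample variance of f.
rng = np.random.default_rng(6)
xs = rng.normal(mu, sigma, size=(100_000, 2))
var_mc = f(xs[:, 0], xs[:, 1]).var()

print("linearization:", var_lin, " Monte Carlo:", var_mc)
```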

  2. Determination of scaling factors to estimate the radionuclide inventory in waste with low and intermediate-level activity from the IEA-R1 reactor

    International Nuclear Information System (INIS)

    Taddei, Maria Helena Tirollo

    2013-01-01

    Regulations regarding transfer and final disposal of radioactive waste require that the inventory of radionuclides for each container enclosing such waste must be estimated and declared. The regulatory limits are established as a function of the annual radiation doses that members of the public could be exposed to from the radioactive waste repository, which mainly depend on the activity concentration of radionuclides, given in Bq/g, found in each waste container. Most of the radionuclides that emit gamma-rays can have their activity concentrations determined straightforwardly by measurements carried out externally to the containers. However, radionuclides that emit exclusively alpha or beta particles, as well as gamma-rays or X-rays with low energy and low absolute emission intensity, or whose activity is very low among the radioactive waste, are generically designated as Difficult to Measure Nuclides (DTMs). The activity concentrations of these DTMs are determined by means of complex radiochemical procedures that involve isolating the chemical species being studied from the interference in the waste matrix. Moreover, samples must be collected from each container in order to perform the analyses inherent to the radiochemical procedures, which exposes operators to high levels of radiation and is very costly because of the large number of radioactive waste containers that need to be characterized at a nuclear facility. An alternative methodology to approach this problem consists in obtaining empirical correlations between some radionuclides that can be measured directly – such as ⁶⁰Co and ¹³⁷Cs, therefore designated as Key Nuclides (KNs) – and the DTMs. This methodology, denominated Scaling Factor, was applied in the scope of the present work in order to obtain Scaling Factors or Correlation Functions for the most important radioactive wastes with low and intermediate-activity level from the IEA-R1 nuclear research reactor. (author)

  3. Beyond Self-Report: Tools to Compare Estimated and Real-World Smartphone Use

    Science.gov (United States)

    Andrews, Sally; Ellis, David A.; Shaw, Heather; Piwek, Lukasz

    2015-01-01

    Psychologists typically rely on self-report data when quantifying mobile phone usage, despite little evidence of its validity. In this paper we explore the accuracy of using self-reported estimates when compared with actual smartphone use. We also include source code to process and visualise these data. We compared 23 participants’ actual smartphone use over a two-week period with self-reported estimates and the Mobile Phone Problem Use Scale. Our results indicate that estimated time spent using a smartphone may be an adequate measure of use, unless a greater resolution of data is required. Estimates concerning the number of times an individual used their phone across a typical day did not correlate with actual smartphone use. Neither estimated duration nor number of uses correlated with the Mobile Phone Problem Use Scale. We conclude that estimated smartphone use should be interpreted with caution in psychological research. PMID:26509895

  4. Parameter estimation in plasmonic QED

    Science.gov (United States)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, so its vanishing is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail; in particular, we show that using a two-qubit probe enhances the precision of the estimation considerably, at any time, in comparison with one-qubit estimation.

  5. Combining four Monte Carlo estimators for radiation momentum deposition

    International Nuclear Information System (INIS)

    Hykes, Joshua M.; Urbatsch, Todd J.

    2011-01-01

    Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the FOM of the combined estimator is only a few percent higher than the FOM of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10 - 20% greater than any of the solo estimators' FOM. The numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions. (author)
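
    A common baseline for pooling several unbiased estimators of the same quantity is inverse-variance weighting, sketched below under an independence assumption; the paper's combined estimator must deal with the correlations that arise because all four estimators are scored on the same particle histories, which this sketch ignores. The numbers are illustrative only.

```python
# Inverse-variance combination of independent unbiased estimators.
import numpy as np

def combine(estimates, variances):
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)          # normalized inverse-variance weights
    combined = np.dot(w, estimates)
    combined_var = 1.0 / np.sum(1.0 / v)     # variance of the weighted mean
    return combined, combined_var

# e.g. analog, absorption, collision, track-length estimates of one tally.
est = [1.020, 0.970, 1.010, 0.995]
var = [0.040, 0.020, 0.015, 0.004]
print(combine(est, var))
```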

  6. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.

  7. On the problems of PPS sampling in multi-character surveys ...

    African Journals Online (AJOL)

    This paper, which is on the problems of PPS sampling in multi-character surveys, compares the efficiency of some estimators used in PPSWR sampling for multiple characteristics. From a superpopulation model, we computed the expected variances of the different estimators for each of the first two finite populations ...

  8. Rigorous home range estimation with movement data: a new autocorrelated kernel density estimator.

    Science.gov (United States)

    Fleming, C H; Fagan, W F; Mueller, T; Olson, K A; Leimgruber, P; Calabrese, J M

    2015-05-01

    Quantifying animals' home ranges is a key problem in ecology and has important conservation and wildlife management applications. Kernel density estimation (KDE) is a workhorse technique for range delineation problems that is both statistically efficient and nonparametric. KDE assumes that the data are independent and identically distributed (IID). However, animal tracking data, which are routinely used as inputs to KDEs, are inherently autocorrelated and violate this key assumption. As we demonstrate, using realistically autocorrelated data in conventional KDEs results in grossly underestimated home ranges. We further show that the performance of conventional KDEs actually degrades as data quality improves, because autocorrelation strength increases as movement paths become more finely resolved. To remedy these flaws with the traditional KDE method, we derive an autocorrelated KDE (AKDE) from first principles to use autocorrelated data, making it perfectly suited for movement data sets. We illustrate the vastly improved performance of AKDE using analytical arguments, relocation data from Mongolian gazelles, and simulations based upon the gazelle's observed movement process. By yielding better minimum area estimates for threatened wildlife populations, we believe that future widespread use of AKDE will have significant impact on ecology and conservation biology.

  9. Recommendations for the tuning of rare event probability estimators

    International Nuclear Information System (INIS)

    Balesdent, Mathieu; Morio, Jérôme; Marzat, Julien

    2015-01-01

    Being able to accurately estimate rare event probabilities is a challenging issue in order to improve the reliability of complex systems. Several powerful methods such as importance sampling, importance splitting or extreme value theory have been proposed in order to reduce the computational cost and to improve the accuracy of extreme probability estimation. However, the performance of these methods is highly correlated with the choice of tuning parameters, which are very difficult to determine. In order to highlight recommended tunings for such methods, an empirical campaign of automatic tuning on a set of representative test cases is conducted for splitting methods. This makes it possible to provide a reduced set of tuning parameters that may lead to reliable estimation of rare event probabilities for various problems. The relevance of the obtained result is assessed on a series of real-world aerospace problems

  10. Adaptive measurement selection for progressive damage estimation

    Science.gov (United States)

    Zhou, Wenfan; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Chattopadhyay, Aditi; Peralta, Pedro

    2011-04-01

    Noise and interference in sensor measurements degrade the quality of data and have a negative impact on the performance of structural damage diagnosis systems. In this paper, a novel adaptive measurement screening approach is presented to automatically select the most informative measurements and use them intelligently for structural damage estimation. The method is implemented efficiently in a sequential Monte Carlo (SMC) setting using particle filtering. The noise suppression and improved damage estimation capability of the proposed method is demonstrated by an application to the problem of estimating progressive fatigue damage in an aluminum compact-tension (CT) sample using noisy PZT sensor measurements.

  11. Comparative demography of an epiphytic lichen: support for general life history patterns and solutions to common problems in demographic parameter estimation.

    Science.gov (United States)

    Shriver, Robert K; Cutler, Kerry; Doak, Daniel F

    2012-09-01

    Lichens are major components in many terrestrial ecosystems, yet their population ecology is at best only poorly understood. Few studies have fully quantified the life history or demographic patterns of any lichen, with particularly little attention to epiphytic species. We conducted a 6-year demographic study of Vulpicida pinastri, an epiphytic foliose lichen, in south-central Alaska. After testing multiple size-structured functions to describe patterns in each V. pinastri demographic rate, we used the resulting estimates to construct a stochastic demographic model for the species. This model development led us to propose solutions to two general problems in construction of demographic models for many taxa: how to simply but accurately characterize highly skewed growth rates, and how to estimate recruitment rates that are exceptionally difficult to directly observe. Our results show that V. pinastri has rapid and variable growth and, for small individuals, low and variable survival, but that these traits are coupled with considerable longevity (e.g., >50 years mean future life span for a 4-cm² thallus) and little deviation of the stochastic population growth rate from the deterministic expectation. Comparisons of the demographic patterns we found with those of other lichen studies suggest that their relatively simple architecture may allow clearer generalities about growth patterns for lichens than for other taxa, and that the expected pattern of faster growth rates for epiphytic species is substantiated.

  12. On the generalization of linear least mean squares estimation to quantum systems with non-commutative outputs

    Energy Technology Data Exchange (ETDEWEB)

    Amini, Nina H. [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); CNRS, Laboratoire des Signaux et Systèmes (L2S) CentraleSupélec, Gif-sur-Yvette (France); Miao, Zibo; Pan, Yu; James, Matthew R. [Australian National University, ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Canberra, ACT (Australia); Mabuchi, Hideo [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States)

    2015-12-15

    The purpose of this paper is to study the problem of generalizing the Belavkin-Kalman filter to the case where the classical measurement signal is replaced by a fully quantum non-commutative output signal. We formulate a least mean squares estimation problem that involves a non-commutative system as the filter processing the non-commutative output signal. We solve this estimation problem within the framework of non-commutative probability. Also, we find the necessary and sufficient conditions which make these non-commutative estimators physically realizable. These conditions are restrictive in practice. (orig.)

  13. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue in focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  14. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows one to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis which permits one to easily obtain mean squ...

  15. Agnostic Estimation of Mean and Covariance

    OpenAIRE

    Lai, Kevin A.; Rao, Anup B.; Vempala, Santosh

    2016-01-01

    We consider the problem of estimating the mean and covariance of a distribution from iid samples in $\mathbb{R}^n$, in the presence of an $\eta$ fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when $\eta$ fraction of data is adversarially corrupted, agn...

  16. Iterative importance sampling algorithms for parameter estimation

    OpenAIRE

    Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

    2016-01-01

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...

  17. Recursive Monte Carlo method for deep-penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.; Greenspan, E.

    1980-01-01

    The Recursive Monte Carlo (RMC) method developed for estimating importance function distributions in deep-penetration problems is described. Unique features of the method, including the ability to infer the importance function distribution pertaining to many detectors from, essentially, a single M.C. run and the ability to use the history tape created for a representative region to calculate the importance function in identical regions, are illustrated. The RMC method is applied to the solution of two realistic deep-penetration problems - a concrete shield problem and a Tokamak major penetration problem. It is found that the RMC method can provide the importance function distributions, required for importance sampling, with accuracy that is suitable for an efficient solution of the deep-penetration problems considered. The use of the RMC method improved, by one to three orders of magnitude, the solution efficiency of the two deep-penetration problems considered: a concrete shield problem and a Tokamak major penetration problem. 8 figures, 4 tables

  18. Efficient Output Solution for Nonlinear Stochastic Optimal Control Problem with Model-Reality Differences

    Directory of Open Access Journals (Sweden)

    Sie Long Kek

    2015-01-01

    Full Text Available A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.

  19. Models of resource allocation optimization when solving the control problems in organizational systems

    Science.gov (United States)

    Menshikh, V.; Samorokovskiy, A.; Avsentev, O.

    2018-03-01

    A mathematical model for optimizing the allocation of resources to reduce the time needed for management decisions is presented. The optimization problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This is a complex three-level combinatorial problem, whose solution requires solving several specific subproblems: estimating the duration of each action as a function of the number of performers in the group that performs it; estimating the total execution time of all actions as a function of the quantitative composition of the groups of performers; and finding a distribution of the available pool of performers over groups that minimizes the total execution time of all actions. In addition, algorithms to solve the general problem of resource allocation are proposed.

  20. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient

  1. A modified procedure for estimating the population mean in two ...

    African Journals Online (AJOL)

    A modified procedure for estimating the population mean in two-occasion successive samplings. Housila Prasad Singh, Suryal Kant Pal. Abstract. This paper addresses the problem of estimating the current population mean in two occasion successive sampling. Utilizing the readily available information on two auxiliary ...

  2. On Estimation of the CES Production Function - Revisited

    DEFF Research Database (Denmark)

    Henningsen, Arne; Henningsen, Geraldine

    2012-01-01

    Estimation of the non-linear Constant Elasticity of Substitution (CES) function is generally considered problematic due to convergence problems and unstable and/or meaningless results. These problems often arise from a non-smooth objective function with large flat areas, the discontinuity of the CES function where the elasticity of substitution is one, and possibly significant rounding errors where the elasticity of substitution is close to one. We suggest three (combinable) solutions that alleviate these problems and improve the reliability and stability of the results.

  3. Birthday and birthmate problems: misconceptions of probability among psychology undergraduates and casino visitors and personnel.

    Science.gov (United States)

    Voracek, Martin; Tran, Ulrich S; Formann, Anton K

    2008-02-01

    Subjective estimates and associated confidence ratings for the solutions of some classic occupancy problems were studied in samples of 721 psychology undergraduates, 39 casino visitors, and 34 casino employees. On tasks varying the classic birthday problem, i.e., the probability P of any birthday coincidence among N individuals, clear majorities of respondents markedly overestimated N, given P, and markedly underestimated P, given N. Respondents did notably better on tasks varying the birthmate problem, i.e., the probability P of the specific coincidence that among N individuals someone has a birthday today. Psychology students and women did better on both task types, but were less confident about their estimates than casino visitors or personnel and men. Several further person variables, such as indicators of topical knowledge and familiarity, were associated with better and more confident performance on birthday problems, but not on birthmate problems. Likewise, higher confidence ratings were related to subjective estimates that were closer to the solutions of birthday problems, but not of birthmate problems. Implications of and possible explanations for these findings, study limitations, directions for further inquiry, and the real-world relevance of ameliorating misconceptions of probability are discussed.
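
    The two quantities the respondents were asked to estimate have exact closed forms (ignoring leap years and birth seasonality), computed below:

```python
# Exact solutions of the birthday and birthmate problems.
import numpy as np

def p_birthday(n):
    # P(at least two of n people share a birthday).
    days = np.arange(365 - n + 1, 366)
    return 1.0 - np.prod(days / 365.0)

def p_birthmate(n):
    # P(at least one of n people has a birthday today).
    return 1.0 - (364.0 / 365.0) ** n

print(p_birthday(23))     # ~0.507: 23 people suffice for an even chance
print(p_birthmate(253))   # ~0.500: an even chance needs about 253 people
```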

  4. Positive solutions for a fourth order boundary value problem

    Directory of Open Access Journals (Sweden)

    Bo Yang

    2005-02-01

    Full Text Available We consider a boundary value problem for the beam equation, in which the boundary conditions mean that the beam is embedded at one end and free at the other end. Some new estimates for the positive solutions of the boundary value problem are obtained. Some sufficient conditions for the existence of at least one positive solution for the boundary value problem are established. An example is given at the end of the paper to illustrate the main results.

  5. A note on the sensitivity of the strategic asset allocation problem

    Directory of Open Access Journals (Sweden)

    W.J. Hurley

    2015-12-01

    Full Text Available The Markowitz mean–variance portfolio optimization problem is a quadratic programming problem whose first-order conditions require the solution of a linear system. It is well known that the optimal portfolio weights are sensitive to parameter estimates, particularly the mean return vector. This has generally been attributed to the interaction of estimation error and optimization. In this paper we present some examples that suggest the linear system produced by the first-order conditions is ill-conditioned and it is this property that gives rise to the sensitivity of the optimal weights.
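
    A two-asset numerical illustration of the point (all figures invented): with highly correlated assets the covariance matrix in the first-order conditions is ill-conditioned, and a 10-basis-point change in one estimated mean return moves the optimal weights substantially.

```python
# Sensitivity of mean-variance weights (proportional to inv(Sigma) @ mu).
import numpy as np

Sigma = np.array([[0.040, 0.038],
                  [0.038, 0.040]])          # two highly correlated assets
mu = np.array([0.070, 0.071])

def weights(mu):
    w = np.linalg.solve(Sigma, mu)          # first-order conditions
    return w / w.sum()                      # normalize to fully invested

print("cond(Sigma):", np.linalg.cond(Sigma))
print("weights:", weights(mu))
print("after +10bp on asset 1:", weights(mu + np.array([0.001, 0.0])))
```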

  6. A Balanced Approach to Adaptive Probability Density Estimation

    Directory of Open Access Journals (Sweden)

    Julio A. Kovacs

    2017-04-01

    Full Text Available Our development of a Fast (Mutual) Information Matching (FIM) method for molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.

  7. Child labor and childhood behavioral and mental health problems in ...

    African Journals Online (AJOL)

    Objective: The objectives of this study are to estimate the prevalence and describe the nature of behavioral and mental health problems, as well as child abuse, nutritional problems, gross physical illness and injury among child laborers aged 8 to 15 years in Ethiopia. However, only the behavioral and mental health ...

  8. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    Science.gov (United States)

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π_1(·), …, π_k(·), each known up to a normalizing constant, i.e. for l = 1, …, k, π_l(·) = ν_l(·)/m_l, where ν_l(·) is a known function and m_l is an unknown constant. For each l, we have an iid sample from π_l, and the problem is to estimate the ratios m_l/m_s for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the π_l's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.
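
    For k = 2 the simplest consistent estimator of such a ratio is plain importance sampling via the identity m_1/m_2 = E_{π_2}[ν_1(X)/ν_2(X)]; the sketch below uses two invented densities and omits both the pooled k-sample estimator and the regeneration-based standard errors that are the paper's contribution.

```python
# Importance-sampling estimate of a ratio of normalizing constants.
import numpy as np

rng = np.random.default_rng(7)

nu1 = lambda x: np.exp(-0.5 * x**2)       # unnormalized N(0,1): m1 = sqrt(2*pi)
nu2 = lambda x: np.exp(-np.abs(x))        # unnormalized Laplace(1): m2 = 2

x = rng.laplace(0.0, 1.0, 200_000)        # iid draws from pi_2
ratio_hat = np.mean(nu1(x) / nu2(x))      # consistent for m1/m2
print(ratio_hat, "vs exact", np.sqrt(2 * np.pi) / 2)
```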

  9. Practical adjoint Monte Carlo technique for fixed-source and eigenfunction neutron transport problems

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1981-01-01

    An adjoint Monte Carlo technique is described for the solution of neutron transport problems. The optimum biasing function for a zero-variance collision estimator is derived. The optimum treatment of an analog of a non-velocity thermal group has also been derived. The method is extended to multiplying systems, especially for eigenfunction problems to enable the estimate of averages over the unknown fundamental neutron flux distribution. A versatile computer code, FOCUS, has been written, based on the described theory. Numerical examples are given for a shielding problem and a critical assembly, illustrating the performance of the FOCUS code. 19 refs

  10. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system

  11. Biased sampling, over-identified parameter problems and beyond

    CERN Document Server

    Qin, Jing

    2017-01-01

    This book is devoted to biased sampling problems (also called choice-based sampling in Econometrics parlance) and over-identified parameter estimation problems. Biased sampling problems appear in many areas of research, including Medicine, Epidemiology and Public Health, the Social Sciences and Economics. The book addresses a range of important topics, including case and control studies, causal inference, missing data problems, meta-analysis, renewal process and length biased sampling problems, capture and recapture problems, case cohort studies, exponential tilting genetic mixture models etc. The goal of this book is to make it easier for Ph.D. students and new researchers to get started in this research area. It will be of interest to all those who work in the health, biological, social and physical sciences, as well as those who are interested in survey methodology and other areas of statistical science, among others.

  12. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Directory of Open Access Journals (Sweden)

    Shu Cai

    2016-12-01

    Full Text Available Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has a higher spatial resolution compared to existing methods based on the ML criterion.

  13. Algebraic solution of the synthesis problem for coded sequences

    International Nuclear Information System (INIS)

    Leukhin, Anatolii N

    2005-01-01

    The algebraic solution of a 'complex' problem of synthesising phase-coded (PC) sequences with a zero level of side lobes of the cyclic autocorrelation function (ACF) is proposed. It is shown that the solution of the synthesis problem is connected with the existence of difference sets for a given code dimension. The problem of estimating the number of possible code combinations for a given code dimension is solved. It is pointed out that the synthesis of PC sequences is related to fundamental problems of discrete mathematics and, first of all, to a number of combinatorial problems, which can be solved, like the number factorisation problem, by algebraic methods using the theory of Galois fields and groups. (Fourth seminar in memory of D.N. Klyshko)
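
    As a numerical aside (not the record's algebraic construction), Zadoff-Chu polyphase sequences are a standard example of PC sequences whose cyclic ACF side lobes vanish exactly; the following sketch simply checks this.

```python
import numpy as np

N, u = 13, 3                                  # odd length, root u coprime to N
n = np.arange(N)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N)

# cyclic autocorrelation at every lag k
acf = np.array([np.vdot(np.roll(zc, k), zc) for k in range(N)])
print(np.abs(acf).round(10))                  # N at lag 0, essentially 0 elsewhere
```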

  14. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupled motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show the method is feasible and performs well for parameter estimation of nonlinear systems.
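
    A hedged sketch of the general recipe, with SciPy's differential evolution standing in for IBCSO as the population-based optimizer and a logistic map standing in for the Lorenz system; only the fitness construction mirrors the record's approach.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
r_true, n = 2.8, 60
x = np.empty(n); x[0] = 0.4
for k in range(n - 1):
    x[k + 1] = r_true * x[k] * (1.0 - x[k])   # logistic map trajectory
y = x + 0.001 * rng.standard_normal(n)        # noisy observations

def fitness(p):
    # squared simulation error as the fitness function to minimise
    sim = np.empty(n); sim[0] = y[0]
    for k in range(n - 1):
        sim[k + 1] = p[0] * sim[k] * (1.0 - sim[k])
    return np.mean((sim - y) ** 2)

res = differential_evolution(fitness, bounds=[(2.0, 3.4)], seed=0)
print(res.x[0])                               # close to 2.8
```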

  15. A Schwarz alternating procedure for singular perturbation problems

    Energy Technology Data Exchange (ETDEWEB)

    Garbey, M. [Université Claude Bernard Lyon, Villeurbanne (France); Kaper, H.G. [Argonne National Lab., IL (United States)

    1994-12-31

    The authors show that the Schwarz alternating procedure offers a good algorithm for the numerical solution of singular perturbation problems, provided the domain decomposition is properly designed to resolve the boundary and transition layers. They give sharp estimates for the optimal position of the domain boundaries and present convergence rates of the algorithm for various second-order singular perturbation problems. The splitting of the operator is domain-dependent, and the iterative solution of each subproblem is based on a modified asymptotic expansion of the operator. They show that this asymptotic-induced method leads to a family of efficient massively parallel algorithms and report on implementation results for a turning-point problem and a combustion problem.
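
    A minimal sketch of the multiplicative Schwarz alternating iteration on the model problem -εu″ + u = 1, u(0) = u(1) = 0, using plain finite-difference subdomain solves rather than the asymptotic-induced splitting described above; the overlap placement is arbitrary.

```python
import numpy as np

eps, N = 1e-3, 400
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
j1, j2 = 180, 220                            # overlap region [x[j1], x[j2]]
u = np.zeros(N + 1)

def subdomain_solve(il, ir, ul, ur):
    # FD solve of -eps*u'' + u = 1 on (x[il], x[ir]) with Dirichlet data ul, ur
    m = ir - il - 1
    main = (2 * eps / h**2 + 1.0) * np.ones(m)
    off = (-eps / h**2) * np.ones(m - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    b = np.ones(m)
    b[0] += eps / h**2 * ul
    b[-1] += eps / h**2 * ur
    return np.linalg.solve(A, b)

for _ in range(10):                          # multiplicative Schwarz sweeps
    u[1:j2] = subdomain_solve(0, j2, 0.0, u[j2])       # left subdomain
    u[j1 + 1:N] = subdomain_solve(j1, N, u[j1], 0.0)   # right subdomain

# matched-asymptotics reference with boundary layers of width sqrt(eps)
ref = 1.0 - np.exp(-x / np.sqrt(eps)) - np.exp(-(1.0 - x) / np.sqrt(eps))
print(np.max(np.abs(u - ref)))               # small once the sweeps converge
```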

  16. Variance estimation in the analysis of microarray data

    KAUST Repository

    Wang, Yuedong

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  17. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace based blind sparse channel estimation method using ℓ1–ℓ2 optimization, replacing the ℓ2-norm minimization in the conventional subspace based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve…

  18. Estimating Small-Body Gravity Field from Shape Model and Navigation Data

    Science.gov (United States)

    Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam

    2008-01-01

    This paper presents a method to model the external gravity field and to estimate the internal density variation of a small body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite-element definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction, and the levels of accuracy are presented. We then discuss the inverse problem, where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.

  19. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    a least-squares problem. The second procedure Hypesti first approaches the maximum-likelihood estimate by iterating in the profile-log likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...

  20. Parameter estimation and prediction of nonlinear biological systems: some examples

    NARCIS (Netherlands)

    Doeswijk, T.G.; Keesman, K.J.

    2006-01-01

    Rearranging and reparameterizing a discrete-time nonlinear model with polynomial quotient structure in input, output and parameters (x_k = f(Z, p)) leads to a model linear in its (new) parameters. As a result, the parameter estimation problem becomes a so-called errors-in-variables problem for which

  1. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
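
    For context, a sketch of the classical first-order (Rice) difference-based variance estimator that this line of work generalizes: differencing adjacent responses removes the smooth trend, leaving (approximately) twice the noise variance. The test function and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 200, 0.5
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + sigma * rng.standard_normal(n)

# first-order difference-based (Rice) estimator of the residual variance
sigma2_hat = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
print(sigma2_hat, sigma ** 2)                 # close to the true 0.25
```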

  2. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  3. On estimation of the noise variance in high-dimensional linear models

    OpenAIRE

    Golubev, Yuri; Krymova, Ekaterina

    2017-01-01

    We consider the problem of recovering the unknown noise variance in the linear regression model. To estimate the nuisance (a vector of regression coefficients) we use a family of spectral regularisers of the maximum likelihood estimator. The noise estimation is based on the adaptive normalisation of the squared error. We derive the upper bound for the concentration of the proposed method around the ideal estimator (the case of zero nuisance).

  4. An Algorithm for Induction Motor Stator Flux Estimation

    Directory of Open Access Journals (Sweden)

    STOJIC, D. M.

    2012-08-01

    Full Text Available A new method for induction motor stator flux estimation, used in sensorless IM drive applications, is presented in this paper. The proposed algorithm advantageously solves the problems associated with pure integration, commonly used for stator flux estimation. An observer-based structure is proposed, based on the stationary state of the stator flux vector, in order to eliminate the undesired DC offset component present in integrator-based stator flux estimates. A set of simulation runs shows that the proposed algorithm yields DC-offset-free stator flux estimates for both low and high stator frequency induction motor operation.
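
    A hedged illustration of the underlying integration problem (not the observer proposed in the record): a pure integrator accumulates a DC measurement offset without bound, while a leaky integrator (first-order low-pass filter) keeps the flux estimate bounded. All signal parameters are invented.

```python
import numpy as np

f, dt = 50.0, 1e-4
t = np.arange(0.0, 1.0, dt)
emf = np.cos(2 * np.pi * f * t)              # ideal back-EMF (v - R*i)
emf_meas = emf + 0.01                        # small DC offset in measurement

psi_pure = np.cumsum(emf_meas) * dt          # pure integrator: drifts linearly

w_c = 2 * np.pi * 5.0                        # LPF cut-off well below f
psi_lpf = np.zeros_like(t)
for k in range(1, t.size):
    # leaky integrator: psi' = -w_c*psi + emf
    psi_lpf[k] = psi_lpf[k - 1] * (1.0 - w_c * dt) + emf_meas[k] * dt

print(psi_pure[-1], psi_lpf[-1])             # ~0.01 drift vs bounded ~0.01/w_c
```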

  5. On lumped models for thermodynamic properties of simulated annealing problems

    International Nuclear Information System (INIS)

    Andresen, B.; Pedersen, J.M.; Salamon, P.; Hoffmann, K.H.; Mosegaard, K.; Nulton, J.

    1987-01-01

    The paper describes a new method for the estimation of thermodynamic properties for simulated annealing problems using data obtained during a simulated annealing run. The method works by estimating energy-to-energy transition probabilities and is well adapted to simulations such as simulated annealing, in which the system is never in equilibrium. (orig.)
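
    A toy sketch of the counting idea, assuming a small discrete state space and a plain Metropolis run: transitions are tallied and row-normalized into estimated transition probabilities. This is the bookkeeping only, not the thermodynamic reconstruction of the record.

```python
import numpy as np

rng = np.random.default_rng(4)
E = rng.uniform(0.0, 1.0, 16)                # energies of a toy state space
T, steps, s = 0.3, 50_000, 0

counts = np.zeros((16, 16))
for _ in range(steps):
    s_new = rng.integers(16)                 # propose a uniformly random state
    if rng.random() < np.exp(-(E[s_new] - E[s]) / T):
        counts[s, s_new] += 1                # accepted move
        s = s_new
    else:
        counts[s, s] += 1                    # rejected: self-transition

row = counts.sum(axis=1, keepdims=True)
P_hat = counts / np.maximum(row, 1.0)        # row-normalised transition estimates
print(P_hat[0].round(3))
```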

  6. Well-posedness of nonlocal parabolic differential problems with dependent operators.

    Science.gov (United States)

    Ashyralyev, Allaberen; Hanalyev, Asker

    2014-01-01

    The nonlocal boundary value problem for the parabolic differential equation v'(t) + A(t)v(t) = f(t) (0 ≤ t ≤ T), v(0) = v(λ) + φ, 0 < λ ≤ T, is investigated. Exact estimates in Hölder norms for the solutions of two nonlocal boundary value problems for parabolic equations with dependent coefficients are established.

  7. Taking the Evolutionary Road to Developing an In-House Cost Estimate

    Science.gov (United States)

    Jacintho, David; Esker, Lind; Herman, Frank; Lavaque, Rodolfo; Regardie, Myma

    2011-01-01

    This slide presentation reviews the process and some of the problems and challenges of developing an In-House Cost Estimate (IHCE). Using as an example the Space Network Ground Segment Sustainment (SGSS) project, the presentation reviews the phases for developing a Cost estimate within the project to estimate government and contractor project costs to support a budget request.

  8. Time Delay Estimation Algorithms for Echo Cancellation

    Directory of Open Access Journals (Sweden)

    Kirill Sakhnov

    2011-01-01

    Full Text Available The following case study describes how to eliminate echo in a VoIP network using delay estimation algorithms. It is known that echo with long transmission delays becomes more noticeable to users. Thus, time delay estimation, as part of echo cancellation, is an important topic in the transmission of voice signals over packet-switching telecommunication systems. An echo delay problem associated with IP-based transport networks is discussed in the following text. The paper presents a comparative study of time delay estimation algorithms used to estimate the true time delay between two speech signals. Experimental results of MATLab simulations describing the performance of several methods based on cross-correlation, normalized cross-correlation and generalized cross-correlation are also presented.
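
    A sketch of the simplest of the compared estimators, in Python rather than MATLAB: the delay is taken as the lag maximizing the cross-correlation of the far-end signal and its echo. Signal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n, true_delay = 4000, 137
x = rng.standard_normal(n)                   # far-end signal
y = np.zeros(n)
y[true_delay:] = 0.6 * x[:n - true_delay]    # attenuated, delayed echo
y += 0.1 * rng.standard_normal(n)            # near-end noise

r = np.correlate(y, x, mode="full")          # cross-correlation over all lags
delay_hat = int(np.argmax(r)) - (n - 1)      # convert index to lag
print(delay_hat)                             # 137
```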

  9. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.

    2012-12-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
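
    As a simple stand-in for the asymptotic-likelihood estimators described above, the following quantile-matching sketch estimates μ and σ distribution-free by matching medians and interquartile ranges; it is an illustration of the location-scale link, not the record's method.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_t(df=5, size=2000)              # X, shape unspecified
y = 2.0 + 1.5 * rng.standard_t(df=5, size=2000)  # Y ~ mu + sigma*X in law

iqr = lambda z: np.subtract(*np.percentile(z, [75, 25]))
sigma_hat = iqr(y) / iqr(x)                      # scale from IQR ratio
mu_hat = np.median(y) - sigma_hat * np.median(x) # location from medians
print(mu_hat, sigma_hat)                         # near 2.0 and 1.5
```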

  10. iBEST: a program for burnup history estimation of spent fuels based on ORIGEN-S

    International Nuclear Information System (INIS)

    Kim, Do Yeon; Hong, Ser Gi; Ahn, Gil Hoon

    2015-01-01

    In this paper, we describe a computer program, iBEST (inverse Burnup ESTimator), that we developed to accurately estimate the burnup histories of spent nuclear fuels based on sample measurement data. The burnup history parameters include initial uranium enrichment, burnup, cooling time after discharge from the reactor, and reactor type. The program uses algebraic equations derived from the simplified burnup chains of major actinides for initial estimations of burnup and uranium enrichment, and it uses the ORIGEN-S code to correct its initial estimations for improved accuracy. In addition, we newly developed a stable bisection method coupled with ORIGEN-S to correct burnup and enrichment values and implemented it in iBEST in order to take full advantage of the new capabilities of ORIGEN-S for improving accuracy. The iBEST program was tested using several problems for verification and well-known realistic problems with measurement data from spent fuel samples from the Mihama-3 reactor for validation. The test results show that iBEST accurately estimates the burnup history parameters for the test problems and gives an acceptable level of accuracy for the realistic Mihama-3 problems.
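
    A hedged sketch of the bisection correction step, with a hypothetical monotone `predicted_ratio` function standing in for an ORIGEN-S depletion calculation; the toy model and units are invented.

```python
# `predicted_ratio` is a purely illustrative stand-in for a forward depletion
# call mapping burnup (GWd/t) to a measured isotope ratio.
def predicted_ratio(burnup_gwd_t):
    return 0.02 * burnup_gwd_t / (1.0 + 0.005 * burnup_gwd_t)  # toy model

def bisect_burnup(measured, lo=0.0, hi=80.0, tol=1e-6):
    # assumes predicted_ratio is increasing on [lo, hi] and brackets `measured`
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if predicted_ratio(mid) < measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(bisect_burnup(predicted_ratio(42.0)))   # recovers 42.0 GWd/t
```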

  11. Distributed Dynamic State Estimation with Extended Kalman Filter

    Energy Technology Data Exchange (ETDEWEB)

    Du, Pengwei; Huang, Zhenyu; Sun, Yannan; Diao, Ruisheng; Kalsi, Karanjit; Anderson, Kevin K.; Li, Yulan; Lee, Barry

    2011-08-04

    Increasing complexity associated with large-scale renewable resources and novel smart-grid technologies necessitates real-time monitoring and control. Our previous work applied the extended Kalman filter (EKF) with the use of phasor measurement data (PMU) for dynamic state estimation. However, high computation complexity creates significant challenges for real-time applications. In this paper, the problem of distributed dynamic state estimation is investigated. One domain decomposition method is proposed to utilize decentralized computing resources. The performance of distributed dynamic state estimation is tested on a 16-machine, 68-bus test system.

  12. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    Science.gov (United States)

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understandings: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.
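
    A worked instance of whole number bias on an unlike-denominator item, using Python's exact fractions:

```python
from fractions import Fraction

# correct: rename with a common denominator, then add
correct = Fraction(1, 3) + Fraction(1, 4)
print(correct)                               # 7/12

# whole number bias: add numerators and denominators as independent wholes
biased = Fraction(1 + 1, 3 + 4)
print(biased)                                # 2/7, the typical error
```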

  13. Sparse inverse covariance estimation with the graphical lasso.

    Science.gov (United States)

    Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert

    2008-07-01

    We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
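
    A hedged usage sketch with scikit-learn's GraphicalLasso implementation (the chain-structured Gaussian data are invented for the example); the sparsity pattern of the estimated precision matrix is inspected directly.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(7)
p = 10
prec = (np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1)
        + np.diag(0.4 * np.ones(p - 1), -1))  # chain-structured true precision
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
# nonzero pattern of the estimated inverse covariance (precision) matrix
print((np.abs(model.precision_) > 1e-8).astype(int))
```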

  14. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    Science.gov (United States)

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems; for SVMs we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
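
    A minimal sketch of probability estimation via predict_proba, with a random forest beside logistic regression on an invented synthetic dataset; no tuning is done, in contrast to the recommendations above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, y_tr, X_te = X[:1500], y[:1500], X[1500:]

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

p_rf = rf.predict_proba(X_te)[:, 1]          # P(Y=1 | x) from the forest
p_lr = lr.predict_proba(X_te)[:, 1]          # parametric counterpart
print(np.corrcoef(p_rf, p_lr)[0, 1])         # how closely the two agree
```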

  15. Small area estimation (SAE) model: Case study of poverty in West Java Province

    Science.gov (United States)

    Suhartini, Titin; Sadik, Kusman; Indahwati

    2016-02-01

    This paper compares direct estimation with an indirect Small Area Estimation (SAE) model. Model selection included resolving multicollinearity among the auxiliary variables, either by retaining only non-collinear variables or by applying principal components (PC). The parameters of interest were the area-level proportions of poor agricultural-venture households and poor agricultural households in West Java Province. These parameters can be estimated either directly or via SAE; direct estimation, however, broke down for three areas, where small sample sizes produced estimates of zero. The estimated proportion of poor agricultural-venture households was 19.22% and that of poor agricultural households 46.79%. The best model for agricultural-venture poor households retained only non-collinear variables, while the best model for agricultural poor households used PC. For both proportions, SAE outperformed direct estimation, overcoming the small sample sizes and yielding small-area estimates with higher accuracy and better precision than the direct estimator.

  16. Small area estimation for estimating the number of infant mortality in West Java, Indonesia

    Science.gov (United States)

    Anggreyani, Arie; Indahwati, Kurnia, Anang

    2016-02-01

    The Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey providing information on birth rates, mortality rates, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesian Ministry of Health (KEMENKES) and USAID. According to the DHSI 2012 publication, the infant mortality rate for the five years preceding the survey was 32 per 1,000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in the districts of West Java. SAE is a special case of the Generalized Linear Mixed Model (GLMM). Here, infant mortality counts are modelled as Poisson, which carries an equidispersion assumption; negative binomial and quasi-likelihood models are used to handle overdispersion. The analysis shows that the quasi-likelihood model best overcomes the overdispersion problem. The small area estimation is built on the basic area-level model, and a resampling-based mean square error (MSE) is used to measure the accuracy of the small-area estimates.

  17. PROBABILISTIC ESTIMATION OF VIBRATION INFLUENCE ON SENSITIVE SYSTEM ELEMENTS

    Directory of Open Access Journals (Sweden)

    A. A. Lobaty

    2009-01-01

    Full Text Available The paper considers the problem of estimating the influence of vibration on sensitive system elements. Analytical expressions are obtained for the intensity and probability of excursions, beyond a preset range, of a process characterizing the state of a system element; these allow the serviceability and no-failure operation of the system to be estimated.
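
    A hedged sketch of one standard route to such quantities: Rice's formula for the mean upcrossing rate of a level by a stationary zero-mean Gaussian process, with a Poisson approximation for the no-exit probability. Whether this matches the paper's derivation is not claimed; all numbers are invented.

```python
import numpy as np

def upcrossing_rate(a, sigma_x, sigma_v):
    # Rice's formula: sigma_x is the std of the process, sigma_v of its derivative
    return (sigma_v / (2.0 * np.pi * sigma_x)) * np.exp(-a**2 / (2.0 * sigma_x**2))

nu = upcrossing_rate(a=3.0, sigma_x=1.0, sigma_v=10.0)  # exit intensity
T = 60.0
print(nu, np.exp(-nu * T))    # rate of exits and no-exit probability over [0, T]
```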

  18. Development of regional stump-to-mill logging cost estimators

    Science.gov (United States)

    Chris B. LeDoux; John E. Baumgras

    1989-01-01

    Planning logging operations requires estimating the logging costs for the sale or tract being harvested. Decisions need to be made on equipment selection and its application to terrain. In this paper a methodology is described that has been developed and implemented to solve the problem of accurately estimating logging costs by region. The methodology blends field time...

  19. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays

    Science.gov (United States)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2016-07-01

    This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.

  20. Fault-tolerant embedded system design and optimization considering reliability estimation uncertainty

    International Nuclear Information System (INIS)

    Wattanapongskorn, Naruemon; Coit, David W.

    2007-01-01

    In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability, and also, to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied in solving this optimization problem, and reasonable and interesting results are obtained and discussed

  1. Unilateral contact problems variational methods and existence theorems

    CERN Document Server

    Eck, Christof; Krbec, Miroslav

    2005-01-01

    The mathematical analysis of contact problems, with or without friction, is an area where progress depends heavily on the integration of pure and applied mathematics. This book presents the state of the art in the mathematical analysis of unilateral contact problems with friction, along with a major part of the analysis of dynamic contact problems without friction. Much of this monograph emerged from the authors' research activities over the past 10 years and deals with an approach proven fruitful in many situations. Starting from thin estimates of possible solutions, this approach is based on an approximation of the problem and the proof of a moderate partial regularity of the solution to the approximate problem. This in turn makes use of the shift (or translation) technique - an important yet often overlooked tool for contact problems and other nonlinear problems with limited regularity. The authors pay careful attention to quantification and precise results to get optimal bounds in sufficient conditions f...

  2. Application of genetic algorithms for parameter estimation in liquid chromatography

    International Nuclear Information System (INIS)

    Hernandez Torres, Reynier; Irizar Mesa, Mirtha; Tavares Camara, Leoncio Diogenes

    2012-01-01

    In chromatography, complex inverse problems related to parameter estimation and process optimization arise. Metaheuristic methods are general-purpose approximate algorithms that seek, and hopefully find, good solutions at a reasonable computational cost; they are iterative processes that perform a robust search of the solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms for estimating parameters in liquid chromatography is investigated.

  3. VERTICAL ACTIVITY ESTIMATION USING 2D RADAR

    African Journals Online (AJOL)

    hennie

    estimates on aircraft vertical behaviour from a single 2D radar track. … Fortunately, the problem of detecting relative vertical motion using a single 2D … awareness tools in scenarios where aerial activity sensing is typically limited to 2D.

  4. Bootstrap-Based Inference for Cube Root Consistent Estimators

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi

    This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent. Our method restores consistency of the nonparametric bootstrap by altering the shape of the criterion function defining the estimator whose distribution we seek to approximate. This modification leads to a generic and easy-to-implement resampling method for inference that is conceptually distinct from other available distributional approximations based on some form of modified bootstrap. We offer simulation evidence showcasing the performance of our inference method in finite samples. An extension of our methodology to general M-estimation problems is also discussed.

  5. Cost estimating issues in the Russian integrated system planning context

    International Nuclear Information System (INIS)

    Allentuck, J.

    1996-01-01

    An important factor in the credibility of an optimal capacity expansion plan is the accuracy of cost estimates given the uncertainty of future economic conditions. This paper examines the problems associated with estimating investment and operating costs in the Russian nuclear power context over the period 1994 to 2010

  6. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non… to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super… interpolation increases the estimation precision.

  7. Representational Change and Children's Numerical Estimation

    Science.gov (United States)

    Opfer, John E.; Siegler, Robert S.

    2007-01-01

    We applied overlapping waves theory and microgenetic methods to examine how children improve their estimation proficiency, and in particular how they shift from reliance on immature to mature representations of numerical magnitude. We also tested the theoretical prediction that feedback on problems on which the discrepancy between two…

  8. H∞ Channel Estimation for DS-CDMA Systems: A Partial Difference Equation Approach

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2013-01-01

    Full Text Available In the communications literature, a number of different algorithms have been proposed for channel estimation problems in which the statistics of the channel noise and observation noise are exactly known. In practical systems, however, the channel parameters are often estimated using training sequences, which makes the statistics of the channel noise difficult to obtain. Moreover, the received signals are corrupted not only by ambient noise but also by multiple-access interference, so the statistics of the observation noise are also difficult to obtain. In this paper, we investigate the H∞ channel estimation problem for direct-sequence code-division multiple-access (DS-CDMA) communication systems with time-varying multipath fading channels. The channel estimator is designed by applying a partial difference equation approach together with innovation analysis theory. This method gives a sufficient and necessary condition for the existence of an H∞ channel estimator.

  9. The Wine Growing Cooperativism in the European Union and Spain: An Exploratory Study in the Denomination of Origin of Alicante

    Directory of Open Access Journals (Sweden)

    Amparo MELIÁN NAVARRO

    2007-09-01

    Full Text Available Wine-growing cooperativism is an important reality in the countries of the European Union, above all in France and Italy, where the main European cooperative wineries are located. This paper characterizes wine-growing cooperativism in the European Union and Spain, with particular attention to one geographic area, the Denomination of Origin (D.O.) Alicante. There, an exploratory study examines the significance and representativeness of cooperative wineries relative to all wine companies (S.A., S.L. and private firms) in the main production and marketing figures. An empirical study is also presented, centred on a bivariate analysis of a survey of the wineries of the D.O. Alicante carried out between March and June 2007, with the aim of understanding the sector from the supply side.

  10. Radiology and radiation protection. Present-day problems

    International Nuclear Information System (INIS)

    Andrieu, L.

    1978-01-01

    With the development of nuclear energy the present-day problems of radioprotection are studied in the light of new radiobiological knowledge. The following points are analysed in turn: radioprotection norms, the notion of acceptable risk; influence of dose rate and fractionation; the low-dose problem; relative biological effectiveness (RBE) and quality factor (Q.F.); the biological problem of long-term effects. The genetic risk due to accepted radioprotection norms is estimated. The part played by radioprotection organisations is underlined, with emphasis on the fact that radioactivity is the most strictly and effectively regulated of all industrial inconveniences. It is pointed out that medical irradiation is not subject to the legislations and regulations listed [fr]

  11. Identification problems in linear transformation system

    International Nuclear Information System (INIS)

    Delforge, Jacques.

    1975-01-01

    An attempt was made to solve the theoretical and numerical difficulties involved in the identification problem relative to the linear part of P. Delattre's theory of transformation systems. The theoretical difficulties are due to the very important problem of the uniqueness of the solution, which must be demonstrated in order to justify the value of the solution found. Simple criteria have been found when measurements are possible on all the equivalence classes, but the problem remains imperfectly solved when certain evolution curves are unknown. The numerical difficulties are of two kinds: a slow convergence of iterative methods and a strong repercussion of numerical and experimental errors on the solution. In the former case a fast convergence was obtained by transformation of the parametric space, while in the latter it was possible, from sensitivity functions, to estimate the errors, to define and measure the conditioning of the identification problem then to minimize this conditioning as a function of the experimental conditions [fr]

  12. Regression tools for CO2 inversions: application of a shrinkage estimator to process attribution

    International Nuclear Information System (INIS)

    Shaby, Benjamin A.; Field, Christopher B.

    2006-01-01

    In this study we perform an atmospheric inversion based on a shrinkage estimator. This method is used to estimate surface fluxes of CO2, first partitioned according to constituent geographic regions, and then according to constituent processes that are responsible for the total flux. Our approach differs from previous approaches in two important ways. The first is that the technique of linear Bayesian inversion is recast as a regression problem. Seen as such, standard regression tools are employed to analyse and reduce errors in the resultant estimates. A shrinkage estimator, which combines standard ridge regression with the linear 'Bayesian inversion' model, is introduced. This method introduces additional bias into the model with the aim of reducing variance such that errors are decreased overall. Compared with standard linear Bayesian inversion, the ridge technique seems to reduce both flux estimation errors and prediction errors. The second divergence from previous studies is that instead of dividing the world into geographically distinct regions and estimating the CO2 flux in each region, the flux space is divided conceptually into processes that contribute to the total global flux. Formulating the problem in this manner adds to the interpretability of the resultant estimates and attempts to shed light on the problem of attributing sources and sinks to their underlying mechanisms
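
    A toy sketch of the shrinkage mechanism, assuming an invented ill-conditioned design in place of a real transport operator: closed-form ridge regression against ordinary least squares, where the small bias buys a large variance reduction.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 100, 20
A = rng.standard_normal((n, p)) @ np.diag(np.logspace(0, -4, p))  # ill-conditioned
beta = rng.standard_normal(p)
y = A @ beta + 0.1 * rng.standard_normal(n)

b_ols = np.linalg.lstsq(A, y, rcond=None)[0]                   # unbiased, unstable
lam = 1e-2
b_ridge = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)  # biased, stable
print(np.linalg.norm(b_ols - beta), np.linalg.norm(b_ridge - beta))
```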

  13. Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.

    Science.gov (United States)

    Hui, Zhuo; Sankaranarayanan, Aswin C

    2017-10-01

    This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently, refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it does not require iterative optimization techniques nor the need for careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.

  14. Estimating rare events in biochemical systems using conditional sampling

    Science.gov (United States)

    Sundar, V. S.

    2017-01-01

    The paper focuses on development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
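
    A hedged one-dimensional sketch of subset simulation for P(X > 3) with X standard normal, using plain Metropolis moves within each conditional level; for biochemical systems the normal draws would be replaced by stochastic simulation algorithm trajectories, and chain correlation is ignored here for brevity.

```python
import numpy as np

rng = np.random.default_rng(9)
N, p0, level = 1000, 0.1, 3.0

x = rng.standard_normal(N)                   # direct Monte Carlo at level 0
p_est = 1.0
while True:
    c = np.quantile(x, 1.0 - p0)             # intermediate threshold
    if c >= level:                           # final level reached
        p_est *= np.mean(x > level)
        break
    p_est *= p0                              # each level contributes p0
    samples = []
    for seed in x[x > c]:                    # Metropolis for N(0,1) given X > c
        cur = seed
        for _ in range(int(1.0 / p0)):
            prop = cur + rng.standard_normal()
            if prop > c and rng.random() < np.exp(0.5 * (cur**2 - prop**2)):
                cur = prop
            samples.append(cur)
    x = np.array(samples)[:N]

print(p_est, 1.35e-3)                        # exact P(X > 3) is about 1.35e-3
```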

  15. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
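
    A minimal p ≫ n sketch with scikit-learn's LASSO on invented data: 50 samples, 500 variables, 5 of them truly active; the ℓ1 penalty recovers a sparse model.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(10)
n, p = 50, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 3.0           # sparse ground truth
y = X @ beta + 0.5 * rng.standard_normal(n)

fit = Lasso(alpha=0.3).fit(X, y)
support = np.flatnonzero(fit.coef_)          # selected variables
print(support)                               # mostly the first 5 indices
```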

  16. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first-order discontinuities are considered. A gradient-free adaptive response surface algorithm is developed. The algorithm applies second-order polynomial surfaces...

  17. Prevalence of HIV among MSM in Europe: comparison of self-reported diagnoses from a large scale internet survey and existing national estimates

    Directory of Open Access Journals (Sweden)

    Marcus Ulrich

    2012-11-01

    Full Text Available Abstract Background Country level comparisons of HIV prevalence among men having sex with men (MSM is challenging for a variety of reasons, including differences in the definition and measurement of the denominator group, recruitment strategies and the HIV detection methods. To assess their comparability, self-reported data on HIV diagnoses in a 2010 pan-European MSM internet survey (EMIS were compared with pre-existing estimates of HIV prevalence in MSM from a variety of European countries. Methods The first pan-European survey of MSM recruited more than 180,000 men from 38 countries across Europe and included questions on the year and result of last HIV test. HIV prevalence as measured in EMIS was compared with national estimates of HIV prevalence based on studies using biological measurements or modelling approaches to explore the degree of agreement between different methods. Existing estimates were taken from Dublin Declaration Monitoring Reports or UNAIDS country fact sheets, and were verified by contacting the nominated contact points for HIV surveillance in EU/EEA countries. Results The EMIS self-reported measurements of HIV prevalence were strongly correlated with existing estimates based on biological measurement and modelling studies using surveillance data (R2=0.70 resp. 0.72. In most countries HIV positive MSM appeared disproportionately likely to participate in EMIS, and prevalences as measured in EMIS are approximately twice the estimates based on existing estimates. Conclusions Comparison of diagnosed HIV prevalence as measured in EMIS with pre-existing estimates based on biological measurements using varied sampling frames (e.g. Respondent Driven Sampling, Time and Location Sampling demonstrates a high correlation and suggests similar selection biases from both types of studies. For comparison with modelled estimates the self-selection bias of the Internet survey with increased participation of men diagnosed with HIV has to be

  18. Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.

    Science.gov (United States)

    Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li

    2018-06-01

    State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made the security and estimation performance a major concern. In this case, multisensor information fusion estimation (MIFE) provides an attractive alternative to study secure estimation problems because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth-constrained communication channels. A new mathematical model with compensation strategy is proposed to characterize the replay attacks and bandwidth constraints, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived such that the DKFEs are secure under replay attacks. An illustrative example is given to demonstrate the effectiveness of the proposed methods.

  19. Experimental design and estimation of growth rate distributions in size-structured shrimp populations

    International Nuclear Information System (INIS)

    Banks, H T; Davis, Jimena L; Ernstberger, Stacey L; Hu, Shuhua; Artimovich, Elena; Dhar, Arun K

    2009-01-01

    We discuss inverse problem results for problems involving the estimation of probability distributions using aggregate data for growth in populations. We begin with a mathematical model describing variability in the early growth process of size-structured shrimp populations and discuss a computational methodology for the design of experiments to validate the model and estimate the growth-rate distributions in shrimp populations. Parameter-estimation findings using experimental data from experiments so designed for shrimp populations cultivated at Advanced BioNutrition Corporation are presented, illustrating the usefulness of mathematical and statistical modeling in understanding the uncertainty in the growth dynamics of such populations

  20. Channel Estimation and Information Symbol Detection for DS-UWB Communication Systems

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2014-01-01

    estimation, the one-step predictor of information symbol is used and the estimation error is also considered as a multiplicative noise. The solutions to the above two problems are obtained by solving a couple of Riccati equations together with two Lyapunov equations.