WorldWideScience

Sample records for denominator problem estimating

  1. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples, using spatial modelling based on regression tree forest models developed from subsets of the Uganda 2008 Livestock Census data and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and a direct estimate, was compared to direct estimates and to "true" livestock figures based on the entire dataset. The new approach is shown to effectively increase the accuracy of livestock estimates (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes using the multimodel approach presented here, indicating a more efficient use of financial resources.
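
    The ensemble step described above reduces to combining a covariate-driven model prediction with a design-based survey estimate. A minimal sketch in Python, assuming a random forest as a stand-in for the paper's regression tree forest models and an unweighted two-member average (the paper's actual weighting may differ); all data here are synthetic placeholders:

        # Sketch only: ensemble a regression-forest spatial prediction with a
        # direct survey estimate of livestock numbers for one census area.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Synthetic covariates (e.g. land cover, human population) and counts.
        X_train = rng.random((200, 3))
        y_train = 50 + 100 * X_train[:, 0] + rng.normal(0, 10, 200)

        forest = RandomForestRegressor(n_estimators=200, random_state=0)
        forest.fit(X_train, y_train)

        X_area = rng.random((1, 3))        # covariates for one census area
        model_estimate = forest.predict(X_area)[0]
        direct_estimate = 120.0            # design-based estimate from the survey

        # Unweighted two-member ensemble (an assumption, not the paper's scheme).
        ensemble_estimate = 0.5 * model_estimate + 0.5 * direct_estimate
        print(model_estimate, direct_estimate, ensemble_estimate)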

  2. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  3. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-06

    Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem that is very useful in practical applications. When only a finite number of measurements is available, we try to recover some information about the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.

  4. Information criteria to estimate hyperparameters in groundwater inverse problems

    Science.gov (United States)

    Zanini, A.; Tanda, M. G.; Woodbury, A. D.

    2017-12-01

    One of the main issues in groundwater modeling is the knowledge of hydraulic parameters such as transmissivity and storativity. The literature offers several effective inverse methods that are able to estimate these unknown properties. Most methods assume, as a priori knowledge, the form of the variogram (or covariance function) of the unknown parameters. The hyperparameters of the variogram (or covariance function) can be inferred from observations, assumed known, or estimated. Information criteria are widely used in inverse problems in several disciplines (such as geophysics, hydrology, ...) to estimate the hyperparameters. In this work, in order to estimate the hyperparameters, we consider the Akaike Information Criterion (AIC) and the Akaike Bayesian Information Criterion (ABIC). AIC is computed as AIC = -2 ln(fitted model) + 2 × (number of unknown parameters); the iterative procedure identifies the hyperparameters that minimize the AIC. The ABIC is similar to the AIC in form but is computed in terms of the Bayesian likelihood; it is appropriate when prior information is considered in the form of a prior probability: ABIC = -2 ln(predictive distribution) + 2 × (number of hyperparameters). The predictive distribution is the normalizing constant in the denominator of Bayes' theorem and represents the pdf of observing the data with the uncertainty in the model parameters marginalized out. The best hyperparameters are those at the minimum value of the ABIC. In this work we compare the results obtained from AIC and ABIC using a literature example, and we describe the pros and cons of the two approaches.
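
    The two criteria quoted above are directly computable once the corresponding (log-)likelihoods are available. A small sketch transcribing the abstract's formulas; the likelihood values and hyperparameter grid below are placeholders, not the paper's groundwater model:

        # AIC = -2 ln(fitted model) + 2 * (number of unknown parameters)
        def aic(max_log_likelihood, n_params):
            return -2.0 * max_log_likelihood + 2.0 * n_params

        # ABIC = -2 ln(predictive distribution) + 2 * (number of hyperparameters);
        # the predictive distribution is the Bayes normalizing constant with
        # parameter uncertainty marginalized out.
        def abic(log_predictive, n_hyper):
            return -2.0 * log_predictive + 2.0 * n_hyper

        # Pick the variogram hyperparameters minimizing the criterion
        # (placeholder values: (sill, range) -> maximized ln-likelihood).
        candidates = {(1.0, 10.0): -120.3, (2.0, 20.0): -118.9}
        best = min(candidates, key=lambda h: aic(candidates[h], n_params=2))
        print(best)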

  5. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-01

    Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem that is very useful in practical applications. When only a finite number of measurements is available, we try to recover some information about the embedded...

  6. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation (FE)) for systems with model uncertainties; (2) FE for systems with parametric faults; and (3) FE for a class of nonlinear systems.

  7. An Entropic Estimator for Linear Inverse Problems

    Directory of Open Access Journals (Sweden)

    Amos Golan

    2012-05-01

    In this paper we examine an information-theoretic method for solving noisy linear inverse estimation problems, which encompasses a whole class of estimation methods under a single framework. Under this framework, prior information about the unknown parameters (when such information exists) and constraints on the parameters can be incorporated in the statement of the problem. The method builds on the basics of the maximum entropy principle and consists of transforming the original problem into the estimation of a probability density on an appropriate space naturally associated with the statement of the problem. This estimation method is generic in the sense that it provides a framework for analyzing non-normal models, is easy to implement, and is suitable for all types of inverse problems, such as small, ill-conditioned, or noisy-data problems. First-order approximations, large-sample properties, and convergence in distribution are developed as well. Analytical examples and statistics for model comparison and evaluation, which are inherent to this method, are discussed and complemented with explicit examples.

  8. 31 CFR 360.48 - Restrictions on reissue; denominational exchange.

    Science.gov (United States)

    2010-07-01

    31 CFR 360.48, Money and Finance: Treasury, Regulations Governing Definitive United States Savings Bonds, Series I; Reissue and Denominational Exchange. § 360.48 Restrictions on reissue; denominational exchange: Reissue is not permitted solely to change denominations.

  9. Two denominators for one numerator: the example of neonatal mortality.

    Science.gov (United States)

    Harmon, Quaker E; Basso, Olga; Weinberg, Clarice R; Wilcox, Allen J

    2018-06-01

    Preterm delivery is one of the strongest predictors of neonatal mortality. A given exposure may increase neonatal mortality directly, or indirectly by increasing the risk of preterm birth. Efforts to assess these direct and indirect effects are complicated by the fact that neonatal mortality arises from two distinct denominators (i.e. two risk sets). One risk set comprises fetuses, susceptible to intrauterine pathologies (such as malformations or infection), which can result in neonatal death. The other risk set comprises live births, who (unlike fetuses) are susceptible to problems of immaturity and complications of delivery. In practice, fetal and neonatal sources of neonatal mortality cannot be separated, not only because of incomplete information, but because risks from both sources can act on the same newborn. We use simulations to assess the repercussions of this structural problem. We first construct a scenario in which fetal and neonatal factors contribute separately to neonatal mortality. We introduce an exposure that increases risk of preterm birth (and thus neonatal mortality) without affecting the two baseline sets of neonatal mortality risk. We then calculate the apparent gestational-age-specific mortality for exposed and unexposed newborns, using as the denominator either fetuses or live births at a given gestational age. If conditioning on gestational age successfully blocked the mediating effect of preterm delivery, then exposure would have no effect on gestational-age-specific risk. Instead, we find apparent exposure effects with either denominator. Except for prediction, neither denominator provides a meaningful way to define gestational-age-specific neonatal mortality.
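
    The two risk sets translate directly into two denominators for the same death counts. A toy sketch of that bookkeeping, assuming a single baseline risk that depends only on gestational age at birth; the paper's simulation additionally separates fetal and neonatal baseline risks, so this only illustrates the mechanics, not the full scenario:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200_000
        weeks = np.arange(28, 43)

        def simulate(exposed):
            # Exposure only shifts deliveries earlier (toy assumption).
            mean_ga = 38.5 - (1.0 if exposed else 0.0)
            ga = np.clip(np.round(rng.normal(mean_ga, 2.0, n)), 28, 42).astype(int)
            risk = 0.2 * np.exp(-(ga - 28) / 3.0)   # falls with gestational age
            died = rng.random(n) < risk
            return ga, died

        for exposed in (False, True):
            ga, died = simulate(exposed)
            deaths = np.array([died[ga == w].sum() for w in weeks])
            live_births = np.array([(ga == w).sum() for w in weeks])
            fetuses_at_risk = np.array([(ga >= w).sum() for w in weeks])
            # Same numerator, two denominators (first three weeks shown):
            print(exposed, (deaths / live_births)[:3], (deaths / fetuses_at_risk)[:3])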

  10. Accounting and marketing: searching a common denominator

    Directory of Open Access Journals (Sweden)

    David S. Murphy

    2012-06-01

    Accounting and marketing are very different disciplines. The analysis of customer profitability is one concept that can unite accounting and marketing as a common denominator. In this article I search for common ground between accounting and marketing in the analysis of customer profitability, to determine if a common denominator really exists between the two. The analysis focuses on accounting profitability, customer lifetime value, and customer equity. The article ends with a summary of what accountants can do to move the analysis of customer value forward, as an analytical tool, within companies.

  11. Definition and denomination of occupations in libraries

    Directory of Open Access Journals (Sweden)

    Jelka Gazvoda

    1998-01-01

    In the first part of the article, the author presents the modern definition of occupation as defined in the ISCO-88 standard and, consequently, in the Slovenian Standard Classification of Occupations; occupations in the field of library and information science are then placed in the wider frame of information occupations, which are present in all spheres of activity. The following part of the article focuses on information occupations in libraries, especially on their content definitions and denominations. Based on an analysis of job descriptions in three Slovenian libraries (National and University Library, University Library of Maribor, and Central Technical Library), the author comes to the following conclusion: existing practice in libraries shows that the contents and denominations of library and information occupations are defined too loosely. In most cases, the content of an occupation is defined by the content of the job, while the required educational title of the employee is often used for its denomination. The author therefore proposes the establishment of a working group that would define the contents of library and information occupations and assign their denominations according to the principles contained in the Standard Classification of Occupations.

  12. 31 CFR 309.3 - Denominations and exchange.

    Science.gov (United States)

    2010-07-01

    31 CFR 309.3, Money and Finance: Treasury. Denominations and exchange. Treasury bills will be issued in denominations (maturity value) of $10,000, $15,000, $50,000, $100,000, $500,000, and $1,000,000. Exchanges from higher to lower and lower to higher...

  13. The Problems of Multiple Feedback Estimation.

    Science.gov (United States)

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is the use of an estimating criterion other than the least squares criterion. Theoretically the variance normalization criterion has…

  14. The Problem With Estimating Public Health Spending.

    Science.gov (United States)

    Leider, Jonathon P

    2016-01-01

    Accurate information on how much the United States spends on public health is critical. These estimates affect planning efforts; reflect the value society places on the public health enterprise; and allow for the demonstration of cost-effectiveness of programs, policies, and services aimed at increasing population health. Yet, at present, there are a limited number of sources of systematic public health finance data. Each of these sources is collected in different ways, for different reasons, and so yields strikingly different results. This article aims to compare and contrast all 4 current national public health finance data sets, including data compiled by Trust for America's Health, the Association of State and Territorial Health Officials (ASTHO), the National Association of County and City Health Officials (NACCHO), and the Census, which underlie the oft-cited National Health Expenditure Account estimates of public health activity. In FY2008, ASTHO estimates that state health agencies spent $24 billion ($94 per capita on average, median $79), while the Census estimated that all state governmental agencies, including state health agencies, spent $60 billion on public health ($200 per capita on average, median $166). Census public health data suggest that local governments spent an average of $87 per capita (median $57), whereas NACCHO estimates that reporting LHDs spent $64 per capita on average (median $36) in FY2008. We conclude that these estimates differ because the various organizations collect data using different means, data definitions, and inclusion/exclusion criteria, most notably around whether to include spending by all agencies versus a state/local health department, and whether behavioral health, disability, and some clinical care spending are included in estimates. Alongside deeper analysis of presently underutilized Census administrative data, we see harmonization efforts and the creation of a standardized expenditure reporting system as a way to...

  15. Measuring HPV vaccination coverage in Australia: comparing two alternative population-based denominators.

    Science.gov (United States)

    Barbaro, Bianca; Brotherton, Julia M L

    2015-08-01

    To compare the use of two alternative population-based denominators in calculating HPV vaccine coverage in Australia, by age group, jurisdiction and remoteness area. Data from the National HPV Vaccination Program Register (NHVPR) were analysed at Local Government Area (LGA) level, by state/territory and by the Australian Standard Geographical Classification Remoteness Structure. The proportion of females vaccinated was calculated using both the ABS estimated resident population (ERP) and Medicare enrolments as the denominator. HPV vaccine coverage estimates were slightly higher using Medicare enrolments than using the ABS estimated resident population nationally (70.8% compared with 70.4% for 12 to 17-year-old females, and 33.3% compared with 31.9% for 18 to 26-year-old females, respectively). The greatest differences in coverage were found in the remote areas of Australia. There is minimal difference between coverage estimates made using the two denominators, except in Remote and Very Remote areas, where small residential populations make interpretation more difficult. Adoption of Medicare enrolments for the denominator in the ongoing program would make minimal, if any, difference to routine coverage estimates. © 2015 Public Health Association of Australia.
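
    The comparison in this record is a single ratio computed against two different population counts. A tiny illustrative sketch (all numbers invented for the example, not taken from the NHVPR):

        vaccinated = 7_040
        abs_erp = 10_000           # ABS estimated resident population
        medicare_enrolled = 9_940  # Medicare enrolment count, same cohort

        coverage_erp = vaccinated / abs_erp
        coverage_medicare = vaccinated / medicare_enrolled
        print(f"{coverage_erp:.1%} (ERP) vs {coverage_medicare:.1%} (Medicare)")
        # In Remote/Very Remote areas the two denominators can diverge sharply,
        # which is where the paper reports the largest differences.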

  16. Prospect evaluation as a function of numeracy and probability denominator.

    Science.gov (United States)

    Millroth, Philip; Juslin, Peter

    2015-05-01

    This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
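
    For reference, the three probability formats manipulated in the study encode the same number, so the normative expected value is unchanged; a one-line illustration (the prize value is assumed for the example):

        p = 0.15
        formats = {
            "direct ratio": ".15",
            "frequency per 100": "15 in 100",
            "frequency per 10,000": "1,500 in 10,000",
        }
        prize = 100.0
        print(formats, p * prize)   # expected value identical under every format
        # The study's point: pricing nevertheless varied with the format,
        # interacting with participants' numeracy.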

  17. Regularization and error estimates for nonhomogeneous backward heat problems

    Directory of Open Access Journals (Sweden)

    Duc Trong Dang

    2006-01-01

    In this article, we study the inverse time problem for the non-homogeneous heat equation, which is a severely ill-posed problem. We regularize this problem using the quasi-reversibility method and then obtain error estimates on the approximate solutions. Solutions are calculated by the contraction principle and shown in numerical experiments. We also obtain rates of convergence to the exact solution.

  18. Carleman estimates and applications to inverse problems for hyperbolic systems

    CERN Document Server

    Bellassoued, Mourad

    2017-01-01

    This book is a self-contained account of the method based on Carleman estimates for inverse problems of determining spatially varying functions of differential equations of the hyperbolic type by non-overdetermining data of solutions. The formulation is different from that of Dirichlet-to-Neumann maps and can often prove the global uniqueness and Lipschitz stability even with a single measurement. These types of inverse problems include coefficient inverse problems of determining physical parameters in inhomogeneous media that appear in many applications related to electromagnetism, elasticity, and related phenomena. Although the methodology was created in 1981 by Bukhgeim and Klibanov, its comprehensive development has been accomplished only recently. In spite of the wide applicability of the method, there are few monographs focusing on combined accounts of Carleman estimates and applications to inverse problems. The aim in this book is to fill that gap. The basic tool is Carleman estimates, the theory of wh...

  19. Resolvent estimates in homogenisation of periodic problems of fractional elasticity

    Science.gov (United States)

    Cherednichenko, Kirill; Waurick, Marcus

    2018-03-01

    We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.

  20. MAP estimators and their consistency in Bayesian nonparametric inverse problems

    KAUST Repository

    Dashti, M.

    2013-09-01

    We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map G applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μy. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager-Machlup functional defined on the Cameron-Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of G(u) can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier-Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics. © 2013 IOP Publishing Ltd.
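
    For readers who want the variational object, a minimal LaTeX statement of the characterization described above, with notation assumed as follows: Φ is the negative log-likelihood (data misfit) and E the Cameron-Martin space of the Gaussian prior μ0:

        % MAP estimate as minimizer of the Onsager-Machlup functional
        \[
          u_{\mathrm{MAP}} \in \operatorname*{arg\,min}_{u \in E} I(u),
          \qquad
          I(u) = \Phi(u; y) + \tfrac{1}{2}\,\lVert u \rVert_{E}^{2}.
        \]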

  1. MAP estimators and their consistency in Bayesian nonparametric inverse problems

    International Nuclear Information System (INIS)

    Dashti, M; Law, K J H; Stuart, A M; Voss, J

    2013-01-01

    We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map G applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μy. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager–Machlup functional defined on the Cameron–Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of G(u) can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier–Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics. (paper)

  2. Bounds and estimates for the linearly perturbed eigenvalue problem

    International Nuclear Information System (INIS)

    Raddatz, W.D.

    1983-01-01

    This thesis considers the problem of bounding and estimating the discrete portion of the spectrum of a linearly perturbed self-adjoint operator, M(x). It is supposed that one knows an incomplete set of data consisting of the first few coefficients of the Taylor series expansions of one or more of the eigenvalues of M(x) about x = 0. The foundations of the variational study of eigenvalues are first presented. These are then used to construct the best possible upper bounds and estimates using various sets of given information. Lower bounds are obtained by estimating the error in the upper bounds. The extension of these bounds and estimates to the eigenvalues of the doubly-perturbed operator M(x,y) is discussed. The results presented have numerous practical applications in the physical sciences, including problems in atomic physics and the theory of vibrations of acoustical and mechanical systems.

  3. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    J. Lang (Jens); J.G. Verwer (Jan)

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach...

  4. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    Lang, J.; Verwer, J.G.

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach...

  5. Estimates for lower order eigenvalues of a clamped plate problem

    OpenAIRE

    Cheng, Qing-Ming; Huang, Guangyue; Wei, Guoxin

    2009-01-01

    For a bounded domain $\Omega$ in a complete Riemannian manifold $M^n$, we study estimates for lower order eigenvalues of a clamped plate problem. We obtain universal inequalities for lower order eigenvalues. We would like to remark that our results are sharp.

  6. Wine quality, reputation, denominations: How cooperatives and private wineries compete?

    Directory of Open Access Journals (Sweden)

    Schamel Guenter H.

    2014-01-01

    We analyze how cooperatives in Northern Italy (Alto Adige and Trentino) compete with private wineries regarding product quality and reputation, i.e. whether firm organization affects wine quality and winery reputation. Moreover, we examine whether cooperatives with deep roots in their local economy specialize in specific regional denomination rules (i.e. DOC, IGT). Compared to private wineries, cooperatives face additional challenges in raising wine quality, among them appropriate incentives that induce individual growers to supply high-quality grapes (e.g. vineyard management and grape pricing schemes to lower yields). The quality reputation of a winery with consumers depends crucially on its winemaking skills. Wine regions differ with respect to climatic conditions and quality denomination rules. Assuming similar climatic conditions within wine regions as well as similar winemaking skills between firms, incentive schemes to induce individual growers to supply high-quality grapes, together with quality denomination rules, remain crucial determinants of wine quality and winery reputation when comparing different regions and firm organizational forms. The data set analyzed allows us to differentiate local cooperatives from private wineries and includes retail prices, wine quality evaluations, indicators for winery reputation, and distinct denomination rules. We employ a hedonic pricing model in order to test the following hypotheses: first, wines produced by cooperatives suffer a significant reputation and/or wine quality discount relative to wines from private producers; second, cooperatives and/or private wineries specialize in specific wine denominations for which they receive a price premium relative to the competing organizational form. Our results are mixed. However, we reject the hypothesis that cooperatives suffer a reputation/wine quality discount relative to private producers for the Alto Adige wine region. Moreover, we find that regional cooperatives and private...

  7. Empirical Estimates in Economic and Financial Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Houda, Michal; Kaňková, Vlasta

    2012-01-01

    Roč. 19, č. 29 (2012), s. 50-69 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/1610; GA ČR GAP402/11/0150; GA ČR GAP402/10/0956 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic programming * empirical estimates * moment generating functions * stability * Wasserstein metric * L1-norm * Lipschitz property * consistence * convergence rate * normal distribution * Pareto distribution * Weibull distribution * distribution tails * simulation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2012/E/houda-empirical estimates in economic and financial optimization problems.pdf

  8. Mean value estimates of the error terms of Lehmer problem

    Indian Academy of Sciences (India)

    Mean value estimates of the error terms of the Lehmer problem. DONGMEI REN and YAMING ... For further properties of N(a,p), in [6] he studied the mean square value of the error term E(a,p) = N(a,p) − (1/2)(p − 1).

  9. Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.

    Science.gov (United States)

    Smith, J E

    2012-01-01

    Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes...
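
    The credit-assignment choice the paper studies (mean versus extreme improvement) is easy to state in code. A hedged sketch, not the COMA implementation; the improvement histories are invented to show how the two schemes rank memes differently:

        history = {
            "meme_A": [0.1, 0.1, 0.1, 0.1],   # steady small improvements
            "meme_B": [0.0, 0.0, 0.0, 0.3],   # rare large improvement
        }

        def reward(improvements, scheme):
            return (sum(improvements) / len(improvements)
                    if scheme == "mean" else max(improvements))

        for scheme in ("mean", "extreme"):
            rewards = {m: reward(h, scheme) for m, h in history.items()}
            total = sum(rewards.values())
            print(scheme, {m: round(r / total, 2) for m, r in rewards.items()})
        # Mean reward favours meme_A; extreme reward favours meme_B. The paper
        # reports mean-based reward working better on its combinatorial problems.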

  10. Using Supervised Deep Learning for Human Age Estimation Problem

    Science.gov (United States)

    Drobnyh, K. A.; Polovinkin, A. N.

    2017-05-01

    Automatic facial age estimation is a challenging task that has come to prominence in recent years. In this paper, we propose using supervised deep learning features to improve the accuracy of existing age estimation algorithms. Many approaches address the problem; the active appearance model and bio-inspired features are two that have shown the best accuracy. For experiments we chose the popular, publicly available FG-NET database, which contains 1002 images with a broad variety of lighting, pose, and expression. The LOPO (leave-one-person-out) method was used to estimate the accuracy. Experiments demonstrated that adding supervised deep learning features improved accuracy for some basic models. For example, adding the features to an active appearance model gave a 4% gain (the error decreased from 4.59 to 4.41).

  11. The Contrastive Study of Igbo and English Denominal Nouns ...

    African Journals Online (AJOL)

    The teaching of nominalization has not been smooth for Igbo second-language learners of English. That is why this study sets out to contrast English and Igbo denominal nouns. The objective is to find out the similarities and differences between the nominalization process in Igbo and that of the English ...

  12. The Effects of Denomination on Religious Socialization for Jewish Youth

    Science.gov (United States)

    James, Anthony G.; Lester, Ashlie M.; Brooks, Greg

    2014-01-01

    The transmission model of religious socialization was tested using a sample of American Jewish parents and adolescents. The authors expected that measures of religiousness among parents would be associated with those among their children. Interaction effects of denominational membership were also tested. Data were collected from a sample of 233…

  13. Using tactile features to help functionally blind individuals denominate banknotes.

    Science.gov (United States)

    Lederman, Susan J; Hamilton, Cheryl

    2002-01-01

    This study, which was conducted for the Bank of Canada, assessed the feasibility of presenting a raised texture feature together with a tactile denomination code on the next Canadian banknote series ($5, $10, $20, $50, and $100). Adding information accessible by hand would permit functionally blind individuals to independently denominate banknotes. In Experiment 1, 20 blindfolded, sighted university students denominated a set of 8 alternate tactile feature designs. Across the 8 design series, the proportion of correct responses never fell below .97; the mean response time per banknote ranged from 11.4 to 13.1 s. In Experiment 2, 27 functionally blind participants denominated 4 of the previous 8 candidate sets of banknotes. The proportion of correct responses never fell below .92; the corresponding mean response time per banknote ranged from 11.7 to 13.0 s. The Bank of Canada selected one of the four raised-texture designs for inclusion on its new banknote series. Other potential applications include designing haptic displays for teleoperation and virtual environment systems.

  14. Estimates for mild solutions to semilinear Cauchy problems

    Directory of Open Access Journals (Sweden)

    Kresimir Burazin

    2014-09-01

    The existence (and uniqueness) results on mild solutions of abstract semilinear Cauchy problems in Banach spaces are well known. Following the results of Tartar (2008) and Burazin (2008) in the case of decoupled hyperbolic systems, we give an alternative proof, which enables us to derive an estimate on the mild solution and its time of existence. The nonlinear term in the equation is allowed to be time-dependent. We discuss the optimality of the derived estimate by testing it on three examples: the linear heat equation, the semilinear heat equation that models dynamic deflection of an elastic membrane, and the semilinear Schrödinger equation with time-dependent nonlinearity, which appear in the modelling of numerous physical phenomena.

  15. Gaming and Religion: The Impact of Spirituality and Denomination.

    Science.gov (United States)

    Braun, Birgit; Kornhuber, Johannes; Lenz, Bernd

    2016-08-01

    A previous investigation from Korea indicated that religion might modulate gaming behavior (Kim and Kim in J Korean Acad Nurs 40:378-388, 2010). Our present study aimed to investigate whether a belief in God, practicing religious behavior and religious denomination affected gaming behavior. Data were derived from a Western cohort of young men (Cohort Study on Substance Use Risk Factors, n = 5990). The results showed that a stronger belief in God was associated with lower gaming frequency and smaller game addiction scale scores. In addition, practicing religiosity was related to less frequent online and offline gaming. Finally, Christians gamed less frequently and had lower game addiction scale scores than subjects without religious denomination. In the future, these results could prove useful in developing preventive and therapeutic strategies for the Internet gaming disorder.

  16. MAP estimators and their consistency in Bayesian nonparametric inverse problems

    KAUST Repository

    Dashti, M.; Law, K. J H; Stuart, A. M.; Voss, J.

    2013-01-01

    The theory is illustrated with examples from an inverse problem for the Navier-Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics. © 2013 IOP Publishing Ltd.

  17. Denominator function for canonical SU(3) tensor operators

    International Nuclear Information System (INIS)

    Biedenharn, L.C.; Lohe, M.A.; Louck, J.D.

    1985-01-01

    The definition of a canonical unit SU(3) tensor operator is given in terms of its characteristic null space as determined by group-theoretic properties of the intertwining number. This definition is shown to imply the canonical splitting conditions used in earlier work for the explicit and unique (up to ± phases) construction of all SU(3) WCG (Wigner–Clebsch–Gordan) coefficients. Using this construction, an explicit SU(3)-invariant denominator function characterizing completely the canonically defined WCG coefficients is obtained. It is shown that this denominator function (squared) is a product of linear factors, which may be obtained explicitly from the characteristic null space, times a ratio of polynomials. These polynomials, denoted G^t_q, are defined over three (shift) parameters and three barycentric coordinates. The properties of these polynomials (hence, of the corresponding invariant denominator function) are developed in detail: these include a derivation of their degree, symmetries, and zeros. The symmetries are those induced on the shift parameters and barycentric coordinates by the transformations of a 3 × 3 array under row interchange, column interchange, and transposition (the group of 72 operations leaving a 3 × 3 determinant invariant). Remarkably, the zeros of the general G^t_q polynomial are, in position and multiplicity, exactly those of the SU(3) weight space associated with the irreducible representation [q-1, t-1, 0]. The results obtained are an essential step in the derivation of a fully explicit and comprehensible algebraic expression for all SU(3) WCG coefficients.

  18. Parameter Estimation as a Problem in Statistical Thermodynamics.

    Science.gov (United States)

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics, using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and that the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimal model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  19. Bayesian Simultaneous Estimation for Means in k Sample Problems

    OpenAIRE

    Imai, Ryo; Kubokawa, Tatsuya; Ghosh, Malay

    2017-01-01

    This paper is concerned with the simultaneous estimation of k population means when one suspects that the k means are nearly equal. As an alternative to the preliminary test estimator based on the test statistics for testing hypothesis of equal means, we derive Bayesian and minimax estimators which shrink individual sample means toward a pooled mean estimator given under the hypothesis. Interestingly, it is shown that both the preliminary test estimator and the Bayesian minimax shrinkage esti...

  20. Defining risk groups to yellow fever vaccine-associated viscerotropic disease in the absence of denominator data.

    Science.gov (United States)

    Seligman, Stephen J; Cohen, Joel E; Itan, Yuval; Casanova, Jean-Laurent; Pezzullo, John C

    2014-02-01

    Several risk groups are known for the rare but serious, frequently fatal, viscerotropic reactions following live yellow fever virus vaccine (YEL-AVD). Establishing additional risk groups is hampered by ignorance of the numbers of vaccinees in factor-specific risk groups, thus preventing their use as denominators in odds ratios (ORs). Here, we use an equation to calculate ORs using the prevalence of the factor-specific risk group in the population who remain well. The 95% confidence limits and P values can also be calculated. Moreover, if the estimate of the prevalence is imprecise, discrimination analysis can indicate the prevalence at which the confidence interval results in an OR of ∼1, revealing whether the prevalence might be higher without yielding a non-significant result. These methods confirm some potential risk groups for YEL-AVD and cast doubt on another. They should prove useful in situations in which factor-specific risk group denominator data are not available.
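
    The abstract's idea, forming an OR from case counts plus the risk group's prevalence among those who remain well, can be illustrated with the generic case-population form (the paper derives its own equation, so treat this only as the flavour of the calculation; all counts are invented):

        cases_with_factor = 8    # YEL-AVD cases in the risk group
        cases_without = 12       # remaining YEL-AVD cases
        prevalence = 0.02        # share of the well population in the risk group

        odds_cases = cases_with_factor / cases_without
        odds_population = prevalence / (1.0 - prevalence)
        print("OR =", round(odds_cases / odds_population, 1))
        # Re-running with the low/high ends of an uncertain prevalence mirrors
        # the discrimination analysis: does the OR stay clearly above ~1?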

  1. Transport-constrained extensions of collision and track length estimators for solutions of radiative transport problems

    International Nuclear Information System (INIS)

    Kong, Rong; Spanier, Jerome

    2013-01-01

    In this paper we develop novel extensions of collision and track length estimators for the complete space-angle solutions of radiative transport problems. We derive the relevant equations, prove that our new estimators are unbiased, and compare their performance with that of more conventional estimators. Such comparisons, based on numerical solutions of simple one-dimensional slab problems, indicate the potential superiority of the new estimators for a wide variety of more general transport problems.
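
    The two estimator families being extended here are standard in Monte Carlo transport, and their contrast is easiest to see in a much simpler setting than the paper's: a purely absorbing one-dimensional slab with a mono-directional source. A sketch under those assumptions (not the authors' transport-constrained extensions):

        import numpy as np

        rng = np.random.default_rng(2)
        sigma_t, width, n = 1.0, 2.0, 100_000   # cross-section, slab, histories

        flights = rng.exponential(1.0 / sigma_t, n)   # distance to collision

        # Track-length estimator: path length travelled inside [0, width].
        track_scores = np.minimum(flights, width)

        # Collision estimator: 1/sigma_t scored per collision inside the slab.
        collision_scores = np.where(flights < width, 1.0 / sigma_t, 0.0)

        volume = width   # per unit cross-sectional area
        print("track-length:", track_scores.mean() / volume)
        print("collision:   ", collision_scores.mean() / volume)
        print("analytic:    ", (1 - np.exp(-sigma_t * width)) / (sigma_t * width))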

  2. Measuring self-control problems: a structural estimation

    NARCIS (Netherlands)

    Bucciol, A.

    2012-01-01

    We adopt a two-stage Method of Simulated Moments to estimate the preference parameters in a life-cycle consumption-saving model augmented with temptation disutility. Our approach estimates the parameters from the comparison between simulated moments and empirical moments observed in the US Survey of Consumer Finances.

  3. Complex leadership as a way forward for transformational missional leadership in a denominational structure

    Directory of Open Access Journals (Sweden)

    C.J.P. (Nelus) Niemandt

    2015-08-01

    The research investigates the role of leadership in the transformation of denominational structures towards a missional ecclesiology, and focusses on the Highveld Synod of the Dutch Reformed Church. It describes the missional journey of the denomination and interprets the transformation. The theory of 'complex leadership' in complex systems is applied to the investigation of the impact of leadership on a denominational structure. The theory identifies three mechanisms used by leaders as enablers in emergent, self-organising systems: (1) leaders disrupt existing patterns, (2) they encourage novelty, and (3) they act as sensemakers. These insights are applied as a tool to interpret the missional transformation of a denomination.

  4. ACTUAL PROBLEMS OF THE ESTIMATION OF COMPETITIVENESS OF THE BRAND

    Directory of Open Access Journals (Sweden)

    Sevostyanova O. G.

    2016-03-01

    The growing share of brand-driven sales in the market creates a broad range of applications for estimated brand value as a major company asset. The article analyzes the advantages and shortcomings of the proprietary brand-valuation techniques of Interbrand and V-RATIO. It is shown that measuring the monetary value of a brand is a valuable source of information for the strategic management of a company and hence a major component of a trade enterprise's competitiveness.

  5. Problems of estimation of water content history of loesses

    International Nuclear Information System (INIS)

    Rendell, H.M.

    1983-01-01

    The estimation of 'mean water content' is a major source of error in the TL dating of many sediments. The engineering behaviour of loesses can, under certain circumstances, be used to infer their water content history. The construction of a 'stress history' for particular loesses is therefore proposed, in order to establish the critical conditions of moisture and applied stress (overburden) at which irreversible structural change occurs. A programme of field and laboratory tests should enable more precise estimates of water content history to be made. (author)

  6. A posteriori error estimates for axisymmetric and nonlinear problems

    Czech Academy of Sciences Publication Activity Database

    Křížek, Michal; Němec, J.; Vejchodský, Tomáš

    2001-01-01

    Roč. 15, - (2001), s. 219-236 ISSN 1019-7168 R&D Projects: GA ČR GA201/01/1200; GA MŠk ME 148 Keywords : weighted Sobolev spaces * a posteriori error estimates * finite elements Subject RIV: BA - General Mathematics Impact factor: 0.886, year: 2001

  7. Measuring self-control problems: a structural estimation

    NARCIS (Netherlands)

    Bucciol, A.

    2009-01-01

    We perform a structural estimation of the preference parameters in a buffer-stock consumption model augmented with temptation disutility. We adopt a two-stage Method of Simulated Moments to match our simulated moments with those observed in the US Survey of Consumer Finances. To identify...

  8. Coefficient Estimate Problem for a New Subclass of Biunivalent Functions

    OpenAIRE

    N. Magesh; T. Rosy; S. Varma

    2013-01-01

    We introduce a unified subclass of the function class Σ of biunivalent functions defined in the open unit disc. Furthermore, we find estimates on the coefficients |a2| and |a3| for functions in this subclass. In addition, many relevant connections with known or new results are pointed out.

  9. ROBUST ALGORITHMS OF PARAMETRIC ESTIMATION IN SOME STABILIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A.A. Vedyakov

    2016-07-01

    Subject of Research. The paper considers the task of keeping dynamic systems in a stable state by ensuring the stability of the trivial solution for various dynamic systems in the learning regime through the tuning of their parameters. Method. The problems are solved by applying the ideology of robust, finitely convergent algorithms. Main Results. The concepts of parametric algorithmization of stability and steady asymptotic stability are introduced, and results are presented on the synthesis of coarse gradient algorithms that solve the posed problems in a finite number of iterations. Practical Relevance. The results may be applied to practical stabilization tasks in the operation of various engineering structures and devices.

  10. Solutions to estimation problems for scalar hamilton-jacobi equations using linear programming

    KAUST Repository

    Claudel, Christian G.; Chamoin, Timothee; Bayen, Alexandre M.

    2014-01-01

    This brief presents new convex formulations for solving estimation problems in systems modeled by scalar Hamilton-Jacobi (HJ) equations. Using a semi-analytic formula, we show that the constraints resulting from a HJ equation are convex, and can be written as a set of linear inequalities. We use this fact to pose various (and seemingly unrelated) estimation problems related to traffic flow engineering as a set of linear programs. In particular, we solve data assimilation and data reconciliation problems for estimating the state of a system when the model and measurement constraints are incompatible. We also solve traffic estimation problems, such as travel time estimation or density estimation. For all these problems, a numerical implementation is performed using experimental data from the Mobile Century experiment. In the context of reproducible research, the code and data used to compute the results presented in this brief have been posted online and are accessible to regenerate the results. © 2013 IEEE.
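
    The data-reconciliation idea, adjusting incompatible measurements as little as possible subject to linear model constraints, has the following generic linear-program shape. This is not the brief's HJ-specific formulation (its constraints come from the semi-analytic HJ solution); the model matrix below is an invented ordering constraint purely for illustration:

        import numpy as np
        from scipy.optimize import linprog

        m = np.array([2.0, 1.0, 3.0])        # measurements violating the model
        A = np.array([[1.0, -1.0, 0.0],
                      [0.0, 1.0, -1.0]])     # model: x0 <= x1 <= x2
        b = np.zeros(2)

        # Variables: state x (3) and slacks t (3) with |x - m| <= t; min sum(t).
        c = np.concatenate([np.zeros(3), np.ones(3)])
        A_ub = np.block([
            [A, np.zeros((2, 3))],           # model constraints on x
            [np.eye(3), -np.eye(3)],         # x - t <= m
            [-np.eye(3), -np.eye(3)],        # -x - t <= -m
        ])
        b_ub = np.concatenate([b, m, -m])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 6)
        print(res.x[:3])                     # reconciled state estimate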

  11. Modeling of the Maximum Entropy Problem as an Optimal Control Problem and its Application to Pdf Estimation of Electricity Price

    Directory of Open Access Journals (Sweden)

    M. E. Haji Abadi

    2013-09-01

    In this paper, continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain a least-biased probability density function (Pdf) estimate. In this paper, to find a closed-form solution for the maximum entropy problem with any number of moment constraints, the entropy is considered as a functional measure and the moment constraints are considered as the state equations. Therefore, the Pdf estimation problem can be reformulated as an optimal control problem. Finally, the proposed method is applied to estimate the Pdf of the hourly electricity prices of the New England and Ontario electricity markets. The obtained results show the efficiency of the proposed method.
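
    Stripped of the optimal control machinery, the underlying maximum-entropy step is: choose the density matching given moments while maximizing entropy. A grid-based sketch via the convex dual (this is the generic maxent computation, not the paper's optimal control solution; support and target moments are invented):

        import numpy as np
        from scipy.optimize import minimize

        x = np.linspace(0.0, 10.0, 2001)       # support, e.g. a price range
        dx = x[1] - x[0]
        features = np.vstack([x, x**2])        # constrain E[x] and E[x^2]
        targets = np.array([4.0, 18.0])

        def dual(lam):
            # Convex dual; its minimizer gives p(x) proportional to exp(-lam.f(x)).
            z = np.exp(-lam @ features)
            return np.log((z * dx).sum()) + lam @ targets

        lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
        p = np.exp(-lam @ features)
        p /= (p * dx).sum()
        print((p * x * dx).sum(), (p * x**2 * dx).sum())   # ~4.0, ~18.0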

  12. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. Current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well-understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation enables to study many properties of the infinite-dimensional MAP estimate that were earlier impossible to study. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with logconcave priors are proper bayes estimators preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior. (paper)

  13. Journal Impact Factor: Do the Numerator and Denominator Need Correction?

    Science.gov (United States)

    Liu, Xue-Li; Gai, Shuang-Shuang; Zhou, Jing

    2016-01-01

    To correct the incongruence of document types between the numerator and denominator in the traditional impact factor (IF), we make a corresponding adjustment to its formula and present five corrective IFs: IF_Total/Total, IF_Total/AREL, IF_AR/AR, IF_AREL/AR, and IF_AREL/AREL. Based on a survey of researchers in the fields of ophthalmology and mathematics, we obtained the real impact ranking of sample journals in the minds of peer experts. The correlations between the various IFs and the questionnaire score were analyzed to verify their journal evaluation effects. The results show that it is scientific and reasonable to use the five corrective IFs for journal evaluation in both ophthalmology and mathematics. For ophthalmology, the journal evaluation effects of the five corrective IFs are superior to those of the traditional IF: the corrective effect of IF_AR/AR is the best, IF_AREL/AR is better than IF_Total/Total, followed by IF_Total/AREL and IF_AREL/AREL. For mathematics, the journal evaluation effect of the traditional IF is superior to those of the five corrective IFs: the corrective effect of IF_Total/Total is the best, IF_AREL/AR is better than IF_Total/AREL and IF_AREL/AREL, and the corrective effect of IF_AR/AR is the worst. In conclusion, not all disciplinary journal IFs need correction. The results in the current paper show that correcting the IF of ophthalmologic journals may be valuable, but it seems to be meaningless for mathematics journals. PMID:26977697
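
    The corrective IFs differ only in which document types enter each side of the citations/documents ratio. A bookkeeping sketch with invented counts; "AR" follows the paper's naming for articles plus reviews, and the composition of "AREL" is assumed here, not taken from the paper:

        citations = {"article": 480, "review": 150, "editorial_letter": 20}
        documents = {"article": 200, "review": 30, "editorial_letter": 40}

        def impact_factor(numerator_types, denominator_types):
            return (sum(citations[t] for t in numerator_types)
                    / sum(documents[t] for t in denominator_types))

        all_types = ["article", "review", "editorial_letter"]
        print(impact_factor(all_types, all_types))   # IF_Total/Total
        ar = ["article", "review"]
        print(impact_factor(ar, ar))                 # IF_AR/AR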

  14. NEWBOX: A computer program for parameter estimation in diffusion problems

    International Nuclear Information System (INIS)

    Nestor, C.W. Jr.; Godbee, H.W.; Joy, D.S.

    1989-01-01

    In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least 3 important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, they calculate the fraction of material transferred as a function of time from expressions obtained by inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the 3 considerations listed at the outset, they developed a computer program, NEWBOX, usable on a personal computer, to calculate the fractional release of material from 4 different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions.
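
    As a small illustration of the parameter-fitting step (not NEWBOX itself), one can fit a diffusion coefficient to fractional-release data with a derivative-free simplex search, keeping D positive by searching over log D; the sphere-release series is the standard one, and the data points below are invented for the demo:

        import numpy as np
        from scipy.optimize import minimize

        a = 0.5                                    # sphere radius (cm), assumed
        t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # hours
        f_obs = np.array([0.12, 0.17, 0.24, 0.33, 0.45])   # invented demo data

        def release_fraction(D, t, terms=200):
            # F(t) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / a^2) / n^2
            n = np.arange(1, terms + 1)[:, None]
            series = np.exp(-(n * np.pi / a) ** 2 * D * t[None, :]) / n**2
            return 1.0 - (6.0 / np.pi**2) * series.sum(axis=0)

        def sse(log_D):
            return ((release_fraction(np.exp(log_D[0]), t) - f_obs) ** 2).sum()

        fit = minimize(sse, x0=[np.log(1e-3)], method="Nelder-Mead")
        print("D =", np.exp(fit.x[0]), "cm^2/h")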

  15. Audit of preventive activities in 16 inner London practices using a validated measure of patient population, the 'active patient' denominator. Healthy Eastenders Project.

    Science.gov (United States)

    Robson, J; Falshaw, M

    1995-01-01

    ...the practice computer. In contrast, 82% of recorded cervical smears were recorded on computer. CONCLUSION. The active patient denominator produces a more accurate estimate of population coverage and professional activity, both of which are underestimated by the complete, unexpurgated practice register. A standard definition of the denominator also allows comparisons to be made between practices and over time. As only half of the recordings of some preventive activities were on computer, it is doubtful whether it is advisable to rely on computers for audit where paper records are also maintained. PMID:7546868

  16. Problems in radiation absorbed dose estimation from positron emitters

    International Nuclear Information System (INIS)

    Powell, G.F.; Harper, P.V.; Reft, C.S.; Chen, C.T.; Lathrop, K.A.

    1986-01-01

    The positron emitters commonly used in clinical imaging studies for the most part are short-lived, so that when they are distributed in the body the radiation absorbed dose is low even though most of the energy absorbed is from the positrons themselves rather than the annihilation radiation. These considerations do not apply to the administration pathway for a radiopharmaceutical, where the activity may be highly concentrated for a brief period rather than distributed in the body. Thus, high local radiation absorbed doses to the vein for an intravenous administration and to the upper airways during administration by inhalation can be expected. For these geometries, beta point source functions (FPSs) have been employed to estimate the radiation absorbed dose in the present study. Physiologic measurements were done to determine other exposure parameters for intravenous administration of O-15 and Rb-82 and for administration of O-15-CO2 by continuous breathing. Using FPSs to calculate dose rates to the vein wall from O-15 and Rb-82 injected into a vein having an internal radius of 1.5 mm yielded dose rates of 0.51 and 0.46 rad·g/(μCi·h), respectively. The dose gradient in the vein wall and surrounding tissues was also determined using FPSs. Administration of O-15-CO2 by continuous breathing was also investigated. Using ultra-thin thermoluminescent dosimeters (TLDs) having the effective thickness of normal tracheal mucosa, experiments were performed in which 6 dosimeters were exposed to known concentrations of O-15 positrons in a hemicylindrical tracheal phantom having an internal radius of 0.96 cm and an effective length of 14 cm. The dose rate for these conditions was 3.4 (rad/h)/(μCi/cm³). 15 references, 7 figures, 6 tables.

  17. Are Improvements in Measured Performance Driven by Better Treatment or "Denominator Management"?

    Science.gov (United States)

    Harris, Alex H S; Chen, Cheng; Rubinsky, Anna D; Hoggatt, Katherine J; Neuman, Matthew; Vanneman, Megan E

    2016-04-01

    Process measures of healthcare quality are usually formulated as the number of patients who receive evidence-based treatment (numerator) divided by the number of patients in the target population (denominator). When the systems being evaluated can influence which patients are included in the denominator, it is reasonable to wonder if improvements in measured quality are driven by expanding numerators or contracting denominators. In 2003, the US Department of Veterans Affairs (VA) based executive compensation in part on performance on a substance use disorder (SUD) continuity-of-care quality measure. The first goal of this study was to evaluate if implementing the measure in this way resulted in expected improvements in measured performance. The second goal was to examine if the proportion of patients with SUD who qualified for the denominator contracted after the quality measure was implemented, and to describe the facility-level variation in and correlates of denominator contraction or expansion. Using 40 quarters of data straddling the implementation of the performance measure, an interrupted time series design was used to evaluate changes in two outcomes. All veterans with an SUD diagnosis in all VA facilities from fiscal year 2000 to 2009 were included. The two outcomes were 1) measured performance (patients retained/patients qualified) and 2) denominator prevalence (patients qualified/patients with SUD program contact). Measured performance improved over time. The findings motivate closer scrutiny of denominator management, and also the exploration of "shadow measures" to monitor and reduce undesirable denominator management.

  18. Estimation of G-renewal process parameters as an ill-posed inverse problem

    International Nuclear Information System (INIS)

    Krivtsov, V.; Yevkin, O.

    2013-01-01

    Statistical estimation of G-renewal process parameters is an important problem, which has been considered by many authors. We view this problem from the standpoint of a mathematically ill-posed inverse problem (the solution is not unique and/or is sensitive to statistical error) and propose a regularization approach specifically suited to the G-renewal process. Regardless of the estimation method, the respective objective function usually involves the parameters of the underlying lifetime distribution and, simultaneously, the restoration parameter. In this paper, we propose to regularize the problem by decoupling the estimation of the aforementioned parameters. Using a simulation study, we show that the resulting estimation/extrapolation accuracy of the proposed method is considerably higher than that of the existing methods.

  19. Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem

    Science.gov (United States)

    Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.

    2017-05-01

    In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.
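    A minimal sketch of the estimation loop, assuming a toy scalar stand-in for the semidiscrete system (the authors use a pseudospectral method of lines and a regularized Gauss-Newton scheme; here scipy's least_squares with a Tikhonov-style penalty term plays that role):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0.0, 1.0, 20)

def temperature(w):
    # Toy semidiscrete model: dT/dt = -w*T + 1, T(0) = 0, with w the
    # perfusion-like coefficient to be recovered.
    sol = solve_ivp(lambda t, T: -w * T + 1.0, (0.0, 1.0), [0.0], t_eval=t_obs)
    return sol.y[0]

w_true = 3.0
data = temperature(w_true) + 0.005 * np.random.default_rng(4).normal(size=t_obs.size)

def residuals(params, reg=1e-4):
    w = params[0]
    # Appending a Tikhonov-style term regularizes the least-squares fit.
    return np.concatenate([temperature(w) - data, [np.sqrt(reg) * w]])

fit = least_squares(residuals, x0=[1.0])
print("estimated coefficient:", fit.x[0])   # close to 3.0
```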

  20. Variable effects of prevalence correction of population denominators on differentials in myocardial infarction incidence: a record linkage study in Aboriginal and non-Aboriginal Western Australians.

    Science.gov (United States)

    Katzenellenbogen, Judith M; Sanfilippo, Frank M; Hobbs, Michael S T; Briffa, Tom G; Ridout, Steve C; Knuiman, Matthew W; Dimer, Lyn; Taylor, Kate P; Thompson, Peter L; Thompson, Sandra C

    2011-06-01

    To investigate the impact of prevalence correction of population denominators on myocardial infarction (MI) incidence rates, rate ratios, and rate differences in Aboriginal vs. non-Aboriginal Western Australians aged 25-74 years during the study period 2000-2004. Person-based linked hospital and mortality data sets were used to estimate the number of prevalent and first-ever MI cases each year from 2000 to 2004 using a 15-year look-back period. Age-specific and -standardized MI incidence rates were calculated using both prevalence-corrected and -uncorrected population denominators, by sex and Aboriginality. The impact of prevalence correction on rates increased with age, was higher for men than women, and substantially greater for Aboriginal than non-Aboriginal people. Despite the systematic underestimation of incidence, prevalence correction had little impact on the Aboriginal to non-Aboriginal age-standardized rate ratios (6% and 4% underestimate in men and women, respectively), although the impact on rate differences was more marked (12% and 6%, respectively). The percentage underestimate of differentials was greater at older ages. Prevalence correction of denominators, while more accurate, is difficult to apply and may add modestly to the quantification of relative disparities in MI incidence between populations. Absolute incidence disparities using uncorrected denominators may have an error >10%. Copyright © 2011 Elsevier Inc. All rights reserved.
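    To make the correction concrete, a small numerical illustration (the numbers are invented): with 50 first-ever MI cases in a population of 10,000 of whom 400 are prevalent cases and therefore not at risk of a first event,

$$\text{rate}_{\text{uncorrected}} = \frac{50}{10\,000} = 5.0 \text{ per } 1000, \qquad \text{rate}_{\text{corrected}} = \frac{50}{10\,000 - 400} \approx 5.2 \text{ per } 1000.$$

    The corrected rate is higher because prevalent cases are removed from the at-risk denominator, and the gap widens where prevalence is high, as in the older age groups described above.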

  1. A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem

    KAUST Repository

    Delaigle, Aurore

    2009-03-01

    Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We provide not only a solution to a long-standing open problem, but also methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions.

  2. Estimating the Proportion of True Null Hypotheses in Multiple Testing Problems

    Directory of Open Access Journals (Sweden)

    Oluyemi Oyeniran

    2016-01-01

    The problem of estimating the proportion, π0, of the true null hypotheses in a multiple testing problem is important in cases where large-scale parallel hypothesis tests are performed independently. While the problem is a quantity of interest in its own right in applications, the estimate of π0 can be used for assessing or controlling an overall false discovery rate. In this article, we develop an innovative nonparametric maximum likelihood approach to estimate π0. The nonparametric likelihood is proposed to be restricted to multinomial models and an EM algorithm is also developed to approximate the estimate of π0. Simulation studies show that the proposed method outperforms other existing methods. Using experimental microarray datasets, we demonstrate that the new method provides satisfactory estimates in practice.
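    The record's EM-based nonparametric maximum likelihood estimator is not reproduced here; as a simpler point of reference, the sketch below implements Storey's classical quantile-based estimate of π0 from a vector of p-values (a different, well-known estimator, shown only to make the quantity concrete):

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey's estimator: p-values of true nulls are uniform on (0, 1),
    so the density of p-values above lam estimates the null proportion."""
    pvals = np.asarray(pvals)
    return min(np.mean(pvals > lam) / (1.0 - lam), 1.0)

# Example: 90% uniform nulls mixed with 10% genuinely small p-values.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 5.0, size=100)])
print(storey_pi0(p))   # close to 0.9
```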

  3. Asymptotic Estimates and Qualitative Properties of an Elliptic Problem in Dimension Two

    OpenAIRE

    Mehdi, Khalil El; Grossi, Massimo

    2003-01-01

    In this paper we study a semilinear elliptic problem on a bounded domain in $\mathbb{R}^2$ with large exponent in the nonlinear term. We consider positive solutions obtained by minimizing suitable functionals. We prove some asymptotic estimates which enable us to associate a "limit problem" to the initial one. Using these estimates we prove some qualitative properties of the solution, namely characterization of level sets and nondegeneracy.

  4. EEG Estimates of Cognitive Workload and Engagement Predict Math Problem Solving Outcomes

    Science.gov (United States)

    Beal, Carole R.; Galan, Federico Cirett

    2012-01-01

    In the present study, the authors focused on the use of electroencephalography (EEG) data about cognitive workload and sustained attention to predict math problem solving outcomes. EEG data were recorded as students solved a series of easy and difficult math problems. Sequences of attention and cognitive workload estimates derived from the EEG…

  5. Upper estimates of complexity of algorithms for multi-peg Tower of Hanoi problem

    Directory of Open Access Journals (Sweden)

    Sergey Novikov

    2007-06-01

    Explicit upper estimates of algorithm complexity are proved for the multi-peg Tower of Hanoi problem with a limited number of disks, for Reve's puzzle, and for the $5$-peg Tower of Hanoi problem with an unrestricted number of disks.
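    For context on the quantities such upper estimates are compared against, the sketch below computes the Frame-Stewart move count, the conjectured-optimal number of moves for n disks on p pegs (a standard recursion, not the paper's own bounds):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def frame_stewart(n, p):
    """Conjectured-optimal move count for n disks on p >= 3 pegs."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    if p == 3:
        return 2**n - 1          # classical three-peg Tower of Hanoi
    # Move k disks aside using all p pegs, the rest with p-1 pegs, then k back.
    return min(2 * frame_stewart(k, p) + frame_stewart(n - k, p - 1)
               for k in range(1, n))

print(frame_stewart(10, 4))  # Reve's puzzle (4 pegs) with 10 disks -> 49
```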

  6. Global gradient estimates for divergence-type elliptic problems involving general nonlinear operators

    Science.gov (United States)

    Cho, Yumi

    2018-05-01

    We study nonlinear elliptic problems with nonstandard growth and ellipticity related to an N-function. We establish global Calderón-Zygmund estimates of the weak solutions in the framework of Orlicz spaces over bounded non-smooth domains. Moreover, we prove a global regularity result for asymptotically regular problems which are getting close to the regular problems considered, when the gradient variable goes to infinity.

  7. Estimation of the thermal properties in alloys as an inverse problem

    International Nuclear Information System (INIS)

    Zueco, J.; Alhama, F.

    2005-01-01

    This paper provides an efficient numerical method for estimating the thermal conductivity and heat capacity of alloys as functions of the temperature, starting from temperature measurements (including errors) in heating and cooling processes. The proposed procedure is a modification of the known function estimation technique, typical of the inverse problem field, in conjunction with the network simulation method (already checked in many non-linear problems) as the numerical tool. Estimation requires only a single measurement point. The methodology is applied for determining these thermal properties in alloys within ranges of temperature where allotropic changes take place. These changes are characterized by sharp temperature dependencies. (Author) 13 refs

  8. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey

    2014-01-06

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.

  9. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey; Bierig, Claudio

    2014-01-01

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.
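    A minimal sketch of the telescoping estimators, assuming a hypothetical paired coarse/fine sampler per level; the level-wise corrections to the mean, and analogously to the variance, are summed across levels, which is the structure behind the claim that the variance can be estimated at essentially the cost of the mean:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_level(level, n):
    """Hypothetical paired sampler: fine and coarse approximations of the
    same quantity of interest share the random input, so the corrections
    (fine - coarse) telescope across levels."""
    h = 2.0 ** -level
    base = rng.normal(size=n)
    fine = np.sin(base) + h * rng.normal(size=n)
    coarse = (np.sin(base) + 2 * h * rng.normal(size=n)) if level > 0 else np.zeros(n)
    return fine, coarse

def mlmc_mean_var(L, n_per_level):
    mean_est, var_est = 0.0, 0.0
    for level in range(L + 1):
        fine, coarse = sample_level(level, n_per_level[level])
        mean_est += np.mean(fine - coarse)   # standard MLMC mean estimator
        # Analogous telescoping for the variance: a sum of level-wise
        # differences of sample variances (cf. the talk's estimators).
        var_est += np.var(fine, ddof=1) - (np.var(coarse, ddof=1) if level > 0 else 0.0)
    return mean_est, var_est

print(mlmc_mean_var(L=4, n_per_level=[4000, 2000, 1000, 500, 250]))
```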

  10. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. © 2011 The British Psychological Society.
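    The heuristic itself is not reproduced here, but the bookkeeping at its core is a nondominated filter over candidate permutations scored on multiple criteria; a minimal sketch (the objective values are invented):

```python
import numpy as np

def pareto_filter(scores):
    """Boolean mask of nondominated rows (all objectives minimized).
    Row i is dominated if some row is <= in every objective and < in one."""
    scores = np.asarray(scores, dtype=float)
    keep = np.ones(len(scores), dtype=bool)
    for i in range(len(scores)):
        dominated = np.all(scores <= scores[i], axis=1) & np.any(scores < scores[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

# Hypothetical objective values for five candidate permutations.
obj = [[3.0, 5.0], [2.0, 6.0], [4.0, 4.0], [2.5, 5.5], [3.0, 4.0]]
print(pareto_filter(obj))  # -> [False  True False  True  True]
```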

  11. Estimation of physical properties of laminated composites via the method of inverse vibration problem

    Energy Technology Data Exchange (ETDEWEB)

    Balci, Murat [Dept. of Mechanical Engineering, Bayburt University, Bayburt (Turkey)]; Gundogdu, Omer [Dept. of Mechanical Engineering, Ataturk University, Erzurum (Turkey)]

    2017-01-15

    In this study, estimation of some physical properties of a laminated composite plate was conducted via the inverse vibration problem. The laminated composite plate was modelled and simulated in ANSYS to obtain vibration responses for different length-to-thickness ratios. Furthermore, a numerical finite element model was developed for the laminated composite utilizing the Kirchhoff plate theory and programmed in MATLAB for simulations. By minimizing the difference between these two vibration responses, the inverse vibration problem was solved to obtain some of the physical properties of the laminated composite using genetic algorithms. The estimated parameters are compared with the theoretical results, and a very good correspondence was observed.

  12. Estimation of physical properties of laminated composites via the method of inverse vibration problem

    International Nuclear Information System (INIS)

    Balci, Murat; Gundogdu, Omer

    2017-01-01

    In this study, estimation of some physical properties of a laminated composite plate was conducted via the inverse vibration problem. The laminated composite plate was modelled and simulated in ANSYS to obtain vibration responses for different length-to-thickness ratios. Furthermore, a numerical finite element model was developed for the laminated composite utilizing the Kirchhoff plate theory and programmed in MATLAB for simulations. By minimizing the difference between these two vibration responses, the inverse vibration problem was solved to obtain some of the physical properties of the laminated composite using genetic algorithms. The estimated parameters are compared with the theoretical results, and a very good correspondence was observed.
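    A compact stand-in for the fitting step, assuming a toy frequency model in place of the ANSYS/MATLAB responses and scipy's differential_evolution (an evolutionary optimizer) in place of the authors' genetic algorithm:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical "measured" natural frequencies (e.g., from a simulation).
f_measured = np.array([120.0, 310.0, 585.0])

def model_frequencies(E1, E2):
    # Toy surrogate: frequencies scale with the square root of stiffness.
    return np.array([60.0 * np.sqrt(E1), 155.0 * np.sqrt(E2), 292.5 * np.sqrt(E1)])

def cost(params):
    E1, E2 = params
    return np.sum((model_frequencies(E1, E2) - f_measured) ** 2)

res = differential_evolution(cost, bounds=[(0.1, 10.0), (0.1, 10.0)], seed=0)
print("estimated parameters:", res.x)   # close to [4.0, 4.0]
```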

  13. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2016-01-01

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.

  14. Perturbation-Based Regularization for Signal Estimation in Linear Discrete Ill-posed Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-11-29

    Estimating the values of unknown parameters from corrupted measured data faces a lot of challenges in ill-posed problems. In such problems, many fundamental estimation methods fail to provide a meaningful stabilized solution. In this work, we propose a new regularization approach and a new regularization parameter selection approach for linear least-squares discrete ill-posed problems. The proposed approach is based on enhancing the singular-value structure of the ill-posed model matrix to acquire a better solution. Unlike many other regularization algorithms that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator which is the objective in many typical estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach also enjoys the lowest runtime and offers the highest level of robustness amongst all the tested benchmark regularization methods.

  15. PMU Placement Based on Heuristic Methods, when Solving the Problem of EPS State Estimation

    OpenAIRE

    I. N. Kolosok; E. S. Korkina; A. M. Glazunova

    2014-01-01

    Creation of satellite communication systems gave rise to a new generation of measurement equipment – Phasor Measurement Unit (PMU). Integrated into the measurement system WAMS, the PMU sensors provide a real picture of state of energy power system (EPS). The issues of PMU placement when solving the problem of EPS state estimation (SE) are discussed in many papers. PMU placement is a complex combinatorial problem, and there is not any analytical function to optimize its variables. Therefore,...

  16. Costs and benefits of proliferation of Christian denominations in ...

    African Journals Online (AJOL)

    The unbridled proliferation of Churches in Nigeria has stirred up concerns among adherents of religious faiths, onlookers and academics alike. Nigerian society today is undergoing constant and significant proliferation of Churches, which has brought not only changing values but also a source of solutions to people's problems.

  17. A recursive Monte Carlo method for estimating importance functions in deep penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.

    1980-04-01

    A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems.

  18. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    DEFF Research Database (Denmark)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer Moesgaard

    2016-01-01

    State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible, but even simple linear Gaussian models can have estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter…

  19. Inverse problem theory methods for data fitting and model parameter estimation

    CERN Document Server

    Tarantola, A

    2002-01-01

    Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world. The book provides a comprehensive, up-to-date description of the methods used to fit experimental data or to estimate model parameters, and unifies these methods into Inverse Problem Theory. The first part of the book deals with…

  20. The common denominator between DSM and power quality

    International Nuclear Information System (INIS)

    Porter, G.

    1993-01-01

    As utilities implement programs to push for energy efficiency, one of the results may be an increased population of end-uses that have a propensity to be sensitive to power delivery abnormalities. Some may view this as the price we have to pay for increasing the functionality of bulk 60 Hz power. Others may view this as a good reason to stay away from DSM. The utility industry must view this situation as an important element of their DSM planning and reflect the costs of mitigating potential power quality problems in the overall program. Power quality mitigation costs will not drastically add much to the total DSM bill, but the costs of poor power quality could definitely negate the positive benefits of new technologies. Failure to properly plan for the sensitivities of energy efficient equipment will be a major mistake considering the solutions are fairly well known. Proper understanding, education, design, and protection, using a systems approach to problem solving, will ensure that power quality problems won't force us to abandon beneficial efficiency improvement programs

  1. Composing problem solvers for simulation experimentation: a case study on steady state estimation.

    Science.gov (United States)

    Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M

    2014-01-01

    Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.

  2. Solution of axisymmetric transient inverse heat conduction problems using parameter estimation and multi block methods

    International Nuclear Information System (INIS)

    Azimi, A.; Hannani, S.K.; Farhanieh, B.

    2005-01-01

    In this article, a comparison between two iterative inverse techniques to solve simultaneously for two unknown functions of axisymmetric transient inverse heat conduction problems in semi-complex geometries is presented. A multi-block structured grid together with blocked-interface nodes is implemented for geometric decomposition of the physical domain. The numerical scheme for solution of the transient heat conduction equation is the finite element method with a frontal technique to solve the algebraic system of discrete equations. The inverse heat conduction problem involves the simultaneous estimation of an unknown time-varying heat generation term and a time- and space-varying boundary condition. Two parameter-estimation techniques are considered: the Levenberg-Marquardt scheme and the conjugate gradient method with adjoint problem. Numerically computed exact and noisy data are used for the measured transient temperature data needed in the inverse solution. The results of the present study for a configuration including two joined disks with different heights are compared to those of exact heat source and temperature boundary condition, and show good agreement. (author)

  3. An inverse hyperbolic heat conduction problem in estimating surface heat flux by the conjugate gradient method

    International Nuclear Information System (INIS)

    Huang, C.-H.; Wu, H.-H.

    2006-01-01

    In the present study an inverse hyperbolic heat conduction problem is solved by the conjugate gradient method (CGM) to estimate the unknown boundary heat flux based on boundary temperature measurements. Results obtained for this inverse problem are justified based on numerical experiments in which three different heat flux distributions are to be determined. Results show that the inverse solutions can always be obtained with any arbitrary initial guesses of the boundary heat flux. Moreover, the drawbacks of a previous study of this similar inverse problem, namely that (1) the inverse solution has phase error and (2) the inverse solution is sensitive to measurement error, are avoided in the present algorithm. Finally, it is concluded that accurate boundary heat flux can be estimated in this study.

  4. The problem of multicollinearity in horizontal solar radiation estimation models and a new model for Turkey

    International Nuclear Information System (INIS)

    Demirhan, Haydar

    2014-01-01

    Highlights:
    • Impacts of multicollinearity on solar radiation estimation models are discussed.
    • Accuracy of existing empirical models for Turkey is evaluated.
    • A new non-linear model for the estimation of average daily horizontal global solar radiation is proposed.
    • Estimation and prediction performance of the proposed and existing models are compared.
    - Abstract: Due to the considerable decrease in energy resources and increasing energy demand, solar energy is an appealing field of investment and research. There are various modelling strategies and particular models for the estimation of the amount of solar radiation reaching a particular point over the Earth. In this article, global solar radiation estimation models are taken into account. To emphasize the severity of the multicollinearity problem in solar radiation estimation models, some of the models developed for Turkey are revisited. It is observed that these models have been identified as accurate under certain multicollinearity structures, and when the multicollinearity is eliminated, the accuracy of these models is controversial. Thus, a reliable model that does not suffer from multicollinearity and gives precise estimates of global solar radiation for the whole region of Turkey is necessary. A new nonlinear model for the estimation of average daily horizontal solar radiation is proposed making use of the genetic programming technique. There is no multicollinearity problem in the new model, and its estimation accuracy is better than the revisited models in terms of numerous statistical performance measures. According to the proposed model, temperature, precipitation, altitude, longitude, and monthly average daily extraterrestrial horizontal solar radiation have significant effects on the average daily global horizontal solar radiation. Relative humidity and soil temperature are not included in the model due to their high correlation with precipitation and temperature, respectively. While altitude has…
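    Multicollinearity of the kind the abstract describes (e.g., relative humidity tracking precipitation) is conventionally screened with variance inflation factors; a minimal sketch with synthetic predictors, using statsmodels' variance_inflation_factor:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

rng = np.random.default_rng(2)
n = 200
temp = rng.normal(25, 5, n)
precip = rng.normal(50, 10, n)
humidity = 0.8 * precip + rng.normal(0, 2, n)   # strongly tied to precipitation

X = add_constant(pd.DataFrame({"temp": temp, "precip": precip, "humidity": humidity}))
for i, col in enumerate(X.columns):
    if col == "const":
        continue
    print(col, round(variance_inflation_factor(X.values, i), 1))
# VIFs well above ~10 for precip/humidity flag the collinearity the abstract
# warns about; the usual remedy is to drop one variable of the pair.
```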

  5. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    Science.gov (United States)

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
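    The computational core of the two-level ℓ₁/ℓ₂ prior is a row-wise group soft-thresholding step, the proximal operator applied inside the fast first-order iterations; a minimal sketch of that operator alone (not the full MxNE solver):

```python
import numpy as np

def prox_l21(Y, alpha):
    """Row-wise group soft-thresholding: the proximal operator of
    alpha * sum_i ||Y[i, :]||_2, which drives whole rows (sources) to zero."""
    norms = np.linalg.norm(Y, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - alpha / np.maximum(norms, 1e-12))
    return Y * scale

Y = np.array([[3.0, 4.0],     # row norm 5   -> shrunk but kept
              [0.3, 0.4]])    # row norm 0.5 -> zeroed out
print(prox_l21(Y, alpha=1.0))
```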

  6. An Alternative Route to Teaching Fraction Division: Abstraction of Common Denominator Algorithm

    Science.gov (United States)

    Zembat, Ismail Özgür

    2015-01-01

    From a curricular standpoint, the traditional invert and multiply algorithm for division of fractions provides few affordances for linking to a rich understanding of fractions. On the other hand, an alternative algorithm, called the common denominator algorithm, has many such affordances. The current study serves as an argument for shifting…

  7. Support for Homosexuals' Civil Liberties: The Influence of Familial Gender Role Attitudes across Religious Denominations

    Science.gov (United States)

    Kenneavy, Kristin

    2012-01-01

    Religious denominations vary in both their approach to the roles that men and women play in familial contexts, as well as their approach to homosexuality. This research investigates whether gender attitudes, informed by religious tradition, predict a person's support for civil liberties extended to gays and lesbians. Using data from the 1996 and…

  8. The Professionalisation of Non-Denominational Religious Education in England: Politics, Organisation and Knowledge

    Science.gov (United States)

    Parker, Stephen G.; Freathy, Rob; Doney, Jonathan

    2016-01-01

    In response to contemporary concerns, and using neglected primary sources, this article explores the professionalisation of teachers of Religious Education (RI/RE) in non-denominational, state-maintained schools in England. It does so from the launch of "Religion in Education" (1934) and the Institute for Christian Education at Home and…

  9. Currency Denomination of Bank Loans : Evidence from Small Firms in Transition Countries

    NARCIS (Netherlands)

    Brown, M.; Ongena, S.; Yesin, P.

    2008-01-01

    We examine the firm-level and country-level determinants of the currency denomination of small business loans. We introduce an information asymmetry between banks and firms in a model that also features the trade-off between the cost of debt and firm-level distress costs. Banks in our model don’t

  10. And who is your neighbor? Explaining denominational differences in charitable giving and volunteering in the Netherlands

    NARCIS (Netherlands)

    Bekkers, René; Schuyt, Theo

    We study differences in contributions of time and money to churches and non-religious nonprofit organizations between members of different religious denominations in the Netherlands. We hypothesize that contributions to religious organizations are based on involvement in the religious community,

  11. The European Convention on Human Rights & Parental Rights in Relation to Denominational Schooling

    NARCIS (Netherlands)

    J.D. Temperman (Jeroen)

    2017-01-01

    textabstractWhereas the bulk of religious education cases concerns aspects of the public school framework and curriculum, this article explores Convention rights in the realm of denominational schooling. It is outlined that the jurisprudence of the Strasbourg Court generally strongly supports the

  12. Parental Rights in Relation to Denominational Schooling under the European Convention on Human Rights

    NARCIS (Netherlands)

    J.D. Temperman (Jeroen)

    2017-01-01

    textabstractWhereas the bulk of Article 2 Protocol I cases concerns aspects of the public-school framework and curriculum, this article explores Convention rights in the realm of denominational schooling. It is outlined that the jurisprudence of the Strasbourg Court generally strongly supports the

  13. Cost estimation for solid waste management in industrialising regions – Precedents, problems and prospects

    International Nuclear Information System (INIS)

    Parthan, Shantha R.; Milke, Mark W.; Wilson, David C.; Cocks, John H.

    2012-01-01

    Highlights:
    ► We review cost estimation approaches for solid waste management.
    ► Unit cost method and benchmarking techniques used in industrialising regions (IR).
    ► Variety in scope, quality and stakeholders makes cost estimation challenging in IR.
    ► Integrate waste flow and cost models using cost functions to improve cost planning.
    - Abstract: The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate costs of SWM can broadly be classified into three categories: the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approaches to estimate their SWM costs. Models for cost estimation, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges (or problems) one might face while attempting to do so are a lack of cost data, and a lack of quality for what data do exist. There are practical benefits to planners in IR where solid waste problems are critical and budgets are limited.

  14. Empirical Estimates in Optimization Problems: Survey with Special Regard to Heavy Tails and Dependent Data

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta

    2012-01-01

    Roč. 19, č. 30 (2012), s. 92-111 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150; GA ČR GAP402/10/1610 Institutional support: RVO:67985556 Keywords : Stochastic optimization * empirical estimates * thin and heavy tails * independent and weak dependent random samples Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/kankova-empirical estimates in optimization problems survey with special regard to heavy tails and dependent data.pdf

  15. The Expected Loss in the Discretization of Multistage Stochastic Programming Problems - Estimation and Convergence Rate

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2009-01-01

    Roč. 165, č. 1 (2009), s. 29-45 ISSN 0254-5330 R&D Projects: GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : multistage stochastic programming problems * approximation * discretization * Monte Carlo Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.961, year: 2009 http://library.utia.cas.cz/separaty/2008/E/smid-the expected loss in the discretization of multistage stochastic programming problems - estimation and convergence rate.pdf

  16. Estimation of photosynthesis in cyanobacteria by pulse-amplitude modulation chlorophyll fluorescence: problems and solutions.

    Science.gov (United States)

    Ogawa, Takako; Misumi, Masahiro; Sonoike, Kintake

    2017-09-01

    Cyanobacteria are photosynthetic prokaryotes and are widely used in photosynthesis research as model organisms. Partly due to their prokaryotic nature, however, estimation of photosynthesis by chlorophyll fluorescence measurements is sometimes problematic in cyanobacteria. For example, the plastoquinone pool is reduced in dark-acclimated samples of many cyanobacterial species, so the conventional protocol developed for land plants cannot be directly applied to cyanobacteria. Even for the estimation of the simplest chlorophyll fluorescence parameter, Fv/Fm, some additional protocol such as the addition of DCMU or illumination with weak blue light is necessary. In this review, these problems in the measurement of chlorophyll fluorescence in cyanobacteria are introduced, and solutions to those problems are given.
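    The parameter itself is a simple ratio of fluorescence levels; a one-line sketch with illustrative numbers (the low apparent value in cyanobacteria partly reflects phycobilisome fluorescence inflating the dark level F0):

```python
def fv_over_fm(f0, fm):
    """Maximum quantum yield of photosystem II: Fv/Fm = (Fm - F0) / Fm."""
    return (fm - f0) / fm

# Illustrative values only; Fm is the maximal fluorescence (e.g., with DCMU).
print(fv_over_fm(f0=0.35, fm=0.62))  # ~0.44, lower than the ~0.8 typical of plants
```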

  17. Verification of functional a posteriori error estimates for obstacle problem in 1D

    Czech Academy of Sciences Publication Activity Database

    Harasim, P.; Valdman, Jan

    2013-01-01

    Roč. 49, č. 5 (2013), s. 738-754 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2014/MTR/valdman-0424082.pdf

  18. Verification of functional a posteriori error estimates for obstacle problem in 2D

    Czech Academy of Sciences Publication Activity Database

    Harasim, P.; Valdman, Jan

    2014-01-01

    Roč. 50, č. 6 (2014), s. 978-1002 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * finite element method * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/MTR/valdman-0441661.pdf

  19. Variational Multiscale error estimator for anisotropic adaptive fluid mechanic simulations: application to convection-diffusion problems

    OpenAIRE

    Bazile , Alban; Hachem , Elie; Larroya-Huguet , Juan-Carlos; Mesri , Youssef

    2018-01-01

    In this work, we present a new a posteriori error estimator based on the Variational Multiscale method for anisotropic adaptive fluid mechanics problems. The general idea is to combine the large scale error based on the solved part of the solution with the sub-mesh scale error based on the unresolved part of the solution. We compute the latter with two different methods: one using the stabilizing parameters and the other using bubble functions. We propose two different...

  20. Robust Wavelet Estimation to Eliminate Simultaneously the Effects of Boundary Problems, Outliers, and Correlated Noise

    Directory of Open Access Journals (Sweden)

    Alsaidi M. Altaher

    2012-01-01

    Classical wavelet thresholding methods suffer from boundary problems caused by the application of the wavelet transformations to a finite signal. As a result, large bias at the edges and artificial wiggles occur when the classical boundary assumptions are not satisfied. Although polynomial wavelet regression and local polynomial wavelet regression effectively reduce the risk of this problem, the estimates from these two methods can be easily affected by the presence of correlated noise and outliers, giving inaccurate estimates. This paper introduces two robust methods in which the effects of boundary problems, outliers, and correlated noise are simultaneously taken into account. The proposed methods combine a thresholding estimator with either a local polynomial model or a polynomial model using the generalized least squares method instead of the ordinary one. A primary step that involves removing the outlying observations through a statistical function is considered as well. The practical performance of the proposed methods has been evaluated through simulation experiments and real data examples. The results are strong evidence that the proposed methods are extremely effective in correcting the boundary bias and eliminating the effects of outliers and correlated noise.

  1. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh

    2014-06-01

    We derive an easy-to-implement and efficient algorithm for solving multi-label image partitioning problems in the form of the problem addressed by Region Competition. These problems jointly determine a parameter for each of the regions in the partition. Given an estimate of the parameters, a fast approximate solution to the multi-label sub-problem is derived by a global update that uses smoothing and thresholding. The method is empirically validated to be robust to fine details of the image that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper! Experimental comparison to the state-of-the-art in multi-label solutions to Region Competition shows that our method achieves equal or better accuracy, with the main advantage being speed and ease of implementation.
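    A minimal sketch of one smoothing-then-thresholding global update for a piecewise-constant (Chan-Vese-style) multi-label model, in the spirit of the method described (Python rather than the paper's Matlab, and not the authors' exact scheme):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multilabel_update(image, labels, n_labels, sigma=2.0):
    """One global update: re-estimate region means, smooth each label's
    data-fidelity score (smoothing ~ regularization), then reassign every
    pixel to its best label (the thresholding step)."""
    scores = np.empty((n_labels,) + image.shape)
    for k in range(n_labels):
        mask = labels == k
        mean_k = image[mask].mean() if mask.any() else image.mean()
        scores[k] = gaussian_filter((image - mean_k) ** 2, sigma)
    return np.argmin(scores, axis=0)

rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img += 0.3 * rng.normal(size=img.shape)
labels = rng.integers(0, 2, size=img.shape)   # random initial partition
for _ in range(5):                            # iterate to a stable two-region labeling
    labels = multilabel_update(img, labels, n_labels=2)
```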

  2. Ensemble Kalman Filtering with Residual Nudging: An Extension to State Estimation Problems with Nonlinear Observation Operators

    KAUST Repository

    Luo, Xiaodong

    2014-10-01

    The ensemble Kalman filter (EnKF) is an efficient algorithm for many data assimilation problems. In certain circumstances, however, divergence of the EnKF might be spotted. In previous studies, the authors proposed an observation-space-based strategy, called residual nudging, to improve the stability of the EnKF when dealing with linear observation operators. The main idea behind residual nudging is to monitor and, if necessary, adjust the distances (misfits) between the real observations and the simulated ones of the state estimates, in the hope that by doing so one may be able to obtain better estimation accuracy. In the present study, residual nudging is extended and modified in order to handle nonlinear observation operators. Such extension and modification result in an iterative filtering framework that, under suitable conditions, is able to achieve the objective of residual nudging for data assimilation problems with nonlinear observation operators. The 40-dimensional Lorenz-96 model is used to illustrate the performance of the iterative filter. Numerical results show that, while a normal EnKF may diverge with nonlinear observation operators, the proposed iterative filter remains stable and leads to reasonable estimation accuracy under various experimental settings.
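    A minimal sketch of the observation-space check behind residual nudging, for an illustrative linear, invertible observation operator: if the analysis misfit exceeds a chosen multiple of the noise level, the estimate is blended toward an observation-consistent state (the β threshold and the blending rule here are simplifications of the paper's scheme):

```python
import numpy as np

def residual_nudge(x, y, H, R, beta=2.0):
    """If ||y - H x|| exceeds beta * sqrt(trace(R)), blend x toward a
    least-squares fit of the observations so the misfit shrinks to the
    threshold (illustrative linear case)."""
    resid = y - H @ x
    threshold = beta * np.sqrt(np.trace(R))
    norm = np.linalg.norm(resid)
    if norm <= threshold:
        return x
    x_obs = np.linalg.lstsq(H, y, rcond=None)[0]   # observation-consistent state
    w = threshold / norm
    return w * x + (1.0 - w) * x_obs

H = np.eye(2)
print(residual_nudge(np.array([5.0, 5.0]), np.array([0.0, 0.0]), H, np.eye(2)))
```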

  3. Multimodal Analysis of Estimated and Observed Social Competence in Preschoolers With/Without Behavior Problems

    Directory of Open Access Journals (Sweden)

    Talita Pereira Dias

    2013-05-01

    Social skills compete with behavior problems, and the combination of these aspects may cause differences in social competence. This study was aimed at assessing the differences and similarities in the social competence of 26 preschoolers resulting from: (1) the groups to which they belonged, one with social skills and three with behavior problems (internalizing, externalizing and mixed); (2) the types of assessment, considering the estimates of mothers and teachers as well as direct observation in a structured situation; and (3) structured situations as demands for five categories of social skills. Children's performance in each situation was assessed by judges and estimated by mothers and teachers. There was a similarity in the social competence estimated by mothers, teachers and in the performance observed. Only the teachers distinguished the groups (higher social competence in the group with social skills and lower in the internalizing and mixed groups). Assertiveness demands differentiated the groups. The methodological aspects were discussed, as well as the clinical and educational potential of the structured situations to promote social skills.

  4. FUNDAMENTAL MATRIX OF LINEAR CONTINUOUS SYSTEM IN THE PROBLEM OF ESTIMATING ITS TRANSPORT DELAY

    Directory of Open Access Journals (Sweden)

    N. A. Dudarenko

    2014-09-01

    The paper deals with the problem of quantitative estimation of the transport delay of linear continuous systems. The main result is obtained by means of the fundamental matrix of solutions of linear differential equations specified in normal Cauchy form, for the cases of SISO and MIMO systems. The fundamental matrix has a dual property: the weight function of the system can be formed as a free motion of the system, generated by a vector of initial conditions that coincides with the input matrix of the system being studied. Thus, using the properties of the fundamental matrix makes it possible to estimate the transport delay of a linear continuous system without a hardware differentiation procedure and without forming an exogenous Dirac delta function. The paper is illustrated by examples. The obtained results make it possible to model pure delay links using a consecutive chain of first-order aperiodic links with equal time constants. Modeling results confirm the correctness of the computations. Knowledge of the transport delay can be used when configuring multi-component technological complexes and in the diagnosis of their possible functional degeneration.

  5. Price Setting Transactions and the Role of Denominating Currency in FX Markets

    OpenAIRE

    Friberg, Richard; Wilander, Fredrik

    2007-01-01

    This report, commissioned by Sveriges Riksbank, examines the role of currency denomination in international trade transactions. It is divided into two parts. The first part consists of a survey of the price setting and payment practices of a large sample of Swedish exporting firms. The second part analyzes payments data from the Swedish settlement reports from 1999-2002. We examine whether invoicing patterns of Swedish and European companies changed following the creation of the EMU and how the...

  6. State and parameter estimation in nonlinear systems as an optimal tracking problem

    International Nuclear Information System (INIS)

    Creveling, Daniel R.; Gill, Philip E.; Abarbanel, Henry D.I.

    2008-01-01

    In verifying and validating models of nonlinear processes it is important to incorporate information from observations in an efficient manner. Using the idea of synchronization of nonlinear dynamical systems, we present a framework for connecting a data signal with a model in a way that minimizes the required coupling yet allows the estimation of unknown parameters in the model. The need to evaluate unknown parameters in models of nonlinear physical, biophysical, and engineering systems occurs throughout the development of phenomenological or reduced models of dynamics. Our approach builds on existing work that uses synchronization as a tool for parameter estimation. We address some of the critical issues in that work and provide a practical framework for finding an accurate solution. In particular, we show the equivalence of this problem to that of tracking within an optimal control framework. This equivalence allows the application of powerful numerical methods that provide robust practical tools for model development and validation.

  7. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
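    The largest order value rule mentioned in the abstract maps a continuous vector to a job permutation by ranking its components; a minimal sketch:

```python
import numpy as np

def largest_order_value(x):
    """Largest order value (LOV) rule: the job with the largest component
    is scheduled first, the second largest second, and so on."""
    return np.argsort(-np.asarray(x))

print(largest_order_value([0.3, 1.7, -0.2, 0.9]))  # -> [1 3 0 2]
```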

  8. Comparison Between Two Methods for Estimating the Vertical Scale of Fluctuation for Modeling Random Geotechnical Problems

    Science.gov (United States)

    Pieczyńska-Kozłowska, Joanna M.

    2015-12-01

    The design process in geotechnical engineering requires the most accurate mapping of soil. The difficulty lies in the spatial variability of soil parameters, which has been investigated by many researchers for many years. This study analyses the soil-modeling problem by suggesting two effective methods of acquiring information on soil variability from cone penetration tests (CPT). The first method has been used in geotechnical engineering, but the second one has not been associated with geotechnics so far. Both methods are applied to a case study in which the parameters of variability are estimated. Knowledge of the variability of parameters ultimately allows more effective estimation of, for example, the probability of failure of bearing capacity.

  9. Estimation of the Thermophysical Properties of the Soil together with Sensors' Positions by Inverse Problem

    OpenAIRE

    Mansour , Salwa; Canot , Edouard; Delannay , Renaud; March , Ramiro J.; Cordero , José Agustin; Carlos Ferreri , Juan

    2015-01-01

    The report is divided into two main parts. In the first part, we introduce a numerical strategy, in both 1D and 3D axisymmetric coordinate systems, to estimate the thermophysical properties of the soil (volumetric heat capacity (ρC)s, thermal conductivity λs and porosity φ) of a saturated porous medium where a phase change problem (liquid/vapor) appears due to intense heating from above. Usually φ is the true porosity; however, when the soil is not saturated (which should concern most...

  10. An Alternative Route to Teaching Fraction Division: Abstraction of Common Denominator Algorithm

    Directory of Open Access Journals (Sweden)

    İsmail Özgür ZEMBAT

    2015-06-01

    From a curricular standpoint, the traditional invert and multiply algorithm for division of fractions provides few affordances for linking to a rich understanding of fractions. On the other hand, an alternative algorithm, called the common denominator algorithm, has many such affordances. The current study serves as an argument for shifting the curriculum for fraction division from the invert and multiply algorithm as a basis to the common denominator algorithm as a basis. This was accomplished through an analysis of the learning of two prospective elementary teachers, offered as an illustration of how to realize those conceptual affordances. In doing so, the article proposes an instructional sequence and details it by referring to both the mathematical and pedagogical advantages and the disadvantages. As a result, this algorithm has a conceptual basis depending on the basic operations of partitioning, unitizing, and counting, which makes it accessible to learners. Also, when participants are encouraged to construct this algorithm based on their work with diagrams, the common denominator algorithm formalizes the work that they do with diagrams.

  11. Estimating incidence of problem drug use using the Horvitz-Thompson estimator - A new approach applied to people who inject drugs in Oslo 1985-2008.

    Science.gov (United States)

    Amundsen, Ellen J; Bretteville-Jensen, Anne L; Kraus, Ludwig

    2016-01-01

    The trend in the number of new problem drug users per year (incidence) is the most important measure for studying the diffusion of problem drug use. Due to sparse data sources and complicated statistical models, estimation of the incidence of problem drug use is challenging. The aim of this study is to widen the palette of available methods and data types for estimating the incidence of problem drug use over time, and for identifying the trends. This study presents a new method of incidence estimation, applied to people who inject drugs (PWID) in Oslo. The method took into account the transition between different phases of drug use progression - active use, temporary cessation, and permanent cessation. The Horvitz-Thompson estimator was applied. Data included 16 cross-sectional samples of problem drug users who reported their onset of injecting drug use. We explored variation in results for selected probable scenarios of parameter variation for disease progression, as well as the stability of the results based on fewer years of cross-sectional samples. The method yielded incidence estimates of problem drug use over time. When applied to people in Oslo who inject drugs, we found a significant reduction in incidence of 63% from 1985 to 2008. This downward trend was also present when the estimates were based on fewer surveys (five) and in the results of sensitivity analysis for likely scenarios of disease progression. This new method, which incorporates temporarily inactive problem drug users, may become a useful tool for estimating the incidence of problem drug use over time. The method may be less data intensive than other methods based on first entry to treatment and may be generalized to other groups of substance users. Further studies on drug use progression would improve the validity of the results. Copyright © 2015 Elsevier B.V. All rights reserved.
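
    For readers unfamiliar with the estimator, here is a minimal sketch of the Horvitz-Thompson idea: each observed unit is weighted by the inverse of its inclusion probability. In the study those probabilities come from the modeled progression through active and temporarily inactive phases; the values below are placeholders.

    ```python
    import numpy as np

    def horvitz_thompson_total(values, inclusion_probs):
        """Horvitz-Thompson estimator of a population total: weight each
        sampled unit by the inverse of its probability of being observed."""
        values = np.asarray(values, dtype=float)
        pi = np.asarray(inclusion_probs, dtype=float)
        return float(np.sum(values / pi))

    # Toy example: four sampled injectors with onset in a given year, each
    # with an assumed probability of appearing in any cross-sectional sample.
    print(horvitz_thompson_total([1, 1, 1, 1], [0.2, 0.5, 0.25, 0.1]))  # 21.0
    ```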

  12. No common denominator: a review of outcome measures in IVF RCTs.

    Science.gov (United States)

    Wilkinson, Jack; Roberts, Stephen A; Showell, Marian; Brison, Daniel R; Vail, Andy

    2016-12-01

    Which outcome measures are reported in RCTs for IVF? Many combinations of numerator and denominator are in use, and are often employed in a manner that compromises the validity of the study. The choice of numerator and denominator governs the meaning, relevance and statistical integrity of a study's results. RCTs only provide reliable evidence when outcomes are assessed in the cohort of randomised participants, rather than in the subgroup of patients who completed treatment. Review of outcome measures reported in 142 IVF RCTs published in 2013 or 2014. Trials were identified by searching the Cochrane Gynaecology and Fertility Specialised Register. English-language publications of RCTs reporting clinical or preclinical outcomes in peer-reviewed journals in the period 1 January 2013 to 31 December 2014 were eligible. Reported numerators and denominators were extracted. Where they were reported, we checked to see if live birth rates were calculated correctly using the entire randomised cohort or a later denominator. Over 800 combinations of numerator and denominator were identified (613 in no more than one study). No single outcome measure appeared in the majority of trials. Only 22 (43%) studies reporting live birth presented a calculation including all randomised participants or only excluding protocol violators. A variety of definitions were used for key clinical numerators: for example, a consensus regarding what should constitute an ongoing pregnancy does not appear to exist at present. Several of the included articles may have been secondary publications. Our categorisation scheme was essentially arbitrary, so the frequencies we present should be interpreted with this in mind. The analysis of live birth denominators was post hoc. There is massive diversity in numerator and denominator selection in IVF trials due to its multistage nature, and this causes methodological frailty in the evidence base. The twin spectres of outcome reporting bias and analysis of non

  13. Closed-form kinetic parameter estimation solution to the truncated data problem

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Kadrmas, Dan J; Gullberg, Grant T

    2010-01-01

    In a dedicated cardiac single photon emission computed tomography (SPECT) system, the detectors are focused on the heart and the background is truncated in the projections. Reconstruction using truncated data results in biased images, leading to inaccurate kinetic parameter estimates. This paper has developed a closed-form kinetic parameter estimation solution to the dynamic emission imaging problem. This solution is insensitive to the bias in the reconstructed images that is caused by the projection data truncation. This paper introduces two new ideas: (1) it includes background bias as an additional parameter to estimate, and (2) it presents a closed-form solution for compartment models. The method is based on the following two assumptions: (i) the amount of the bias is directly proportional to the truncated activities in the projection data, and (ii) the background concentration is directly proportional to the concentration in the myocardium. In other words, the method assumes that the image slice contains only the heart and the background, without other organs, that the heart is not truncated, and that the background radioactivity is directly proportional to the radioactivity in the blood pool. As long as the background activity can be modeled, the proposed method is applicable regardless of the number of compartments in the model. For simplicity, the proposed method is presented and verified using a single compartment model with computer simulations using both noiseless and noisy projections.
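
    The following sketch illustrates idea (1) only, in a generic linear least-squares setting: the truncation-induced bias enters as one extra unknown whose regressor is the truncated activity. All basis functions and numbers are illustrative stand-ins, not the paper's compartment-model solution.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    # Toy "model" terms standing in for compartment-model basis functions.
    A = np.column_stack([np.exp(-0.1 * np.arange(n)), np.ones(n)])
    t_act = 0.5 + 0.01 * np.arange(n)      # truncated activity in the projections
    design = np.column_stack([A, t_act])   # bias assumed proportional to t_act
    k_true = np.array([2.0, 0.3, 1.5])     # last entry is the bias coefficient
    y = design @ k_true + 0.01 * rng.standard_normal(n)

    # Kinetic parameters and the bias term are estimated jointly.
    k_hat, *_ = np.linalg.lstsq(design, y, rcond=None)
    print(k_hat)
    ```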

  14. On Several Fundamental Problems of Optimization, Estimation, and Scheduling in Wireless Communications

    Science.gov (United States)

    Gao, Qian

    compared with the conventional decoupled system with the same spectrum efficiency to demonstrate the power efficiency. Crucial lighting requirements are included as optimization constraints. To control non-linear distortion, the optical peak-to-average-power ratio (PAPR) of LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve a lower BER than counterparts applying zero-forcing (ZF) or linear minimum-mean-squared-error (LMMSE) based post-equalizers. Besides, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into a problem of two-phase channel estimation in a relayed wireless network. The channel estimates in every phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 could affect the estimate of the source-to-relay (StR) channel in phase 2, making it erroneous. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that the estimation errors are balanced. Analysis shows how the ABMSE of the StD channel estimation varies with the lengths of the relay training and source training slots, the relay amplification gain, and the channel prior information, respectively. The last part deals with a transmission scheduling problem in an uplink multiple-input-multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme and pseudo-random codes are employed for different users. We consider a heavy traffic scenario, in which each user always has packets to transmit in the scheduled time slots. If the relay is scheduled for transmission together with users, then it operates in a full

  15. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  16. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  17. Electrochemical estimation on the applicability of nickel plating to EAC problems in CRDM nozzle

    International Nuclear Information System (INIS)

    Oh, Si Hyoung; Hwang, Il Soon

    2002-01-01

    The applicability of nickel plating to EAC problems in the CRDM nozzle was assessed from an electrochemical perspective. The passive film growth law for nickel was improved to include the oxide dissolution rate, extending the conventional point defect model to explain the retarded passivation of plated nickel in the PWR primary-side water environment, and the model was compared with experimental data. According to this model, oxide growth and the passivation current are closely related to the oxide dissolution rate, because a steady state is reached only when the oxide formation and destruction rates are equal; from this relation, the oxide dissolution rate constant k_s was quantitatively obtained using the experimental data. The commonly observed current-time behavior for passive film formation, i ∝ t^m with m different from 1 or 0.5, can be accounted for by enhanced oxide dissolution in the high temperature aqueous environment

  18. Analysis of parameter estimation and optimization application of ant colony algorithm in vehicle routing problem

    Science.gov (United States)

    Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun

    2018-03-01

    Ant Colony Optimization (ACO) is the most widely used artificial intelligence algorithm at present. This study introduces the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP) and designs a vehicle routing optimization model based on ACO. A vehicle routing optimization simulation system was then developed in C++, and sensitivity analyses, estimations, and improvements of the three key parameters of ACO were carried out. The results indicate that the ACO algorithm designed in this paper can efficiently solve rational planning and optimization of the VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and has good robustness.
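
    To make concrete where the key parameters typically enter an ACO implementation, here is a minimal sketch of the arc-selection rule (in Python rather than the study's C++; all values are illustrative): the pheromone exponent alpha and the heuristic exponent beta are the kind of parameters whose sensitivity such studies analyse.

    ```python
    import numpy as np

    def transition_probs(tau, eta, alpha, beta, feasible):
        """ACO arc-selection rule: p_j is proportional to
        tau_j**alpha * eta_j**beta over the feasible arcs j."""
        w = (tau ** alpha) * (eta ** beta)
        w = np.where(feasible, w, 0.0)   # exclude arcs that violate constraints
        return w / w.sum()

    tau = np.array([1.0, 0.5, 2.0])      # pheromone levels on candidate arcs
    eta = np.array([0.2, 1.0, 0.1])      # heuristic desirability, e.g. 1/distance
    print(transition_probs(tau, eta, alpha=1.0, beta=2.0,
                           feasible=np.array([True, True, True])))
    ```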

  19. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    Science.gov (United States)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only

  1. An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems

    International Nuclear Information System (INIS)

    Fazli, Roohallah; Nakhkash, Mansor

    2012-01-01

    This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. This method is motivated from the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations and in the next stage, it employs an analytical formula that works as a spatial filter to determine which target locations are associated to the actual ones. The ability of the method is examined for both the Born and multiple scattering cases and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both the coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using the experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions. (paper)
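
    A generic sketch of the first stage (the MUSIC indicator computed from the singular value decomposition of the multistatic response matrix) is given below; the paper's second-stage analytical filter for deciding which peaks are actual targets is not reproduced, and all inputs are assumed supplied by the caller.

    ```python
    import numpy as np

    def music_indicator(K, test_vectors, n_signal):
        """First-stage MUSIC step: the indicator peaks at grid locations
        whose test (steering) vector is nearly orthogonal to the noise
        subspace of the multistatic response matrix K."""
        U, _, _ = np.linalg.svd(K)
        noise = U[:, n_signal:]                  # noise-subspace basis
        out = []
        for g in test_vectors:                   # one test vector per location
            g = g / np.linalg.norm(g)
            out.append(1.0 / np.linalg.norm(noise.conj().T @ g) ** 2)
        return np.array(out)
    ```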

  2. On parameterization of the inverse problem for estimating aquifer properties using tracer data

    International Nuclear Information System (INIS)

    Kowalsky, M. B.; Finsterle, Stefan A.; Williams, Kenneth H.; Murray, Christopher J.; Commer, Michael; Newcomer, Darrell R.; Englert, Andreas L.; Steefel, Carl I.; Hubbard, Susan

    2012-01-01

    We consider a field-scale tracer experiment conducted in 2007 in a shallow uranium-contaminated aquifer at Rifle, Colorado. In developing a reliable approach for inferring hydrological properties at the site through inverse modeling of the tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance. We present an approach for hydrological inversion of the tracer data and explore, using a 2D synthetic example at first, how parameterization affects the solution, and how additional characterization data could be incorporated to reduce uncertainty. Specifically, we examine sensitivity of the results to the configuration of pilot points used in a geostatistical parameterization, and to the sampling frequency and measurement error of the concentration data. A reliable solution of the inverse problem is found when the pilot point configuration is carefully implemented. In addition, we examine the use of a zonation parameterization, in which the geometry of the geological facies is known (e.g., from geophysical data or core data), to reduce the non-uniqueness of the solution and the number of unknown parameters to be estimated. When zonation information is only available for a limited region, special treatment in the remainder of the model is necessary, such as using a geostatistical parameterization. Finally, inversion of the actual field data is performed using 2D and 3D models, and results are compared with slug test data.

  3. The solar neutrino problem

    Indian Academy of Sciences (India)

    to a research problem that now commands the attention of a large number of physicists ... the first comparison between theory and experiment was made. .... prior probability assigned to hypothesis A. The integration in the denominator is .... The key feature of figure 5, which is well known, is the marked reduction in the Be.

  4. Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators

    Energy Technology Data Exchange (ETDEWEB)

    Helmich-Paris, Benjamin, E-mail: b.helmichparis@vu.nl; Visscher, Lucas, E-mail: l.visscher@vu.nl

    2016-09-15

    We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum point search is one of the two key steps for finding the minimax approximation. Strategies for a sufficiently robust algorithm that avoids pre-tabulation of initial guesses have not been discussed so far. We compare our non-heuristic approach with a bracketing and bisection algorithm and demonstrate that altogether three times fewer function evaluations are required when applying it to typical non-relativistic and relativistic quantum chemical systems.
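
    To make the object of the search concrete: the minimax fit approximates 1/x on an interval by a short sum of exponentials, and the algorithm must locate all extrema of the error function. The sketch below finds extrema by a dense scan for sign changes of the slope, i.e. the kind of bracketing approach the paper's non-heuristic algorithm outperforms; the weights and exponents are illustrative, not converged minimax values.

    ```python
    import numpy as np

    # Laplace-transform quadrature: 1/x ≈ sum_k w[k] * exp(-a[k] * x).
    w = np.array([0.6, 0.4])   # illustrative weights
    a = np.array([0.9, 0.1])   # illustrative exponents

    def eta(x):
        """Error distribution function of the exponential fit."""
        return 1.0 / x - np.sum(w[:, None] * np.exp(-np.outer(a, x)), axis=0)

    x = np.linspace(1.0, 10.0, 20001)
    e = eta(x)
    slope = np.diff(e)
    # Interior extrema: points where the finite-difference slope changes sign.
    extrema = x[1:-1][np.sign(slope[:-1]) != np.sign(slope[1:])]
    print(extrema)
    ```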

  5. Denominative Variation in the Terminology of Fauna and Flora: Cultural and Linguistic (A)Symmetries

    Directory of Open Access Journals (Sweden)

    Sabrina de Cássia Martins

    2018-05-01

    Full Text Available The present work approaches denominative variation in Terminology. Its object of study is the specialized lexical units in the Portuguese language formed by at least one of the following color names: black, white, yellow, blue, orange, gray, green, brown, red, pink, violet, purple and indigo. The comparative analysis of this vocabulary among the Portuguese, English and Italian languages was conducted considering two sub-areas of Biology: Botany, specifically Angiosperms (Monocotyledons and Eudicotyledons), and Zoology, exclusively Vertebrates (fish, amphibians, reptiles, birds and mammals). The following pages describe how common names are created in these three languages.

  6. A Curious Problem with Using the Colour Checker Dataset for Illuminant Estimation

    OpenAIRE

    Finlayson, Graham; Hemrit, Ghalia; Gijsenij, Arjan; Gehler, Peter

    2017-01-01

    In illuminant estimation, we attempt to estimate the RGB of the light. We then use this estimate on an image to correct for the light's colour bias. Illuminant estimation is an essential component of all camera reproduction pipelines. How well an illuminant estimation algorithm works is determined by how well it predicts the ground truth illuminant colour. Typically, the ground truth is the RGB of a white surface placed in a scene. Over a large set of images an estimation error is calculated ...

  7. Pollution Problem in River Kabul: Accumulation Estimates of Heavy Metals in Native Fish Species.

    Science.gov (United States)

    Ahmad, Habib; Yousafzai, Ali Muhammad; Siraj, Muhammad; Ahmad, Rashid; Ahmad, Israr; Nadeem, Muhammad Shahid; Ahmad, Waqar; Akbar, Nazia; Muhammad, Khushi

    2015-01-01

    The contamination of aquatic systems with heavy metals is affecting fish populations and hence results in a decline of productivity. The River Kabul is a transcountry river originating in Paghman province in Afghanistan and entering the Khyber Pakhtunkhwa province of Pakistan; it is a major source of irrigation, and more than 54 fish species have been reported in the river. The present study aimed to estimate the heavy metal load in fish living in the River Kabul. Heavy metals, including chromium, nickel, copper, zinc, cadmium, and lead, were determined with an atomic absorption spectrophotometer after tissue digestion, following standard procedures. Concentrations of these metals were recorded in the muscles and liver of five native fish species, namely, Wallago attu, Aorichthys seenghala, Cyprinus carpio, Labeo dyocheilus, and Ompok bimaculatus. The concentrations of chromium, nickel, copper, zinc, and lead were higher in both of the tissues, whereas the concentration of cadmium was comparatively low. However, the metal concentrations exceeded the RDA (Recommended Dietary Allowance of the USA) limits. Hence, continuous fish consumption may create health problems for the consumers. The results of the present study are alarming and suggest implementing environmental laws and initiating a biomonitoring program for the river.

  8. Estimation of distribution algorithm with path relinking for the blocking flow-shop scheduling problem

    Science.gov (United States)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2018-05-01

    This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
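
    Two of the ingredients named above are compact enough to sketch: the standard departure-time recurrence for the blocking makespan and NEH-style insertion used for seeding. The processing times are illustrative, and the probabilistic model and path relinking of the P-EDA are not reproduced.

    ```python
    import numpy as np

    def makespan_blocking(perm, p):
        """Blocking flow-shop makespan. p[j, k] is the time of job j on
        machine k; d[k] holds the departure time of the previous job from
        machine k (a job may leave machine k only when machine k+1 is free)."""
        m = p.shape[1]
        d = np.zeros(m + 1)
        for j in perm:
            new = np.zeros(m + 1)
            new[0] = d[1]                  # machine 1 free when predecessor leaves it
            for k in range(1, m):
                new[k] = max(new[k - 1] + p[j, k - 1], d[k + 1])  # may be blocked
            new[m] = new[m - 1] + p[j, m - 1]   # the last machine never blocks
            d = new
        return d[m]

    def neh(p):
        """NEH seeding: sort jobs by decreasing total time, insert each at
        the position that minimizes the partial blocking makespan."""
        order = np.argsort(-p.sum(axis=1)).tolist()
        seq = [order[0]]
        for j in order[1:]:
            seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                      key=lambda s: makespan_blocking(s, p))
        return seq

    p = np.array([[3, 2, 4], [1, 4, 2], [2, 3, 3], [4, 1, 2]])  # 4 jobs, 3 machines
    seq = neh(p)
    print(seq, makespan_blocking(seq, p))
    ```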

  9. Between generational denominations: what the academic narratives teach us about digital children and youth

    Directory of Open Access Journals (Sweden)

    Sandro Faccin Bortolazzo

    2017-06-01

    Full Text Available Starting from the emergence of many generational denominations and the intense relationship of children and youth with digital artifacts (tablets, smartphones, among others), this article - situated within the theoretical framework of Cultural Studies in Education - aims to investigate in what context and under which conditions the production of digital children and young people has become possible. The study makes three movements: an overview of the generation concept; a mapping of academic narratives; and an analysis of how these narratives are implicated in producing a type of generation and a "digital" education. The theoretical framework draws on authors such as Feixa and Leccardi, Tapscott, Prensky, Carr and Buckingham. The narratives also point out the benefits and dangers of technological immersion, which has underpinned calls for the use of technological devices in school spaces.

  10. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    Science.gov (United States)

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization approach, with a relative mean error of 4% on cost function evaluation.

  11. Caring for patients of Islamic denomination: Critical care nurses' experiences in Saudi Arabia.

    Science.gov (United States)

    Halligan, Phil

    2006-12-01

    To describe critical care nurses' experiences in caring for patients of Muslim denomination in Saudi Arabia. Caring is known to be the essence of nursing, but many health-care settings have become more culturally diverse. Caring has been examined mainly in the context of Western cultures. Muslims form one of the largest ethnic minority communities in Britain, but to date, empirical studies relating to caring from an Islamic perspective are not well documented. Research conducted within the home of Islam would provide essential truths about the reality of caring for Muslim patients. Design: phenomenological descriptive. Methods: six critical care nurses from a hospital in Saudi Arabia were interviewed. The narratives were analysed using Colaizzi's framework. The meaning of the nurses' experiences emerged as three themes: family and kinship ties, cultural and religious influences, and the nurse-patient relationship. The results indicated the importance of the role of the family and religion in providing care. In the process of caring, the participants felt stressed and frustrated, and they all experienced emotional labour. Communicating with the patients and the families was a constant battle, and this acted as a further stressor in meeting the needs of their patients. The concept of the family and the importance and meaning of religion and culture were central to the provision of caring. The beliefs and practices of patients who follow Islam, as perceived by expatriate nurses, may affect the patient's health care in ways that are not apparent to many health-care professionals and policy makers internationally. Readers should be prompted to reflect on their clinical practice and to understand the impact of religious and cultural differences in their encounters with patients of Islamic denomination. Policy and all actions, decisions and judgments should be culturally derived.

  12. Estimating Rates of Psychosocial Problems in Urban and Poor Children with Sickle Cell Anemia.

    Science.gov (United States)

    Barbarin, Oscar A.; And Others

    1994-01-01

    Examined adjustment problems for children and adolescents with sickle cell anemia (SCA). Parents provided information on the social, emotional, academic, and family adjustment of 327 children with SCA. Over 25% of the children had emotional adjustment problems in the form of internalizing symptoms (anxiety and depression); at least 20% had problems related to…

  13. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem

    OpenAIRE

    Muller, Antoine; Pontonnier, Charles; Dumont, Georges

    2018-01-01

    International audience; The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists in interpolating a first estimation in a database generated offline thanks to a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions – two polynomial criteria and a min/max criterion – were tested on a planar musculoskeletal model. The MusIC method provides a computation frequenc...

  14. Explaining behavior change after genetic testing: the problem of collinearity between test results and risk estimates.

    Science.gov (United States)

    Fanshawe, Thomas R; Prevost, A Toby; Roberts, J Scott; Green, Robert C; Armstrong, David; Marteau, Theresa M

    2008-09-01

    This paper explores whether and how the behavioral impact of genotype disclosure can be disentangled from the impact of numerical risk estimates generated by genetic tests. Secondary data analyses are presented from a randomized controlled trial of 162 first-degree relatives of Alzheimer's disease (AD) patients. Each participant received a lifetime risk estimate of AD. Control group estimates were based on age, gender, family history, and assumed epsilon4-negative apolipoprotein E (APOE) genotype; intervention group estimates were based upon the first three variables plus true APOE genotype, which was also disclosed. AD-specific self-reported behavior change (diet, exercise, and medication use) was assessed at 12 months. Behavior change was significantly more likely with increasing risk estimates, and also more likely, but not significantly so, in epsilon4-positive intervention group participants (53% changed behavior) than in control group participants (31%). Intervention group participants receiving epsilon4-negative genotype feedback (24% changed behavior) and control group participants had similar rates of behavior change and risk estimates, the latter allowing assessment of the independent effects of genotype disclosure. However, collinearity between risk estimates and epsilon4-positive genotypes, which engender high-risk estimates, prevented assessment of the independent effect of the disclosure of an epsilon4 genotype. Novel study designs are proposed to determine whether genotype disclosure has an impact upon behavior beyond that of numerical risk estimates.

  15. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    Science.gov (United States)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), of which the coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace the hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure
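
    A toy sketch of implicit sampling with a linear map, under the assumption that a MAP estimate and a Hessian approximation of the negative log-posterior are available: samples are drawn from a Gaussian centred at the MAP and reweighted, instead of being explored by random-walk MCMC. The posterior below is synthetic, not the TOUGH2 model of the abstract.

    ```python
    import numpy as np

    def F(x):
        """Toy negative log-posterior (stand-in for the misfit functional)."""
        return 0.5 * np.sum(x ** 2) + 0.1 * np.sum(x ** 4)

    mu = np.zeros(2)                    # MAP point (known here by construction)
    H = np.eye(2)                       # Hessian of F at the MAP (toy value)
    L = np.linalg.cholesky(np.linalg.inv(H))

    rng = np.random.default_rng(1)
    xi = rng.standard_normal((1000, 2))
    x = mu + xi @ L.T                   # linear map: reference Gaussian -> samples
    # Importance weights correcting the Gaussian proposal to exp(-F).
    logw = F(mu) + 0.5 * np.sum(xi ** 2, axis=1) - np.array([F(s) for s in x])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    print((w[:, None] * x).sum(axis=0))  # self-normalized posterior-mean estimate
    ```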

  16. Lagged life cycle structures for food products: Their role in global marketing, their determinants and some problems in their estimation

    DEFF Research Database (Denmark)

    Baadsgaard, Allan; Gede, Mads Peter; Grunert, Klaus G.

    cycles for different product categories may be lagged (type II lag) because changes in economic and other factors will result in demands for different products. Identifying lagged life cycle structures is of major importance in global marketing of food products. The problems in arriving at such estimates...

  17. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.; Claudel, Christian G.

    2012-01-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill Whitham Richards equation. We first present an equivalent formulation of the problem using an Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time given incomplete and inaccurate traffic data as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during Mobile Century experiment. © 2012 IEEE.

  18. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.

    2012-09-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill Whitham Richards equation. We first present an equivalent formulation of the problem using an Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time given incomplete and inaccurate traffic data as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during Mobile Century experiment. © 2012 IEEE.

  19. FAST LABEL: Easy and efficient solution of joint multi-label and estimation problems

    KAUST Repository

    Sundaramoorthi, Ganesh; Hong, Byungwoo

    2014-01-01

    that plague local solutions. Further, in comparison to global methods for the multi-label problem, the method is more efficient and it is easy for a non-specialist to implement. We give sample Matlab code for the multi-label Chan-Vese problem in this paper

  20. Recognition of Action as a Bayesian Parameter Estimation Problem over Time

    DEFF Research Database (Denmark)

    Krüger, Volker

    2007-01-01

    In this paper we will discuss two problems related to action recognition: The first problem is the one of identifying in a surveillance scenario whether a person is walking or running and in what rough direction. The second problem is concerned with the recovery of action primitives from observed...... complex actions. Both problems will be discussed within a statistical framework. Bayesian propagation over time offers a framework to treat likelihood observations at each time step and the dynamics between the time steps in a unified manner. The first problem will be approached as a pattern recognition...... of the Bayesian framework for action recognition and round up our discussion....

  1. 29 CFR 4211.4 - Contributions for purposes of the numerator and denominator of the allocation fractions.

    Science.gov (United States)

    2010-07-01

    ... of the allocation fractions. 4211.4 Section 4211.4 Labor Regulations Relating to Labor (Continued... denominator of the allocation fractions. Each of the allocation fractions used in the presumptive, modified... five-year period. (a) The numerator of the allocation fraction, with respect to a withdrawing employer...

  2. The Impact of Denominational Affiliation on Organizational Sense of Belonging and Commitment of Adjunct Faculty at Bible Colleges and Universities

    Science.gov (United States)

    Pilieci, Kimberly M.

    2016-01-01

    The majority of faculty in higher education, including secular and biblical institutions, are adjunct faculty. The literature suggests that adjunct faculty are less effective and satisfied, and have weaker organizational sense of belonging (OSB) and affective organizational commitment (AOC). Denominational affiliation (DA) and religious commitment…

  3. "Faith of Our Fathers" -- Lesbian, Gay and Bisexual Teachers' Attitudes towards the Teaching of Religion in Irish Denominational Primary Schools

    Science.gov (United States)

    Fahie, Declan

    2017-01-01

    Owing to a variety of complex historical and socio-cultural factors, the Irish education system remains heavily influenced by denominational mores and values [Ferriter, D. 2012. "Occasions of Sin: Sex & Society in Modern Ireland." London: Profile Books], particularly those of the Roman Catholic Church [O'Toole, B. 2015.…

  4. Initiative for international cooperation of researchers and breeders related to determination and denomination of cucurbit powdery mildew races

    Science.gov (United States)

    Cucurbit powdery mildew (CPM) is caused most frequently by two obligate erysiphaceous ectoparasites, Golovinomyces orontii s.l. and Podosphaera xanthii, that are highly variable in virulence. Various independent systems of CPM race determination and denomination cause a chaotic situation in cucurbit...

  5. Estimation of presampling MTF in CR systems by using direct fluorescence and its problems

    International Nuclear Information System (INIS)

    Ono, Kitahei; Inatsu, Hiroshi; Harao, Mototsugu; Itonaga, Haruo; Miyamoto, Hideyuki

    2001-01-01

    We proposed a method for the practical estimation of the presampling modulation transfer function (MTF) of a computed radiography (CR) system by using the MTFs of the imaging plate and the sampling aperture. The MTFs of three imaging plates (GP-25, ST-VN, and RP-1S) with different photostimulable phosphors were measured by using direct fluorescence (the light emitted instantaneously on x-ray exposure), and the presampling MTFs were estimated from these imaging plate MTFs and the sampling aperture MTF. Our results indicated that for imaging plate RP-1S the measured presampling MTF was significantly superior to the estimated presampling MTF at every spatial frequency. This was because the estimated presampling MTF was degraded by the diffusion of direct fluorescence in the protective layer of the imaging plate's surface. Therefore, when the presampling MTF of an imaging plate with a thick protective layer is estimated, a correction for the thickness of the protective layer should be carried out. However, the estimated presampling MTFs of imaging plates with a thin protective layer were almost the same as the measured presampling MTFs, except in the high spatial frequency range. Therefore, we consider this estimation method to be useful and practical, because the spatial resolution property of a CR system can be obtained simply from the imaging plate MTF measured with direct fluorescence. (author)
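
    The estimation step itself amounts to multiplying two MTF curves. In the sketch below the aperture MTF is modeled as the magnitude of a sinc and the plate MTF is a toy stand-in for the direct-fluorescence measurement; the aperture width is illustrative.

    ```python
    import numpy as np

    f = np.linspace(0.0, 5.0, 101)             # spatial frequency, cycles/mm
    a = 0.1                                    # sampling aperture width, mm (assumed)
    mtf_aperture = np.abs(np.sinc(a * f))      # np.sinc(x) = sin(pi*x)/(pi*x)
    mtf_plate = 1.0 / (1.0 + (f / 2.5) ** 2)   # placeholder for the measured plate MTF
    mtf_presampling = mtf_plate * mtf_aperture # estimated presampling MTF
    print(mtf_presampling[::20])
    ```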

  6. Inverse problem of estimating transient heat transfer rate on external wall of forced convection pipe

    International Nuclear Information System (INIS)

    Chen, W.-L.; Yang, Y.-C.; Chang, W.-J.; Lee, H.-L.

    2008-01-01

    In this study, a conjugate gradient method based inverse algorithm is applied to estimate the unknown space and time dependent heat transfer rate on the external wall of a pipe system using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown heat transfer rate; hence, the procedure is classified as function estimation in the inverse calculation. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the space and time dependent heat transfer rate can be obtained for the test case considered in this study
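
    As a generic illustration of conjugate-gradient function estimation (not the paper's adjoint-based formulation), the sketch below recovers an unknown time-dependent input q from noisy indirect data y = Aq + e, with early stopping of the iteration acting as the regularization; the kernel A is an illustrative stand-in for the sensitivity of interior temperatures to the wall heat transfer rate.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 60
    t = np.linspace(0.0, 1.0, n)
    A = np.tril(np.exp(-5.0 * (t[:, None] - t[None, :])))  # causal smoothing kernel
    q_true = np.sin(2 * np.pi * t) ** 2
    y = A @ q_true + 0.01 * rng.standard_normal(n)

    # Conjugate gradients on the normal equations (CGLS), stopped early so
    # the iteration count plays the role of the regularization parameter.
    q = np.zeros(n)
    r = A.T @ (y - A @ q)
    d = r.copy()
    for _ in range(30):
        Ad = A @ d
        alpha = (r @ r) / (Ad @ Ad)
        q = q + alpha * d
        r_new = r - alpha * (A.T @ Ad)
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    print(np.linalg.norm(q - q_true) / np.linalg.norm(q_true))  # relative error
    ```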

  7. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.; Lazarov, R. D.; Thomé e, V.

    2012-01-01

    for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods

  8. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system

  9. State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications

    Science.gov (United States)

    Phanomchoeng, Gridsada

    A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, can be known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, due to unknown and changing plant parameters, and due to the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz assumption based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs. An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is

  10. A State-of-the-Art Review of the Sensor Location, Flow Observability, Estimation, and Prediction Problems in Traffic Networks

    Directory of Open Access Journals (Sweden)

    Enrique Castillo

    2015-01-01

    Full Text Available A state-of-the-art review of flow observability, estimation, and prediction problems in traffic networks is performed. Since mathematical optimization provides a general framework for all of them, an integrated approach is used to perform the analysis of these problems and consider them as different optimization problems whose data, variables, constraints, and objective functions are the main elements that characterize the problems proposed by different authors. For example, counted, scanned or “a priori” data are the most common data sources; conservation laws, flow nonnegativity, link capacity, flow definition, observation, flow propagation, and specific model requirements form the most common constraints; and least squares, likelihood, possible relative error, mean absolute relative error, and so forth constitute the bases for the objective functions or metrics. The high number of possible combinations of these elements justifies the existence of a wide collection of methods for analyzing static and dynamic situations.

  11. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.

    2012-01-01

    We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.

  12. Lamé Parameter Estimation from Static Displacement Field Measurements in the Framework of Nonlinear Inverse Problems

    DEFF Research Database (Denmark)

    Hubmer, Simon; Sherina, Ekaterina; Neubauer, Andreas

    2018-01-01

    We consider a problem of quantitative static elastography, the estimation of the Lamé parameters from internal displacement field data. This problem is formulated as a nonlinear operator equation. To solve this equation, we investigate the Landweber iteration both analytically and numerically. The main result of this paper is the verification of a nonlinearity condition in an infinite dimensional Hilbert space context. This condition guarantees convergence of iterative regularization methods. Furthermore, numerical examples for recovery of the Lamé parameters from displacement data simulating a static elastography experiment are presented.

  13. Estimation of surface temperature by using inverse problem. Part 1. Steady state analyses of two-dimensional cylindrical system

    International Nuclear Information System (INIS)

    Takahashi, Toshio; Terada, Atsuhiko

    2006-03-01

    In the corrosive process environment of a thermochemical hydrogen production Iodine-Sulfur process plant, the surface temperature of the structural materials is difficult to measure directly. An inverse problem method can be applied effectively to this problem, enabling estimation of the surface temperature from temperature data taken inside the structural materials. This paper presents analytical results for steady-state temperature distributions in a two-dimensional cylindrical system cooled by an impinging jet flow, and clarifies the order of the multiple-valued function necessary to achieve satisfactory precision from an engineering viewpoint. (author)

  14. Problems and solutions in the estimation of genetic risks from radiation and chemicals

    International Nuclear Information System (INIS)

    Russell, W.L.

    1980-01-01

    Extensive investigations with mice on the effects of various physical and biological factors, such as dose rate, sex and cell stage, on radiation-induced mutation have provided an evaluation of the genetic hazards of radiation in man. The mutational results obtained in both sexes with progressive lowering of the radiation dose rate have permitted estimation of the mutation frequency expected under the low-level radiation conditions of most human exposure. Supplementing the studies on mutation frequency are investigations on the phenotypic effects of mutations in mice, particularly anatomical disorders of the skeleton, which allow an estimation of the degree of human handicap associated with the occurrence of parallel defects in man. Estimation of the genetic risk from chemical mutagens is much more difficult, and the research is much less advanced. Results on transmitted mutations in mice indicate a poor correlation with mutation induction in non-mammalian organisms

  15. Limitations and problems in deriving risk estimates for low-level radiation exposure

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1981-01-01

    Some of the problems in determining the cancer risk of low-level radiation from studies of exposed groups are reviewed and applied to the study of Hanford workers by Mancuso, Stewart, and Kneale. Problems considered are statistical limitations, variation of cancer rates with geography and race, the "healthy worker effect," calendar year and age variation of cancer mortality, choosing from long lists, use of proportional mortality rates, cigarette smoking-cancer correlations, use of averages to represent data distributions, ignoring other data, and correlations between radiation exposure and other factors that may cause cancer. The current status of studies of the Hanford workers is reviewed

  16. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    Science.gov (United States)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.

  17. YouTube Fridays: Student Led Development of Engineering Estimate Problems

    Science.gov (United States)

    Liberatore, Matthew W.; Vestal, Charles R.; Herring, Andrew M.

    2012-01-01

    YouTube Fridays devotes a small fraction of class time to student-selected videos related to the course topic, e.g., thermodynamics. The students then write and solve a homework-like problem based on the events in the video. Three recent pilots involving over 300 students have developed a database of videos and questions that reinforce important…

  18. Asymptotic eigenvalue estimates for a Robin problem with a large parameter

    Czech Academy of Sciences Publication Activity Database

    Exner, Pavel; Minakov, A.; Parnovski, L.

    2014-01-01

    Roč. 71, č. 2 (2014), s. 141-156 ISSN 0032-5155 R&D Projects: GA ČR(CZ) GA14-06818S Institutional support: RVO:61389005 Keywords: Laplacian * Robin problem * eigenvalue asymptotics Subject RIV: BE - Theoretical Physics Impact factor: 0.250, year: 2014

  19. Stability Estimates for h-p Spectral Element Methods for Elliptic Problems

    NARCIS (Netherlands)

    Dutt, Pravir; Tomar, S.K.; Kumar, B.V. Rathish

    2002-01-01

    In a series of papers of which this is the first we study how to solve elliptic problems on polygonal domains using spectral methods on parallel computers. To overcome the singularities that arise in a neighborhood of the corners we use a geometrical mesh. With this mesh we seek a solution which

  20. Probabilistic formulation of estimation problems for a class of Hamilton-Jacobi equations

    KAUST Repository

    Hofleitner, Aude; Claudel, Christian G.; Bayen, Alexandre M.

    2012-01-01

    This article presents a method for deriving the probability distribution of the solution to a Hamilton-Jacobi partial differential equation for which the value conditions are random. The derivations lead to analytical or semi-analytical expressions of the probability distribution function at any point in the domain in which the solution is defined. The characterization of the distribution of the solution at any point is a first step towards the estimation of the parameters defining the random value conditions. This work has important applications for estimation in flow networks in which value conditions are noisy. In particular, we illustrate our derivations on a road segment with random capacity reductions. © 2012 IEEE.

  1. Probabilistic formulation of estimation problems for a class of Hamilton-Jacobi equations

    KAUST Repository

    Hofleitner, Aude

    2012-12-01

    This article presents a method for deriving the probability distribution of the solution to a Hamilton-Jacobi partial differential equation for which the value conditions are random. The derivations lead to analytical or semi-analytical expressions of the probability distribution function at any point in the domain in which the solution is defined. The characterization of the distribution of the solution at any point is a first step towards the estimation of the parameters defining the random value conditions. This work has important applications for estimation in flow networks in which value conditions are noisy. In particular, we illustrate our derivations on a road segment with random capacity reductions. © 2012 IEEE.

  2. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.

  3. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
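    The symplectic Euler method analysed in this and the preceding record updates momentum and position in a staggered way that preserves the symplectic structure of the Hamiltonian flow. A minimal sketch for a separable Hamiltonian H(q, p) = p^2/2 + V(q), for which both updates are explicit; the harmonic potential below is an illustrative choice, not taken from the paper.

```python
import numpy as np

def symplectic_euler(dVdq, q0, p0, dt, n_steps):
    """Symplectic Euler for H(q, p) = p^2/2 + V(q):
    momentum is updated first, then position uses the new momentum."""
    q, p = float(q0), float(p0)
    out = [(q, p)]
    for _ in range(n_steps):
        p -= dt * dVdq(q)   # p_{n+1} = p_n - dt * V'(q_n)
        q += dt * p         # q_{n+1} = q_n + dt * p_{n+1}
        out.append((q, p))
    return np.array(out)

# Harmonic oscillator V(q) = q^2/2: the energy error of symplectic Euler
# stays bounded over long times, unlike that of explicit Euler.
traj = symplectic_euler(lambda q: q, q0=1.0, p0=0.0, dt=0.1, n_steps=1000)
energy = 0.5 * traj[:, 1]**2 + 0.5 * traj[:, 0]**2
```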

  4. Spectral and parameter estimation problems arising in the metrology of high performance mirror surfaces

    International Nuclear Information System (INIS)

    Church, E.L.; Takacs, P.Z.

    1986-04-01

    The accurate characterization of mirror surfaces requires the estimation of two-dimensional distribution functions and power spectra from trend-contaminated profile measurements. The rationale behind this, and our measurement and processing procedures, are described. The distinction between profile and area spectra is indicated, and since measurements often suggest inverse-power-law forms, a discussion of classical and fractal models of processes leading to these forms is included. 9 refs
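    The core operation described here, estimating a power spectrum from a trend-contaminated profile, can be sketched with standard tools: fit and remove a low-order trend, then form a windowed periodogram of the residual. This is a generic illustration, not the authors' processing chain; the synthetic profile, trend order and sampling step are invented for the example.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
dx = 0.1                                  # sampling step (illustrative)
x = np.arange(1024) * dx
profile = 0.5 * x + 0.02 * x**2 + rng.standard_normal(x.size)  # trend + roughness

# Remove a low-order (here quadratic) trend by least squares, then
# estimate the power spectrum of the residual surface profile.
trend = np.polyval(np.polyfit(x, profile, deg=2), x)
freq, psd = periodogram(profile - trend, fs=1.0 / dx, window="hann")
```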

  5. The mixed boundary value problem, Krein resolvent formulas and spectral asymptotic estimates

    DEFF Research Database (Denmark)

    Grubb, Gerd

    2011-01-01

    For a second-order symmetric strongly elliptic operator A on a smooth bounded open set in Rn, the mixed problem is defined by a Neumann-type condition on a part Σ+ of the boundary and a Dirichlet condition on the other part Σ−. We show a Kreĭn resolvent formula, where the difference between its...... to the area of Σ+, in the case where A is principally equal to the Laplacian...

  6. PROBLEMS OF ICT-BASED TOOLS ESTIMATION IN THE CONTEXT OF INFORMATION SOCIETY FORMATION

    Directory of Open Access Journals (Sweden)

    M. Shyshkina

    2012-03-01

    Full Text Available The article describes the problems of improving the quality of implementation and use of e-learning tools, which arise as the quality and accessibility of education increase. It is determined that these issues are closely linked to specific scientific and methodological approaches to the evaluation of quality, selection and use of ICT-based tools, in view of the emergence of promising technological platforms for the implementation and delivery of these resources.

  7. Existence and Estimates of Positive Solutions for Some Singular Fractional Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Habib Mâagli

    2014-01-01

    fractional boundary value problem: D^α u(x) = −a(x) u^σ(x), x ∈ (0,1), with the conditions lim_{x→0+} x^{2−α} u(x) = 0, u(1) = 0, where 1 < α ≤ 2, σ ∈ (−1,1), and a is a nonnegative continuous function on (0,1) that may be singular at x = 0 or x = 1. We also give the global behavior of such a solution.

  8. Problems and prospects for the future career: “Public and municipal administration” students’ estimates

    OpenAIRE

    V S Muhametzhanova; E A Ivlev

    2016-01-01

    In recent years, in Russia both official and media discourses have emphasized the need to modernize, optimize and reform the institutions of public and municipal administration as basic means of socio-economic and political development of the country. Unfortunately, quite often different organizational forms within the system of social management encounter not only institutional or objective obstacles, but also subjective problems determined by the “quality” of human resources. For decades, t...

  9. A Monte Carlo estimation of the marginal distributions in a problem of probabilistic dynamics

    International Nuclear Information System (INIS)

    Labeau, P.E.

    1996-01-01

    Modelling the effect of the dynamic behaviour of a system on its PSA study leads, in a Markovian framework, to a first-order development of the Chapman-Kolmogorov equation, whose solutions are the probability densities of the problem. Because of its size, there is no hope of solving these equations directly in realistic circumstances. We present in this paper a biased simulation giving the marginals and compare different ways of speeding up the integration of the equations of the dynamics

  10. [Problems associated with age estimation of underage persons who appear in child pornography materials].

    Science.gov (United States)

    Łabecka, Marzena; Lorkiewicz-Muszyńska, Dorota; Jarzabek-Bielecka, Grazyna

    2011-01-01

    Among the opinions issued by the Forensic Medicine Department, Medical Science University in Poznan, in the last six years, there are opinions concerning age estimation in child pornography materials. The issue subject to research is the identification of persons under the age of 15 years in pornographic materials, since possession of pornographic materials featuring underage persons is considered a crime and is subject to article 202 of the Penal Code. The estimation of the age of teenagers based on secondary and tertiary sexual characteristics is increasingly difficult, and the data available in the professional literature regarding the standard time of development differ among the various authors of such studies. In the report, an attempt has been made at determining the agreement regarding different characteristics in the data included in Tanner's scale, which has been modified to accommodate research done on persons registered by electronic means. The modified scale, which up to now has been used in research on subjects registered in classified public prosecutors' materials, has been employed with children seen in a pediatric outpatient department. The goal has been a comparison of the outcomes of the research to prove its usefulness, so that in the future the modified scale could be used as a research tool in the estimation of the age of persons appearing in pornographic materials. The material consisted of the medical forms of 205 children seen in a pediatric outpatient department, assessed with the scale created by the present authors and later processed using Excel.

  11. Support Vector Regression-Based Adaptive Divided Difference Filter for Nonlinear State Estimation Problems

    Directory of Open Access Journals (Sweden)

    Hongjian Wang

    2014-01-01

    Full Text Available We present a support vector regression-based adaptive divided difference filter (SVRADDF algorithm for improving the low state estimation accuracy of nonlinear systems, which are typically affected by large initial estimation errors and imprecise prior knowledge of process and measurement noises. The derivative-free SVRADDF algorithm is significantly simpler to compute than other methods and is implemented using only functional evaluations. The SVRADDF algorithm involves the use of the theoretical and actual covariance of the innovation sequence. Support vector regression (SVR is employed to generate the adaptive factor to tune the noise covariance at each sampling instant when the measurement update step executes, which improves the algorithm’s robustness. The performance of the proposed algorithm is evaluated by estimating states for (i an underwater nonmaneuvering target bearing-only tracking system and (ii maneuvering target bearing-only tracking in an air-traffic control system. The simulation results show that the proposed SVRADDF algorithm exhibits better performance when compared with a traditional DDF algorithm.

  12. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    Energy Technology Data Exchange (ETDEWEB)

    Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of error is proportional to N_h^(-1/2), which are optimal asymptotics. The methodology is verified with numerical experiments.

  13. Facing a Problem of Electrical Energy Quality in Ship Networks-measurements, Estimation, Control

    Institute of Scientific and Technical Information of China (English)

    Tomasz Tarasiuk; Janusz Mindykowski; Xiaoyan Xu

    2003-01-01

    In this paper, electrical energy quality and its indices in ship electric networks are introduced, especially the meaning of electrical energy quality terms in voltage and in active and reactive power distribution indices. Methods for measuring marine electrical energy indices are then introduced in detail, and a microprocessor measurement-diagnosis system with measurement and control functions is designed. Afterwards, the estimation and control of the electrical power quality of marine electrical power networks are introduced. Finally, starting from the existing methods for the measurement and control of electrical power quality in ship power networks, improvements to these methods are proposed.

  14. A FEM approximation of a two-phase obstacle problem and its a posteriori error estimate

    Czech Academy of Sciences Publication Activity Database

    Bozorgnia, F.; Valdman, Jan

    2017-01-01

    Roč. 73, č. 3 (2017), s. 419-432 ISSN 0898-1221 R&D Projects: GA ČR(CZ) GF16-34894L; GA MŠk(CZ) 7AMB16AT015 Institutional support: RVO:67985556 Keywords : A free boundary problem * A posteriori error analysis * Finite element method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.531, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/valdman-0470507.pdf

  15. Topics on the problem of genetic risk estimates health research foundation

    International Nuclear Information System (INIS)

    Nakai, Sayaka

    1995-01-01

    A reanalysis of the data on untoward pregnancy outcome (UPO) for atomic bomb survivors was undertaken based on the following current results of cytogenetic studies obtained in Japan: 1) human gametes are very sensitive to the production of chromosome aberrations, whether of spontaneous or radiation-induced origin; 2) the shape of the dose-response relation to radiation shows a humped curve in the relatively low dose range below 3 Gy; 3) there is very severe selection against embryos carrying chromosome aberrations during fetal development before birth. It was concluded that: 1) the humped dose-response model fitted better than the linear dose model; 2) the regression coefficient for the slope of UPO at low doses derived from the humped dose model was about 6 times higher than the previous value based on the linear model; 3) the risk factor for genetic detriment in terms of UPO was estimated as 0.015/Gy for radiation exposures below 1 Gy; 4) it was difficult to find positive evidence supporting the view of Neel et al. that present estimates of the doubling dose based on mouse data are underestimates. (author)

  16. Estimating problem drinking among community pharmacy customers: what did pharmacists think of the method?

    Science.gov (United States)

    Sheridan, Janie; Smart, Ros; McCormick, Ross

    2010-10-01

    Community pharmacists have successfully been involved in brief interventions in many areas of health, and also provide services to substance misusers. There has been recent interest in community pharmacists providing screening and brief interventions (SBI) to problem drinkers. The aim of this study was to develop a method for measuring prevalence of risky drinking among community pharmacy customers and to explore acceptability of this method to participating pharmacists. Forty-three pharmacies (from 80 randomly selected) in New Zealand agreed to participate in data collection. On a set, single, randomly allocated day during one week, pharmacies handed out questionnaires about alcohol consumption, and views on pharmacists providing SBI, to their customers. At the end of the data collection period semi-structured telephone interviews were carried out with participating pharmacists. Pharmacists were generally positive about the way the study was carried out, the support and materials they were provided with, and the ease of the data collection process. They reported few problems with customers and the majority of pharmacists would participate again. The method developed successfully collected data from customers and was acceptable to participating pharmacists. This method can be adapted to collecting data on prevalence of other behaviours or medical conditions and assessing customer views on services. © 2010 The Authors. IJPP © 2010 Royal Pharmaceutical Society of Great Britain.

  17. On methodical problems in estimating geological temperature and time from measurements of fission tracks in apatite

    International Nuclear Information System (INIS)

    Jonckheere, R.

    2003-01-01

    The results of apatite fission-track modelling are only as accurate as the method, and depend on the assumption that the processes involved in the annealing of fossil tracks over geological times are the same as those responsible for the annealing of induced fission tracks in laboratory experiments. This has hitherto been assumed rather than demonstrated. The present critical discussion identifies a number of methodical problems from an examination of the available data on age standards, borehole samples and samples studied in the framework of geological investigations. These problems are related to low-temperature (<60 deg. C) annealing on a geological timescale and to the procedures used for calculating temperature-time paths from the fission-track data. It is concluded that it is not established that the relationship between track length and track density and the appearance of unetchable gaps, observed in laboratory annealing experiments on induced tracks, can be extrapolated to the annealing of fossil tracks on a geological timescale. This in turn casts doubt on the central principle of equivalent time. That such uncertainties still exist is in no small part due to an insufficient understanding of the formation, structure and properties of fission tracks at the atomic scale and to a lack of attention to the details of track revelation. The methodical implications of discrepancies between fission track results and the independent geological evidence are rarely considered. This presents a strong case for the re-involvement of track physicists in fundamental fission track research

  18. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with dimensions similar to those of a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.

  19. Problems of an expert estimation and importance standardization of the radiating control in Republic of Kazakhstan

    International Nuclear Information System (INIS)

    Baybolov, S.M.; Baygogy, G.O.; Machatova, R.S.

    1999-01-01

    The radioecological situation in the Republic of Kazakhstan is one of the most severe. For many decades, information on the state of the environment was kept confidential in our former country. A huge industrial complex and heavy industry, insensitively polluting the environment under state protection from regulation, have damaged the natural environment. The long-term nuclear tests at ranges located in the territory of Kazakhstan, sites with radioactive waste, the waste dumps of mining-ore operations, operating nuclear power plants, military sites, and the dumps and emissions of processing enterprises are all sources of dispersion of radioactive fission products into the environment. Soil and groundwater are ideal media for the cumulative accumulation of radioactive substances (radionuclides such as Sr-90 and Cs-137). Pu-239 is strongly retained in the top layers of soil and enters the food chain, exerting biological effects on all living things. Up to now there is no map-based information system or register of radiation conditions around the ranges, and the examination and control of food products according to international standards has not been put in place. (author)

  20. Genetic parameter and breeding value estimation of donkeys' problem-focused coping styles.

    Science.gov (United States)

    Navas González, Francisco Javier; Jordana Vidal, Jordi; León Jurado, José Manuel; Arando Arbulu, Ander; McLean, Amy Katherine; Delgado Bermejo, Juan Vicente

    2018-05-12

    Donkeys are recognized therapy and leisure-riding animals. Anecdotal evidence has suggested that more reactive donkeys, or those more easily engaging flight mechanisms, tend to be easier to train than those displaying the natural donkey behaviour of fight. This context brings together the need to quantify such traits and to genetically select donkeys displaying a neutral reaction during training, because of the implications for handler/rider safety and trainability. We analysed the scores for coping style traits from 300 Andalusian donkeys from 2013 to 2015. Three scales were applied to describe donkeys' response to 12 stimuli. Genetic parameters were estimated using multivariate models with year, sex, husbandry system and stimulus as fixed effects and age as a linear and quadratic covariable. Heritabilities were moderate, from 0.18 ± 0.020 to 0.21 ± 0.021. Phenotypic correlations between intensity and mood/emotion or response type were negative and moderate (-0.21 and -0.25, respectively). Genetic correlations between the same variables were negative and moderately high (-0.46 and -0.53, respectively). Phenotypic and genetic correlations between mood/emotion and response type were positive and high (0.92 and 0.95, respectively). Breeding values enable selection methods that could contribute to the preservation of an endangered breed and to genetically selecting donkeys for the uses for which they may be most suitable. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Characterization of grape seed oil from wines with protected denomination of origin (PDO from Spain

    Directory of Open Access Journals (Sweden)

    Bada, J. C.

    2015-09-01

    Full Text Available The aim of this study was to determine the composition and characteristics of red grape seed oils (Vitis vinifera L.) from wines with protected denomination of origin (PDO) from Spain. Eight representative varieties of grape seed oils from the Spanish wines Ribera del Duero (Tempranillo), Toro (Tempranillo), Rioja (Garnacha), Valencia (Tempranillo) and Cangas (Mencia, Carrasquín, Albarín and Verdejo) were studied. The oil content of the seeds ranged from 10.18 to 13.89%, and the moisture was similar for all the seeds. Linoleic acid was the most abundant fatty acid in all samples, representing around 78%, followed by oleic acid with a concentration close to 16%; the degree of unsaturation in the grape seed oils was over 90%. β-sitosterol and α-tocopherol were the main sterol and tocopherol, reaching values of 77.31% and 3.82 mg per 100 g of oil, respectively. In relation to the tocotrienols, α-tocotrienol was the main tocotrienol and accounted for 13.18 mg per 100 g of oil.

  2. A meta-regression analysis of 41 Australian problem gambling prevalence estimates and their relationship to total spending on electronic gaming machines.

    Science.gov (United States)

    Markham, Francis; Young, Martin; Doran, Bruce; Sugden, Mark

    2017-05-23

    Many jurisdictions regularly conduct surveys to estimate the prevalence of problem gambling in their adult populations. However, the comparison of such estimates is problematic due to methodological variations between studies. Total consumption theory suggests that an association between mean electronic gaming machine (EGM) and casino gambling losses and problem gambling prevalence estimates may exist. If this is the case, then changes in EGM losses may be used as a proxy indicator for changes in problem gambling prevalence. To test for this association this study examines the relationship between aggregated losses on electronic gaming machines (EGMs) and problem gambling prevalence estimates for Australian states and territories between 1994 and 2016. A Bayesian meta-regression analysis of 41 cross-sectional problem gambling prevalence estimates was undertaken using EGM gambling losses, year of survey and methodological variations as predictor variables. General population studies of adults in Australian states and territories published before 1 July 2016 were considered in scope. 41 studies were identified, with a total of 267,367 participants. Problem gambling prevalence, moderate-risk problem gambling prevalence, problem gambling screen, administration mode and frequency threshold were extracted from surveys. Administrative data on EGM and casino gambling losses were extracted from government reports and expressed as the proportion of household disposable income lost. Money lost on EGMs is correlated with problem gambling prevalence. An increase of 1% of household disposable income lost on EGMs and in casinos was associated with problem gambling prevalence estimates that were 1.33 times higher [95% credible interval 1.04, 1.71]. There was no clear association between EGM losses and moderate-risk problem gambling prevalence estimates. Moderate-risk problem gambling prevalence estimates were not explained by the models (I² ≥ 0.97; R² ≤ 0.01).

  3. A meta-regression analysis of 41 Australian problem gambling prevalence estimates and their relationship to total spending on electronic gaming machines

    Directory of Open Access Journals (Sweden)

    Francis Markham

    2017-05-01

    Full Text Available Abstract Background Many jurisdictions regularly conduct surveys to estimate the prevalence of problem gambling in their adult populations. However, the comparison of such estimates is problematic due to methodological variations between studies. Total consumption theory suggests that an association between mean electronic gaming machine (EGM) and casino gambling losses and problem gambling prevalence estimates may exist. If this is the case, then changes in EGM losses may be used as a proxy indicator for changes in problem gambling prevalence. To test for this association this study examines the relationship between aggregated losses on electronic gaming machines (EGMs) and problem gambling prevalence estimates for Australian states and territories between 1994 and 2016. Methods A Bayesian meta-regression analysis of 41 cross-sectional problem gambling prevalence estimates was undertaken using EGM gambling losses, year of survey and methodological variations as predictor variables. General population studies of adults in Australian states and territories published before 1 July 2016 were considered in scope. 41 studies were identified, with a total of 267,367 participants. Problem gambling prevalence, moderate-risk problem gambling prevalence, problem gambling screen, administration mode and frequency threshold were extracted from surveys. Administrative data on EGM and casino gambling loss data were extracted from government reports and expressed as the proportion of household disposable income lost. Results Money lost on EGMs is correlated with problem gambling prevalence. An increase of 1% of household disposable income lost on EGMs and in casinos was associated with problem gambling prevalence estimates that were 1.33 times higher [95% credible interval 1.04, 1.71]. There was no clear association between EGM losses and moderate-risk problem gambling prevalence estimates. Moderate-risk problem gambling prevalence estimates were not explained by

  4. On the problem of negative dissipation of fast waves at the fundamental ion cyclotron resonance and the accuracy of absorption estimates

    International Nuclear Information System (INIS)

    Castejon, F.; Pavlov, S.S.; Swanson, D. G.

    2002-01-01

    Negative dissipation appears when ion cyclotron resonance (ICR) heating at the first harmonic in a thermal plasma is estimated using some numerical schemes. The causes of the appearance of such a problem are investigated analytically and numerically in this work, showing that the problem is connected with the accuracy with which the absorption coefficient at the first ICR harmonic is estimated. Corrections for the absorption estimation are presented for the case of quasiperpendicular propagation of fast waves in this frequency range. A method to solve the problem of negative dissipation is presented and, as a result, an enhancement of absorption is found for reactor-size plasmas

  5. Optimisation of information influences on problems of consequences of Chernobyl accident and quantitative criteria for estimation of information actions

    International Nuclear Information System (INIS)

    Sobaleu, A.

    2004-01-01

    The consequences of the Chernobyl NPP accident are still very important for Belarus. About 2 million Belarusians live in districts polluted by Chernobyl radionuclides. Modern approaches to solving post-Chernobyl problems in Belarus involve more active use of information and educational actions to foster a new radiological culture. This makes it possible to reduce internal radiation doses without spending much money or other resources. Experience of information work with the population affected by Chernobyl from 1986 to 2004 has shown that information and educational influences do not always reach their final aim: the application of the received knowledge on radiation safety in practice and a change in lifestyle. Taking into account limited funds and facilities, information work should be optimized. The optimization can be achieved on the basis of quantitative estimates of the effectiveness of information actions. Two parameters can be used for these quantitative estimates: 1) the increase in the knowledge of the population and of experts on radiation safety, calculated by a new method based on the applied theory of information (the Mathematical Theory of Communication of Claude E. Shannon), and 2) the reduction of the internal radiation dose, calculated on the basis of measurements with a human irradiation counter (HIC) before and after an information or educational influence. (author)

  6. Regularization parameter estimation for underdetermined problems by the χ² principle with application to 2D focusing gravity inversion

    International Nuclear Information System (INIS)

    Vatankhah, Saeed; Ardestani, Vahid E; Renaut, Rosemary A

    2014-01-01

    The χ² principle generalizes the Morozov discrepancy principle to the augmented residual of the Tikhonov regularized least squares problem. For weighting of the data fidelity by a known Gaussian noise distribution on the measured data, when the stabilizing, or regularization, term is considered to be weighted by unknown inverse covariance information on the model parameters, the minimum of the Tikhonov functional becomes a random variable that follows a χ²-distribution with m+p−n degrees of freedom for the model matrix G of size m×n, m ≥ n, and regularizer L of size p×n. Then, a Newton root-finding algorithm, employing the generalized singular value decomposition, or the singular value decomposition when L = I, can be used to find the regularization parameter α. Here the result and algorithm are extended to the underdetermined case, m < n; the χ² algorithms for m < n and the unbiased predictive risk estimator of the regularization parameter are used for the first time in this context. For a simulated underdetermined data set with noise, these regularization parameter estimation methods, as well as the generalized cross validation method, are contrasted with the use of the L-curve and the Morozov discrepancy principle. Experiments demonstrate the efficiency and robustness of the χ² principle and unbiased predictive risk estimator, moreover showing that the L-curve and Morozov discrepancy principle are outperformed in general by the other three techniques. Furthermore, the minimum support stabilizer is of general use for the χ² principle when implemented without the desirable knowledge of the mean value of the model. (paper)
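    The abstract contrasts the χ² principle with, among others, the Morozov discrepancy principle. As a small sketch of that simpler baseline (not the paper's χ²/Newton-GSVD algorithm): for Tikhonov regularization, choose the parameter α at which the residual norm equals the noise level δ. The bracketing interval below is an assumption; brentq requires a sign change over it.

```python
import numpy as np
from scipy.optimize import brentq

def tikhonov_solution(G, d, alpha):
    """Minimizer of ||G x - d||^2 + alpha^2 ||x||^2, via the SVD of G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt.T @ ((s / (s**2 + alpha**2)) * (U.T @ d))

def morozov_alpha(G, d, delta, lo=1e-8, hi=1e2):
    """Discrepancy principle: alpha such that ||G x_alpha - d|| = delta.
    Assumes the discrepancy changes sign on [lo, hi] (bracketing assumption)."""
    def gap(log_alpha):
        x = tikhonov_solution(G, d, np.exp(log_alpha))
        return np.linalg.norm(G @ x - d) - delta
    return np.exp(brentq(gap, np.log(lo), np.log(hi)))
```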

  7. Canonical resolution of the multiplicity problem for U(3): an explicit and complete constructive solution

    International Nuclear Information System (INIS)

    Biedenharn, L.C.; Lohe, M.A.; Louck, J.D.

    1975-01-01

    The multiplicity problem for tensor operators in U(3) has a unique (canonical) resolution which is utilized to effect the explicit construction of all U(3) Wigner and Racah coefficients. Methods are employed which elucidate the structure of the results; in particular, the significance of the denominator functions entering the structure of these coefficients, and the relation of these denominator functions to the null space of the canonical tensor operators. An interesting feature of the denominator functions is the appearance of new, group theoretical, polynomials exhibiting several remarkable and quite unexpected properties. (U.S.)

  8. The impact of exchange rate EUR/USD on the rate of return of bond investments denominated in US dollar from the point of view of euro investor

    Directory of Open Access Journals (Sweden)

    Oldřich Šoba

    2009-01-01

    Full Text Available Investment opportunities in financial assets denominated in foreign currencies are growing because of the globalization and integration of financial markets and the evolution of modern information technologies. Currency risk arises in those cases where the investor converts cash from and into the domestic currency; it is determined by unexpected changes of the exchange rate (currency of the asset's denomination / investor's domestic currency) during the life of the investment. The objective of the paper is the quantification and analysis of the impact of the EUR/USD exchange rate on the rate of return of bond investments denominated in US dollars from the point of view of a euro investor, for investment horizons of different lengths. The analysis is carried out for the following investment horizons: 1 year, 2 years, 3 years, 5 years, 7 years, 10 years and 12 years; complementary investment horizons are one month and 15 years. Bond investments denominated in US dollars are represented by investments into an ING bond unit trust in the period December 1989–December 2007. The unit trust invests in bonds with high ratings (for example, government bonds), denominated in USD only. The methodology of the analysis is based on quantifying the proportion of the EUR/USD exchange rate impact on the rate of return of a bond investment denominated in USD; this share rests on basic results of uncovered interest rate parity.
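    The decomposition underlying this kind of analysis is standard: for a euro investor, the euro return of a dollar asset compounds the dollar return with the exchange-rate change, (1 + r_EUR) = (1 + r_USD)(1 + s), where s is the relative change of the EUR value of one USD over the horizon. A small illustration with invented figures (not the paper's data), using one simple attribution convention for the FX share:

```python
def euro_return(r_usd, s_fx):
    """Euro return of a USD-denominated investment over one horizon.

    r_usd: asset return in USD.
    s_fx:  relative change of the EUR value of one USD (positive when
           the dollar appreciates against the euro).
    """
    return (1.0 + r_usd) * (1.0 + s_fx) - 1.0

r_usd, s_fx = 0.05, 0.02              # invented example figures
r_eur = euro_return(r_usd, s_fx)      # 0.071
fx_share = (r_eur - r_usd) / r_eur    # one simple attribution convention
```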

  9. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    Science.gov (United States)

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only
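    The covariance analysis described here rests on the standard linearized estimate: given a sensitivity (Jacobian) matrix J of observations with respect to parameters and an observation error level σ, the parameter covariance is approximately σ²(JᵀJ)⁻¹, from which estimation variances and correlations follow. A generic sketch; the Jacobian below is a random placeholder, not Henry-problem sensitivities.

```python
import numpy as np

def parameter_covariance(J, sigma=1.0):
    """Linearized estimation covariance sigma^2 * (J^T J)^{-1}."""
    return sigma**2 * np.linalg.inv(J.T @ J)

# Placeholder Jacobian: 50 observations, 2 parameters
# (e.g. permeability and freshwater inflow in the abstract's setting).
J = np.random.default_rng(3).standard_normal((50, 2))
C = parameter_covariance(J, sigma=0.1)
std = np.sqrt(np.diag(C))          # parameter standard deviations
corr = C / np.outer(std, std)      # parameter correlation matrix
```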

  10. General problems of metrology and indirect measuring in cardiology: error estimation criteria for indirect measurements of heart cycle phase durations

    Directory of Open Access Journals (Sweden)

    Konstantine K. Mamberger

    2012-11-01

    Full Text Available Aims This paper treats general problems of metrology and indirect measurement methods in cardiology. It is aimed at the identification of error estimation criteria for indirect measurements of heart cycle phase durations. Materials and methods A comparative analysis of an ECG of the ascending aorta recorded with the use of the Hemodynamic Analyzer Cardiocode (HDA) lead versus conventional V3, V4, V5, V6 lead system ECGs is presented herein. Criteria for heart cycle phase boundaries are identified with graphic mathematical differentiation. Stroke volumes of blood SV calculated on the basis of the HDA phase duration measurements are compared with echocardiography data. Results The comparative data obtained in the study show an averaged difference at the level of 1%. An innovative noninvasive measuring technology originally developed by a Russian R&D team offers measurement of the stroke volume of blood SV with high accuracy. Conclusion In practice, it is necessary to take into account possible errors in measurements caused by hardware. Special attention should be paid to systematic errors.

  11. Some Notes About Medical Vocabulary in 18th Century New Spain: Technical and Colloquial Words for the Denomination of Illnesses

    Directory of Open Access Journals (Sweden)

    José Luis RAMÍREZ LUENGO

    2016-06-01

    Full Text Available Whereas the 18th Century medical vocabulary is something that has been studied during recent years in Spain, the situation is very different in Latin America, where papers on this subject are very limited. In this case, this paper aims to study the denominations for illnesses that were discovered in an 18th Century New Spain document corpus: to do so, the corpus will be described and then the vocabulary used in the documents will be analysed; the paper will pay special attention to questions such as neologisms, fluctuating words and the presence of colloquial vocabulary. Thus, the purposes of the paper are three: (1) to demonstrate the importance of official documents for the study of medical vocabulary; (2) to provide some data for writing the history of this vocabulary; and (3) to note some analyses that should be done in the future.

  12. Exact Quantization of the Even-Denominator Fractional Quantum Hall State at ν = 5/2 Landau Level Filling Factor

    International Nuclear Information System (INIS)

    Pan, W.; Tsui, D.C.; Pan, W.; Xia, J.; Shvarts, V.; Adams, D.E.; Xia, J.; Shvarts, V.; Adams, D.E.; Stormer, H.L.; Stormer, H.L.; Pfeiffer, L.N.; Baldwin, K.W.; West, K.W.

    1999-01-01

    We report ultralow temperature experiments on the obscure fractional quantum Hall effect at Landau level filling factor ν = 5/2 in a very high-mobility specimen of μ = 1.7×10^7 cm^2/V s. We achieve an electron temperature as low as ~4 mK, where we observe vanishing R_xx and, for the first time, a quantized Hall resistance, R_xy = h/[(5/2)e^2] to within 2 ppm. R_xy at the neighboring odd-denominator states ν = 7/3 and 8/3 is also quantized. The temperature dependences of the R_xx minima at these fractional fillings yield activation energy gaps Δ_{5/2} = 0.11, Δ_{7/3} = 0.10, and Δ_{8/3} = 0.055 K. © 1999 The American Physical Society

  13. [The God image in relation to autistic traits and religious denomination].

    Science.gov (United States)

    Schaap-Jonker, H; van Schothorst-van Roekel, J; Sizoo, B

    2012-01-01

    Estimates of the prevalence of autism spectrum disorders (ASD) range from 0.6 to 1.0 per cent of the general population. Among the characteristic traits of ASD are qualitative impairments in social reciprocity and in abstract imagination. Not surprisingly, these traits can affect the personal religion of ASD patients, in the same manner as religious background does. To determine to what extent the religiousness of religious patients is associated with autistic traits and religious background. Dutch adults attending a Protestant mental healthcare institution as outpatients were asked to complete the 'Questionnaire God Image' (QGI) and the 'Autism Quotient' (AQNL). In this cross-sectional study various aspects of the God image were related to autistic traits and religious background. The more the respondents reported autistic traits, the greater was their fear of God and the less positive were their feelings. Respondents who were strict Calvinists experienced greater fear of God than did other respondents. Treatment of religious patients with ASD needs to take into account these patients' greater fear of God and their less positive feelings. Those patients who had had a strict Calvinist upbringing had a more pronounced fear of God.

  14. Social Deprivation, Community Cohesion, Denominational Education and Freedom of Choice: A Marxist Perspective on Poverty and Exclusion in the District of Thanet

    Science.gov (United States)

    Welsh, Paul J.

    2008-01-01

    Thanet suffers from severe deprivation, mainly driven by socio-economic factors. Efforts to remediate this through economic regeneration plans have largely been unsuccessful, while a combination of selective and denominational education creates and maintains a gradient of disadvantage that mainly impacts upon already-deprived young people. Some of…

  15. Review Essay: Narration Theory as Possible Common Denominator of the Humanities

    Directory of Open Access Journals (Sweden)

    Harald Weilnböck

    2006-05-01

    within a paper given by physicians that shows the methodological problems of "deconstructionist" approaches. URN: urn:nbn:de:0114-fqs0603226

  16. Design method for low order two-degree-of-freedom controller based on Pade approximation of the denominator series expansion

    International Nuclear Information System (INIS)

    Ishikawa, Nobuyuki; Suzuki, Katsuo

    1999-01-01

    Two-degree-of-freedom (2DOF) control, which has the advantage of setting feedback characteristics, such as the disturbance rejection specification, independently of the reference response characteristics, is widely utilized to improve control performance. Ordinary design methods such as model matching usually derive a high-order feedforward element of the 2DOF controller. In this paper, we propose a new design method for a low-order feedforward element which is based on Padé approximation of the denominator series expansion. The features of the proposed method are as follows: (1) it is suited to realizing the reference response characteristics in the low frequency region; (2) the order of the feedforward element can be selected independently of the feedback element. These features are essential to 2DOF controller design. With this method, a 2DOF reactor power controller is designed and its control performance is evaluated by numerical simulation with a reactor dynamics model. This evaluation confirms that the controller designed by the proposed method possesses control characteristics equivalent to those of a controller designed by the ordinary model matching method. (author)
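    Padé approximation replaces a truncated power series by a low-order rational function matching the series coefficients, which is the series-to-rational step the method above applies to the denominator expansion. A generic sketch of that step only; the exponential series is an illustrative stand-in, not the paper's controller denominator.

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to order 5 -- an illustrative series;
# in the paper's setting the coefficients would come from the series
# expansion of the feedforward element's denominator.
an = [1.0 / math.factorial(k) for k in range(6)]

# Pade approximant with denominator of order 2 (numerator order 3):
# a low-order rational function matching the series coefficients.
p, q = pade(an, 2)
approx = p(1.0) / q(1.0)   # close to e = 2.71828...
```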

  17. Comprehensive Study of Honey with Protected Denomination of Origin and Contribution to the Enhancement of Legal Specifications

    Directory of Open Access Journals (Sweden)

    Leticia M. Estevinho

    2012-07-01

    Full Text Available In this study the characterization of a total of 60 honey samples with Protected Denomination of Origin (PDO collected over three harvests (2009–2011, inclusive, from the Northeast of Portugal was carried out based on the presence of pollen, physicochemical and microbiological characteristics. All samples were found to meet the European Legislation, but some didn’t meet the requirements of the PDO specifications. Concerning the floral origin of honey, our results showed the prevalence of rosemary (Lavandula pedunculata pollen. The microbiological quality of all the analyzed samples was satisfactory, since fecal coliforms, sulfite-reducing clostridia and Salmonella were absent, and molds and yeasts were detected in low counts. Significant differences between the results were studied using one-way analysis of variance (ANOVA, followed by Tukey’s HSD test. The samples were submitted to discriminant function analysis, in order to determine which variables differentiate between two or more naturally occurring groups (Forward Stepwise Analysis. The variables selected were in this order: diastase activity, pH, reducing sugars, free acidity and HMF. The pollen spectrum has perfect discriminatory power. This is the first study in which a honey with PDO was tested, in order to assess its compliance with the PDO book of specifications.

  18. Effective hypernetted-chain study of even-denominator-filling state of the fractional quantum Hall effect

    International Nuclear Information System (INIS)

    Ciftja, O.

    1999-01-01

    The microscopic approach for studying the half-filled state of the fractional quantum Hall effect is based on the idea of proposing a trial Fermi wave function of the Jastrow-Slater form, which is then fully projected onto the lowest Landau level. A simplified starting point is to drop the projection operator and to consider an unprojected wave function. A recent study claims that such a wave function approximated in a Jastrow form may still constitute a good starting point on the study of the half-filled state. In this paper we formalize the effective hypernetted-chain approximation and apply it to the unprojected Fermi wave function, which describes the even-denominator-filling states. We test the above approximation by using the Fermi hypernetted-chain theory, which constitutes the natural choice for the present case. Our results suggest that the approximation of the Slater determinant of plane waves as a Jastrow wave function may not be a very accurate approximation. We conclude that the lowest Landau-level projection operator cannot be neglected if one wants a better quantitative understanding of the phenomena. © 1999 The American Physical Society

  19. On Estimation of Problems and Prospects of Competitiveness Growth on the World Market of Educational Services by Russian Higher Schools

    Directory of Open Access Journals (Sweden)

    V N Denisenko

    2013-12-01

    Full Text Available The article presents an analysis of a survey of representatives of Russian higher schools on the problems of international educational cooperation and participation in international academic mobility.

  20. Stochastic differential equations as a tool to regularize the parameter estimation problem for continuous time dynamical systems given discrete time measurements.

    Science.gov (United States)

    Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats

    2014-05-01

    In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, and this allows the prediction to stay close to the data even when the parameters in the model are incorrect. The extended Kalman filter is used as a state estimator and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered, and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima, which can be solved by efficient gradient-based methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
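    The regularization idea above can be sketched on a scalar example: treat the model as a stochastic differential equation, run an extended Kalman filter so the predicted state tracks the data, and score a parameter value by its innovations. Everything below (logistic dynamics, noise levels, parameter grid) is an invented illustration, not the paper's FitzHugh-Nagumo or Lotka-Volterra setup.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n, theta_true = 0.1, 100, 1.5
R = 0.05**2          # measurement noise variance
Q = 0.01**2 * dt     # process noise variance: the regularizing ingredient

# Synthetic data: Euler-Maruyama steps of dx = theta*x*(1-x) dt + dW,
# observed with additive measurement noise.
x, ys = 0.1, []
for _ in range(n):
    x += dt * theta_true * x * (1.0 - x) + np.sqrt(Q) * rng.standard_normal()
    ys.append(x + np.sqrt(R) * rng.standard_normal())

def innovation_cost(theta):
    """Sum of normalized squared EKF innovations; Q > 0 keeps the filter
    close to the data, smoothing the objective as a function of theta."""
    m, P, cost = 0.1, 1e-2, 0.0
    for y in ys:
        F = 1.0 + dt * theta * (1.0 - 2.0 * m)   # Jacobian of the Euler step
        m += dt * theta * m * (1.0 - m)          # state prediction
        P = F * P * F + Q                        # variance prediction
        S = P + R                                # innovation variance
        K = P / S                                # Kalman gain
        cost += (y - m)**2 / S
        m, P = m + K * (y - m), (1.0 - K) * P    # measurement update
    return cost

thetas = np.linspace(0.5, 2.5, 41)
theta_hat = thetas[np.argmin([innovation_cost(t) for t in thetas])]
```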

  1. A model problem for estimation of moving-film time relaxation at sudden change of boundary conditions

    Science.gov (United States)

    Smirnovsky, Alexander A.; Eliseeva, Viktoria O.

    2018-05-01

    The study of film flow driven by a gas slug flow is of definite interest for heat and mass transfer during the motion of a coolant in the second circuit of a water-water nuclear reactor. Thermohydraulic codes, in which the motion of the liquid film and the vapor is modeled on the basis of one-dimensional balance equations, are usually used for the analysis of such problems. Because of the greater inertia of the liquid film, the film flow parameters change with a relaxation lag compared with the gas flow. We consider a model problem of film flow under the friction of a gas slug flow, neglecting effects such as wave formation, droplet breakup and deposition on the film surface, and evaporation and condensation. Such a problem is analogous to the well-known Couette and Stokes flow problems. An analytical solution has been obtained for laminar flow. Numerical RANS-based simulation of turbulent flow was performed using OpenFOAM. It is established that the relaxation process is almost self-similar. This fact opens the possibility of obtaining useful correlations for the relaxation time.

  2. Solution of Inverse Problems using Bayesian Approach with Application to Estimation of Material Parameters in Darcy Flow

    Czech Academy of Sciences Publication Activity Database

    Domesová, Simona; Beres, Michal

    2017-01-01

    Roč. 15, č. 2 (2017), s. 258-266 ISSN 1336-1376 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : Bayesian statistics * Cross-Entropy method * Darcy flow * Gaussian random field * inverse problem Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://advances.utc.sk/index.php/AEEE/article/view/2236

  3. PROBLEMAS DE ESTIMACIÓN DE MAGNITUDES NO ALCANZABLES: ESTRATEGIAS Y ÉXITO EN LA RESOLUCIÓN (Unreachable Magnitude Estimation Problems: Strategies and Solving Success)

    Directory of Open Access Journals (Sweden)

    Núria Gorgorió

    2013-03-01

    Full Text Available Fermi problems are problems that, being difficult to solve, can be satisfactorily solved if they are broken down into smaller pieces that are solved separately. In this article, we present unreachable magnitude estimation problems as a subset of Fermi problems. Based on data collected from a study carried out with 12- to 16-year-old students, we describe the different strategies for solving the problems that were proposed by the students, and discuss the potential of these strategies to successfully solve the problems.

  4. A Carleman estimate and the balancing principle in the quasi-reversibility method for solving the Cauchy problem for the Laplace equation

    International Nuclear Information System (INIS)

    Cao Hui; Pereverzev, Sergei V; Klibanov, Michael V

    2009-01-01

    The quasi-reversibility method of solving the Cauchy problem for the Laplace equation in a bounded domain Ω is considered. With the help of the Carleman estimation technique, improved error and stability bounds in a subdomain Ω_σ ⊂ Ω are obtained. This paves the way for the use of the balancing principle for an a posteriori choice of the regularization parameter ε in the quasi-reversibility method. As an adaptive regularization parameter choice strategy, the balancing principle does not require a priori knowledge of either the solution smoothness or a constant K appearing in the stability bound estimation. Nevertheless, this principle allows an a posteriori parameter choice that, up to a controllable constant, achieves the best accuracy guaranteed by the Carleman estimate

  5. On the Problem of Attribute Selection for Software Cost Estimation: Input Backward Elimination Using Artificial Neural Networks

    OpenAIRE

    Papatheocharous , Efi; Andreou , Andreas S.

    2010-01-01

    Many parameters affect the cost evolution of software projects. In the area of software cost estimation and project management the main challenge is to understand and quantify the effect of these parameters, or 'cost drivers', on the effort expended to develop software systems. This paper aims at investigating the effect of cost attributes on software development effort using empirical databases of completed projects and building Artificial Neural Network (ANN) models ...

  6. Robust Preconditioning Estimates for Convection-Dominated Elliptic Problems via a Streamline Poincaré–Friedrichs Inequality

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.; Kovács, B.

    2014-01-01

    Roč. 52, č. 6 (2014), s. 2957-2976 ISSN 0036-1429 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : streamline diffusion finite element method * solving convection-dominated elliptic problems * convergence is robust Subject RIV: BA - General Mathematics Impact factor: 1.788, year: 2014 http://epubs.siam.org/doi/abs/10.1137/130940268

  7. The joint estimation of term structures and credit spreads

    NARCIS (Netherlands)

    Houweling, P.; Hoek, J.; Kleibergen, F.R.

    1999-01-01

    We present a new framework for the joint estimation of the default-free government term structure and corporate credit spread curves. By using a data set of liquid, German mark denominated bonds, we show that this yields more realistic spreads than traditionally obtained spread curves that result

  8. A combined ANN-GA and experimental based technique for the estimation of the unknown heat flux for a conjugate heat transfer problem

    Science.gov (United States)

    M K, Harsha Kumar; P S, Vishweshwara; N, Gnanasekaran; C, Balaji

    2018-05-01

    The major objectives in the design of thermal systems include obtaining information about thermophysical, transport and boundary properties. The main purpose of this paper is to estimate the unknown heat flux at the surface of a solid body. A constant-area mild steel fin is considered, and its base is subjected to a constant heat flux. During heating, natural convection heat transfer occurs from the fin to the ambient. The direct solution, which is the forward problem, is developed as a conjugate heat transfer problem from the fin, and the steady state temperature distribution is recorded for any assumed heat flux. In order to model the natural convection heat transfer from the fin, an extended domain is created near the fin geometry, air is specified as the fluid medium, and the Navier-Stokes equations are solved incorporating the Boussinesq approximation. The computational time involved in executing the forward model is then reduced by developing a neural network (NN) between heat flux values and temperatures, based on the back-propagation algorithm. The conjugate heat transfer NN model is then coupled with a genetic algorithm (GA) for the solution of the inverse problem. Initially, the GA is applied to the pure surrogate data; the results are then used as input to the Levenberg-Marquardt method, and such hybridization is shown to result in accurate estimation of the unknown heat flux. The hybrid method is then applied to the experimental temperatures to estimate the unknown heat flux. A satisfactory agreement between the estimated and actual heat flux is achieved by incorporating the hybrid method.
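    The hybrid estimation loop described above (a genetic algorithm for a global search over a cheap surrogate, refined by Levenberg-Marquardt) can be sketched generically. The forward model below is a toy stand-in (an exponential temperature response to a flux q), not the paper's conjugate fin model; the population size, mutation scale and flux bounds are invented.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def forward(q, t):
    """Toy stand-in forward model: temperature response to heat flux q."""
    return 25.0 + 0.08 * q * (1.0 - np.exp(-t / 30.0))

t_obs = np.linspace(0.0, 120.0, 25)
y_obs = forward(1500.0, t_obs) + rng.normal(0.0, 0.5, t_obs.size)  # synthetic data

def misfit(q):
    return np.sum((forward(q, t_obs) - y_obs) ** 2)

# Small real-coded GA searching q in [0, 5000] (illustrative bounds).
pop = rng.uniform(0.0, 5000.0, 30)
for _ in range(40):
    fitness = np.array([misfit(q) for q in pop])
    parents = pop[np.argsort(fitness)[:10]]                  # truncation selection
    pop = np.clip(rng.choice(parents, 30)
                  + rng.normal(0.0, 50.0, 30), 0.0, 5000.0)  # mutation

q_ga = pop[np.argmin([misfit(q) for q in pop])]

# Levenberg-Marquardt refinement seeded with the GA estimate.
fit = least_squares(lambda q: forward(q[0], t_obs) - y_obs, x0=[q_ga], method="lm")
q_hat = fit.x[0]
```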

  9. The problem of the second wind turbine – a note on a common but flawed wind power estimation method

    Directory of Open Access Journals (Sweden)

    A. Kleidon

    2012-06-01

    Full Text Available Several recent wind power estimates suggest that this renewable energy resource can meet all of the current and future global energy demand with little impact on the atmosphere. These estimates are calculated using observed wind speeds in combination with specifications of wind turbine size and density to quantify the extractable wind power. However, this approach neglects the effects of momentum extraction by the turbines on the atmospheric flow that would have effects outside the turbine wake. Here we show with a simple momentum balance model of the atmospheric boundary layer that this common methodology to derive wind power potentials requires unrealistically high increases in the generation of kinetic energy by the atmosphere. This increase by an order of magnitude is needed to ensure momentum conservation in the atmospheric boundary layer. In the context of this simple model, we then compare the effect of three different assumptions regarding the boundary conditions at the top of the boundary layer, with prescribed hub height velocity, momentum transport, or kinetic energy transfer into the boundary layer. We then use simulations with an atmospheric general circulation model that explicitly simulate generation of kinetic energy with momentum conservation. These simulations show that the assumption of prescribed momentum import into the atmospheric boundary layer yields the most realistic behavior of the simple model, while the assumption of prescribed hub height velocity can clearly be disregarded. We also show that the assumptions yield similar estimates for extracted wind power when less than 10% of the kinetic energy flux in the boundary layer is extracted by the turbines. We conclude that the common method significantly overestimates wind power potentials by an order of magnitude in the limit of high wind power extraction. Ultimately, environmental constraints set the upper limit on wind power potential at larger scales rather than

  10. A model reduction approach for the variational estimation of vascular compliance by solving an inverse fluid–structure interaction problem

    International Nuclear Information System (INIS)

    Bertagna, Luca; Veneziani, Alessandro

    2014-01-01

    Scientific computing has progressively become an important tool for research in cardiovascular diseases. The role of quantitative analyses based on numerical simulations has moved from ‘proofs of concept’ to patient-specific investigations, thanks to a strong integration between imaging and computational tools. However, beyond individual geometries, numerical models require the knowledge of parameters that are barely retrieved from measurements, especially in vivo. For this reason, recently cardiovascular mathematics considered data assimilation procedures for extracting the knowledge of patient-specific parameters from measures and images. In this paper, we consider specifically the quantification of vascular compliance, i.e. the parameter quantifying the tendency of arterial walls to deform under blood stress. Following up a previous paper, where a variational data assimilation procedure was proposed, based on solving an inverse fluid–structure interaction problem, here we consider model reduction techniques based on a proper orthogonal decomposition approach to accomplish the solution of the inverse problem in a computationally efficient way. (paper)

  11. Thermodynamic data for modeling acid mine drainage problems: compilation and estimation of data for selected soluble iron-sulfate minerals

    Science.gov (United States)

    Hemingway, Bruch S.; Seal, Robert R.; Chou, I-Ming

    2002-01-01

    Enthalpy of formation, Gibbs energy of formation, and entropy values have been compiled from the literature for the hydrated ferrous sulfate minerals melanterite, rozenite, and szomolnokite, and a variety of other hydrated sulfate compounds. On the basis of this compilation, it appears that there is no evidence for an excess enthalpy of mixing for sulfate-H2O systems, except for the first H2O molecule of crystallization. The enthalpy and Gibbs energy of formation of each H2O molecule of crystallization, except the first, in the iron(II) sulfate - H2O system are -295.15 and -238.0 kJ·mol⁻¹, respectively. The absence of an excess enthalpy of mixing is used as the basis for estimating thermodynamic values for a variety of ferrous, ferric, and mixed-valence sulfate salts of relevance to acid-mine drainage systems.

  12. Estimation of interfacial heat transfer coefficient in inverse heat conduction problems based on artificial fish swarm algorithm

    Science.gov (United States)

    Wang, Xiaowei; Li, Huiping; Li, Zhichao

    2018-04-01

    The interfacial heat transfer coefficient (IHTC) is one of the most important thermophysical parameters, with significant effects on the calculation accuracy of physical fields in numerical simulation. In this study, the artificial fish swarm algorithm (AFSA) was used to evaluate the IHTC between a heated sample and the quenchant in a one-dimensional heat conduction problem. AFSA is a global optimization method. In order to speed up convergence, a hybrid method combining AFSA with a normal distribution method (ZAFSA) was presented. The IHTC values evaluated by ZAFSA were compared with those attained by AFSA and by the advanced-retreat and golden section methods. The results show that a reasonable IHTC is obtained using ZAFSA and that the hybrid method converges well. The algorithm based on ZAFSA not only accelerates convergence but also reduces numerical oscillation in the evaluation of the IHTC.

  13. Data assimilation and uncertainty analysis of environmental assessment problems--an application of Stochastic Transfer Function and Generalised Likelihood Uncertainty Estimation techniques

    International Nuclear Information System (INIS)

    Romanowicz, Renata; Young, Peter C.

    2003-01-01

    Stochastic Transfer Function (STF) and Generalised Likelihood Uncertainty Estimation (GLUE) techniques are outlined and applied to an environmental problem concerned with marine dose assessment. The goal of both methods in this application is the estimation and prediction of the environmental variables, together with their associated probability distributions. In particular, they are used to estimate the amount of radionuclides transferred to marine biota from a given source: the British Nuclear Fuel Ltd (BNFL) repository plant in Sellafield, UK. The complexity of the processes involved, together with the large dispersion and scarcity of observations regarding radionuclide concentrations in the marine environment, require efficient data assimilation techniques. In this regard, the basic STF methods search for identifiable, linear model structures that capture the maximum amount of information contained in the data with a minimal parameterisation. They can be extended for on-line use, based on recursively updated Bayesian estimation and, although applicable to only constant or time-variable parameter (non-stationary) linear systems in the form used in this paper, they have the potential for application to non-linear systems using recently developed State Dependent Parameter (SDP) non-linear STF models. The GLUE-based methods, on the other hand, formulate the problem of estimation using a more general Bayesian approach, usually without prior statistical identification of the model structure. As a result, they are applicable to almost any linear or non-linear stochastic model, although they are much less efficient both computationally and in their use of the information contained in the observations. As expected in this particular environmental application, it is shown that the STF methods give much narrower confidence limits for the estimates due to their more efficient use of the information contained in the data. Exploiting Monte Carlo Simulation (MCS) analysis
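
    As a toy illustration of the GLUE procedure sketched above (sample parameters from a prior, score each sample with an informal likelihood, retain the behavioural samples, and weight predictions by score), consider the following. The decay model, likelihood measure and behavioural threshold are arbitrary choices made for illustration, not those of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def model(k, t):
        # Hypothetical first-order decay model standing in for the
        # environmental transport model; k is the uncertain parameter.
        return np.exp(-k * t)

    t = np.linspace(0.0, 10.0, 20)
    obs = model(0.5, t) + rng.normal(0.0, 0.02, t.size)  # synthetic observations

    # GLUE: sample the prior, score each sample with an informal likelihood,
    # keep "behavioural" samples, and weight predictions by their scores.
    k_samples = rng.uniform(0.1, 1.0, 5000)
    sse = np.array([np.sum((model(k, t) - obs) ** 2) for k in k_samples])
    weights = np.exp(-sse / sse.min())   # one possible informal likelihood
    keep = weights > 0.01                # behavioural threshold (arbitrary)
    w = weights[keep] / weights[keep].sum()

    preds = np.array([model(k, t) for k in k_samples[keep]])
    mean_pred = w @ preds                # weighted predictive mean
    print(f"kept {keep.sum()} behavioural samples; prediction at t=0: {mean_pred[0]:.3f}")
    ```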

  14. A theoretical approach to the problem of dose-volume constraint estimation and their impact on the dose-volume histogram selection

    International Nuclear Information System (INIS)

    Schinkel, Colleen; Stavrev, Pavel; Stavreva, Nadia; Fallone, B. Gino

    2006-01-01

    This paper outlines a theoretical approach to the problem of estimating and choosing dose-volume constraints. Following this approach, a method of choosing dose-volume constraints based on biological criteria is proposed. This method is called "reverse normal tissue complication probability (NTCP) mapping into dose-volume space" and may be used as general guidance for the problem of dose-volume constraint estimation. Dose-volume histograms (DVHs) are randomly simulated, and those resulting in clinically acceptable levels of complication, such as an NTCP of 5±0.5%, are selected and averaged, producing a mean DVH that is proven to result in the same level of NTCP. The points from the averaged DVH are proposed to serve as physical dose-volume constraints. The population-based critical volume and Lyman NTCP models with parameter sets taken from literature sources were used for the NTCP estimation. The impact of the prescribed value of the maximum dose to the organ, D_max, on the averaged DVH and the dose-volume constraint points is investigated. Constraint points for 16 organs are calculated. The impact of the number of constraints to be fulfilled, based on the likelihood that a DVH satisfying them will result in an acceptable NTCP, is also investigated. It is theoretically proven that radiation treatment optimization based on physical objective functions can restrict the dose to the organs at risk sufficiently well, resulting in sufficiently low NTCP values, through the employment of several appropriate dose-volume constraints. At the same time, the purely physical approach to optimization is self-restrictive due to the preassignment of acceptable NTCP levels, thus excluding possibly better solutions to the problem.

  15. Estimating the burden of illness in an Ontario community with untreated drinking water and sewage disposal problems.

    Science.gov (United States)

    Chambers, L W; Shimoda, F; Walter, S D; Pickard, L; Hunter, B; Ford, J; Deivanayagam, N; Cunningham, I

    1989-01-01

    The Hamilton-Wentworth regional health department was asked by one of its municipalities to determine whether the present water supply and sewage disposal methods used in a community without piped water and regional sewage disposal posed a threat to the health of its residents. Three approaches were used: assessments by public health inspectors of all households; bacteriological and chemical analyses of water samples; and completion of a specially designed questionnaire by residents in the target community and a control community. 89% of the 227 residences in the target community were found to have a drinking water supply that, according to the Ministry of Environment guidelines, was unsafe and/or unsatisfactory. According to on-site inspections, 32% of households had sewage disposal problems. Responses to the questionnaire revealed that the target community residents reported more symptoms associated with enteric infections due to the water supply. Two of these symptoms, diarrhea and stomach cramps, had a relative risk of 2.2 when compared to the control community (p < 0.05). The study was successfully used by the municipality to argue for provincial funding of piped water.

  16. Osteosarcoma induction by plutonium-239, americium-241 and neptunium-237 : the problem of deriving risk estimates for man

    International Nuclear Information System (INIS)

    Taylor, D.M.

    1988-01-01

    Spontaneous bone cancer (osteosarcoma) represents only about 0.3% of all human cancers, but is well known to be inducible in humans by internal contamination with radium-226 and radium-224. Plutonium-239, americium-241 and neptunium-237 form, or will form, the principal long-lived alpha-particle-emitting components of high-activity waste and burnt-up nuclear fuel elements. These three nuclides deposit extensively in human bone and although, fortunately, no case of a human osteosarcoma induced by any of these nuclides is known, evidence from animal studies suggests that all three are more effective than radium-226 in inducing osteosarcoma. The assumption that the ratio of the risk factors, the number of osteosarcomas expected per 10,000 person/animal Gy, for radium-226 and any other bone-seeking alpha-emitter is independent of animal species has formed the basis of all the important studies of the radiotoxicity of actinide nuclides in experimental animals. The aim of this communication is to review the risk factors which may be calculated from the various animal studies carried out over the last thirty years with plutonium-239, americium-241 and neptunium-237, and to consider the problems which may arise in extrapolating these risk factors to Homo sapiens.

  17. Estimating the Effect and Economic Impact of Absenteeism, Presenteeism, and Work Environment-Related Problems on Reductions in Productivity from a Managerial Perspective.

    Science.gov (United States)

    Strömberg, Carl; Aboagye, Emmanuel; Hagberg, Jan; Bergström, Gunnar; Lohela-Karlsson, Malin

    2017-09-01

    The aim of this study was to propose wage multipliers that can be used to estimate the costs of productivity loss for employers in economic evaluations, using detailed information from managers. Data were collected in a survey panel of 758 managers from different sectors of the labor market. Based on assumed scenarios of a period of absenteeism due to sickness, presenteeism and work environment-related problem episodes, and specified job characteristics (i.e., explanatory variables), managers assessed their impact on group productivity and cost (i.e., the dependent variable). In an ordered probit model, the extent of productivity loss resulting from job characteristics is predicted. The predicted values are used to derive wage multipliers based on the cost of productivity estimates provided by the managers. The results indicate that job characteristics (i.e., degree of time sensitivity of output, teamwork, or difficulty in replacing a worker) are linked to productivity loss as a result of health-related and work environment-related problems. The impact of impaired performance on productivity differs among various occupations. The mean wage multiplier is 1.97 for absenteeism, 1.70 for acute presenteeism, 1.54 for chronic presenteeism, and 1.72 for problems related to the work environment. This implies that the costs of health-related and work environment-related problems to organizations can exceed the worker's wage. The use of wage multipliers is recommended for calculating the cost of health-related and work environment-related productivity loss to properly account for actual costs.
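
    In cost terms, the reported multipliers scale the wage bill of an episode into an organizational cost of lost productivity. A minimal sketch using the mean multipliers quoted above; the wage and episode length are made-up inputs:

    ```python
    # Mean wage multipliers reported above, by type of episode.
    MULTIPLIERS = {
        "absenteeism": 1.97,
        "acute_presenteeism": 1.70,
        "chronic_presenteeism": 1.54,
        "work_environment": 1.72,
    }

    def productivity_loss_cost(daily_wage: float, days: float, kind: str) -> float:
        # Cost to the organization = wage bill for the episode times the
        # multiplier, so the cost can exceed the worker's own wage.
        return daily_wage * days * MULTIPLIERS[kind]

    # Example: 3 days of sickness absence for a worker earning 200 per day
    # costs roughly 1182, not 600, once team and replacement effects count.
    print(productivity_loss_cost(200.0, 3, "absenteeism"))  # 1182.0
    ```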

  18. Local heat transfer estimation in microchannels during convective boiling under microgravity conditions: 3D inverse heat conduction problem using BEM techniques

    Science.gov (United States)

    Luciani, S.; LeNiliot, C.

    2008-11-01

    Two-phase and boiling flow instabilities are complex, due to phase change and the existence of several interfaces. To fully understand the high heat transfer potential of boiling flows in microscale geometries, it is vital to quantify these transfers. To perform this task, an experimental device has been designed to observe flow patterns. The analysis uses an inverse method that allows us to estimate the local heat transfer while boiling occurs inside a microchannel. In our configuration, direct measurement would impair the accuracy of the sought heat transfer coefficient, because thermocouples implanted on the minichannel surface would disturb the established flow. In this communication, we solve a 3D IHCP which consists of estimating, from experimental temperature measurements, the surface temperature and the surface heat flux in a minichannel during convective boiling under several gravity levels (µg, 1g, 1.8g). The IHCP is formulated as a mathematical optimization problem and solved using the boundary element method (BEM).

  19. Comparative demography of an epiphytic lichen: support for general life history patterns and solutions to common problems in demographic parameter estimation.

    Science.gov (United States)

    Shriver, Robert K; Cutler, Kerry; Doak, Daniel F

    2012-09-01

    Lichens are major components in many terrestrial ecosystems, yet their population ecology is at best only poorly understood. Few studies have fully quantified the life history or demographic patterns of any lichen, with particularly little attention to epiphytic species. We conducted a 6-year demographic study of Vulpicida pinastri, an epiphytic foliose lichen, in south-central Alaska. After testing multiple size-structured functions to describe patterns in each V. pinastri demographic rate, we used the resulting estimates to construct a stochastic demographic model for the species. This model development led us to propose solutions to two general problems in construction of demographic models for many taxa: how to simply but accurately characterize highly skewed growth rates, and how to estimate recruitment rates that are exceptionally difficult to directly observe. Our results show that V. pinastri has rapid and variable growth and, for small individuals, low and variable survival, but that these traits are coupled with considerable longevity (e.g., >50 years mean future life span for a 4 cm² thallus) and little deviation of the stochastic population growth rate from the deterministic expectation. Comparisons of the demographic patterns we found with those of other lichen studies suggest that their relatively simple architecture may allow clearer generalities about growth patterns for lichens than for other taxa, and that the expected pattern of faster growth rates for epiphytic species is substantiated.

  20. Water Residence Time estimation by 1D deconvolution in the form of a l2 -regularized inverse problem with smoothness, positivity and causality constraints

    Science.gov (United States)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
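
    The formulation described above can be prototyped compactly: build the causal (lower-triangular) convolution matrix from the rain series, add the l2 smoothness penalty via a first-difference operator, and impose positivity through bounds. The sketch below illustrates that formulation on synthetic data and is not the authors' code; the kernel shape, noise level and regularization weight are arbitrary.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(2)
    n = 60

    rain = rng.gamma(2.0, 1.0, n)                       # input rain series
    wrt_true = np.exp(-np.arange(n) / 8.0)
    wrt_true /= wrt_true.sum()                          # true WRT (unknown in practice)

    # Causality: the aquifer level is a lower-triangular convolution of the rain.
    A = toeplitz(rain, np.zeros(n))
    level = A @ wrt_true + rng.normal(0.0, 0.05, n)     # noisy observations

    # l2-regularized problem: min ||A h - y||^2 + lam * ||D h||^2 with h >= 0,
    # where D is a first-difference operator enforcing smoothness.
    lam = 1.0
    D = np.eye(n) - np.eye(n, k=1)
    A_aug = np.vstack([A, np.sqrt(lam) * D])
    y_aug = np.concatenate([level, np.zeros(n)])
    h = lsq_linear(A_aug, y_aug, bounds=(0.0, np.inf)).x
    print("recovered WRT peaks at lag", int(np.argmax(h)))
    ```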

  1. The tangible common denominator of substance use disorders: a reply to commentaries to Rehm et al. (2013a)

    NARCIS (Netherlands)

    Rehm, J.; Anderson, P.; Gual, A.; Kraus, L.; Marmet, S.; Nutt, D.J.; Room, R.; Samokhvalov, A.V.; Scafato, E.; Shield, K.D.; Trapencieris, M.; Wiers, R.W.; Gmel, G.

    2014-01-01

    In response to our suggestion to define substance use disorders via ‘heavy use over time’, theoretical and conceptual issues, measurement problems and implications for stigma and clinical practice were raised. With respect to theoretical and conceptual issues, no other criterion has been shown,

  2. Global and regional aspects for genesis of catastrophic floods - the problems of forecasting and estimates for mass and water balance (surface and groundwater contribution)

    Science.gov (United States)

    Trifonova, Tatiana; Arakelian, Sergei; Trifonov, Dmitriy; Abrakhin, Sergei

    2017-04-01

    1. The principal goal of the present talk is to discuss the existing uncertainty and discrepancy in water balance estimation for an area under heavy rain flood: on the one hand, the theoretical estimate based on the rainfall arriving from the atmosphere and, on the other hand, the surface water flow parameters actually measured by various methods and/or reported by eyewitnesses (cf. [1]). The vital item for our discussion is that the latter may sometimes be noticeably greater than the former. Our estimations show a greater observed water mass discharge during the events than could be expected from the rainfall alone [2]. This fact gives us grounds to take into account a possible groundwater contribution to the event. 2. We carried out such an analysis for at least two catastrophic water events in 2015: (1) the torrential rain and catastrophic floods in Louisiana (USA), June 16-20; and (2) the Assam flood (India), Aug. 22 - Sept. 8. 3. Groundwater flooding of a river terrace is discussed, e.g., in [3], but there as a relatively rare phenomenon in which the rise of the water table above the land surface coincides with intense rainfall. In our hypothesis, the principal path for a possible groundwater exit to the surface is the crack-net system in the earth's crust (including deep layers) acting as a water transportation system, which is, first, subject to a varying pressure field in the groundwater basin and, second, modified by different causes, either sudden (the Krymsk flash flood event, July 2012, Russia) or smooth (the Amur river flood event, Aug.-Sept. 2013, Russia). Such reconstruction of the 3D crack-net by external causes (resulting even in local variation of pressure in any crack section) is a principal item of the presented approach. 4. We believe that in some cases an interconnection between floods and preceding earthquakes may occur. We discuss the problem for certain events (e.g. in addition to

  3. Cost Estimates and Investment Decisions

    International Nuclear Information System (INIS)

    Emhjellen, Kjetil; Emhjellen Magne; Osmundsen, Petter

    2001-08-01

    When evaluating new investment projects, oil companies traditionally use the discounted cashflow method. This method requires expected cashflows in the numerator and a risk-adjusted required rate of return in the denominator in order to calculate net present value. The capital expenditure (CAPEX) of a project is one of the major cashflows used to calculate net present value. Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. In the oil industry and many other industries, it is common practice to report a CAPEX that is the estimated 50/50 (median) CAPEX instead of the estimated expected (expected value) CAPEX. In this article we demonstrate how the practice of using a 50/50 (median) CAPEX, when the cost distributions are asymmetric, causes project valuation errors and therefore may lead to wrong investment decisions, with acceptance of projects that have negative net present values. (author)
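
    The valuation error is easy to reproduce numerically. In the sketch below (illustrative numbers only, not from the article), costs follow a right-skewed lognormal distribution, so the median CAPEX sits below the expected CAPEX, and an NPV computed with the median looks positive while the NPV computed with the expected value is negative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Right-skewed (lognormal) CAPEX distribution: the median sits below the mean.
    capex = rng.lognormal(mean=np.log(100.0), sigma=0.6, size=100_000)
    median_capex = np.median(capex)    # the "50/50" figure, ~100
    expected_capex = capex.mean()      # what the NPV formula needs, ~120

    revenue_pv = 115.0                 # assumed present value of net revenues

    npv_using_median = revenue_pv - median_capex    # looks positive
    npv_using_mean = revenue_pv - expected_capex    # actually negative
    print(f"NPV with median CAPEX:   {npv_using_median:+.1f}")
    print(f"NPV with expected CAPEX: {npv_using_mean:+.1f}")
    ```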

  4. Statistical significant change versus relevant or important change in (quasi) experimental design : some conceptual and methodological problems in estimating magnitude of intervention-related change in health services research

    NARCIS (Netherlands)

    Middel, Berrie; van Sonderen, Eric

    2002-01-01

    This paper aims to identify problems in estimating, and in interpreting, the magnitude of intervention-related change over time (responsiveness) assessed with health outcome measures. Responsiveness is a problematic construct and there is no consensus on how to quantify the appropriate index to

  5. Assessment of psychosocial problems in children with type 1 diabetes and their families: the added value of using standardised questionnaires in addition to clinical estimations of nurses and paediatricians

    NARCIS (Netherlands)

    Boogerd, E.A.; Damhuis, A.M.A.; Velden, J.A.M. van der; Steeghs, M.C.C.H.; Noordam, C.; Verhaak, C.M.; Vermaes, I.P.

    2015-01-01

    AIMS AND OBJECTIVES: To investigate the assessment of psychosocial problems in children with type 1 diabetes by means of clinical estimations made by nurses and paediatricians and by using standardised questionnaires. BACKGROUND: Although children with type 1 diabetes and their parents show

  6. Assessment of psychosocial problems in children with type 1 diabetes and their families: The added value of using standardised questionnaires in addition to clinical estimations of nurses and paediatricians

    NARCIS (Netherlands)

    Boogerd, E.A.; Damhuis, A.M.A.; Alfen-van der Velden, A.A.E.M. van; Steeghs, M.C.C.H.; Noordam, C.; Verhaak, C.M.; Vermaes, I.P.R.

    2015-01-01

    Aims and objectives: To investigate the assessment of psychosocial problems in children with type 1 diabetes by means of clinical estimations made by nurses and paediatricians and by using standardised questionnaires. Background Although children with type 1 diabetes and their parents show increased

  7. Interface depolarization field as common denominator of fatigue and size effect in Pb(Zr0.54Ti0.46)O3 ferroelectric thin film capacitors

    Science.gov (United States)

    Bouregba, R.; Sama, N.; Soyer, C.; Poullain, G.; Remiens, D.

    2010-05-01

    Dielectric, hysteresis and fatigue measurements are performed on Pb(Zr0.54Ti0.46)O3 (PZT) thin film capacitors with different thicknesses and different electrode configurations, using platinum and LaNiO3 conducting oxide. The data are compared with those collected in a previous work devoted to the study of size effect by R. Bouregba et al. [J. Appl. Phys. 106, 044101 (2009)]. The deterioration of the ferroelectric properties, following fatigue cycling and thickness downscaling, presents very similar characteristics and allows a direct correlation to be drawn between the two phenomena. Namely, the interface depolarization field (E_dep) resulting from interface chemistry is found to be the common denominator; the fatigue phenomenon is a manifestation of the strengthening of E_dep over time. Changes in dielectric permittivity, in remnant and coercive values, as well as in the shape of hysteresis loops, are mediated by competition between degradation of the dielectric properties of the interfaces and possible accumulation of interface space charge. It is proposed that the presence in the band gap of trap energy levels with large time constants, due to defects in small nonferroelectric regions at the electrode-PZT film interfaces, ultimately governs the aging process. Size effect and aging may be seen as two facets of the same underlying mechanism; the only difference lies in the observation time of the phenomena.

  8. Homeless Mentally Ill: Problems and Options in Estimating Numbers and Trends. Report to the Chairman, Committee on Labor and Human Resources, U.S. Senate.

    Science.gov (United States)

    General Accounting Office, Washington, DC. Program Evaluation and Methodology Div.

    In response to a request by the United States Senate Committee on Labor and Human Resources, the General Accounting Office (GAO) examined the methodological soundness of current population estimates of the number of homeless chronically mentally ill persons, and proposed several options for estimating the size of this population. The GAO reviewed…

  9. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    Science.gov (United States)

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  10. In Lands of Foreign Currency Credit, Bank Lending Channels Run Through? The Effects of Monetary Policy at Home and Abroad on the Currency Denomination of the Supply of Credit

    OpenAIRE

    Steven Ongena; Ibolya Schindele; Dzsamila Vonnak

    2014-01-01

    We analyze the differential impact of domestic and foreign monetary policy on the local supply of bank credit in domestic and foreign currencies. We analyze a novel, supervisory dataset from Hungary that records all bank lending to firms including its currency denomination. Accounting for time-varying firm-specific heterogeneity in loan demand, we find that a lower domestic interest rate expands the supply of credit in the domestic but not in the foreign currency. A lower foreign interest rat...

  11. Estimates of the stabilization rate as t→∞ of solutions of the first mixed problem for a quasilinear system of second-order parabolic equations

    International Nuclear Information System (INIS)

    Kozhevnikova, L M; Mukminov, F Kh

    2000-01-01

    A quasilinear system of parabolic equations with energy inequality is considered in a cylindrical domain {t > 0} × Ω. In a broad class of unbounded domains Ω, two geometric characteristics of a domain are identified which determine the rate of convergence to zero as t → ∞ of the L₂-norm of a solution. Under additional assumptions on the coefficients of the quasilinear system, estimates of the derivatives and uniform estimates of the solution are obtained; they are proved to be best possible in the order of convergence to zero in the case of a single semilinear equation.

  12. An analytic solution to the homogeneous EIT problem on the 2D disk and its application to estimation of electrode contact impedances

    International Nuclear Information System (INIS)

    Demidenko, Eugene

    2011-01-01

    An analytic solution of the potential distribution on a 2D homogeneous disk for electrical impedance tomography under the complete electrode model is expressed via an infinite system of linear equations. For the shunt electrode model with two electrodes, our solution coincides with the previously derived solution expressed via elliptic integral (Pidcock et al 1995 Physiol. Meas. 16 77–90). The Dirichlet-to-Neumann map is derived for statistical estimation via nonlinear least squares. The solution is validated in phantom experiments and applied for breast contact impedance estimation in vivo. Statistical hypothesis testing is used to test whether the contact impedances are the same across electrodes or all equal zero. Our solution can be especially useful for a rapid real-time test for bad surface contact in clinical setting

  13. Boundary-value problems with integral conditions for a system of Lame equations in the space of almost periodic functions

    Directory of Open Access Journals (Sweden)

    Volodymyr S. Il'kiv

    2016-11-01

    Full Text Available We study a problem with integral boundary conditions in the time coordinate for a system of Lame equations of dynamic elasticity theory of an arbitrary dimension. We find necessary and sufficient conditions for the existence and uniqueness of solution in the class of almost periodic functions in the spatial variables. To solve the problem of small denominators arising while constructing solutions, we use the metric approach.

  14. Inverse problems of geophysics

    International Nuclear Information System (INIS)

    Yanovskaya, T.B.

    2003-07-01

    This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least-squares fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given.

  15. A-Posteriori Error Estimates for Mixed Finite Element and Finite Volume Methods for Problems Coupled Through a Boundary with Non-Matching Grids

    Science.gov (United States)

    2013-08-01

    The error indicators for both MFE and GFV are often similar in size. As a gross measure of the effect of geometric projection and of the use of quadrature, the report compares the quantities of interest MFE ∑(e,ψ) or GFV ∑(e,ψ) against their component sums (∑MFE_i, ∑GFV_i) and reports their ratios. Tables 1 and 2 of the source show this using coarse and fine forward solutions for the forward problem with solution (4.1), with the adjoint data components ψ_u and ψ_p constant everywhere and ψ_ξ = 0; the tabulated per-grid entries (e.g. for the 20x20 : 32x32 adjoint grid) are not recoverable here.

  16. A Bootstrap-Based Probabilistic Optimization Method to Explore and Efficiently Converge in Solution Spaces of Earthquake Source Parameter Estimation Problems: Application to Volcanic and Tectonic Earthquakes

    Science.gov (United States)

    Dahm, T.; Heimann, S.; Isken, M.; Vasyura-Bathke, H.; Kühn, D.; Sudhaus, H.; Kriegerowski, M.; Daout, S.; Steinberg, A.; Cesca, S.

    2017-12-01

    Seismic source and moment tensor waveform inversion is often ill-posed or non-unique if station coverage is poor or signals are weak. The interpretation of moment tensors can therefore become difficult if the full model space, including all its trade-offs and uncertainties, is not explored. This is especially true for non-double-couple components of weak or shallow earthquakes, as for instance found in volcanic, geothermal or mining environments. We developed a bootstrap-based probabilistic optimization scheme (Grond), which is based on pre-calculated Green's function full waveform databases (e.g. the fomosto tool, doi.org/10.5880/GFZ.2.1.2017.001). Grond is able to efficiently explore the full model space, the trade-offs and the uncertainties of source parameters. The program is highly flexible with respect to adaptation to specific problems, the design of objective functions, and the diversity of empirical datasets. It uses an integrated, robust waveform data processing based on a newly developed Python toolbox for seismology (Pyrocko, see Heimann et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.001), and allows for visual inspection of many aspects of the optimization problem. Grond has been applied to CMT moment tensor inversion using W-phases, to nuclear explosions in Korea, to meteorite atmospheric explosions, to volcano-tectonic events during caldera collapse and to intra-plate volcanic and tectonic crustal events. Grond can be used to simultaneously optimize seismological waveforms, amplitude spectra and static displacements from geodetic data such as InSAR and GPS (e.g. KITE, Isken et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.002). We present examples of Grond optimizations to demonstrate the advantage of a full exploration of source parameter uncertainties for interpretation.

  17. The accident at the Chernobyl nuclear power plant and the problem of estimating the consequences of radioactive contamination of natural and agricultural ecological systems

    International Nuclear Information System (INIS)

    Alexakhin, R.M.; Geraskin, S.A.; Fesenko, S.V.

    1996-01-01

    Severe radiation accidents cause long-term low-dose irradiation of biota over large territories. In this situation, a correct estimation of the danger of low-dose irradiation is of great importance. The approaches now in use to assess the genetic consequences of irradiation are based on linear extrapolation of the biological effects induced by high and medium doses into the region of low doses. However, models based on the linear no-threshold hypothesis lack strong biological justification and come into conflict with the available experimental data. Our experiments with agricultural crops, aimed at studying regularities in the induction of cytogenetic damage using test systems, have demonstrated that the form of the dose-effect curve in the domain of low exposure values shows a pronounced linearity together with the presence of a dose-independent region. A comparison of the experimentally revealed form of the empirical curve with results obtained for other objects (human lymphocytes, Chinese hamster fibroblasts, seedlings of horse beans, etc.) allows the conclusion that the relationship between the yield of radiation-induced cytogenetic disturbances and dose is non-linear and universal in character, varying between objects only in the dose values at which changes in the nature of the relationship occur. The observed genetic effects in the region of low doses thus result from peculiarities of the cellular response to weak external action rather than from the damaging impact of ionising radiation or other factors of a physical or chemical nature.

  18. Balance Problems

    Science.gov (United States)

    ... often, it could be a sign of a balance problem. Balance problems can make you feel unsteady. You may ... related injuries, such as a hip fracture. Some balance problems are due to problems in the inner ear.

  19. [Population problem, comprehension problem].

    Science.gov (United States)

    Tallon, F

    1993-08-01

    Overpopulation of developing countries in general, and Rwanda in particular, is not just their problem but a problem for developed countries as well. Rapid population growth is a key factor in the increase of poverty in sub-Saharan Africa. Population growth outstrips food production. Africa receives more and more foreign food, economic, and family planning aid each year. The Government of Rwanda encourages reduced population growth. Some people criticize it, but this criticism results in mortality and suffering. One must combat this ignorance, but attitudes change slowly. Some of these same people find the government's acceptance of family planning an invasion of their privacy. Others complain that rich countries do not have campaigns to reduce births, so why should Rwanda do so? The rate of schooling does not increase in Africa, even though the number of children in school increases, because of rapid population growth. Education is key to improvements in Africa's socioeconomic growth. Thus, Africa is underpopulated in terms of potentiality but overpopulated in terms of reality, current conditions, and possibilities of overexploitation. Africa needs to invest in human resources. Families need to save, and to do so, they must refrain from having many children. Africa should resist the temptation to waste, as rich countries do, and denounce it. Africa needs to become more independent of these countries, but structural adjustment plans, growing debt, and rapid population growth limit national independence. Food aid is a means for developed countries to dominate developing countries. Modernization through foreign aid has had some positive effects on developing countries (e.g., improved hygiene, mortality reduction), but these also sparked rapid population growth. Rwandan society is no longer traditional, but it is also not yet modern. A change in mentality to fewer births, better quality of life for living infants, better education, and less burden for women must occur.

  20. Problems of estimating hydrological characteristics for small ...

    African Journals Online (AJOL)

    Rapid assessments of water resource availability in South Africa have been facilitated by the availability for a number of years of a national data set of naturalised monthly flow time series. However, these data are only available for moderate to large catchments (referred to as quaternary catchments). In the absence of ...

  1. METHODOLOGICAL PROBLEMS OF PRACTICAL RADIOGENIC RISK ESTIMATIONS

    Directory of Open Access Journals (Sweden)

    A. Т. Gubin

    2014-01-01

    Full Text Available Mathematical relations were established according to the description of the calculation procedure for the values of the nominal risk coefficient given in the ICRP 2007 Recommendations. It is shown that the lifetime radiogenic risk is a linear functional of the distribution of dose in time, with a multiplier that decreases with age. As a consequence, application of the nominal risk coefficient in risk calculations is justified when prolonged exposure is distributed practically evenly in time, and it gives a significant deviation for a single (acute) exposure. When the additive model of radiogenic risk for solid cancers proposed in the UNSCEAR 2006 Report is used, this multiplier decreases almost linearly with age, which is convenient for its practical application.
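
    Written out as a formula, in notation introduced here for illustration rather than taken from the paper, the statement that lifetime risk is a linear functional of the dose history reads:

    ```latex
    % Lifetime radiogenic risk R as a linear functional of the dose history:
    % \dot{D}(a) is the dose rate received at age a, and r(a) is an
    % age-dependent multiplier that decreases with a.
    R = \int_{a_0}^{a_{\max}} r(a)\, \dot{D}(a)\, \mathrm{d}a
    ```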

  2. Maximum Norm Estimates for Finite Volume Element Method for Non-selfadjoint and Indefinite Elliptic Problems

    Institute of Scientific and Technical Information of China (English)

    毕春加

    2005-01-01

    In this paper, we establish the maximum norm estimates of the solutions of the finite volume element method (FVE) based on the P1 conforming element for the non-selfadjoint and indefinite elliptic problems.

  3. Perturbed asymptotically linear problems

    OpenAIRE

    Bartolo, R.; Candela, A. M.; Salvatore, A.

    2012-01-01

    The aim of this paper is to investigate the existence of solutions of some semilinear elliptic problems on open bounded domains when the nonlinearity is subcritical and asymptotically linear at infinity and there is a perturbation term which is just continuous. Even when the problem does not have a variational structure, suitable procedures and estimates allow us to prove that the number of distinct critical levels of the functional associated to the unperturbed problem is "stable" unde...

  4. Genome size estimation: a new methodology

    Science.gov (United States)

    Álvarez-Borrego, Josué; Gallardo-Escárate, Crisitian; Kober, Vitaly; López-Bonilla, Oscar

    2007-03-01

    Recently, within cytogenetic analysis, the evolutionary relations implied by the nuclear DNA content of plants and animals have received great attention. The first detailed measurements of nuclear DNA content were made in the early 1940s, several years before Watson and Crick proposed the molecular structure of DNA. In the following years, Hewson Swift developed the concept of the "C-value" in reference to the haploid DNA content of plants. Later, Mirsky and Ris carried out the first systematic study of genome size in animals, including representatives of the five superclasses of vertebrates as well as some invertebrates. From these preliminary results it became evident that DNA content varies enormously between species and that this variation bears no relation to the intuitive notion of organismal complexity. This observation was reaffirmed in the following years as studies of genome size multiplied, and the characteristic came to be called the "C-value paradox". A few years later, with the discovery of non-coding DNA, the paradox was resolved; nevertheless, numerous questions remain unanswered to this day, and such studies are now referred to as the "C-value enigma". In this study, we report a new method for genome size estimation by quantification of fluorescence fading. We measured the fluorescence intensity every 1600 milliseconds in DAPI-stained nuclei. The estimate of the area under the curve during the fading period (integral fading) was related to genome size.
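
    The "integral fading" quantity is simply the area under the measured intensity-versus-time curve. A minimal sketch with made-up intensity values, using the 1600 ms sampling interval stated above:

    ```python
    # Fluorescence intensity of a DAPI-stained nucleus sampled every 1600 ms;
    # the values are made up for illustration.
    dt = 1.6  # seconds between samples
    intensity = [100.0, 88.0, 78.0, 70.0, 63.0, 57.0, 52.0, 48.0]

    # "Integral fading": trapezoidal area under the intensity-time curve
    # during the fading period, the quantity related to genome size.
    integral_fading = sum(
        0.5 * (a + b) * dt for a, b in zip(intensity[:-1], intensity[1:])
    )
    print(f"integral fading: {integral_fading:.1f} (a.u. * s)")
    ```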

  5. Speech Problems

    Science.gov (United States)

    ... a person's ability to speak clearly. Some common speech and language disorders: stuttering is a problem that ...

  6. PN solutions for the slowing-down and the cell calculation problems in plane geometry

    International Nuclear Information System (INIS)

    Caldeira, Alexandre David

    1999-01-01

    In this work, P_N solutions for the slowing-down and cell problems in slab geometry are developed. To highlight the main contributions of this development, one can mention: the new particular solution developed for the P_N method applied to the slowing-down problem in the multigroup model, originating a new class of polynomials denominated Chandrasekhar generalized polynomials; the treatment of a specific situation, known as a degeneracy, arising from a particularity in the group constants; and the first application of the P_N method, for arbitrary N, in criticality calculations at the cell level reported in the literature. (author)

  7. Hemiequilibrium problems

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2004-01-01

    Full Text Available We consider a new class of equilibrium problems, known as hemiequilibrium problems. Using the auxiliary principle technique, we suggest and analyze a class of iterative algorithms for solving hemiequilibrium problems, the convergence of which requires either pseudomonotonicity or partially relaxed strong monotonicity. As a special case, we obtain a new method for hemivariational inequalities. Since hemiequilibrium problems include hemivariational inequalities and equilibrium problems as special cases, the results proved in this paper still hold for these problems.

  8. Estimated incidence of influenza-associated severe acute respiratory infections in Indonesia, 2013-2016.

    Science.gov (United States)

    Susilarini, Ni K; Haryanto, Edy; Praptiningsih, Catharina Y; Mangiri, Amalya; Kipuw, Natalie; Tarya, Irmawati; Rusli, Roselinda; Sumardi, Gestafiana; Widuri, Endang; Sembiring, Masri M; Noviyanti, Widya; Widaningrum, Christina; Lafond, Kathryn E; Samaan, Gina; Setiawaty, Vivi

    2018-01-01

    Indonesia's hospital-based Severe Acute Respiratory Infection (SARI) surveillance system, Surveilans Infeksi Saluran Pernafasan Akut Berat Indonesia (SIBI), was established in 2013. While respiratory illnesses such as SARI pose a significant problem, there are limited incidence-based data on influenza disease burden in Indonesia. This study aimed to estimate the incidence of influenza-associated SARI in Indonesia during 2013-2016 at three existing SIBI surveillance sites. From May 2013 to April 2016, inpatients from sentinel hospitals in three districts of Indonesia (Gunung Kidul, Balikpapan, Deli Serdang) were screened for SARI. Respiratory specimens were collected from eligible inpatients and screened for influenza viruses. Annual incidence rates were calculated using these SIBI-enrolled influenza-positive SARI cases as a numerator, with a denominator catchment population defined through hospital admission survey (HAS) to identify respiratory-coded admissions by age to hospitals in the sentinel site districts. From May 2013 to April 2016, there were 1527 SARI cases enrolled, of whom 1392 (91%) had specimens tested and 199 (14%) were influenza-positive. The overall estimated annual incidence of influenza-associated SARI ranged from 13 to 19 per 100 000 population. Incidence was highest in children aged 0-4 years (82-114 per 100 000 population), followed by children 5-14 years (22-36 per 100 000 population). Incidence rates of influenza-associated SARI in these districts indicate a substantial burden of influenza hospitalizations in young children in Indonesia. Further studies are needed to examine the influenza burden in other potential risk groups such as pregnant women and the elderly.
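
    The rate computation itself is elementary: enrolled influenza-positive SARI cases over the HAS-derived catchment population, scaled to 100 000. A minimal sketch with illustrative numbers, not figures from the study:

    ```python
    def incidence_per_100k(cases: int, catchment_population: int) -> float:
        # Annual incidence = influenza-positive SARI cases enrolled at the
        # sentinel hospitals, divided by the HAS-derived catchment population.
        return 1e5 * cases / catchment_population

    # Illustrative example: 45 cases among children aged 0-4 years in a
    # catchment of 50,000 children gives 90 per 100,000 population.
    print(incidence_per_100k(45, 50_000))  # 90.0
    ```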

  9. Cosmology problems

    International Nuclear Information System (INIS)

    Lukash, V.N.

    1983-01-01

    Information presented at the 18th General Assembly of the International Astronomical Union and at the Symposium on "Early Universe Evolution and Its Modern Structure" is discussed, concerning the problems of relic radiation, Hubble expansion, spatial structure and the physics of the early Universe. The spectrum of the relic radio emission differs only slightly from the equilibrium one near the maximum. In the opinion of G. Smith (USA), such a difference may be caused by radio sources emitting in the same wavelength range. The absence of a unanimous opinion among astronomers on the value of the Hubble constant is pointed out: G. Tammann (Switzerland) estimates the Hubble constant at 50±7 km/s/Mpc, while G. de Vaucouleurs (USA) gives a value twice as large. This divergence is caused by the different methods of determining distances to remote galaxies and galaxy clusters. Many reports deal with the large-scale structure of the Universe. For the first time, the processes which occurred in the epoch about 10⁻³⁵ s after the beginning of the expansion of the Universe are considered. This possibility is provided by the theory of "grand unification", which permits the explanation of some fundamental properties of the Universe: the spatial uniformity of the isotropic expansion and the existence of small primary density perturbations.

  10. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of a variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  11. Multicollinearity and maximum entropy leuven estimator

    OpenAIRE

    Sudhanshu Mishra

    2004-01-01

    Multicollinearity is a serious problem in applied regression analysis. Q. Paris (2001) introduced the MEL estimator to resolve the multicollinearity problem. This paper improves the MEL estimator to the Modular MEL (MMEL) estimator and shows by Monte Carlo experiments that MMEL estimator performs significantly better than OLS as well as MEL estimators.

  12. Predictors of Problem Gambling in the U.S.

    Science.gov (United States)

    Welte, John W; Barnes, Grace M; Tidwell, Marie-Cecile O; Wieczorek, William F

    2017-06-01

    In this article we examine data from a national U.S. adult survey of gambling to determine correlates of problem gambling and discuss them in light of theories of the etiology of problem gambling. These include theories that focus on personality traits, irrational beliefs, anti-social tendencies, neighborhood influences and availability of gambling. Results show that males, persons in the 31-40 age range, blacks, and the least educated had the highest average problem gambling symptoms. Adults who lived in disadvantaged neighborhoods also had the most problem gambling symptoms. Those who attended religious services most often had the fewest problem gambling symptoms, regardless of religious denomination. Respondents who reported that it was most convenient for them to gamble had the highest average problem gambling symptoms, compared to those for whom gambling was less convenient. Likewise, adults with the personality traits of impulsiveness and depression had more problem gambling symptoms than those less impulsive or depressed. Respondents who had friends who approve of gambling had more problem gambling symptoms than those whose friends did not approve of gambling. The results for the demographic variables as well as for impulsiveness and religious attendance are consistent with an anti-social/impulsivist pathway to problem gambling. The results for depression are consistent with an emotionally vulnerable pathway to problem gambling.

  13. Explaining the Mind: Problems, Problems

    OpenAIRE

    Harnad, Stevan

    2001-01-01

    The mind/body problem is the feeling/function problem: How and why do feeling systems feel? The problem is not just "hard" but insoluble (unless one is ready to resort to telekinetic dualism). Fortunately, the "easy" problems of cognitive science (such as the how and why of categorization and language) are not insoluble. Five books (by Damasio, Edelman/Tononi...

  14. Some problems on Monte Carlo method development

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

    This is a short paper on some problems of Monte Carlo method development. The content covers deep-penetration problems, unbounded estimate problems, limitations of the Metropolis method, the dependency problem in the Metropolis method, random error interference problems and random equations, and intellectualization and vectorization problems of general software.

  15. Prostate Problems

    Science.gov (United States)

    ... know the exact cause of your prostate problem. Prostatitis: the cause of prostatitis depends on whether you ... prostate problem in men older than age 50. Prostatitis: if you have a UTI, you may be ...

  16. General problems

    International Nuclear Information System (INIS)

    2005-01-01

    This article presents general problems such as natural disasters, the consequences of global climate change, public health, the danger of criminal actions, and the availability of information about environmental problems.

  17. Learning Problems

    Science.gov (United States)

    ... for how to make it better. What are learning disabilities? Learning disabilities aren't contagious, but they ...

  18. Ankle Problems

    Science.gov (United States)

    Follow this chart for more information about problems that can cause ankle pain.

  19. The Analysis on the Small Denomination RMB Circulation under the Perspective of Cash Cycle Theory in Shaanxi Province

    Institute of Scientific and Technical Information of China (English)

    宋亮

    2014-01-01

    The value of small-denomination RMB notes is low; they remain in circulation beyond their intended service life, and their cleanliness stays at a persistently low level, which has long been the bottleneck constraining the quality of cash services. Taking the four-stage cash cycle theory as its analytical perspective, and focusing on the operation of small-denomination RMB in Shaanxi province, this paper analyzes the circulation characteristics of small-denomination RMB in the province, points out the weak links in management, and puts forward countermeasures for raising the level of service management of small-denomination RMB circulation in Shaanxi province.

  20. Evaluating the accuracy of sampling to estimate central line-days: simplification of the National Healthcare Safety Network surveillance methods.

    Science.gov (United States)

    Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S

    2013-03-01

    To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data to estimate central line-days (CLDs). Obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants and CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN). Convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had CLD percentage error of less than or equal to ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
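
    The sampling scheme evaluated above is straightforward to simulate: take a fixed day pair, average the two daily counts, scale by the number of days in the month, and compare against the true monthly total. The sketch below uses synthetic Poisson counts (the study used actual denominator logs), so the numbers are illustrative only:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # One month of daily central line-day (CLD) counts for one location
    # (synthetic Poisson counts; the study used actual denominator logs).
    daily_cld = rng.poisson(12, 30)
    actual_cld = daily_cld.sum()

    # Day-pair sampling: average the counts from two fixed days (here the
    # 1st and 15th) and scale by the number of days in the month.
    estimated_cld = daily_cld[[0, 14]].mean() * 30

    pct_error = 100.0 * (estimated_cld - actual_cld) / actual_cld
    print(f"CLD percentage error: {pct_error:+.1f}%")
    ```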

  1. ITOUGH2 sample problems

    International Nuclear Information System (INIS)

    Finsterle, S.

    1997-11-01

    This report contains a collection of ITOUGH2 sample problems. It complements the ITOUGH2 User's Guide [Finsterle, 1997a] and the ITOUGH2 Command Reference [Finsterle, 1997b]. ITOUGH2 is a program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis. It is based on the TOUGH2 simulator for non-isothermal multiphase flow in fractured and porous media [Pruess, 1987, 1991a]. The ITOUGH2 User's Guide [Finsterle, 1997a] describes the inverse modeling framework and provides the theoretical background; the ITOUGH2 Command Reference [Finsterle, 1997b] contains the syntax of all ITOUGH2 commands. This report describes a variety of sample problems solved by ITOUGH2. Table 1.1 contains a short description of the seven sample problems discussed in this report; the TOUGH2 equation-of-state (EOS) module that needs to be linked to ITOUGH2 is also indicated. Each sample problem focuses on a few selected issues shown in Table 1.2. ITOUGH2 input features and the usage of program options are described, and interpretations of selected inverse modeling results are given. Problem 1 is a multipart tutorial describing basic ITOUGH2 input files for the main ITOUGH2 application modes; no interpretation of results is given. Problem 2 focuses on non-uniqueness, residual analysis, and correlation structure. Problem 3 illustrates a variety of parameter and observation types, and describes parameter selection strategies. Problem 4 compares the performance of minimization algorithms and discusses model identification. Problem 5 explains how to set up a combined inversion of steady-state and transient data. Problem 6 provides a detailed residual and error analysis. Finally, Problem 7 illustrates how the estimation of model-related parameters may help compensate for errors in that model.

  2. Sociale problemer

    DEFF Research Database (Denmark)

    Christensen, Anders Bøggild; Rasmussen, Tove; Bundesen, Peter Verner

    Social problems can be regarded as the very starting point of social work, whose ambition is to remedy the problems and ensure that vulnerable citizens get a better life. This also means that the discussion of social problems is central to the basic professional foundation of social work. In this book, a range of professionals from across the Danish social work field focus on social problems. They discuss what we actually understand by social problems, how they arise, what consequences they have, and not least how professionals handle social problems in their daily work. The book is written as a textbook for professional programmes in which social problems form a dimension, including the social worker, social educator and nursing programmes.

  3. The importance of implementation details and parameter settings in black-box optimization: a case study on Gaussian estimation-of-distribution algorithms and circles-in-a-square packing problems

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); Gallagher, M. (Marcus)

    2016-01-01

    We consider a scalable problem that has strong ties with real-world problems, can be compactly formulated and efficiently evaluated, yet is not trivial to solve and has interesting characteristics that differ from most commonly used benchmark problems: packing n circles in a square ...

  4. The Guderley problem revisited

    International Nuclear Information System (INIS)

    Ramsey, Scott D.; Kamm, James R.; Bolstad, John H.

    2009-01-01

    The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.

  5. Hearing Problems

    Science.gov (United States)

    ... Hearing Problems: Loss in the ability to hear or discriminate ... This flow chart will help direct you if hearing loss is a problem for you or a ...

  6. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set......In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of algebraic equations as the basis for parameter estimation.These approaches are illustrated using estimations of kinetic constants from reaction system models....

  7. Problem Posing

    OpenAIRE

    Šilhavá, Marie

    2009-01-01

    This diploma thesis concentrates on problem posing from the students' point of view. Problem posing can either be seen as a teaching method used in class, or as a tool for researchers or teachers to assess the level of students' understanding of a topic. In my research, I compare three classes, one mathematics specialist class and two generalist classes, in their ability of problem posing. As an assessment tool it seemed that mathematics specialists were abl...

  8. Popular Problems

    DEFF Research Database (Denmark)

    Skovhus, Randi Boelskifte; Thomsen, Rie

    2017-01-01

    This article introduces a method for critical reviews and explores the ways in which problems have been formulated in knowledge production on career guidance in Denmark over the 10-year period from 2004 to 2014. The method draws upon the work of Bacchi, focussing on the 'What's the problem represented to be' (WPR) approach. Forty-nine empirical studies on Danish youth career guidance were included in the study. An analysis of the issues in focus resulted in nine problem categories. One of these, 'targeting', is analysed using the WPR approach. Finally, the article concludes that the WPR approach provides a constructive basis for a critical analysis and discussion of the collective empirical knowledge production on career guidance, stimulating awareness of problems and potential solutions among the career guidance community.

  9. Sleep Problems

    Science.gov (United States)

    ... Sleep Problems: Medicines to Help You Sleep, Tips for Better Sleep, Basic Facts about Sleep ...

  10. Mouth Problems

    Science.gov (United States)

    ... such as sores, are very common. Follow this chart for more information about mouth problems in adults. ... cancers. See your dentist if sharp or rough teeth or dental work are causing irritation. ...

  11. Kidney Problems

    Science.gov (United States)

    ... Kidney Problems: The kidneys are two ... kidney (renal) diseases are called nephrologists. What are Kidney Diseases? For about one-third of older people, ...

  12. Knapsack problems

    CERN Document Server

    Kellerer, Hans; Pisinger, David

    2004-01-01

    Thirteen years have passed since the seminal book on knapsack problems by Martello and Toth appeared. On this occasion a former colleague exclaimed back in 1990: "How can you write 250 pages on the knapsack problem?" Indeed, the definition of the knapsack problem is easily understood even by a non-expert who will not suspect the presence of challenging research topics in this area at the first glance. However, in the last decade a large number of research publications contributed new results for the knapsack problem in all areas of interest such as exact algorithms, heuristics and approximation schemes. Moreover, the extension of the knapsack problem to higher dimensions both in the number of constraints and in the number of knapsacks, as well as the modification of the problem structure concerning the available item set and the objective function, leads to a number of interesting variations of practical relevance which were the subject of intensive research during the last few years. Hence, two years ago ...

  13. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly-used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. The concept presented in this paper not only gives a useful framework for designing MEA-based estimators but is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017

  14. An experimental case study to estimate Pre-harvest Wheat Acreage/Production in Hilly and Plain region of Uttarakhand state: Challenges and solutions of problems by using satellite data

    Science.gov (United States)

    Uniyal, D.; Kimothi, M. M.; Bhagya, N.; Ram, R. D.; Patel, N. K.; Dhaundiya, V. K.

    2014-11-01

    Wheat is an economically important Rabi crop for the state, grown on around 26% of the total available agricultural area. There is a variation in the productivity of the wheat crop between the hilly and tarai regions: agricultural productivity is lower in the hilly region than in the tarai region owing to terrace cultivation, traditional agricultural practices, small land holdings, variation in physiography, top-soil erosion, lack of a proper irrigation system, etc. Pre-harvest acreage/yield/production estimation of major crops has conventionally been done with the crop-cutting method, which is biased, inaccurate and time consuming. Remote sensing data, with their multi-temporal and multi-spectral capabilities, have added a new dimension to crop discrimination analysis and acreage/yield/production estimation in recent years. In view of this, the Uttarakhand Space Applications Centre (USAC), Dehradun, in collaboration with the Space Applications Centre (SAC), ISRO, Ahmedabad and the Uttarakhand State Agriculture Department, has developed different techniques for crop discrimination and pre-harvest wheat acreage/yield/production estimation. In the first phase, five districts (Dehradun, Almora, Udham Singh Nagar, Pauri Garhwal and Haridwar) with distinct physiography, i.e. hilly and plain regions, were selected for testing and verification of the techniques using IRS (Indian Remote Sensing Satellites) LISS-III and LISS-IV data of the Rabi season for the year 2008-09; data for all 13 districts of Uttarakhand from 2009-14, along with ground data, were used for detailed analysis. Five methods were developed, namely NDVI (Normalized Difference Vegetation Index), supervised classification, spatial modeling, a masking-out method, and a method programmed in Visual Basic, using multitemporal satellite data of the Rabi season along with collateral and ground data. These methods were used for wheat discrimination and pre-harvest acreage estimation, and subsequently results ...
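
    Of the five methods named above, NDVI has a compact standard definition that is worth stating; the sketch below computes it from red and near-infrared reflectances (the sample values are illustrative, not taken from the study).

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from NIR and red reflectances."""
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + 1e-12)  # epsilon guards against division by zero

    # Toy reflectances: a healthy wheat canopy vs. bare soil (illustrative values)
    print(ndvi([0.45, 0.30], [0.08, 0.25]))  # approx. [0.70, 0.09]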

  15. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  16. Calculus problems

    CERN Document Server

    Baronti, Marco; van der Putten, Robertus; Venturi, Irene

    2016-01-01

    This book, intended as a practical working guide for students in Engineering, Mathematics, Physics, or any other field where rigorous calculus is needed, includes 450 exercises. Each chapter starts with a summary of the main definitions and results, which is followed by a selection of solved exercises accompanied by brief, illustrative comments. A selection of problems with indicated solutions rounds out each chapter. A final chapter explores problems that are not designed with a single issue in mind but instead call for the combination of a variety of techniques, rounding out the book’s coverage. Though the book’s primary focus is on functions of one real variable, basic ordinary differential equations (separation of variables, linear first order and constant coefficients ODEs) are also discussed. The material is taken from actual written tests that have been delivered at the Engineering School of the University of Genoa. Literally thousands of students have worked on these problems, ensuring their real-...

  17. Problems over Information Systems

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The problems of estimating the minimum average time complexity of decision trees and designing efficient algorithms are complex in the general case. The upper bounds described in Chap. 2.4.3 cannot be applied directly due to the large computational complexity of the parameter M(z). Under reasonable assumptions about the relation between P and NP, there are no polynomial time algorithms with a good approximation ratio [12, 32]. One of the possible solutions is to consider particular classes of problems and improve the existing results using characteristics of the considered classes. © Springer-Verlag Berlin Heidelberg 2011.

  18. Thyroid Problems

    Science.gov (United States)

    ... Thyroid Problems: ... enough thyroid hormone, usually of the thyroxine (T4) type. Your T4 levels can drop temporarily ...

  19. Balance Problems

    Science.gov (United States)

    ... fully trust your sense of balance. Loss of balance also raises the risk of falls. This is a serious and even life-threatening ... 65. Balance disorders are serious because of the risk of falls. But occasionally balance problems may warn of another health condition, such ...

  20. Complementarity problems

    CERN Document Server

    Isac, George

    1992-01-01

    The study of complementarity problems is now an interesting mathematical subject with many applications in optimization, game theory, stochastic optimal control, engineering, economics etc. This subject has deep relations with important domains of fundamental mathematics such as fixed point theory, ordered spaces, nonlinear analysis, topological degree, the study of variational inequalities and also with mathematical modeling and numerical analysis. Researchers and graduate students interested in mathematical modeling or nonlinear analysis will find here interesting and fascinating results.

  1. Agricultural problems

    International Nuclear Information System (INIS)

    Bickerton, George E.

    1997-01-01

    Although there has been no major release of radioactivity from any of the 110 power reactors authorized to operate in the US, the nuclear incident that occurred at the Three Mile Island plant in 1979 made the public conscious of the need to be ready to cope with events of this type. The personnel of the Emergency Planning Office, which operates within the US Department of Agriculture, have already participated in around 600 intervention drills on the federal, local or state scale to plan, test or assess radiological emergency plans or to intervene locally. These exercises have built significant experience in elaborating emergency plans, planning drills, working out scenarios and evaluating the potential impact of accidents from the agricultural point of view. We have also taken part in various international drills, the most recent being INEX 1 and RADEX 94, and have found on these occasions that agricultural problems are an essential preoccupation in most cases, whether the context is international, national, state or local. The paper poses problems specifically related to milk, fruits and vegetables, soils, and meat and meat products. Finally, the paper discusses issues such as drill planning, alarm and notification, sampling strategy, access authorizations for farmers, and removal of contaminated wastes. A number of related social, political and economic problems are also mentioned.

  2. Radon problems

    International Nuclear Information System (INIS)

    Cohen, B.L.

    1985-01-01

    This chapter examines the health hazards resulting from the release of naturally occurring radioactive gas derived from the decay of uranium. It is estimated that radon inhalation is now causing about 10,000 fatal lung cancers per year in the US. Radon is constantly being generated in rocks and soils (in which uranium is naturally present) and in materials produced from them (e.g., brick, stone, cement, plaster). It is emphasized that radon levels in buildings are typically 5 times higher than outdoors because radon diffusing up from the ground below, or out of bricks, stone, cement, or plaster, is trapped inside for a relatively long time.

  3. Estimating Venezuela's Latent Inflation

    OpenAIRE

    Juan Carlos Bencomo; Hugo J. Montesinos; Hugo M. Montesinos; Jose Roberto Rondo

    2011-01-01

    Percent variation of the consumer price index (CPI) is the most widely used inflation indicator. This indicator, however, has some drawbacks. In addition to measurement errors in the CPI, there is a problem of incongruence between the definition of inflation as a sustained and generalized increase in prices and the traditional measure associated with the CPI. We use data from 1991 to 2005 to estimate a complementary indicator for Venezuela, the highest-inflation country in Latin America. Late...

  4. A Gaussian IV estimator of cointegrating relations

    DEFF Research Database (Denmark)

    Bårdsen, Gunnar; Haldrup, Niels

    2006-01-01

    In static single-equation cointegration regression models the OLS estimator will have a non-standard distribution unless the regressors are strictly exogenous. In the literature a number of estimators have been suggested to deal with this problem, especially through the use of semi-nonparametric estimators. ... in cointegrating regressions. These instruments are almost ideal, and simulations show that the IV estimator using such instruments alleviates the endogeneity problem extremely well in both finite and large samples.

  5. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    ... successful and applied methodology beyond PID control for the control of industrial processes. The main contribution of this thesis is the introduction and definition of the extended linear quadratic optimal control problem for the solution of numerical problems arising in moving horizon estimation and control ... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for the control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution ... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discusses prediction error methods for identification of linear models tailored for model predictive control.

  6. FBST for Cointegration Problems

    Science.gov (United States)

    Diniz, M.; Pereira, C. A. B.; Stern, J. M.

    2008-11-01

    In order to estimate causal relations, time series econometrics has to be aware of spurious correlation, a problem first mentioned by Yule [21]. To solve the problem, one can work with differenced series or use multivariate models like VAR or VEC models; in the latter case, the analysed series will present a long-run relation, i.e. a cointegration relation. Even though the Bayesian literature on inference in VAR/VEC models is quite advanced, Bauwens et al. [2] highlight that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results." This paper presents the Full Bayesian Significance Test applied to cointegration rank selection tests in multivariate (VAR/VEC) time series models and shows how to implement it using data sets available in the literature as well as simulated ones. A standard non-informative prior is assumed.

  7. Source Estimation for the Damped Wave Equation Using Modulating Functions Method: Application to the Estimation of the Cerebral Blood Flow

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, a method based on modulating functions is proposed to estimate the Cerebral Blood Flow (CBF). The problem is written as an input estimation problem for a damped wave equation, which is used to model the spatiotemporal variations ...

  8. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam
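
    The Gauss-Newton iteration named in this chapter list is easy to sketch. The snippet below fits a hypothetical first-order kinetic model y = x1*exp(-x2*t) to toy data; the model, data and starting guess are illustrative, not taken from the book.

    import numpy as np

    t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # toy observations generated near x = (1.0, 0.4)
    y = np.array([1.02, 0.66, 0.45, 0.21, 0.04])

    def model(x, t):
        return x[0] * np.exp(-x[1] * t)

    def jacobian(x, t):
        e = np.exp(-x[1] * t)
        return np.column_stack([e, -x[0] * t * e])  # derivatives w.r.t. x1 and x2

    x = np.array([0.5, 0.1])                        # initial guess
    for _ in range(20):
        r = y - model(x, t)                         # residuals
        dx = np.linalg.lstsq(jacobian(x, t), r, rcond=None)[0]  # Gauss-Newton step
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    print(x)                                        # converges near (1.0, 0.4)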

  9. Determination of scaling factors to estimate the radionuclide inventory in waste with low and intermediate-level activity from the IEA-R1 reactor

    International Nuclear Information System (INIS)

    Taddei, Maria Helena Tirollo

    2013-01-01

    Regulations regarding transfer and final disposal of radioactive waste require that the inventory of radionuclides for each container enclosing such waste must be estimated and declared. The regulatory limits are established as a function of the annual radiation doses that members of the public could be exposed to from the radioactive waste repository, which mainly depend on the activity concentration of radionuclides, given in Bq/g, found in each waste container. Most of the radionuclides that emit gamma-rays can have their activity concentrations determined straightforwardly by measurements carried out externally to the containers. However, radionuclides that emit exclusively alpha or beta particles, as well as gamma-rays or X-rays with low energy and low absolute emission intensity, or whose activity is very low among the radioactive waste, are generically designated as Difficult to Measure Nuclides (DTMs). The activity concentrations of these DTMs are determined by means of complex radiochemical procedures that involve isolating the chemical species being studied from the interference in the waste matrix. Moreover, samples must be collected from each container in order to perform the analyses inherent to the radiochemical procedures, which exposes operators to high levels of radiation and is very costly because of the large number of radioactive waste containers that need to be characterized at a nuclear facility. An alternative methodology to approach this problem consists in obtaining empirical correlations between some radionuclides that can be measured directly – such as 60Co and 137Cs, therefore designated as Key Nuclides (KNs) – and the DTMs. This methodology, denominated the Scaling Factor method, was applied in the scope of the present work in order to obtain Scaling Factors or Correlation Functions for the most important radioactive wastes with low and intermediate activity levels from the IEA-R1 nuclear research reactor. (author)

  10. Research Problems Associated with Limiting the Applied Force in Vibration Tests and Conducting Base-Drive Modal Vibration Tests

    Science.gov (United States)

    Scharton, Terry D.

    1995-01-01

    The intent of this paper is to make a case for developing and conducting vibration tests which are both realistic and practical (a question of tailoring versus standards). Tests are essential for finding things overlooked in the analyses. The best test is often the most realistic test which can be conducted within the cost and budget constraints. Some standards are essential, but the author believes more in the individual's ingenuity to solve a specific problem than in the application of standards which reduce problems (and technology) to their lowest common denominator. Force limited vibration tests and base-drive modal tests are two examples of realistic, but practical testing approaches. Since both of these approaches are relatively new, a number of interesting research problems exist, and these are emphasized herein.

  11. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environment as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary between countries, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible when evaluating economic efficiency, so as to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available; a nuclear fuel cycle cost estimate is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in a nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same-discount-rate model, the nuclear fuel cycle cost of the different-discount-rate model is reduced, because the generation quantity in the denominator of the equation has been discounted. Namely, if the discount rate is reduced in the back-end stages of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost from the same-discount-rate model is overestimated compared with the different-discount-rate model as a whole.

  12. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environment as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary between countries, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible when evaluating economic efficiency, so as to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available; a nuclear fuel cycle cost estimate is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in a nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same-discount-rate model, the nuclear fuel cycle cost of the different-discount-rate model is reduced, because the generation quantity in the denominator of the equation has been discounted. Namely, if the discount rate is reduced in the back-end stages of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost from the same-discount-rate model is overestimated compared with the different-discount-rate model as a whole.
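
    The mechanism both versions of this abstract describe, discounting the generation quantity in the denominator, is visible in the generic levelized unit fuel cost form below; this is a sketch of the standard formula, not necessarily the authors' exact equation.

        \mathrm{LUFC} = \frac{\sum_t C_t \, (1 + r_t)^{-t}}{\sum_t E_t \, (1 + r_t)^{-t}}

    Here C_t is the fuel cycle cost incurred in year t, E_t the electricity generated in year t, and r_t the (possibly time-varying) discount rate. Lowering r_t in the back-end years increases the discounted generation in the denominator and therefore lowers the unit cost, which is the effect reported above.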

  13. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  14. Vision Problems in Homeless Children.

    Science.gov (United States)

    Smith, Natalie L; Smith, Thomas J; DeSantis, Diana; Suhocki, Marissa; Fenske, Danielle

    2015-08-01

    Vision problems in homeless children can decrease educational achievement and quality of life. Our objective was to estimate the prevalence and specific diagnoses of vision problems in children in an urban homeless shelter. A prospective series of 107 homeless children and teenagers underwent screening with a vision questionnaire, eye chart screening (if mature enough) and, if a vision problem was suspected, evaluation by a pediatric ophthalmologist. Glasses and other therapeutic interventions were provided if necessary. The prevalence of vision problems in this population was 25%. Common diagnoses included astigmatism, amblyopia, anisometropia, myopia, and hyperopia. Glasses were required and provided for 24 children (22%). Vision problems in homeless children are common and frequently correctable with ophthalmic intervention. Evaluation by a pediatric ophthalmologist is crucial for accurate diagnoses and treatment. Our system of screening and evaluation is feasible, efficacious, and reproducible in other homeless care settings.

  15. Assessing the performance of dynamical trajectory estimates

    Science.gov (United States)

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  16. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been applied very successfully in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. This report was therefore written to present certain important and useful concepts with a minimum of specialized language. A simple example is employed throughout the report to stress the importance of several information and uncertainty sources and to present optimal ways of using or allowing for them. This report discusses optimal estimation problems; a future report will discuss optimal control problems.

  17. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.

  18. Anisotropic Density Estimation in Global Illumination

    DEFF Research Database (Denmark)

    Schjøth, Lars

    2009-01-01

    Density estimation employed in multi-pass global illumination algorithms gives rise to a trade-off between bias and noise. The problem is most evident as blurring of strong illumination features. This thesis addresses the problem, presenting four methods that reduce both noise ...

  19. Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging

    Science.gov (United States)

    Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.

    2008-03-01

    We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/{lipid + water}. In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient among the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 ± 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.
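
    A minimal Monte Carlo sketch of the quantity at issue, the distribution of a ratio whose numerator and denominator are correlated and whose denominator has a small coefficient of variation; the parameter values echo the ranges quoted above but are otherwise arbitrary, and this is not the authors' PV mixture model.

    import numpy as np

    rng = np.random.default_rng(0)
    n, rho, cv_den = 100_000, 0.6, 0.05        # correlation and denominator c.v.
    mu_num, mu_den = 40.0, 100.0               # lipid vs. lipid + water signal means
    sd_num, sd_den = 8.0, cv_den * mu_den

    z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
    num = mu_num + sd_num * z1
    den = mu_den + sd_den * (rho * z1 + np.sqrt(1 - rho**2) * z2)

    ratio = num / den                          # approximately lipid / (lipid + water)
    print(ratio.mean(), ratio.std())           # this histogram is what the MAP model fits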

  20. Efficacy of calf:cow ratios for estimating calf production of arctic caribou

    Science.gov (United States)

    Cameron, R.D.; Griffith, B.; Parrett, L.S.; White, R.G.

    2013-01-01

    Caribou (Rangifer tarandus granti) calf:cow ratios (CCR) computed from composition counts obtained on arctic calving grounds are biased estimators of net calf production (NCP, the product of parturition rate and early calf survival) for sexually-mature females. Sexually-immature 2-year-old females, which are indistinguishable from sexually-mature females without calves, are included in the denominator, thereby biasing the calculated ratio low. This underestimate increases with the proportion of 2-year-old females in the population. We estimated the magnitude of this error with deterministic simulations under three scenarios of calf and yearling annual survival (respectively: low, 60 and 70%; medium, 70 and 80%; high, 80 and 90%) for five levels of unbiased NCP: 20, 40, 60, 80, and 100%. We assumed a survival rate of 90% for both 2-year-old and mature females. For each NCP, we computed numbers of 2-year-old females surviving annually and increased the denominator of CCR accordingly. We then calculated a series of hypothetical “observed” CCRs, which stabilized during the last 6 years of the simulations, and documented the degree to which each 6-year mean CCR differed from the corresponding NCP. For the three calf and yearling survival scenarios, proportional underestimates of NCP by CCR ranged 0.046–0.156, 0.058–0.187, and 0.071–0.216, respectively. Unfortunately, because parturition and survival rates are typically variable (i.e., age distribution is unstable), the magnitude of the error is not predictable without substantial supporting information. We recommend maintaining a sufficient sample of known-age radiocollared females in each herd and implementing a regular relocation schedule during the calving period to obtain unbiased estimates of both parturition rate and NCP.
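
    The deterministic simulation is simple enough to reproduce. The sketch below is a reconstruction from the abstract, assuming a 50% female calf fraction, 0.90 survival of 2-year-old and mature females, and a run long enough for the age distribution to stabilize; under the low-survival scenario (calf 60%, yearling 70%) it recovers the quoted error range of roughly 0.046-0.156.

    def ccr_bias(ncp, calf_surv, yearling_surv, adult_surv=0.90, years=60):
        """Proportional underestimate of NCP by the calf:cow ratio at stable age structure."""
        mature, two_yr, yearlings, calves_f = 100.0, 0.0, 0.0, 0.0
        for _ in range(years):
            mature, two_yr, yearlings, calves_f = (
                adult_surv * (mature + two_yr),   # 2-year-olds recruit into the mature class
                yearling_surv * yearlings,
                calf_surv * calves_f,
                0.5 * ncp * mature,               # female calves produced this year
            )
        ccr = ncp * mature / (mature + two_yr)    # 2-year-old females inflate the "cow" count
        return (ncp - ccr) / ncp

    for ncp in (0.2, 0.4, 0.6, 0.8, 1.0):         # low-survival scenario
        print(ncp, round(ccr_bias(ncp, 0.60, 0.70), 3))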

  1. Mathematical solution of multilevel fractional programming problem with fuzzy goal programming approach

    Science.gov (United States)

    Lachhwani, Kailash; Poonia, Mahaveer Prasad

    2012-08-01

    In this paper, we present a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using a fuzzy goal programming approach. In the proposed method, tolerance membership functions for the fuzzily described numerator and denominator parts of the objective functions at all levels, as well as for the control vectors of the higher-level decision makers, are defined by determining the individual optimal solutions of each level's decision maker. A possible relaxation of the higher-level decisions is considered to avoid decision deadlock arising from the conflicting nature of the objective functions. Fuzzy goal programming is then used to achieve the highest degree of each membership goal by minimizing the negative deviational variables. We also provide a sensitivity analysis with respect to the tolerance values on the decision vectors, showing with a numerical example how the solution responds to changes in the tolerances.
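
    For a maximization-type fuzzy goal, the tolerance membership function has the familiar piecewise-linear form sketched below, with U the individually optimal value and L the lowest acceptable value; the approach above builds such functions separately for the numerator and denominator parts and for the control vectors.

        \mu(Z(x)) =
        \begin{cases}
          1, & Z(x) \ge U \\
          \dfrac{Z(x) - L}{U - L}, & L < Z(x) < U \\
          0, & Z(x) \le L
        \end{cases}

    Fuzzy goal programming then attaches deviational variables to each goal via \mu(Z(x)) + d^- - d^+ = 1 and minimizes \sum d^-, the sum of the negative deviations.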

  2. Stochastic Fractional Programming Approach to a Mean and Variance Model of a Transportation Problem

    Directory of Open Access Journals (Sweden)

    V. Charles

    2011-01-01

    In this paper, we propose a stochastic programming model that considers a ratio of two nonlinear functions and probabilistic constraints. Earlier work proposed only an expected-value model, without accounting for variability; in the variance model, by contrast, variability played the central role without its counterpart, the expected value. Further, the expected-value model optimizes the ratio of two linear cost functions, whereas the variance model optimizes the ratio of two nonlinear functions; treating the numerator and denominator as stochastic and considering expectation and variability together leads to a nonlinear fractional program. In this paper, a transportation model with a stochastic fractional programming (SFP) approach is proposed, which strikes a balance between the previous models available in the literature.

  3. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption is imposed on either the covariance matrix or its inverse, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
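
    The flavor of a condition-number constraint can be sketched with a crude eigenvalue-clipping surrogate: truncate the sample eigenvalues so the condition number cannot exceed a chosen kappa. The paper's likelihood-based choice of the truncation point is replaced here by a fixed floor, so this is an illustration of the idea, not the authors' estimator.

    import numpy as np

    def clip_condition_number(sample_cov, kappa_max=100.0):
        """Clip eigenvalues to [lam_max / kappa_max, lam_max] so cond <= kappa_max."""
        vals, vecs = np.linalg.eigh(sample_cov)
        floor = vals.max() / kappa_max
        vals = np.clip(vals, floor, None)
        return (vecs * vals) @ vecs.T          # V diag(vals) V^T

    # "large p, small n": p = 50 variables, n = 20 observations
    rng = np.random.default_rng(1)
    X = rng.standard_normal((20, 50))
    S = np.cov(X, rowvar=False)                # singular, hence ill-conditioned
    S_reg = clip_condition_number(S)
    print(np.linalg.cond(S_reg))               # bounded by 100, and invertible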

  4. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption is imposed on either the covariance matrix or its inverse, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  5. Estimations of actual availability

    International Nuclear Information System (INIS)

    Molan, M.; Molan, G.

    2001-01-01

    Adaptation of the working environment (social, organizational and physical) should assure a higher level of workers' availability and consequently a higher level of performance. A special theoretical model describing the connections between environmental factors, human availability and performance was developed and validated. The central part of the model is the evaluation of actual human availability in a real working situation, or fitness-for-duty self-estimation. The model was tested in different working environments. Standardized values and critical limits for an availability questionnaire were defined on a sample of 2,000 workers. The standardized method was used to identify the most important impacts of environmental factors. Identified problems were eliminated through organizational investments and through modification of selection and training procedures aimed at humanizing the working environment. For workers with behavioural and health problems, individual consultancy was offered. The described method is a tool for identifying impacts; in combination with behavioural analyses and mathematical analyses of the connections, it offers the possibility of keeping an adequate level of human availability and fitness for duty in each real working situation. The model should be a tool for achieving an adequate level of nuclear safety by keeping workers' availability and fitness for duty adequate. For each individual worker, an estimation of the level of actual fitness for duty is possible, and the effects of prolonged work and additional tasks can be evaluated. Evaluation of health status effects and ageing is possible on the individual level. (author)

  6. A Comparative Study of Some Robust Ridge and Liu Estimators

    African Journals Online (AJOL)

    Dr A.B.Ahmed

    ... estimation techniques such as the Ridge and Liu estimators are preferable to Ordinary Least Squares. On the other hand, when outliers exist in the data, robust estimators like the M, MM, LTS and S estimators are preferred. To handle these two problems jointly, the study combines the Ridge and Liu estimators with robust ...

  7. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient ...

  8. Estimation of inspection effort

    International Nuclear Information System (INIS)

    Mullen, M.F.; Wincek, M.A.

    1979-06-01

    An overview of IAEA inspection activities is presented, and the problem of evaluating the effectiveness of an inspection is discussed. Two models are described - an effort model and an effectiveness model. The effort model breaks the IAEA's inspection effort into components; the amount of effort required for each component is estimated; and the total effort is determined by summing the effort for each component. The effectiveness model quantifies the effectiveness of inspections in terms of probabilities of detection and quantities of material to be detected, if diverted over a specific period. The method is applied to a 200 metric ton per year low-enriched uranium fuel fabrication facility. A description of the model plant is presented, a safeguards approach is outlined, and sampling plans are calculated. The required inspection effort is estimated and the results are compared to IAEA estimates. Some other applications of the method are discussed briefly. Examples are presented which demonstrate how the method might be useful in formulating guidelines for inspection planning and in establishing technical criteria for safeguards implementation

  9. Estimation of Poverty in Small Areas

    Directory of Open Access Journals (Sweden)

    Agne Bikauskaite

    2014-12-01

    Quality techniques for poverty estimation are needed to better implement and monitor policy and to determine the national areas where support is most required. The problem of small area estimation (SAE) is the production of reliable estimates in areas with small samples. The precision of estimates within strata deteriorates as the sample size becomes smaller (the standard deviation increases). In such cases traditional direct estimators may be imprecise and therefore pointless. Currently there are many indirect methods for SAE. The purpose of this paper is to analyze several different types of techniques which produce small area estimates of poverty.

  10. Class and Home Problems: Optimization Problems

    Science.gov (United States)

    Anderson, Brian J.; Hissam, Robin S.; Shaeiwitz, Joseph A.; Turton, Richard

    2011-01-01

    Optimization problems suitable for all levels of chemical engineering students are available. These problems do not require advanced mathematical techniques, since they can be solved using typical software used by students and practitioners. The method used to solve these problems forces students to understand the trends for the different terms…

  11. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for maximum likelihood estimation (MLE) and a modified EM algorithm for maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods.
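
    For orientation, the generic ML-EM update for an emission model y ~ Poisson(A @ lam) is sketched below; this is the textbook iteration that the accelerated and modified algorithms above build on, and the random system matrix is a stand-in, not a Compton-camera response.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.random((200, 50))              # stand-in system matrix (bins x voxels)
    lam_true = 10.0 * rng.random(50)
    y = rng.poisson(A @ lam_true)          # simulated bin counts

    lam = np.ones(50)                      # nonnegative initialization
    sens = A.sum(axis=0)                   # sensitivity image (column sums)
    for _ in range(200):
        forward = A @ lam
        lam *= (A.T @ (y / np.maximum(forward, 1e-12))) / sens  # multiplicative EM update

    print(np.linalg.norm(lam - lam_true) / np.linalg.norm(lam_true))  # relative error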

  12. Dependent samples in empirical estimation of stochastic programming problems

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta; Houda, Michal

    2006-01-01

    Roč. 35, 2/3 (2006), s. 271-279 ISSN 1026-597X R&D Projects: GA ČR GA402/04/1294; GA ČR GD402/03/H057; GA ČR GA402/05/0115 Institutional research plan: CEZ:AV0Z10750506 Keywords: stochastic programming * stability * probability metrics * Wasserstein metric * Kolmogorov metric * simulations Subject RIV: BB - Applied Statistics, Operational Research

  13. The problem of low thermoluminescence age estimates in geological dating

    International Nuclear Information System (INIS)

    Nambi, K.S.V.

    1983-01-01

    A systematic underestimate of the geological age by the TL technique has been observed in a variety of CaCO3 samples of Quaternary to Precambrian ages. It is concluded that the TL dating clock in the CaCO3 lattice stops when the alpha palaeodose = alpha dose rate (rad a⁻¹) × geological age (a) reaches about 100,000 rad. At this dose the natural thermoluminescence reaches, perhaps, a dynamic equilibrium level determined solely by the alpha activity of the sample. There are indications that the limiting alpha palaeodose beyond which TL dating is invalid is more or less the same for CaSO4 and silicate samples, and it is convenient to note a limiting value of 3 million for the product of alpha activity (cph from 13.86 cm²) and geological age (a). (author)
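
    A worked example of the quoted rule of thumb, with a hypothetical count rate: since alpha activity × age must stay below about 3 × 10⁶,

        t_{\max} \approx \frac{3 \times 10^{6}}{A},

    so a carbonate giving A = 15 cph (over the stated 13.86 cm²) could be dated reliably only up to t_max ≈ 2 × 10⁵ a; older samples would return a plateau age near this limit.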

  14. Problems in estimating the value of household work

    OpenAIRE

    Villota Villota, Francisco

    1989-01-01

    The quantitative description of social systems is a broad and complicated task. Value judgements play as important a role as the analytical aspects and the more purely statistical ones. The present state of social accounting reflects the predominance of economic ideology; that is to say, it selects and emphasises those activities that are incorporated in a material and/or salable product. The 'social product' of the Classical Economists, or the national income of Marshall, was the key concept arou...

  15. Semiconductor failure threshold estimation problem in electromagnetic assessment

    International Nuclear Information System (INIS)

    Enlow, E.W.; Wunsch, D.C.

    1984-01-01

    Present semiconductor failure models predicting the one-microsecond square-wave power failure level for use in system electromagnetic (EM) assessments and hardening design are incomplete. This is because, for a majority of device types, there are insufficient data readily available in a composite data source to quantify the model parameters, and the inaccuracy of the models causes complications in the definition of adequate hardness margins and the quantification of EM performance. This paper presents new semiconductor failure models using a generic approach that is an integration and simplification of many present models. The generic approach uses two categorical models: one for diodes and transistors, and one for integrated circuits. The models were constructed from a large database of semiconductor failure data. The approach used for constructing the diode and transistor failure level models is based on device rated power, and the models are simple to use and universally applicable. The model predicts the value of the 1-microsecond failure power to be used in the power failure models P = K t^(-1/2) or P = K1 t^(-1) + K2 t^(-1/2) + K3.

  16. Parameter estimation in plasmonic QED

    Science.gov (United States)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that, although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), so the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail; in particular, we show that using a two-qubit probe at any arbitrary time considerably enhances the precision of estimation in comparison with one-qubit estimation.

  17. The statistics of Pearce element diagrams and the Chayes closure problem

    Science.gov (United States)

    Nicholls, J.

    1988-05-01

    Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones. They are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array. Alternatively, the population of
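    A minimal numerical sketch of the closure effect described above (synthetic lognormal "element" data, not the paper's): closing independent abundances to 100% induces a spurious correlation, which ratios over a conserved denominator largely remove.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.lognormal(0.0, 0.3, n)    # element A, open (unclosed) abundances
B = rng.lognormal(0.5, 0.3, n)    # element B, independent of A
Z = rng.lognormal(1.0, 0.05, n)   # conserved element (nearly constant)

# Closure: express each sample as percentages of its total.
total = A + B + Z
a, b, z = 100 * A / total, 100 * B / total, 100 * Z / total

# Raw closed values correlate spuriously; Pearce ratios (conserved
# denominator) recover the near-independence of the open data.
print("corr of closed values a, b:    ", np.corrcoef(a, b)[0, 1])
print("corr of Pearce ratios a/z, b/z:", np.corrcoef(a / z, b / z)[0, 1])
```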

  18. Waring's Problem and the Circle Method

    Indian Academy of Sciences (India)

    Their proof of a slightly weaker form of Ramanujan's original formula was published... The major arc estimation is fairly simple, while it is the minor arc estimation that accounts for the 'major' amount of work involved! Keywords: Waring's problem, circle method, ...

  19. Numerical methods for hyperbolic differential functional problems

    Directory of Open Access Journals (Sweden)

    Roman Ciarski

    2008-01-01

    Full Text Available The paper deals with the initial boundary value problem for quasilinear first order partial differential functional systems. A general class of difference methods for the problem is constructed. Theorems on the error estimate of approximate solutions for difference functional systems are presented. The convergence results are proved by means of consistency and stability arguments. A numerical example is given.

  20. Bayesian estimates of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Abad-Grau María M

    2007-06-01

    Full Text Available Abstract Background The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results This paper proposes a Bayesian estimation of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs that increases the likelihood of equilibrium with increasing physical distances between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view about the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome wide association studies.

  1. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  2. Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2007-07-01

    Full Text Available For the problem of estimating the money demand model of Pakistan, the money supply (M1) shows heteroscedasticity of unknown form. For the estimation of such a model we compare two adaptive estimators with the ordinary least squares estimator and show the attractive performance of the adaptive estimators, namely, the nonparametric kernel estimator and the nearest neighbour regression estimator. These comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of regression, the Akaike Information Criterion (AIC) value, and the Durbin-Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better than the nonparametric kernel estimator.
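    A sketch of fitting the two kinds of estimators on synthetic heteroscedastic data (scikit-learn's kNN regressor standing in for the nearest neighbour estimator; this is not the paper's data or its exact comparison):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)[:, None]
y = 2.0 * x.ravel() + rng.normal(0.0, 0.5 * x.ravel() + 0.1)  # noise grows with x

ols = LinearRegression().fit(x, y)                    # ordinary least squares
knn = KNeighborsRegressor(n_neighbors=15).fit(x, y)   # nearest neighbour regression
print("OLS R^2:", ols.score(x, y))
print("kNN R^2:", knn.score(x, y))
```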

  3. Wave Velocity Estimation in Heterogeneous Media

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, a modulating functions-based method is proposed for estimating the space-time-dependent unknown velocity in the wave equation. The proposed method simplifies the identification problem into a system of linear algebraic equations. Numerical

  4. VERTICAL ACTIVITY ESTIMATION USING 2D RADAR

    African Journals Online (AJOL)

    hennie

    estimates on aircraft vertical behaviour from a single 2D radar track. ... Fortunately, the problem of detecting relative vertical motion using a single 2D ... awareness tools in scenarios where aerial activity sensing is typically limited to 2D.

  5. Preventing Diabetes Problems

    Science.gov (United States)

    Covers preventing diabetes problems, including heart disease, sexual and bladder problems, and depression. Treatments can help control symptoms and restore intimacy. Depression is common among people with a chronic, ...

  6. The Chicken Problem.

    Science.gov (United States)

    Reeves, Charles A.

    2000-01-01

    Uses the chicken problem for sixth grade students to scratch the surface of systems of equations using intuitive approaches. Provides students responses to the problem and suggests similar problems for extensions. (ASK)

  7. Problems in differential equations

    CERN Document Server

    Brenner, J L

    2013-01-01

    More than 900 problems and answers explore applications of differential equations to vibrations, electrical engineering, mechanics, and physics. Problem types include both routine and nonroutine, and stars indicate advanced problems. 1963 edition.

  8. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  9. Sparse DOA estimation with polynomial rooting

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren

    2015-01-01

    Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root-CS). Polynomial rooting is known to improve the resolution in several other DOA estimation methods...

  10. Estimations of the relationship a dimensional budyko in Colombia

    International Nuclear Information System (INIS)

    Arias GomezPaula Andrea; Poveda Jaramillo, German

    2007-01-01

    Water and energy budgets in river basins condition the development of terrain forms and the spatial distribution and productivity of vegetation. The Budyko non-dimensional number is defined as the relationship between mean annual precipitation (P) and mean annual potential evapotranspiration (PET), B = P/PET, in river basins. This non-dimensional number is thus the ratio between available water (P) and available energy (PET), and has been employed for identifying water storage in vegetation, dryness, and net primary production in ecosystems. Literature reports have found that at the condition B = 1, denominated the Budyko critical condition, B_c, there exist particular climate, geomorphologic and biodiversity conditions, which makes this number of particular interest in the fields of hydroclimatology and ecology. Budyko number maps with a spatial resolution of 5 arcmin cell size for the Colombian extent, and others with 30 arcsec cell size for the Antioquia extent, are presented. Several indirect methods for potential evapotranspiration estimation have been employed. The study concludes that Colombia is characterized by energy limited vegetation

  11. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
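    A hedged sketch of the "probability machine" idea with a random forest (synthetic logistic data; variable names and settings are illustrative, not the paper's):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 2000
x1 = rng.binomial(1, 0.5, n)        # binary exposure of interest
x2 = rng.normal(0.0, 1.0, n)        # continuous covariate
p = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * x1 + 0.8 * x2)))  # logistic truth
y = rng.binomial(1, p)

X = np.column_stack([x1, x2])
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=25,
                            random_state=0).fit(X, y)

# Counterfactual effect size: average change in predicted risk when the
# exposure is set to 1 for everyone versus 0 for everyone.
X1, X0 = X.copy(), X.copy()
X1[:, 0], X0[:, 0] = 1, 0
effect = rf.predict_proba(X1)[:, 1].mean() - rf.predict_proba(X0)[:, 1].mean()
print("estimated risk difference:", effect)
```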

  12. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. Also, this paper briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to other mode estimation techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
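    A minimal sketch of the AIC stopping criterion for the mixture-of-Gaussians route (scikit-learn, synthetic three-mode data; not the BM technique itself):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-3, 1, 300),
                       rng.normal(0, 1, 300),
                       rng.normal(4, 1, 300)])[:, None]

aics = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    aics[k] = gm.aic(data)          # AIC of the k-component fit

best_k = min(aics, key=aics.get)    # smallest AIC wins
print("AIC-selected number of modes:", best_k)
```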

  13. Practical global oceanic state estimation

    Science.gov (United States)

    Wunsch, Carl; Heimbach, Patrick

    2007-06-01

    The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers is used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS-1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.

  14. Estimating Loan-to-value Distributions

    DEFF Research Database (Denmark)

    Korteweg, Arthur; Sørensen, Morten

    2016-01-01

    We estimate a model of house prices, combined loan-to-value ratios (CLTVs) and trade and foreclosure behavior. House prices are only observed for traded properties and trades are endogenous, creating sample-selection problems for existing approaches to estimating CLTVs. We use a Bayesian filtering...

  15. Diagnosing plant problems

    Science.gov (United States)

    Cheryl A. Smith

    2008-01-01

    Diagnosing Christmas tree problems can be a challenge, requiring a basic knowledge of plant culture and physiology, the effect of environmental influences on plant health, and the ability to identify the possible causes of plant problems. Developing a solution or remedy to the problem depends on a proper diagnosis, a process that requires recognition of a problem and...

  16. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations

  17. Islamic Education Research Problem

    Directory of Open Access Journals (Sweden)

    Abdul Muthalib

    2012-04-01

    Full Text Available This paper discusses Islamic educational studies, reviewing how to find, limit and define problems and problem-solving concepts. The central question of this paper is how to solve problems in Islamic educational research. A researcher or educator who has knowledge, expertise, or a special interest in education, for example, usually has a sensitivity to issues relating to educational research. In the dimension of religious education research, there are three types of problems, namely: foundational problems, structural problems and operational issues. In doing research in Islamic education one should understand the research problem, limit and formulate the problem, know how to solve it, address other problems relating to the point of the research, and choose a research approach.

  18. Generalized shrunken type-GM estimator and its application

    International Nuclear Information System (INIS)

    Ma, C Z; Du, Y L

    2014-01-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, Generalized Shrunken Type-GM estimators, together with methods for their calculation, is established by combining the GM estimator with biased estimators, including the ridge estimate, the principal components estimate and the Liu estimate. A numerical example shows that the most attractive advantage of these new estimators is that they can not only overcome the multicollinearity of the coefficient matrix and outliers but also control the influence of leverage points

  19. Generalized shrunken type-GM estimator and its application

    Science.gov (United States)

    Ma, C. Z.; Du, Y. L.

    2014-03-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, Generalized Shrunken Type-GM estimators, together with methods for their calculation, is established by combining the GM estimator with biased estimators, including the ridge estimate, the principal components estimate and the Liu estimate. A numerical example shows that the most attractive advantage of these new estimators is that they can not only overcome the multicollinearity of the coefficient matrix and outliers but also control the influence of leverage points.

  20. Work related injuries: estimating the incidence among illegally employed immigrants

    Directory of Open Access Journals (Sweden)

    Fadda Emanuela

    2010-12-01

    Full Text Available Abstract Background Statistics on occupational accidents are based on data from registered employees. With the increasing number of immigrants employed illegally and/or without regular working visas in many developed countries, it is of interest to estimate the injury rate among such unregistered workers. Findings The current study was conducted in an area of North-Eastern Italy. The sources of information employed in the present study were the Accidents and Emergencies records of a hospital; the population data on foreign-born residents in the hospital catchment area (Health Care District 4, Primary Care Trust 20, Province of Verona, Veneto Region, North-Eastern Italy); and the estimated proportion of illegally employed workers in representative samples from the Province of Verona and the Veneto Region. Of the 419 A&E records collected between January and December 2004 among non-European Union (non-EU) immigrants, 146 aroused suspicion by reporting the home, rather than the workplace, as the site of the accident. These cases were the numerator of the rate. The number of illegally employed non-EU workers, the denominator of the rate, was estimated according to different assumptions and ranged from 537 to 1,338 individuals. The corresponding rates varied from 109.1 to 271.8 per 1,000 non-EU illegal employees, against 65 per 1,000 reported in Italy in 2004. Conclusions The results of this study suggest that there is an unrecorded burden of illegally employed immigrants suffering from work related injuries. Additional efforts for the prevention of injuries in the workplace are required to decrease this number. It can be concluded that the Italian National Institute for the Insurance of Work Related Injuries (INAIL) probably underestimates the incidence of these accidents in Italy.

  1. Work related injuries: estimating the incidence among illegally employed immigrants.

    Science.gov (United States)

    Mastrangelo, Giuseppe; Rylander, Ragnar; Buja, Alessandra; Marangi, Gianluca; Fadda, Emanuela; Fedeli, Ugo; Cegolon, Luca

    2010-12-08

    Statistics on occupational accidents are based on data from registered employees. With the increasing number of immigrants employed illegally and/or without regular working visas in many developed countries, it is of interest to estimate the injury rate among such unregistered workers. The current study was conducted in an area of North-Eastern Italy. The sources of information employed in the present study were the Accidents and Emergencies records of a hospital; the population data on foreign-born residents in the hospital catchment area (Health Care District 4, Primary Care Trust 20, Province of Verona, Veneto Region, North-Eastern Italy); and the estimated proportion of illegally employed workers in representative samples from the Province of Verona and the Veneto Region. Of the 419 A&E records collected between January and December 2004 among non-European Union (non-EU) immigrants, 146 aroused suspicion by reporting the home, rather than the workplace, as the site of the accident. These cases were the numerator of the rate. The number of illegally employed non-EU workers, the denominator of the rate, was estimated according to different assumptions and ranged from 537 to 1,338 individuals. The corresponding rates varied from 109.1 to 271.8 per 1,000 non-EU illegal employees, against 65 per 1,000 reported in Italy in 2004. The results of this study suggest that there is an unrecorded burden of illegally employed immigrants suffering from work related injuries. Additional efforts for the prevention of injuries in the workplace are required to decrease this number. It can be concluded that the Italian National Institute for the Insurance of Work Related Injuries (INAIL) probably underestimates the incidence of these accidents in Italy.
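    The rate arithmetic in the abstract can be checked directly (numerator and denominator bounds as reported):

```python
# 146 suspicious A&E cases over an estimated 537 to 1,338 illegal workers.
numerator = 146
for denominator in (1338, 537):
    rate = 1000 * numerator / denominator
    print(f"{rate:.1f} per 1,000 illegal employees")
# ~109.1 and ~271.9 per 1,000, against the 65 per 1,000 reported in Italy in 2004.
```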

  2. Optimal estimations of random fields using kriging

    International Nuclear Information System (INIS)

    Barua, G.

    2004-01-01

    Kriging is a statistical procedure for estimating the best weights of a linear estimator. Suppose there is a point, an area or a volume of ground over which we do not know a hydrological variable and wish to estimate it. In order to produce an estimator, we need some information to work on, usually available in the form of samples. There can be an infinite number of linear unbiased estimators for which the weights sum up to one. The problem is how to determine the best weights, for which the estimation variance is the least. The system of equations obtained in this way is generally known as the kriging system and the estimator produced is the kriging estimator. The variance of the kriging estimator can be found by substitution of the weights in the general estimation variance equation. We assume here a linear model for the semi-variogram. Applying the model to the equation, we obtain a set of kriging equations. By solving these equations, we obtain the kriging variance. Thus, for the one-dimensional problem considered, kriging definitely gives a better estimation variance than the extension variance
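    A self-contained 1-D ordinary-kriging sketch with a linear semi-variogram, as assumed in the abstract (locations, values and slope are made up):

```python
import numpy as np

def gamma(h, b=1.0):
    """Linear semi-variogram, gamma(h) = b * |h|."""
    return b * np.abs(h)

x = np.array([0.0, 1.0, 3.0])   # sample locations
z = np.array([2.0, 2.5, 3.5])   # observed values
x0 = 2.0                        # estimation point

# Ordinary kriging system: [[gamma_ij, 1], [1, 0]] [w; mu] = [gamma_i0; 1]
n = len(x)
A = np.ones((n + 1, n + 1))
A[:n, :n] = gamma(x[:, None] - x[None, :])
A[n, n] = 0.0
rhs = np.append(gamma(x - x0), 1.0)

sol = np.linalg.solve(A, rhs)
w, mu = sol[:n], sol[n]
print("weights (sum to 1):", w)
print("kriging estimate:  ", w @ z)
print("kriging variance:  ", w @ gamma(x - x0) + mu)
```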

  3. Properties of estimated characteristic roots

    OpenAIRE

    Bent Nielsen; Heino Bohn Nielsen

    2008-01-01

    Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear when multiple roots are present, as this implies a non-differentiability so the δ-method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples ...

  4. Combining four Monte Carlo estimators for radiation momentum deposition

    International Nuclear Information System (INIS)

    Hykes, Joshua M.; Urbatsch, Todd J.

    2011-01-01

    Using four distinct Monte Carlo estimators for momentum deposition - analog, absorption, collision, and track-length estimators - we compute a combined estimator. In the wide range of problems tested, the combined estimator always has a figure of merit (FOM) equal to or better than the other estimators. In some instances the FOM of the combined estimator is only a few percent higher than the FOM of the best solo estimator, the track-length estimator, while in one instance it is better by a factor of 2.5. Over the majority of configurations, the combined estimator's FOM is 10 - 20% greater than any of the solo estimators' FOM. The numerical results show that the track-length estimator is the most important term in computing the combined estimator, followed far behind by the analog estimator. The absorption and collision estimators make negligible contributions. (author)
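    A simplified sketch of combining estimators by inverse-variance weighting (synthetic data, and assuming independent estimators, which the paper's same-history estimators are not; the FOM quoted above is 1/(R²T), with R the relative error and T the run time):

```python
import numpy as np

rng = np.random.default_rng(4)
true_value, N = 1.0, 10000
# Stand-ins for the four estimators, with different per-history noise.
noise = {"analog": 2.0, "absorption": 3.0, "collision": 2.5, "track-length": 0.8}

means, variances = {}, {}
for name, s in noise.items():
    samples = true_value + rng.normal(0.0, s, N)
    means[name] = samples.mean()
    variances[name] = samples.var(ddof=1) / N   # variance of the mean

# Inverse-variance weighted combination (optimal for independent estimators).
w = {k: 1.0 / v for k, v in variances.items()}
combined = sum(w[k] * means[k] for k in means) / sum(w.values())
combined_var = 1.0 / sum(w.values())
print("combined estimate:", combined, "+/-", combined_var ** 0.5)
print("best solo std dev:", min(variances.values()) ** 0.5)
```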

  5. Covariance expressions for eigenvalue and eigenvector problems

    Science.gov (United States)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue--eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
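    The simple-eigenvalue Jacobian can be illustrated and checked against forward finite differencing, the validation route mentioned above. The formula used, dλ/dA[p,q] = w[p]·v[q]/(wᵀv) with v a right and w a left eigenvector, is the standard first-order perturbation result; the 2×2 matrix is an arbitrary example, not one from the thesis.

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
lam, V = np.linalg.eig(A)
lamT, W = np.linalg.eig(A.T)                # columns of W: left eigenvectors of A
k = int(np.argmax(lam.real))                # pick the dominant (simple) eigenvalue
kT = int(np.argmin(np.abs(lamT - lam[k])))  # match it among A^T's eigenvalues
v, w = V[:, k], W[:, kT]

p, q, eps = 0, 1, 1e-7
analytic = w[p] * v[q] / (w @ v)            # d(lambda)/dA[p, q]

Ap = A.copy()
Ap[p, q] += eps                             # forward finite difference
lam_p = np.linalg.eig(Ap)[0]
numeric = (lam_p[np.argmin(np.abs(lam_p - lam[k]))] - lam[k]) / eps
print("analytic:", analytic, " finite difference:", numeric)
```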

  6. The Markov moment problem and extremal problems

    CERN Document Server

    Kreĭn, M G; Louvish, D

    1977-01-01

    In this book, an extensive circle of questions originating in the classical work of P. L. Chebyshev and A. A. Markov is considered from the more modern point of view. It is shown how results and methods of the generalized moment problem are interlaced with various questions of the geometry of convex bodies, algebra, and function theory. From this standpoint, the structure of convex and conical hulls of curves is studied in detail and isoperimetric inequalities for convex hulls are established; a theory of orthogonal and quasiorthogonal polynomials is constructed; problems on limiting values of integrals and on least deviating functions (in various metrics) are generalized and solved; problems in approximation theory and interpolation and extrapolation in various function classes (analytic, absolutely monotone, almost periodic, etc.) are solved, as well as certain problems in optimal control of linear objects.

  7. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  8. A literature review of expert problem solving using analogy

    OpenAIRE

    Mair, C; Martincova, M; Shepperd, MJ

    2009-01-01

    We consider software project cost estimation from a problem solving perspective. Taking a cognitive psychological approach, we argue that the algorithmic basis for CBR tools is not representative of human problem solving and this mismatch could account for inconsistent results. We describe the fundamentals of problem solving, focusing on experts solving ill-defined problems. This is supplemented by a systematic literature review of empirical studies of expert problem solving of non-trivial pr...

  9. Differential equations problem solver

    CERN Document Server

    Arterburn, David R

    2012-01-01

    REA's Problem Solvers is a series of useful, practical, and informative study guides. Each title in the series is a complete step-by-step solution guide. The Differential Equations Problem Solver enables students to solve difficult problems by showing them step-by-step solutions to Differential Equations problems. The Problem Solvers cover material ranging from the elementary to the advanced and make excellent review books and textbook companions. They're perfect for undergraduate and graduate studies. The Differential Equations Problem Solver is the perfect resource for any class, any exam, and

  10. El cooperativismo vitivinícola en la Unión Europea y España. Un estudio exploratorio en la Denominación de Origen de Alicante / The wine growing cooperativism in the European Union and Spain. An exploratory study in the Origin Denomination of Alicante

    Directory of Open Access Journals (Sweden)

    Amparo MELIÁN NAVARRO

    2007-09-01

    Full Text Available Wine growing cooperativism is an important reality in the countries of the European Union, above all in France and Italy, where the main European cooperative wine cellars are located. This paper characterizes wine growing cooperativism in the European Union and Spain, with special interest in a specific geographic area, that of the Origin Denomination (D.O.) of Alicante, where an exploratory study is carried out on the significance and representativeness of cooperative wine cellars relative to the total number of wine companies (S.A., S.L. and private companies) in the main magnitudes of production and marketing. In addition, an empirical study is presented, centred on a bivariate analysis based on a survey of the wine cellars of the D.O. Alicante carried out between March and June 2007, with the aim of understanding the sector from the supply-side perspective.

  11. Radon: A health problem and a communication problem

    International Nuclear Information System (INIS)

    Johnson, R.H.

    1992-01-01

    The US Environmental Protection Agency (USEPA) is making great efforts to alert the American public to the potential health risks of radon in homes. The news media have widely publicized radon as a problem; state and local governments are responding to public alarm; and hundreds of radon "experts" are now offering radon detection and mitigation services. Apparently, USEPA's communication program is working, and the public is becoming increasingly concerned with radon. But are they concerned with radon as a "health" problem in the way USEPA intended? The answer is yes, partly. More and more, however, the concerns are about home resale values. Many homebuyers now decide whether to buy on the basis of a single radon screening measurement, comparing it with USEPA's action guide of 4 pCi/L. They often conclude that 3.9 is OK, but 4.1 is not. Here is where the communication problems begin. The public largely misunderstands the significance of USEPA's guidelines and the meaning of screening measurements. Seldom does anyone inquire about the quality of the measurements, or the results of USEPA performance testing. Who asks about the uncertainty of lifetime exposure assessments based on a 1-hour, 1-day, 3-day, or even 30-day measurement? Who asks about the uncertainty of USEPA's risk estimates? Fortunately, an increasing number of radiation protection professionals are asking such questions. They find that USEPA's risk projections are based on many assumptions which warrant further evaluation, particularly with regard to the combined risks of radon and cigarette smoking. This is the next communication problem. What are these radiation professionals doing to understand the bases for radon health-risk projections? Who is willing to communicate a balanced perspective to the public? Who is willing to communicate the uncertainty and conservatism in radon measurements and risk estimates?

  12. Side Effects: Sleep Problems

    Science.gov (United States)

    Sleep problems are a common side effect during cancer treatment. Learn how a polysomnogram can assess sleep problems. Learn about the benefits of managing sleep disorders in men and women with cancer.

  13. The internal percolation problem

    International Nuclear Information System (INIS)

    Bezsudnov, I.V.; Snarskii, A.A.

    2010-01-01

    The internal percolation problem (IP) is introduced and investigated as a new type of percolation problem. In contrast to the usual (or external) percolation problem (EP), where the percolation current flows from the top to the bottom of the system, in the IP case the voltage is applied through bars which are present in a hole located within the system. The EP problem has two major parameters: M, the size of the system, and a_0, the size of inclusions, bond size, etc. The IP problem has one parameter more: the size of the hole, L. Numerical simulation shows that the critical indexes of conductance for the IP problem are very close to those of the EP problem. On the contrary, the indexes of the relative spectral noise density of 1/f noise and higher moments differ from those of the EP problem. The basis of these facts is discussed.

  14. Challenging problems in algebra

    CERN Document Server

    Posamentier, Alfred S

    1996-01-01

    Over 300 unusual problems, ranging from easy to difficult, involving equations and inequalities, Diophantine equations, number theory, quadratic equations, logarithms, more. Detailed solutions, as well as brief answers, for all problems are provided.

  15. Study the Problem.

    Science.gov (United States)

    Choate, Joyce S.

    1990-01-01

    The initial step of a strategic process for solving mathematical problems, "studying the question," is discussed. A lesson plan for teaching students to identify and revise arithmetic problems is presented, involving directed instruction and supervised practice. (JDD)

  16. German standard problem No. 2

    International Nuclear Information System (INIS)

    Burkhardt, R.

    1980-02-01

    The German Standard Problem No. 2 (primary circuits) is meant to check whether the presently available computer programs dealing with ECCS problems are suitable to reflect refill and flooding processes with sufficient accuracy. The change from conventional calculation methods to the 'best-estimate' method requires the possibility of exact comparison, which is given here by experimental results from the primary circuit test plant. The test plant of KWU Erlangen, with primary circuit mock-ups on a 1:134 scale and exact level indications, allows comparative testing in which emergency cooling water is injected into the system, filled with saturated steam, through the cold legs, or rather through the annulus mock-up. The present report details the calculations, the anticipated results and their comparison with the experimental results. (orig./RW) [de

  17. Cosmological constant problem

    International Nuclear Information System (INIS)

    Weinberg, S.

    1989-01-01

    The cosmological constant problem is discussed. The history of the problem is briefly considered. Five different approaches to the solution of the problem are described: supersymmetry, supergravity and superstrings; the anthropic approach; the mechanism of Lagrangian alignment; modification of gravitation theory; and quantum cosmology. It is noted that the approach based on quantum cosmology is the most promising one

  18. The Complete Problem Solver.

    Science.gov (United States)

    Hayes, John R.

    This book, designed for a college course on general problem-solving skills, focuses on skills that can be used by anyone in solving problems that occur in everyday life. Part I considers theory and practice: understanding problems, search, and protocol analysis. Part II discusses memory and knowledge acquisition: the structure of human memory,…

  19. The rational complementarity problem

    NARCIS (Netherlands)

    Heemels, W.P.M.H.; Schumacher, J.M.; Weiland, S.

    1999-01-01

    An extension of the linear complementarity problem (LCP) of mathematical programming is the so-called rational complementarity problem (RCP). This problem occurs if complementarity conditions are imposed on input and output variables of linear dynamical input/state/output systems. The resulting

  20. The triangle scheduling problem

    NARCIS (Netherlands)

    Dürr, Christoph; Hanzálek, Zdeněk; Konrad, Christian; Seddik, Yasmina; Sitters, R.A.; Vásquez, Óscar C.; Woeginger, Gerhard

    2017-01-01

    This paper introduces a novel scheduling problem, where jobs occupy a triangular shape on the time line. This problem is motivated by scheduling jobs with different criticality levels. A measure is introduced, namely the binary tree ratio. It is shown that the Greedy algorithm solves the problem to

  1. Pollution problems plague Poland

    International Nuclear Information System (INIS)

    Bajsarowicz, J.F.

    1989-01-01

    Poland's environmental problems are said to stem from investments in heavy industries that require enormous quantities of power and from the exploitation of two key natural resources: coal and sulfur. Air and water pollution problems and related public health problems are discussed

  2. Classifying IS Project Problems

    DEFF Research Database (Denmark)

    Munk-Madsen, Andreas

    2006-01-01

    The literature contains many lists of IS project problems, often in the form of risk factors. The problems sometimes appear unordered and overlapping, which reduces their usefulness to practitioners as well as theoreticians. This paper proposes a list of criteria for formulating project problems...

  3. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
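    The connection between quantile estimation and the pinball (check) loss that underlies such estimators can be shown in a few lines: the constant minimizing the mean pinball loss is the sample τ-quantile, which is why a network trained with this loss estimates conditional quantiles (an illustration, not the paper's kernel construction):

```python
import numpy as np

def pinball(y, q, tau):
    """Mean pinball (check) loss of predicting the constant q."""
    e = y - q
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

rng = np.random.default_rng(5)
y = rng.normal(0.0, 1.0, 10000)
tau = 0.9

grid = np.linspace(-3.0, 3.0, 601)
q_star = grid[np.argmin([pinball(y, q, tau) for q in grid])]
print("pinball minimizer:  ", q_star)
print("sample 0.9-quantile:", np.quantile(y, tau))
```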

  4. Iterative algorithm for the volume integral method for magnetostatics problems

    International Nuclear Information System (INIS)

    Pasciak, J.E.

    1980-11-01

    Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D show that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double layer dipole magnet are given. Error estimates for the linearized problem are also derived

  5. Determination of scaling factors to estimate the radionuclide inventory in waste with low and intermediate-level activity from the IEA-R1 reactor; Determinacao de fatores de escala para estimativa do inventario de radionuclideos em rejeitos de media e baixa atividades do reator IEA-R1

    Energy Technology Data Exchange (ETDEWEB)

    Taddei, Maria Helena Tirollo

    2013-07-01

    Regulations regarding transfer and final disposal of radioactive waste require that the inventory of radionuclides for each container enclosing such waste must be estimated and declared. The regulatory limits are established as a function of the annual radiation doses that members of the public could be exposed to from the radioactive waste repository, which mainly depend on the activity concentration of radionuclides, given in Bq/g, found in each waste container. Most of the radionuclides that emit gamma-rays can have their activity concentrations determined straightforwardly by measurements carried out externally to the containers. However, radionuclides that emit exclusively alpha or beta particles, as well as gamma-rays or X-rays with low energy and low absolute emission intensity, or whose activity is very low among the radioactive waste, are generically designated as Difficult to Measure Nuclides (DTMs). The activity concentrations of these DTMs are determined by means of complex radiochemical procedures that involve isolating the chemical species being studied from the interference in the waste matrix. Moreover, samples must be collected from each container in order to perform the analyses inherent to the radiochemical procedures, which exposes operators to high levels of radiation and is very costly because of the large number of radioactive waste containers that need to be characterized at a nuclear facility. An alternative methodology to approach this problem consists in obtaining empirical correlations between some radionuclides that can be measured directly – such as ⁶⁰Co and ¹³⁷Cs, therefore designated as Key Nuclides (KNs) – and the DTMs. This methodology, denominated Scaling Factor, was applied in the scope of the present work in order to obtain Scaling Factors or Correlation Functions for the most important radioactive wastes with low and intermediate-activity level from the IEA-R1 nuclear research reactor. (author)
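    An illustrative scaling-factor computation (all activities synthetic, not IEA-R1 data; ⁶⁰Co as the key nuclide and ⁶³Ni as a hypothetical DTM): the log-mean of the DTM/KN activity ratios over radiochemically characterized containers gives a factor that converts a gamma-measured key-nuclide activity into a DTM estimate.

```python
import numpy as np

# Paired activity concentrations (Bq/g) from radiochemically analysed drums.
co60 = np.array([1.2e4, 3.5e4, 8.0e3, 2.1e4])   # key nuclide, gamma-measured
ni63 = np.array([2.9e4, 9.1e4, 1.7e4, 5.6e4])   # DTM, from radiochemistry

sf = np.exp(np.mean(np.log(ni63 / co60)))       # log-mean scaling factor
print("scaling factor Ni-63/Co-60:", sf)

# Estimate the DTM in a new drum from its gamma-measured Co-60 alone.
print("estimated Ni-63:", sf * 1.5e4, "Bq/g")
```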

  6. Combining Facial Dynamics With Appearance for Age Estimation

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Alnajar, F.; Salah, A.A.; Gevers, T.

    2015-01-01

    Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We

  7. Inverse problems for the Boussinesq system

    International Nuclear Information System (INIS)

    Fan, Jishan; Jiang, Yu; Nakamura, Gen

    2009-01-01

    We obtain two results on inverse problems for a 2D Boussinesq system. One is that we prove the Lipschitz stability for the inverse source problem of identifying a time-independent external force in the system with observation data in an arbitrary sub-domain over a time interval of the velocity and the data of velocity and temperature at a fixed positive time t_0 > 0 over the whole spatial domain. The other one is that we prove a conditional stability estimate for an inverse problem of identifying the two initial conditions with a single observation on a sub-domain

  8. A Note on optimal estimation in the presence of outliers

    Directory of Open Access Journals (Sweden)

    John N. Haddad

    2017-06-01

    Full Text Available Haddad, J. 2017. A Note on optimal estimation in the presence of outliers. Lebanese Science Journal, 18(1): 136-141. The basic estimation problem of the mean and standard deviation of a random normal process in the presence of an outlying observation is considered. The value of the outlier is taken as a constraint imposed on the maximization problem of the log likelihood. It is shown that the optimal solution of the maximization problem exists and expressions for the estimates are given. Applications to estimation in the presence of outliers and outlier detection are discussed and illustrated through a simulation study and an analysis of trade data

  9. Semidefinite linear complementarity problems

    International Nuclear Information System (INIS)

    Eckhardt, U.

    1978-04-01

    Semidefinite linear complementarity problems arise by discretization of variational inequalities describing e.g. elastic contact problems, free boundary value problems etc. In the present paper linear complementarity problems are introduced and the theory as well as the numerical treatment of them are described. In the special case of semidefinite linear complementarity problems a numerical method is presented which combines the advantages of elimination and iteration methods without suffering from their drawbacks. This new method has very attractive properties since it has a high degree of invariance with respect to the representation of the set of all feasible solutions of a linear complementarity problem by linear inequalities. By means of some practical applications the properties of the new method are demonstrated. (orig.) [de

  10. Matrix interdiction problem

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Feng [Los Alamos National Laboratory; Kasiviswanathan, Shiva [Los Alamos National Laboratory

    2010-01-01

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove k columns such that the sum over all rows of the maximum entry in each row is minimized. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to prioritize border checkpoints in order to minimize the probability that an adversary can successfully cross the border. After introducing the matrix interdiction problem, we prove that the problem is NP-hard, and even NP-hard to approximate within an additive n^γ factor for a fixed constant γ. We also present an algorithm for this problem that achieves an (n-k)-factor multiplicative approximation ratio.
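    A brute-force reference implementation of the objective (exponential in k, for illustration only; the hardness results above say no efficient exact algorithm is expected):

```python
from itertools import combinations

def interdict(matrix, k):
    """Exhaustively pick k columns whose removal minimizes the sum of row maxima."""
    m = len(matrix[0])
    best_value, best_cols = float("inf"), None
    for cols in combinations(range(m), k):
        keep = [j for j in range(m) if j not in cols]
        value = sum(max(row[j] for j in keep) for row in matrix)
        if value < best_value:
            best_value, best_cols = value, cols
    return best_value, best_cols

M = [[3, 1, 4],
     [2, 7, 1],
     [5, 2, 2]]
print(interdict(M, k=1))   # -> (11, (1,)): removing column 1 is optimal
```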

  11. Creativity for Problem Solvers

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    2009-01-01

    This paper presents some modern and interdisciplinary concepts about creativity and creative processes, especially related to problem solving. Central publications related to the theme are briefly reviewed. Creative tools and approaches suitable to support problem solving are also presented. Finally, the paper outlines the author's experiences using creative tools and approaches to: facilitation of problem solving processes, strategy development in organisations, design of optimisation systems for large scale and complex logistic systems, and creative design of software optimisation for complex non...

  12. The stochastic goodwill problem

    OpenAIRE

    Marinelli, Carlo

    2003-01-01

    Stochastic control problems related to optimal advertising under uncertainty are considered. In particular, we determine the optimal strategies for the problem of maximizing the utility of goodwill at launch time and minimizing the disutility of a stream of advertising costs that extends until the launch time for some classes of stochastic perturbations of the classical Nerlove-Arrow dynamics. We also consider some generalizations such as problems with constrained budget and with discretionar...

  13. The pear thrips problem

    Science.gov (United States)

    Bruce L. Parker

    1991-01-01

    As entomologists, we sometimes like to think of an insect pest problem as simply a problem with an insect and its host. Our jobs would be much easier if that were the case, but of course, it is never that simple. There are many other factors besides the insect, and each one must be fully considered to understand the problem and develop effective management solutions....

  14. Mobile robot motion estimation using Hough transform

    Science.gov (United States)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

    This paper proposes an algorithm for the estimation of mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot's range sensors. A similar sample of the space geometry at any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or the map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from the measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. The specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform.
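    A much-simplified sketch of the rotation step only (synthetic points; translation and scale handling omitted): consecutive scan points are mapped to line angles, the θ axis of the Hough parameter space, and the rotation is recovered as the circular shift that best aligns the two angle histograms.

```python
import numpy as np

def angle_histogram(points, bins=180):
    d = np.diff(points, axis=0)
    theta = np.arctan2(d[:, 1], d[:, 0]) % np.pi   # line direction in [0, pi)
    return np.histogram(theta, bins=bins, range=(0.0, np.pi))[0]

rng = np.random.default_rng(6)
scan = rng.uniform(-5.0, 5.0, (200, 2))
phi = np.deg2rad(25.0)                              # true rotation
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
scan_rotated = scan @ R.T

h1, h2 = angle_histogram(scan), angle_histogram(scan_rotated)
bins = len(h1)
scores = [np.dot(h1, np.roll(h2, -s)) for s in range(bins)]
estimate = np.argmax(scores) * np.pi / bins         # recovered modulo pi
print("estimated rotation (deg):", np.rad2deg(estimate))
```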

  15. THE PROBLEM OF SUPPLIER

    OpenAIRE

    Raffo Lecca, Eduardo

    2014-01-01

    This is a famous problem from the annals of literature in operations research. G. Dantzig in [1] refers to W.W. Jacobs with his paper "The Caterer Problem" Nav. Log Res. Quart. 1 1954; as well as Gaddum, Hoffman and Sokolowsky "On the Solution of the Caterer Problem" Naval Res Logist. Quart., Vol.1, No. 3, september, 1954, and William Prager "On the Caterer Problem" of Management Sci, Vol 3, No. 1 october 1956 and Management Sci, Vol 3, No. 2 january 1957. Subsequently both G. Hadley presents...

  16. The Problem of Evil

    OpenAIRE

    Araki,Naoki

    2018-01-01

    The Problem of Evil has been discussed as one of the major problems in monotheism. “Why does Almighty God allow evil to exist?” Various solutions to this problem have been proposed, including the Free Will Defence. But none of them is convincing. The Problem of Evil has an assumption, which is that God exists. One of the proofs of God’s existence is René Descartes’s Ontological Argument. But none of them is persuasive. Every logic has its own assumption, which needs to be verified. So this pr...

  17. Numerical problems in physics

    CERN Document Server

    Singh, Devraj

    2015-01-01

    Numerical Problems in Physics, Volume 1 is intended to serve the needs of students pursuing graduate and postgraduate courses in universities with Physics and Materials Science as subjects, including those appearing in engineering, medical, and civil services entrance examinations. KEY FEATURES: * 29 chapters on Optics, Waves & Oscillations, Electromagnetic Field Theory, Solid State Physics & Modern Physics * 540 solved numerical problems from various universities and competitive examinations * 523 multiple choice questions for quick and clear understanding of subject matter * 567 unsolved numerical problems for grasping concepts of the various topics in Physics * 49 figures for understanding problems and concepts

  18. Simon on problem solving

    DEFF Research Database (Denmark)

    Foss, Kirsten; Foss, Nicolai Juul

    2006-01-01

    Two of Herbert Simon's best-known papers are 'The Architecture of Complexity' and 'The Structure of Ill-Structured Problems.' We discuss the neglected links between these two papers, highlighting the role of decomposition, in the context of problems on which constraints have been imposed, as a general approach to problem solving. We apply these Simonian ideas to organisational issues, specifically new organisational forms. Specifically, Simonian ideas allow us to develop a morphology of new organisational forms and to point to some design problems that characterise these forms.

  19. On Euler's problem

    International Nuclear Information System (INIS)

    Egorov, Yurii V

    2013-01-01

    We consider the classical problem on the tallest column which was posed by Euler in 1757. Bernoulli-Euler theory serves today as the basis for the design of high buildings. This problem is reduced to the problem of finding the potential for the Sturm-Liouville equation corresponding to the maximum of the first eigenvalue. The problem has been studied by many mathematicians but we give the first rigorous proof of the existence and uniqueness of the optimal column and we give new formulae which let us find it. Our method is based on a new approach consisting in the study of critical points of a related nonlinear functional. Bibliography: 6 titles.

  20. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group for the Assessment of Shielding Experiments, in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are newly presented, in addition to the twenty-one problems already proposed, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems are principally for investigating the backscattering and streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  1. Art as metontological problem

    Directory of Open Access Journals (Sweden)

    Radovanović Saša Ž.

    2014-01-01

    The author explains the link between fundamental ontology and metontology in Heidegger's thought. In this context, he raises the question of art as a metontological problem. He then goes on to show that the problem of metontology stems from an immanent transformation of fundamental ontology. In this sense, two aspects of the problem of existence assume relevance, namely universality and radicalism. He draws the conclusion that metontology, and art as its problem, as opposed to fundamental ontology, were not integrated into Heidegger's later thought.

  2. Combinatorial problems and exercises

    CERN Document Server

    Lovász, László

    2007-01-01

    The main purpose of this book is to provide help in learning existing techniques in combinatorics. The most effective way of learning such techniques is to solve exercises and problems. This book presents all the material in the form of problems and series of problems (apart from some general comments at the beginning of each chapter). In the second part, a hint is given for each exercise, which contains the main idea necessary for the solution, but allows the reader to practice the techniques by completing the proof. In the third part, a full solution is provided for each problem. This book w

  3. Nondestructive, stereological estimation of canopy surface area

    DEFF Research Database (Denmark)

    Wulfsohn, Dvora-Laio; Sciortino, Marco; Aaslyng, Jesper M.

    2010-01-01

    We describe a stereological procedure to estimate the total leaf surface area of a plant canopy in vivo, and address the problem of how to predict the variance of the corresponding estimator. The procedure involves three nested systematic uniform random sampling stages: (i) selection of plants from a canopy using the smooth fractionator, (ii) sampling of leaves from the selected plants using the fractionator, and (iii) area estimation of the sampled leaves using point counting. We apply this procedure to estimate the total area of a chrysanthemum (Chrysanthemum morifolium L.) canopy and evaluate both the time required and the precision of the estimator. Furthermore, we compare the precision of point counting for three different grid intensities with that of several standard leaf area measurement techniques. Results showed that the precision of the plant leaf area estimator based on point counting...
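
    Stage (iii) above — point counting — has a one-line estimator behind it: the leaf area is estimated as the number of grid points hitting the leaf times the area associated with one grid point, with the grid given a uniformly random offset. A minimal Python sketch, with a disc standing in for a leaf and an assumed grid spacing:

        import numpy as np

        def point_count_area(inside, spacing, rng, half_width=10.0):
            """Systematic uniform random grid; unbiased over random offsets."""
            ox, oy = rng.uniform(0.0, spacing, size=2)
            xs = np.arange(ox - half_width, half_width, spacing)
            ys = np.arange(oy - half_width, half_width, spacing)
            hits = sum(inside(x, y) for x in xs for y in ys)
            return hits * spacing**2          # area represented by one point

        rng = np.random.default_rng(1)
        leaf = lambda x, y: x * x + y * y <= 3.0**2     # disc, true area ~28.27
        print([point_count_area(leaf, 1.0, rng) for _ in range(5)])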

  4. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

    After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...

  5. EDITORIAL: Inverse Problems in Engineering

    Science.gov (United States)

    West, Robert M.; Lesnic, Daniel

    2007-01-01

    Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.

  6. New technologies - new corrosion problems

    International Nuclear Information System (INIS)

    Heitz, E.

    1994-01-01

    Adequate resistance of materials to corrosion is equally important for classical and for new technologies. This article considers the economic consequences of corrosion damage and, in addition to the long-known GNP orientation, presents a new approach to estimating the costs of corrosion and corrosion protection via maintenance, especially corrosion-related maintenance. The significance of "high-tech", "medium-tech" and "low-tech" material and corrosion problems is assessed. Selected examples taken from new technologies in the areas of power engineering, environmental engineering, chemical engineering, and biotechnology demonstrate the great significance of these problems. It is concluded that corrosion research and corrosion prevention technology will never come to an end but will constantly face new problems. Two technologies are of particular interest since they focus attention on new methods of investigation: microelectronics and the final disposal of radioactive wastes. The article closes by considering the importance of the transfer of experience and technology. Since the manufacturers and operators of machines and plant do not generally have access to the very latest knowledge, they should be kept informed through advisory services, experimental studies, databases, and further education. (orig.)

  7. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  8. $L^2$ estimates for the $\bar\partial$ operator

    OpenAIRE

    McNeal, Jeffery D.; Varolin, Dror

    2015-01-01

    This is a survey article about $L^2$ estimates for the $\bar\partial$ operator. After a review of the basic approach that has come to be called the "Bochner-Kodaira Technique", the focus is on twisted techniques and their applications to estimates for $\bar\partial$, to $L^2$ extension theorems, and to other problems in complex analysis and geometry, including invariant metric estimates and the $\bar\partial$-Neumann Problem.

  9. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative, then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  10. THE PROBLEM OF TYPOLOGY OF RUSSIAN INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    V.I. GIDIRINSKY

    2007-05-01

    The author attempts to revise the approach to the problem of the Russian intelligentsia that is generally accepted in social science. Grounds for the necessity of such a revision are analysed in the first part of the article. This makes it possible for the author to point out the inadequacy of overly general estimations (both positive and negative) and to show the advantages of a typological approach.

  11. Nonlinear estimation and control of automotive drivetrains

    CERN Document Server

    Chen, Hong

    2014-01-01

    Nonlinear Estimation and Control of Automotive Drivetrains discusses the control problems involved in automotive drivetrains, particularly in hydraulic Automatic Transmission (AT), Dual Clutch Transmission (DCT) and Automated Manual Transmission (AMT). Challenging estimation and control problems, such as driveline torque estimation and gear shift control, are addressed by applying the latest nonlinear control theories, including constructive nonlinear control (Backstepping, Input-to-State Stable) and Model Predictive Control (MPC). The estimation and control performance is improved while the calibration effort is reduced significantly. The book presents many detailed examples of design processes and thus enables the readers to understand how to successfully combine purely theoretical methodologies with actual applications in vehicles. The book is intended for researchers, PhD students, control engineers and automotive engineers. Hong Chen is a professor at the State Key Laboratory of Automotive Simulation and...

  12. Solving problems in social-ecological systems: definition, practice and barriers of transdisciplinary research.

    Science.gov (United States)

    Angelstam, Per; Andersson, Kjell; Annerstedt, Matilda; Axelsson, Robert; Elbakidze, Marine; Garrido, Pablo; Grahn, Patrik; Jönsson, K Ingemar; Pedersen, Simen; Schlyter, Peter; Skärbäck, Erik; Smith, Mike; Stjernquist, Ingrid

    2013-03-01

    Translating policies about sustainable development as a social process and sustainability outcomes into the real world of social-ecological systems involves several challenges. Hence, research policies advocate improved innovative problem-solving capacity. One approach is transdisciplinary research that integrates research disciplines, as well as researchers and practitioners. Drawing upon 14 experiences of problem-solving, we used group modeling to map perceived barriers and bridges for researchers' and practitioners' joint knowledge production and learning towards transdisciplinary research. The analysis indicated that the transdisciplinary research process is influenced by (1) the amount of traditional disciplinary formal and informal control, (2) adaptation of project applications to fill the transdisciplinary research agenda, (3) stakeholder participation, and (4) functional team building/development based on self-reflection and experienced leadership. Focusing on implementation of green infrastructure policy as a common denominator for the delivery of ecosystem services and human well-being, we discuss how to diagnose social-ecological systems, and use knowledge production and collaborative learning as treatments.

  13. Obtaining sparse distributions in 2D inverse problems

    OpenAIRE

    Reci, A; Sederman, Andrew John; Gladden, Lynn Faith

    2017-01-01

    The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems; relaxat...

  14. Comments on mutagenesis risk estimation

    International Nuclear Information System (INIS)

    Russell, W.L.

    1976-01-01

    Several hypotheses and concepts have tended to oversimplify the problem of mutagenesis and can be misleading when used for genetic risk estimation. These include: the hypothesis that radiation-induced mutation frequency depends primarily on the DNA content per haploid genome, the extension of this concept to chemical mutagenesis, the view that, since DNA is DNA, mutational effects can be expected to be qualitatively similar in all organisms, the REC unit, and the view that mutation rates from chronic irradiation can be theoretically and accurately predicted from acute irradiation data. Therefore, direct determination of frequencies of transmitted mutations in mammals continues to be important for risk estimation, and the specific-locus method in mice is shown to be not as expensive as is commonly supposed for many of the chemical testing requirements

  15. SOCIOLINGUISTIC REPRESENTATIONS AND THE NAMING OF BERBER DIALECTS IN ALGERIA

    Directory of Open Access Journals (Sweden)

    Mourad BEKTACHE

    2014-05-01

    The words Berber, Tamazight, Kabyle, Chaoui, Mozabite, ... are used to designate a language, a dialect of a language, or dialects of the same language. From a linguistic point of view, however, a standard Berber language does not exist. Speakers resort to generic denominations to designate their language (in the singular): the one they consider "unified, homogeneous". The sociolinguistic representations that Berber speakers have of their language practices underlie their attitudes towards their language. These attitudes influence the process of naming Berber dialects. However, within the same (here Kabyle-speaking) community there exist pejorative denominations for certain Berber dialects. In this study we examine the various names designating the Berber dialects and the pejorative denominations of certain dialects.

  16. 19 Costs and Benefits of Proliferation of Christian Denominations in ...

    African Journals Online (AJOL)

    Abstract (fragment): ... stirred up concerns among adherents of religious faiths, onlookers ... proliferation in Nigeria, pointing out its costs and benefits and solutions ... movements in Nigeria dates back to the late 19th ... appeal of religion to step into political power or competition ... mother Churches to "wake up from their spiritual slumber" (p. 43).

  17. Riparian trees as common denominators across the river flow ...

    African Journals Online (AJOL)

    2014-03-04

    Abstract (fragment): ... may be a valuable indicator for water stress, while the other measurements might provide ... (O'Keeffe, 2000), as the life histories of riparian plants are inti- ... Southern Africa, some in the context of groundwater depend- ... and C. gratissimus were spread out next to a ruler on a white ... The data were log...

  18. Pairing symmetry transitions in the even-denominator FQHE system

    International Nuclear Information System (INIS)

    Nomura, Kentaro; Yoshioka, Daijiro

    2001-01-01

    Transitions from a paired quantum Hall state to another quantum Hall state in bilayer systems are discussed in the framework of the edge theory. Starting from the edge theory for the Haldane-Rezayi state, it is shown that the charging effect of a bilayer system which breaks the SU (2) symmetry of the pseudospin shifts the central charge and the conformal dimensions of the fermionic fields which describe the pseudospin sector in the edge theory. This corresponds to the transition from the Haldane-Rezayi state to Halperin's 331 state, or from a singlet d-wave to a triplet p-wave ABM type paired state in the composite fermion picture. Considering interlayer tunneling, the tunneling rate-capacitance phase diagram for the ν=5/2 paired bilayer system is discussed. (author)

  19. Pairing Symmetry Transitions in the Even-Denominator FQHE System

    Science.gov (United States)

    Nomura, Kentaro; Yoshioka, Daijiro

    2001-12-01

    Transitions from a paired quantum Hall state to another quantum Hall state in bilayer systems are discussed in the framework of the edge theory. Starting from the edge theory for the Haldane-Rezayi state, it is shown that the charging effect of a bilayer system which breaks the SU(2) symmetry of the pseudospin shifts the central charge and the conformal dimensions of the fermionic fields which describe the pseudospin sector in the edge theory. This corresponds to the transition from the Haldane-Rezayi state to Halperin's 331 state, or from a singlet d-wave to a triplet p-wave ABM type paired state in the composite fermion picture. Considering interlayer tunneling, the tunneling rate-capacitance phase diagram for the ν=5/2 paired bilayer system is discussed.

  20. Trauma as common denominator of sexual violence and victimisation

    Directory of Open Access Journals (Sweden)

    Veselinović Nataša I.

    2003-01-01

    Results of research on the biological, psychological and sociological characteristics of sexual offenders show etiological and phenomenological differences, while, on the other side, treatment programs show a tendency toward unification. Unification that works combines behavioural learning, victim-empathy work and work on one's own trauma. In this paper the author looks for an answer to the question of who the sexual offender is and how he became one. In theory, rapists and paedophiles are as similar as their victims are, and they are often themselves victims of some traumatic experience, which seeks satisfaction in an inappropriate but well-known way. Sexual violence can be stopped by breaking the circle of its beginning and development: by helping the sexual perpetrator to find a way out of the circle of sexual violence and toward healthier behavioural patterns.

  1. Riparian trees as common denominators across the river flow ...

    African Journals Online (AJOL)

    Riparian tree species, growing under different conditions of water availability, can ... leaf area and increasing wood density correlating with deeper groundwater levels. ... and Sanddrifskloof Rivers (South Africa) under reduced flow conditions.

  2. Public Support for Catholic and Denominational Schools: An International Perspective.

    Science.gov (United States)

    Lawton, Stephen B.

    Government policy on public support for private schools in Sweden, the United States, Australia, Hong Kong, The Netherlands, France and Malta, and Canada is reviewed. In Sweden virtually all schools are government schools funded by local and national grants; only a handful of private schools exist and they receive no government funds. The United…

  3. Exploring the Geography of America's Religious Denominations: A Presbyterian Example

    Science.gov (United States)

    Heatwole, Charles A.

    1977-01-01

    The historically sectional nature of the Presbyterian Church is examined as a case study which illustrates how study of the geography of religious groups can be applied at various academic levels. (AV)

  4. Estimating state-contingent production functions

    DEFF Research Database (Denmark)

    Rasmussen, Svend; Karantininis, Kostas

    The paper reviews the empirical problem of estimating state-contingent production functions. The major problem is that states of nature may not be registered and/or that the number of observations per state is low. Monte Carlo simulation is used to generate an artificial, uncertain production environment based on Cobb-Douglas production functions with state-contingent parameters. The parameters are subsequently estimated from samples of different sizes using Generalized Least Squares and Generalized Maximum Entropy, and the results are compared. It is concluded that Maximum Entropy may...

  5. Iterative methods for distributed parameter estimation in parabolic PDE

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R. [Montana State Univ., Bozeman, MT (United States); Wade, J.G. [Bowling Green State Univ., OH (United States)

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modelling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
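
    A minimal Python sketch of this setup — recovering an unknown diffusion coefficient in the parabolic model u_t = kappa * u_xx from noisy observations by repeated forward solves — with the grid, noise level and true coefficient all assumed for illustration (a bounded scalar search stands in for the large-scale iterative methods the abstract is about):

        import numpy as np
        from scipy.optimize import minimize_scalar

        nx, nt, dx, dt = 50, 400, 1.0 / 49, 5e-5   # keep kappa*dt/dx^2 < 1/2

        def forward(kappa):
            """Explicit finite-difference solve of u_t = kappa * u_xx."""
            u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))   # initial condition
            for _ in range(nt):
                u[1:-1] += kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
            return u

        rng = np.random.default_rng(0)
        data = forward(2.0) + rng.normal(scale=1e-3, size=nx)  # synthetic data

        misfit = lambda k: np.sum((forward(k) - data) ** 2)
        print(minimize_scalar(misfit, bounds=(0.1, 4.0), method="bounded").x)  # ~2.0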

  6. Developmental and Individual Differences in Pure Numerical Estimation

    Science.gov (United States)

    Booth, Julie L.; Siegler, Robert S.

    2006-01-01

    The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1,…

  7. On estimation of the intensity function of a point process

    NARCIS (Netherlands)

    Lieshout, van M.N.M.

    2010-01-01

    Estimation of the intensity function of spatial point processes is a fundamental problem. In this paper, we interpret the Delaunay tessellation field estimator recently introduced by Schaap and Van de Weygaert as an adaptive kernel estimator and give explicit expressions for the mean and...
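
    For concreteness, a minimal Python sketch of the plain fixed-bandwidth kernel intensity estimator that such adaptive estimators generalize, lambda(x) = sum_i K_h(x - x_i); the bandwidth and the synthetic point pattern are assumptions (this is not the Delaunay tessellation field estimator itself):

        import numpy as np

        def kernel_intensity(points, x, h):
            """Gaussian kernel intensity estimate at locations x."""
            z = (x[:, None] - points[None, :]) / h
            return np.exp(-0.5 * z**2).sum(axis=1) / (h * np.sqrt(2 * np.pi))

        rng = np.random.default_rng(0)
        pts = rng.uniform(0.0, 1.0, size=200)        # homogeneous, intensity 200
        grid = np.linspace(0.1, 0.9, 5)
        print(kernel_intensity(pts, grid, h=0.05))   # values near 200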

  8. A comparative study of some robust ridge and liu estimators ...

    African Journals Online (AJOL)

    In multiple linear regression analysis, multicollinearity and outliers are two main problems. When multicollinearity exists, biased estimation techniques such as the Ridge and Liu estimators are preferable to Ordinary Least Squares. On the other hand, when outliers exist in the data, robust estimators like M, MM, LTS and S ...
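
    The Ridge estimator referred to here has the closed form beta = (X'X + kI)^(-1) X'y, trading a little bias for a large variance reduction when X'X is nearly singular. A minimal numpy sketch under an assumed, nearly collinear design (the ridge constant k is an assumption too):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100
        x1 = rng.normal(size=n)
        x2 = x1 + rng.normal(scale=0.01, size=n)    # nearly collinear with x1
        X = np.column_stack([x1, x2])
        y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.5, size=n)

        ols = np.linalg.solve(X.T @ X, X.T @ y)     # unstable: huge variance
        ridge = np.linalg.solve(X.T @ X + 1.0 * np.eye(2), X.T @ y)  # k = 1
        print(ols, ridge)                           # ridge stays near (1, 1)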

  9. The estimation of small probabilities and risk assessment

    International Nuclear Information System (INIS)

    Kalbfleisch, J.D.; Lawless, J.F.; MacKay, R.J.

    1982-01-01

    The primary contribution of statistics to risk assessment is in the estimation of probabilities. Frequently the probabilities in question are small, and their estimation is particularly difficult. The authors consider three examples illustrating some problems inherent in the estimation of small probabilities

  10. Joint Sparsity and Frequency Estimation for Spectral Compressive Sensing

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    ... various interpolation techniques to estimate the continuous frequency parameters. In this paper, we show that solving the problem in a probabilistic framework instead produces an asymptotically efficient estimator which outperforms existing methods in terms of estimation accuracy while still having a low...

  11. Managing Classroom Problems.

    Science.gov (United States)

    Long, James D.

    Schools need to meet unique problems through the development of special classroom management techniques. Factors which contribute to classroom problems include lack of supervision at home, broken homes, economic deprivation, and a desire for peer attention. The educational atmosphere should encourage creativity for both the student and the…

  12. Inverse logarithmic potential problem

    CERN Document Server

    Cherednichenko, V G

    1996-01-01

    The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.

  13. GREECE--SELECTED PROBLEMS.

    Science.gov (United States)

    MARTONFFY, ANDREA PONTECORVO; AND OTHERS

    A curriculum guide is presented for a 10-week study of ancient Greek civilization at the 10th-grade level. Teaching materials for the unit include (1) primary and secondary sources dealing with the period from the Bronze Age through the Hellenistic period, (2) geography problems, and (3) cultural model problem exercises. Those concepts with which…

  14. Solar neutrino problem

    Energy Technology Data Exchange (ETDEWEB)

    Faulkner, D J [Australian National Univ., Canberra. Mount Stromlo and Siding Spring Observatories

    1975-10-01

    This paper reviews several recent attempts to solve the problem in terms of modified solar interior models. Some of these have removed the count rate discrepancy, but have violated other observational data for the sun. One successfully accounts for the Davis results at the expense of introducing an ad hoc correction with no current physical explanation. An introductory description of the problem is given.

  15. Reconfigurable layout problem

    NARCIS (Netherlands)

    Meng, G.; Heragu, S.S.; Heragu, S.S.; Zijm, Willem H.M.

    2004-01-01

    This paper addresses the reconfigurable layout problem, which differs from traditional, robust and dynamic layout problems mainly in two aspects: first, it assumes that production data are available only for the current and upcoming production period. Second, it considers queuing performance

  16. The Problems of Dissection.

    Science.gov (United States)

    Davis, Pat

    1997-01-01

    Describes some problems of classroom dissection including the cruelty that animals destined for the laboratory suffer. Discusses the multilevel approach that the National Anti-Vivisection Society (NAVS) has developed to address the problems of animal dissection such as offering a dissection hotline, exhibiting at science teacher conferences, and…

  17. The solar neutrino problem

    International Nuclear Information System (INIS)

    Roxburgh, I.W.

    1981-01-01

    The problems posed by the low flux of neutrinos from the sun detected by Davis and coworkers are reviewed. Several proposals have been advanced to resolve these problems and the more reasonable (in the author's opinion) are presented. Recent claims that the neutrino may have finite mass are also considered. (orig.)

  18. Word Problem Wizardry.

    Science.gov (United States)

    Cassidy, Jack

    1991-01-01

    Presents suggestions for teaching math word problems to elementary students. The strategies take into consideration differences between reading in math and reading in other areas. A problem-prediction game and four self-checking activities are included along with a magic password challenge. (SM)

  19. Problems in baryon spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Capstick, S. [Florida State Univ., Tallahassee, FL (United States)

    1994-04-01

    Current issues and problems in the physics of ground- and excited-state baryons are considered, and are classified into those which should be resolved by CEBAF in its present form, and those which may require CEBAF to undergo an energy upgrade to 8 GeV or more. Recent theoretical developments designed to address these problems are outlined.

  20. Problems Facing Rural Schools.

    Science.gov (United States)

    Stewart, C. E.; And Others

    Problems facing rural Scottish schools range from short term consideration of daily operation to long term consideration of organizational alternatives. Addressed specifically, such problems include consideration of: (1) liaison between a secondary school and its feeder primary schools; (2) preservice teacher training for work in small, isolated…

  1. Adaptive Problem Solving

    Science.gov (United States)

    2017-03-01

    ... Borrajo and Raquel Fuentetaja, Universidad Carlos III de Madrid, on the meta-level search architecture for finding good combinations of representations and ... heuristics on a problem-by-problem basis. The other is with Carlos Linares, also from Universidad Carlos III de Madrid, on developing effective...

  2. On vector equilibrium problem

    Indian Academy of Sciences (India)

    [G] Giannessi F, Theorems of alternative, quadratic programs and complementarity problems, in: Variational Inequalities and Complementarity Problems (eds) R W Cottle, F Giannessi and J L Lions (New York: Wiley) (1980) pp. 151–186. [K1] Kazmi K R, Existence of solutions for vector optimization, Appl. Math. Lett. 9 (1996).

  3. Users are problem solvers!

    NARCIS (Netherlands)

    Brouwer-Janse, M.D.

    1991-01-01

    Most formal problem-solving studies use verbal protocols and observational data of problem solvers working on a task. In user-centred product-design projects, observational studies of users are frequently used too. In the latter case, however, systematic control of conditions, in-depth analysis and...

  4. Problems in quantum mechanics

    CERN Document Server

    Goldman, Iosif Ilich; Geilikman, B T

    2006-01-01

    This challenging book contains a comprehensive collection of problems in nonrelativistic quantum mechanics of varying degrees of difficulty. It features answers and completely worked-out solutions to each problem. Geared toward advanced undergraduates and graduate students, it provides an ideal adjunct to any textbook in quantum mechanics.

  5. Stochastic estimation of electricity consumption

    International Nuclear Information System (INIS)

    Kapetanovic, I.; Konjic, T.; Zahirovic, Z.

    1999-01-01

    Electricity consumption forecasting is part of the stable functioning of the power system. It is very important for the rationality and efficiency of control processes and for development planning in all aspects of society. On a scientific basis, forecasting is a possible way to solve problems. Among the different models that have been used in the area of forecasting, the stochastic aspect, as a part of quantitative models, takes a very important place in applications. ARIMA models and the Kalman filter, as stochastic estimators, are treated together for electricity consumption forecasting. The main aim of this paper is therefore to present the stochastic forecasting aspect using short time series. (author)
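
    A minimal Python sketch of the ARIMA side of such a forecaster, assuming the statsmodels library is available; the model order (1, 1, 1) and the synthetic consumption series are illustrative assumptions, not choices made in the paper:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        t = np.arange(60)
        series = (np.linspace(100, 120, 60)            # slow load growth
                  + 5 * np.sin(t / 3)                  # cyclic component
                  + rng.normal(scale=1.0, size=60))    # noise

        fit = ARIMA(series, order=(1, 1, 1)).fit()
        print(fit.forecast(steps=7))                   # week-ahead forecast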

  6. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows to get consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis which permits to get easily mean squ...

  7. Iterative importance sampling algorithms for parameter estimation

    OpenAIRE

    Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

    2016-01-01

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
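
    A minimal Python sketch of one (non-iterative) self-normalized importance sampling step for a scalar parameter, using the prior as the proposal distribution; the Gaussian model and data are assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(loc=1.5, scale=1.0, size=20)      # true theta = 1.5

        theta = rng.normal(0.0, 3.0, size=100_000)          # proposal = prior
        # weights = likelihood; samples are independent, so this step is the
        # one that scales almost perfectly across cores
        loglik = -0.5 * ((data[None, :] - theta[:, None]) ** 2).sum(axis=1)
        w = np.exp(loglik - loglik.max())                   # stabilized
        w /= w.sum()
        print(np.sum(w * theta))                            # posterior mean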

  8. Early breastfeeding problems

    DEFF Research Database (Denmark)

    Feenstra, Maria Monberg; Kirkeby, Mette Jørgine; Thygesen, Marianne

    2018-01-01

    Objectives: Breastfeeding problems are common and associated with early cessation. Still, the length of postpartum hospital stay has been reduced. This leaves new mothers to establish breastfeeding at home with less support from health care professionals. The objective was to explore mothers’ perspectives on when breastfeeding problems were the most challenging and prominent early postnatally. The aim was also to identify possible factors associated with the breastfeeding problems. Methods: In a cross-sectional study, a mixed-methods approach was used to analyse postal survey data from 1437 mothers of full-term singleton infants. Content analysis was used to analyse mothers’ open-text descriptions of their most challenging breastfeeding problem. Multiple logistic regression was used to calculate odds ratios for early breastfeeding problems according to sociodemographic and psychosocial factors. Results...

  9. Problems in abstract algebra

    CERN Document Server

    Wadsworth, A R

    2017-01-01

    This is a book of problems in abstract algebra for strong undergraduates or beginning graduate students. It can be used as a supplement to a course or for self-study. The book provides more variety and more challenging problems than are found in most algebra textbooks. It is intended for students wanting to enrich their learning of mathematics by tackling problems that take some thought and effort to solve. The book contains problems on groups (including the Sylow Theorems, solvable groups, presentation of groups by generators and relations, and structure and duality for finite abelian groups); rings (including basic ideal theory and factorization in integral domains and Gauss's Theorem); linear algebra (emphasizing linear transformations, including canonical forms); and fields (including Galois theory). Hints to many problems are also included.

  10. Solved problems in electrochemistry

    International Nuclear Information System (INIS)

    Piron, D.L.

    2004-01-01

    This book presents calculated solutions to problems in fundamental and applied electrochemistry. It uses industrial data to illustrate scientific concepts and scientific knowledge to solve practical problems. It is subdivided into three parts. The first uses modern basic concepts, the second studies the scientific basis for electrode and electrolyte thermodynamics (including E-pH diagrams and the minimum energy involved in transformations) and the kinetics of rate processes (including the energy lost in heat and in parasitic reactions). The third part treats larger problems in electrolysis and power generation, as well as in corrosion and its prevention. Each chapter includes three sections: the presentation of useful principles; some twenty problems with their solutions; and a set of unsolved problems.

  11. Problem Solving and Learning

    Science.gov (United States)

    Singh, Chandralekha

    2009-07-01

    One finding of cognitive research is that people do not automatically acquire usable knowledge by spending lots of time on task. Because students' knowledge hierarchy is more fragmented, "knowledge chunks" are smaller than those of experts. The limited capacity of short term memory makes the cognitive load high during problem solving tasks, leaving few cognitive resources available for meta-cognition. The abstract nature of the laws of physics and the chain of reasoning required to draw meaningful inferences makes these issues critical. In order to help students, it is crucial to consider the difficulty of a problem from the perspective of students. We are developing and evaluating interactive problem-solving tutorials to help students in the introductory physics courses learn effective problem-solving strategies while solidifying physics concepts. The self-paced tutorials can provide guidance and support for a variety of problem solving techniques, and opportunity for knowledge and skill acquisition.

  12. Structural Identification Problem

    Directory of Open Access Journals (Sweden)

    Suvorov Aleksei

    2016-01-01

    The identification problem for existing structures using the Quasi-Newton method and its modification, the Trust Region algorithm, is discussed. The method is extremely useful for structural problems that can be represented by means of mathematical modelling in a finite element code. The nonlinear minimization problem of the L2 norm for structures with linear elastic behaviour is solved using the Optimization Toolbox of Matlab. The direct and inverse procedures for the composition of the desired function to minimize are illustrated for a spatial 3D truss structure as well as for a problem of plane finite elements. The truss identification problem is solved with 2 and 3 unknown parameters in order to compare the computational efforts and for graphical purposes. The particular commands of the Matlab codes are presented in this paper.

  13. The moment problem

    CERN Document Server

    Schmüdgen, Konrad

    2017-01-01

    This advanced textbook provides a comprehensive and unified account of the moment problem. It covers the classical one-dimensional theory and its multidimensional generalization, including modern methods and recent developments. In both the one-dimensional and multidimensional cases, the full and truncated moment problems are carefully treated separately. Fundamental concepts, results and methods are developed in detail and accompanied by numerous examples and exercises. Particular attention is given to powerful modern techniques such as real algebraic geometry and Hilbert space operators. A wide range of important aspects are covered, including the Nevanlinna parametrization for indeterminate moment problems, canonical and principal measures for truncated moment problems, the interplay between Positivstellensätze and moment problems on semi-algebraic sets, the fibre theorem, multidimensional determinacy theory, operator-theoretic approaches, and the existence theory and important special topics of multidime...

  14. Problems in equilibrium theory

    CERN Document Server

    Aliprantis, Charalambos D

    1996-01-01

    In studying General Equilibrium Theory the student must master first the theory and then apply it to solve problems. At the graduate level there is no book devoted exclusively to teaching problem solving. This book teaches for the first time the basic methods of proof and problem solving in General Equilibrium Theory. The problems cover the entire spectrum of difficulty; some are routine, some require a good grasp of the material involved, and some are exceptionally challenging. The book presents complete solutions to two hundred problems. In searching for the basic required techniques, the student will find a wealth of new material incorporated into the solutions. The student is challenged to produce solutions which are different from the ones presented in the book.

  15. Portfolio optimization and the random magnet problem

    Science.gov (United States)

    Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.

    2002-08-01

    Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated and therefore knowledge of cross-correlations among asset price movements are of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk.
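
    The minimum-risk investment referred to here is the classical minimum-variance portfolio: for unit-variance returns with cross-correlation matrix C, the fully invested solution is w = C^(-1) 1 / (1' C^(-1) 1). A minimal numpy sketch with an assumed three-asset correlation matrix (in practice C would first be cleaned with the random-matrix filtering the abstract alludes to):

        import numpy as np

        C = np.array([[1.0, 0.6, 0.3],
                      [0.6, 1.0, 0.2],
                      [0.3, 0.2, 1.0]])
        ones = np.ones(len(C))
        w = np.linalg.solve(C, ones)
        w /= ones @ w                    # enforce full investment, sum(w) = 1
        print(w, w @ C @ w)              # weights and minimum variance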

  16. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...

  17. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  18. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  19. Estimation of hand hygiene opportunities on an adult medical ward using 24-hour camera surveillance: validation of the HOW2 Benchmark Study.

    Science.gov (United States)

    Diller, Thomas; Kelly, J William; Blackhurst, Dawn; Steed, Connie; Boeker, Sue; McElveen, Danielle C

    2014-06-01

    We previously published a formula to estimate the number of hand hygiene opportunities (HHOs) per patient-day using the World Health Organization's "Five Moments for Hand Hygiene" methodology (HOW2 Benchmark Study). HHOs can be used as a denominator for calculating hand hygiene compliance rates when product utilization data are available. This study validates the previously derived HHO estimate using 24-hour video surveillance of health care worker hand hygiene activity. The validation study utilized 24-hour video surveillance recordings of 26 patients' hospital stays to measure the actual number of HHOs per patient-day on a medicine ward in a large teaching hospital. Statistical methods were used to compare these results to those obtained by episodic observation of patient activity in the original derivation study. Total hours of data collection were 81.3 and 1,510.8, resulting in 1,740 and 4,522 HHOs in the derivation and validation studies, respectively. Comparisons of the mean and median HHOs per 24-hour period did not differ significantly. HHOs were 71.6 (95% confidence interval: 64.9-78.3) and 73.9 (95% confidence interval: 69.1-84.1), respectively. This study validates the HOW2 Benchmark Study and confirms that expected numbers of HHOs can be estimated from the unit's patient census and patient-to-nurse ratio. These data can be used as denominators in calculations of hand hygiene compliance rates from electronic monitoring using the "Five Moments for Hand Hygiene" methodology. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  20. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware-efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...
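
    The underlying measurement principle — flow speed follows from the transit delay of random temperature fluctuations between two sensors — can be sketched with a plain cross-correlation delay estimator (the chip estimates the delay with a neural network instead; the sensor spacing, sampling rate and noise level are assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        fs, spacing = 10_000.0, 1e-3        # 10 kHz sampling, 1 mm spacing
        true_delay = 25                     # samples
        s = rng.normal(size=2000)           # random temperature fluctuations
        upstream = s
        downstream = np.roll(s, true_delay) + 0.1 * rng.normal(size=2000)

        xcorr = np.correlate(downstream, upstream, mode="full")
        delay = np.argmax(xcorr) - (len(upstream) - 1)   # peak lag, samples
        print(spacing / (delay / fs))       # flow speed, m/s (~0.4)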

  1. Inverse problems in vision and 3D tomography

    CERN Document Server

    Mohamad-Djafari, Ali

    2013-01-01

    The concept of an inverse problem is a familiar one to most scientists and engineers, particularly in the field of signal and image processing, imaging systems (medical, geophysical, industrial non-destructive testing, etc.) and computer vision. In imaging systems, the aim is not just to estimate unobserved images, but also their geometric characteristics from observed quantities that are linked to these unobserved quantities through the forward problem. This book focuses on imagery and vision problems that can be clearly written in terms of an inverse problem where an estimate for the image a

  2. Brauer type embedding problems

    CERN Document Server

    Ledet, Arne

    2005-01-01

    This monograph is concerned with Galois theoretical embedding problems of so-called Brauer type with a focus on 2-groups and on finding explicit criteria for solvability and explicit constructions of the solutions. The advantage of considering Brauer type embedding problems is their comparatively simple condition for solvability in the form of an obstruction in the Brauer group of the ground field. This book presupposes knowledge of classical Galois theory and the attendant algebra. Before considering questions of reducing the embedding problems and reformulating the solvability criteria, the

  3. Astronauts' menu problem.

    Science.gov (United States)

    Lesso, W. G.; Kenyon, E.

    1972-01-01

    Consideration of the problems involved in choosing appropriate menus for astronauts carrying out SKYLAB missions lasting up to eight weeks. The problem of planning balanced menus on the basis of prepackaged food items within limitations on the intake of calories, protein, and certain elements is noted, as well as a number of other restrictions of both physical and arbitrary nature. The tailoring of a set of menus for each astronaut on the basis of subjective rankings of each food by the astronaut in terms of a 'measure of pleasure' is described, and a computer solution to this problem by means of a mixed integer programming code is presented.

  4. Where is the problem?

    International Nuclear Information System (INIS)

    Levy-Leblond, J.-M.

    1990-01-01

    This paper examines the problem of the reduction of the state vector in quantum theory. The author suggest that this issue ceases to cause difficulties if viewed from the correct perspective, for example by giving the state vector an auxiliary rather than fundamental status. He advocates changing the conceptual framework of quantum theory and working with quantons rather than particles and/or waves. He denies that reduction is a psychophysiological problem of observation, and raises the relevance of experimental apparatus. He concludes by venturing the suggestion that the problem of the reduction of the quantum state vector lies, not in quantum theory, but in classical perspectives. (UK)

  5. Adaptive measurement selection for progressive damage estimation

    Science.gov (United States)

    Zhou, Wenfan; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Chattopadhyay, Aditi; Peralta, Pedro

    2011-04-01

    Noise and interference in sensor measurements degrade the quality of data and have a negative impact on the performance of structural damage diagnosis systems. In this paper, a novel adaptive measurement screening approach is presented to automatically select the most informative measurements and use them intelligently for structural damage estimation. The method is implemented efficiently in a sequential Monte Carlo (SMC) setting using particle filtering. The noise suppression and improved damage estimation capability of the proposed method is demonstrated by an application to the problem of estimating progressive fatigue damage in an aluminum compact-tension (CT) sample using noisy PZT sensor measurements.
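
    A minimal Python sketch of the sequential Monte Carlo loop such a scheme is built on — propagate particles through a damage-growth model, reweight them with the measurement likelihood, resample; the growth model and noise levels are assumptions, not the paper's:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000
        particles = np.zeros(n)                   # damage state per particle
        truth = 0.0
        for _ in range(30):
            truth += 0.05                         # true damage growth
            z = truth + rng.normal(scale=0.1)     # noisy sensor measurement
            particles += np.abs(rng.normal(0.05, 0.02, size=n))  # propagate
            w = np.exp(-0.5 * ((z - particles) / 0.1) ** 2)      # likelihood
            w /= w.sum()
            particles = particles[rng.choice(n, size=n, p=w)]    # resample
        print(particles.mean(), truth)            # posterior mean vs truth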

  6. Cost Estimation and Control for Flight Systems

    Science.gov (United States)

    Hammond, Walter E.; Vanhook, Michael E. (Technical Monitor)

    2002-01-01

    Good program management practices, cost analysis, cost estimation, and cost control for aerospace flight systems are interrelated and depend upon each other. The best cost control process cannot overcome poor design or poor systems trades that lead to the wrong approach. The project needs robust Technical, Schedule, Cost, Risk, and Cost Risk practices before it can incorporate adequate Cost Control. Cost analysis both precedes and follows cost estimation -- the two are closely coupled with each other and with Risk analysis. Parametric cost estimating relationships and computerized models are most often used. NASA has learned some valuable lessons in controlling cost problems, and recommends use of a summary Project Manager's checklist as shown here.

  7. Estimation of quasi-critical reactivity

    International Nuclear Information System (INIS)

    Racz, A.

    1992-02-01

    The bank-of-Kalman-filters method for reactivity and neutron density estimation, originally suggested by D'Attellis and Cortina, is critically reviewed. It is pointed out that the procedure cannot be applied reliably in the form the authors proposed, due to filter divergence. An improved method, which is free from divergence problems, is presented as well. A new estimation technique is proposed and tested using computer simulation results. The procedure is applied to the estimation of small reactivity changes. (R.P.) 9 refs.; 2 figs.; 2 tabs
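
    A minimal Python sketch of one scalar Kalman filter of the kind such a bank is composed of (each filter in the bank would carry its own model hypothesis); here a single filter tracks a slowly drifting quantity through noisy measurements, with all model constants assumed:

        import numpy as np

        rng = np.random.default_rng(0)
        q, r = 1e-4, 0.05**2        # process / measurement noise variances
        x_hat, p = 0.0, 1.0         # initial estimate and its variance
        truth = 0.0
        for _ in range(200):
            truth += rng.normal(scale=q**0.5)       # random-walk signal
            z = truth + rng.normal(scale=r**0.5)    # noisy measurement
            p += q                                  # predict
            k = p / (p + r)                         # Kalman gain
            x_hat += k * (z - x_hat)                # update estimate
            p *= 1.0 - k                            # update variance
        print(x_hat, truth)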

  8. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  9. Calculation of statistic estimates of kinetic parameters from substrate uncompetitive inhibition equation using the median method

    Directory of Open Access Journals (Sweden)

    Pedro L. Valencia

    2017-04-01

    We provide initial rate data from enzymatic reaction experiments and its processing to estimate the kinetic parameters of the substrate uncompetitive inhibition equation using the median method published by Eisenthal and Cornish-Bowden (Cornish-Bowden and Eisenthal, 1974; Eisenthal and Cornish-Bowden, 1974). The method was denominated the direct linear plot and consists in the calculation of the median from a dataset of kinetic parameters Vmax and Km from the Michaelis–Menten equation. Here we present the procedure for applying the direct linear plot to the substrate uncompetitive inhibition equation, a three-parameter equation. The median method is characterized by its robustness and its insensitivity to outliers. The calculations are presented in an Excel datasheet and a computational algorithm was developed in the free software Python. The kinetic parameters of the substrate uncompetitive inhibition equation Vmax, Km and Ks were calculated using three experimental points from the dataset formed by 13 experimental points. All 286 combinations were calculated. The dataset of kinetic parameters resulting from this combinatorial was used to calculate the median, which corresponds to the statistical estimator of the real kinetic parameters. A comparative statistical analysis between the median method and least squares was published in Valencia et al. [3].
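
    A minimal Python sketch of this combinatorial median procedure, assuming the common form of the substrate uncompetitive inhibition equation v = Vmax*S / (Km + S + S^2/Ks), which is linear in (Km, 1/Ks, Vmax), so every triple of data points yields one exact parameter set and the medians over all C(13, 3) = 286 triples give the robust estimates (the synthetic data are assumptions):

        import itertools
        import numpy as np

        def solve_triple(S, v):
            # v*Km + v*S^2*(1/Ks) - S*Vmax = -v*S  for each of the 3 points
            A = np.column_stack([v, v * S**2, -S])
            km, inv_ks, vmax = np.linalg.solve(A, -v * S)
            return vmax, km, 1.0 / inv_ks

        rng = np.random.default_rng(0)
        S = np.linspace(0.5, 13.0, 13)
        v = 10.0 * S / (2.0 + S + S**2 / 8.0)        # Vmax=10, Km=2, Ks=8
        v *= 1 + rng.normal(scale=0.02, size=13)     # small relative noise

        est = [solve_triple(S[list(c)], v[list(c)])
               for c in itertools.combinations(range(13), 3)]
        print(np.median(np.array(est), axis=0))      # ~ (10, 2, 8)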

  10. BEHAVIOUR PROBLEMS IN CHILDREN

    African Journals Online (AJOL)

    Abstract (fragment): Various other estimates have been advanced. Scrivener [3] ... (and 13% of the women) had suffered from definite and disabling ... In the majority of cases such difficulties arise out of ... hygiene programme to alleviate the position would still be ...

  11. Applications of elliptic Carleman inequalities to Cauchy and inverse problems

    CERN Document Server

    Choulli, Mourad

    2016-01-01

    This book presents a unified approach to studying the stability of both elliptic Cauchy problems and selected inverse problems. Based on elementary Carleman inequalities, it establishes three-ball inequalities, which are the key to deriving logarithmic stability estimates for elliptic Cauchy problems and are also useful in proving stability estimates for certain elliptic inverse problems. The book presents three inverse problems, the first of which consists in determining the surface impedance of an obstacle from the far field pattern. The second problem investigates the detection of corrosion by electric measurement, while the third concerns the determination of an attenuation coefficient from internal data, which is motivated by a problem encountered in biomedical imaging.

  12. Some mass measurement problems

    International Nuclear Information System (INIS)

    Merritt, J.S.

    1976-01-01

    Concerning the problem of determining the thickness of a target, an uncomplicated approach is to measure its mass and area and take the quotient. This paper examines the mass measurement aspect of such an approach. (author)

  13. The cosmological constant problem

    International Nuclear Information System (INIS)

    Dolgov, A.D.

    1989-05-01

    A review of the cosmological term problem is presented. The baby universe model and the compensating field model are discussed. The importance of more accurate data on the Hubble constant and the age of the Universe is stressed. 18 refs.

  14. Diabetic Eye Problems

    Science.gov (United States)

    ... damage your eyes. The most common problem is diabetic retinopathy. It is a leading cause of blindness ... You need a healthy retina to see clearly. Diabetic retinopathy damages the tiny blood vessels inside your ...

  15. Problems in optics

    CERN Document Server

    Rousseau, Madeleine; Ter Haar, D

    1973-01-01

    This collection of problems and accompanying solutions provide the reader with a full introduction to physical optics. The subject coverage is fairly traditional, with chapters on interference and diffraction, and there is a general emphasis on spectroscopy.

  16. Mouth Problems and HIV

    Science.gov (United States)

    ... teeth (periodontitis), canker sores, oral warts, fever blisters, oral candidiasis (thrush), hairy leukoplakia (which causes a rough, white patch on the tongue), and dental caries.

  17. Enuresis: A Social Problem.

    Science.gov (United States)

    McDonald, James E.

    1978-01-01

    Several theories and treatments of enuresis are described. The authors conclude that enuresis is a social problem (perhaps due to maturational lag, developmental delay or faulty learning) which requires teacher and parental tolerance and understanding. (SE)

  18. Problem Based Game Design

    DEFF Research Database (Denmark)

    Reng, Lars; Schoenau-Fog, Henrik

    2011-01-01

    At Aalborg University’s department of Medialogy, we are utilizing the Problem Based Learning method to encourage students to solve game design problems by pushing the boundaries and designing innovative games. This paper is concerned with describing this method, how students employ it in various projects, and how they learn to analyse, design, and develop for innovation by using it. We will present various cases to exemplify the approach and focus on how the method engages students and aspires for innovation in digital entertainment and games.

  19. Neutrosophic Integer Programming Problem

    Directory of Open Access Journals (Sweden)

    Mai Mohamed

    2017-02-01

    Full Text Available In this paper, we introduce integer programming in a neutrosophic environment, by considering the coefficients of the problem as triangular neutrosophic numbers. The degrees of acceptance, indeterminacy and rejection of the objectives are simultaneously considered.

  20. Open problems in mathematics

    CERN Document Server

    Nash, Jr, John Forbes

    2016-01-01

    The goal in putting together this unique compilation was to present the current status of the solutions to some of the most essential open problems in pure and applied mathematics. Emphasis is also given to problems in interdisciplinary research for which mathematics plays a key role. This volume comprises highly selected contributions by some of the most eminent mathematicians in the international mathematical community on longstanding problems in very active domains of mathematical research. A joint preface by the two volume editors is followed by a personal farewell to John F. Nash, Jr. written by Michael Th. Rassias. An introduction by Mikhail Gromov highlights some of Nash’s legendary mathematical achievements. The treatment in this book includes open problems in the following fields: algebraic geometry, number theory, analysis, discrete mathematics, PDEs, differential geometry, topology, K-theory, game theory, fluid mechanics, dynamical systems and ergodic theory, cryptography, theoretical computer sc...

  1. [Current problems of deontology].

    Science.gov (United States)

    Dimov, A S

    2010-01-01

    The scope of knowledge in medical ethics continues to expand. Deontology as a science needs systematization of the accumulated data. This review may give impetus to the classification of problems pertaining to this important area of medical activity.

  2. Health Problems at School

    Science.gov (United States)

  3. Challenging problems in geometry

    CERN Document Server

    Posamentier, Alfred S

    1996-01-01

    Collection of nearly 200 unusual problems dealing with congruence and parallelism, the Pythagorean theorem, circles, area relationships, Ptolemy and the cyclic quadrilateral, collinearity and concurrency and more. Arranged in order of difficulty. Detailed solutions.

  4. A nonlinear oscillatory problem

    International Nuclear Information System (INIS)

    Zhou Qingqing.

    1991-10-01

    We have studied the nonlinear oscillatory problem of an orthotropic cylindrical shell and analyzed the character of the oscillatory system. The stability condition of the oscillatory system is given. (author). 6 refs

  5. Problems of research politics

    International Nuclear Information System (INIS)

    Luest, R.

    1977-01-01

    The development in the FRG is portrayed. Illustrated by a particular example, the problems of basic research and of scientists are presented in retrospect, for the present, and with a view to the future. (WB) [de

  6. Quantum first passage problem

    International Nuclear Information System (INIS)

    Kumar, N.

    1984-07-01

    Quantum first passage problem (QUIPP) is formulated and solved in terms of a constrained Feynman path integral. The related paradox of blocking of unitary evolution by continuous observation on the system implicit in QUIPP is briefly discussed. (author)

  7. Teaching Creative Problem Solving.

    Science.gov (United States)

    Christensen, Kip W.; Martin, Loren

    1992-01-01

    Interpersonal and cognitive skills, adaptability, and critical thinking can be developed through problem solving and cooperative learning in technology education. These skills have been identified as significant needs of the workplace as well as for functioning in society. (SK)

  8. To the confinement problem

    International Nuclear Information System (INIS)

    Savvidi, G.K.

    1985-01-01

    A viewpoint is proposed in which physical quantities are separated into observable and unobservable ones, the latter being connected with a Hermitian operator for which the eigenvalue problem is unsolvable

  9. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  10. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. (author)

  11. The solar neutrino problem

    International Nuclear Information System (INIS)

    Bahcall, J.N.

    1986-01-01

    The observed capture rate for solar neutrinos in the ³⁷Cl detector is lower than the predicted capture rate. This discrepancy between theory and observation is known as the 'solar neutrino problem.' The author reviews the basic elements of this problem: the detector efficiency, the theory of stellar (solar) evolution, the nuclear physics of energy generation, and the uncertainties in the predictions. He also addresses the questions: So what? and What next?

  12. Solving radwaste problems

    International Nuclear Information System (INIS)

    Oyen, L.C.

    1976-01-01

    The combination of regulatory changes and increased waste volume has resulted in design changes in waste processing systems. Problems resulting from waste segregation as a basis for design philosophy are considered, and solutions to the problems are suggested. The importance of operator training, maintenance procedures, good housekeeping, water management, and offsite shipment of solids is discussed. Flowsheets for radioactive waste processing systems for boiling water reactors and pressurized water reactors are included

  13. Problems in fluid flow

    International Nuclear Information System (INIS)

    Brasch, D.J.

    1986-01-01

    Chemical and mineral engineering students require texts which give guidance in problem solving to complement their main theoretical texts. This book has a broad coverage of the fluid flow problems which these students may encounter. The fundamental concepts and the application of the behaviour of liquids and gases in unit operations are dealt with. The book is intended to give numerical practice; development of theory is undertaken only when elaboration of the treatments available in theoretical texts is absolutely necessary

  14. The gauge hierarchy problem

    International Nuclear Information System (INIS)

    Natale, A.A.; Shellard, R.C.

    1981-01-01

    The problem of gauge hierarchy in Grand Unified Theories is discussed using a toy model with O(N) symmetry. It is shown that there is no escape from the unnatural adjustment of coupling constants, made only after the computation of several orders in perturbation theory is performed. The propositions of some authors on ways to overcome the gauge hierarchy problem are commented on. (Author) [pt

  15. Problems of Forecast

    OpenAIRE

    Kucharavy , Dmitry; De Guio , Roland

    2005-01-01

    The ability to foresee future technology is a key task of Innovative Design. The paper focuses on the obstacles to reliable prediction of technological evolution for the purpose of Innovative Design. First, a brief analysis of the problems of existing forecasting methods is presented. The causes of the complexity of technology prediction are discussed in the context of reducing forecast errors. Second, using a contradiction analysis, a set of problems related to ...

  16. Inverse source problems in elastodynamics

    Science.gov (United States)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

    We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
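
    The Landweber iteration mentioned above has the generic form x_{k+1} = x_k + ω·Aᵀ(b − A·x_k); the minimal sketch below applies it to an arbitrary discretized linear forward operator A, which is a stand-in assumption rather than the paper's elastodynamic operator.

    ```python
    # Minimal sketch of a Landweber iteration for a linear inverse problem
    # A x = b, with step size chosen for guaranteed convergence.
    import numpy as np

    def landweber(A, b, n_iter=500):
        # Convergence requires 0 < w < 2 / ||A||^2 (spectral norm squared).
        w = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            # Gradient step on 0.5 * ||A x - b||^2
            x += w * A.T @ (b - A @ x)
        return x
    ```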

  17. Ecological problems of fuel reprocessing

    International Nuclear Information System (INIS)

    Huebschmann, W.G.

    1981-01-01

    The problem of the effects of a reprocessing plant on its environment lies in the amount of radioactivity handled and its longevity. Owing to the toxicity of the nuclides, extensive retention and filtering measures are necessary in order to keep the resulting radiation load in the surroundings within justified limits. Experience with the WAK proves that this radiation load can be reduced to values that are negligible compared with the natural one. The expected adaptation of radiation protection legislation to the latest recommendations of the ICRP will in addition allow more realistic estimates of the radiotoxicity of certain nuclides (Kr-85, I-129), i.e. at lower levels than hitherto. (orig./HP) [de

  18. Identification problems in linear transformation system

    International Nuclear Information System (INIS)

    Delforge, Jacques.

    1975-01-01

    An attempt was made to solve the theoretical and numerical difficulties involved in the identification problem relative to the linear part of P. Delattre's theory of transformation systems. The theoretical difficulties are due to the very important problem of the uniqueness of the solution, which must be demonstrated in order to justify the value of the solution found. Simple criteria have been found when measurements are possible on all the equivalence classes, but the problem remains imperfectly solved when certain evolution curves are unknown. The numerical difficulties are of two kinds: a slow convergence of iterative methods and a strong repercussion of numerical and experimental errors on the solution. In the former case a fast convergence was obtained by transformation of the parametric space, while in the latter it was possible, from sensitivity functions, to estimate the errors, to define and measure the conditioning of the identification problem then to minimize this conditioning as a function of the experimental conditions [fr

  19. H infinity Integrated Fault Estimation and Fault Tolerant Control of Discrete-time Piecewise Linear Systems

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Bak, Thomas

    2012-01-01

    In this paper we consider the problem of fault estimation and accommodation for discrete time piecewise linear systems. A robust fault estimator is designed to estimate the fault such that the estimation error converges to zero and H∞ performance of the fault estimation is minimized. Then, the es...

  20. LinvPy : a Python package for linear inverse problems

    OpenAIRE

    Beaud, Guillaume François Paul

    2016-01-01

    The goal of this project is to make a Python package including the tau-estimator algorithm to solve linear inverse problems. The package must be distributed, well documented, easy to use and easy to extend for future developers.

  1. A direct sampling method to an inverse medium scattering problem

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2012-01-01

    In this work we present a novel sampling method for time harmonic inverse medium scattering problems. It provides a simple tool to directly estimate the shape of the unknown scatterers (inhomogeneous media), and it is applicable even when

  2. Representational Change and Children's Numerical Estimation

    Science.gov (United States)

    Opfer, John E.; Siegler, Robert S.

    2007-01-01

    We applied overlapping waves theory and microgenetic methods to examine how children improve their estimation proficiency, and in particular how they shift from reliance on immature to mature representations of numerical magnitude. We also tested the theoretical prediction that feedback on problems on which the discrepancy between two…

  3. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace-based blind sparse channel estimation method using ℓ1–ℓ2 optimization, replacing the ℓ2-norm minimization in the conventional subspace-based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve...
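
    To illustrate the core substitution (an ℓ1-norm in place of the ℓ2-norm to promote sparsity), the sketch below solves a generic ℓ1-norm minimization under linear equality constraints via its standard linear-programming reformulation; the constraint matrix is a hypothetical stand-in, not the paper's subspace conditions.

    ```python
    # Minimal sketch: l1-norm minimization via the standard LP reformulation
    #   min sum(t)  s.t.  -t <= x <= t,  A x = b.
    import numpy as np
    from scipy.optimize import linprog

    def l1_min(A, b):
        m, n = A.shape
        c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(t)
        A_ub = np.block([[ np.eye(n), -np.eye(n)],      #  x - t <= 0
                         [-np.eye(n), -np.eye(n)]])     # -x - t <= 0
        b_ub = np.zeros(2 * n)
        A_eq = np.hstack([A, np.zeros((m, n))])         # enforce A x = b
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        return res.x[:n]

    # Example: the sparsest solution of an underdetermined system.
    A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
    b = np.array([1.0, 1.0])
    print(l1_min(A, b))  # close to [0, 1, 0]
    ```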

  4. Closed-Loop Surface Related Multiple Estimation

    NARCIS (Netherlands)

    Lopez Angarita, G.A.

    2016-01-01

    Surface-related multiple elimination (SRME) is one of the most commonly used methods for suppressing surface multiples. However, in order to obtain an accurate surface multiple estimation, dense source and receiver sampling is required. The traditional approach to this problem is performing data

  5. Better Size Estimation for Sparse Matrix Products

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen; Campagna, Andrea; Pagh, Rasmus

    2010-01-01

    We consider the problem of doing fast and reliable estimation of the number of non-zero entries in a sparse Boolean matrix product. Let n denote the total number of non-zero entries in the input matrices. We show how to compute a 1 ± ε approximation (with small probability of error) in expected t...

  6. Wave Velocity Estimation in Heterogeneous Media

    KAUST Repository

    Asiri, Sharefa M.

    2016-03-21

    In this paper, a modulating functions-based method is proposed for estimating the space-time-dependent unknown velocity in the wave equation. The proposed method reduces the identification problem to a system of linear algebraic equations. Numerical simulations on noise-free and noisy cases are provided in order to show the effectiveness of the proposed method.

  7. Methods for risk estimation in nuclear energy

    Energy Technology Data Exchange (ETDEWEB)

    Gauvenet, A [CEA, 75 - Paris (France)

    1979-01-01

    The author presents methods for estimating the different risks related to nuclear energy: immediate or delayed risks, individual or collective risks, risks of accidents, and long-term risks. These methods have reached a high level of development, and their application to other industrial or human problems is currently under way, especially in English-speaking countries.

  8. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimating the reliability of a component modelled by a limit state function including noise or first-order discontinuities are considered. A gradient-free adaptive response surface algorithm is developed. The algorithm applies second-order polynomial surfaces...

  9. Solving Math Problems Approximately: A Developmental Perspective.

    Directory of Open Access Journals (Sweden)

    Dana Ganor-Stern

    Full Text Available Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate, as computational estimation is needed in many circumstances in daily life. The present study examined 4th graders', 6th graders' and adults' ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense-of-magnitude strategy, which does not involve any calculation but relies mainly on an intuitive, coarse sense of magnitude, while the adults used the approximated calculation strategy, which involves rounding and multiplication procedures and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far from (vs. close to) it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow children's estimation skills to be used in an effective manner.
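
    As a toy illustration of the approximated calculation strategy described above (rounding followed by multiplication), the sketch below rounds each operand to its leading digit before multiplying and compares the estimate with a reference number; the specific rounding rule is an assumption, not taken from the study.

    ```python
    # Toy sketch of an "approximated calculation" strategy: round each
    # operand to its leading digit, multiply, then compare the estimate
    # with a reference number.
    def round_to_leading_digit(n):
        magnitude = 10 ** (len(str(abs(int(n)))) - 1)
        return round(n / magnitude) * magnitude

    def estimate_exceeds(a, b, reference):
        """Does the rounded product a*b exceed the reference number?"""
        return round_to_leading_digit(a) * round_to_leading_digit(b) > reference

    # Example: 43 * 78 -> 40 * 80 = 3200, compared with reference 2500.
    print(estimate_exceeds(43, 78, 2500))  # True
    ```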

  10. The Elder Problem

    Directory of Open Access Journals (Sweden)

    John W. Elder

    2017-03-01

    Full Text Available This paper presents an autobiographical and biographical historical account of the genesis, evolution and resolution of the Elder Problem. It begins with John W. Elder and his autobiographical story leading to his groundbreaking work on natural convection at Cambridge in the 1960s. His seminal work published in the Journal of Fluid Mechanics in 1967 became the basis for the modern benchmark of variable-density flow simulators that we know today as “The Elder Problem”. There have been well-known and major challenges with the Elder Problem model benchmark, notably the multiple solutions that were ultimately uncovered using different numerical models. Most recently, it has been shown that the multiple solutions are indeed physically realistic bifurcation solutions to the Elder Problem and not numerically spurious artefacts. The quandary of the Elder Problem has now been solved, a major scientific breakthrough for fluid mechanics and for numerical modelling. This paper, a collection of records, reflections, reminiscences, stories and anecdotes, is a historical autobiographical and biographical memoir. It is the personal story of the Elder Problem told by some of the key scientists who established and solved it. 2017 marks the 50th anniversary of the classical work by John W. Elder published in the Journal of Fluid Mechanics in 1967, which set the stage for this scientific story spanning some five decades. This paper is a celebration and commemoration of the life and times of John W. Elder, the problem named in his honour, and some of the key scientists who worked on, and ultimately solved, it.

  11. Adjusting estimative prediction limits

    OpenAIRE

    Masao Ueki; Kaoru Fueda

    2007-01-01

    This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.

  12. The influence of different error estimates in the detection of postoperative cognitive dysfunction using reliable change indices with correction for practice effects.

    Science.gov (United States)

    Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A

    2007-02-01

    The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that has no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively, using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients; however, the within-subject standard deviation (WSD), which expresses the effects of random error, is a theoretically appropriate denominator when a constant correction for practice, which removes the effects of systematic error, is deducted from the numerator of an RCI.
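
    A minimal sketch of the general form of a practice-adjusted RCI follows; the choice of error term (for example the controls' WSD) differs across the four rules compared in the paper, so it is left as a parameter, and the function name and example values are hypothetical.

    ```python
    # Minimal sketch of a practice-adjusted reliable change index (RCI):
    # the change score minus the controls' mean practice effect, divided by
    # an error estimate (e.g. one based on the within-subject standard
    # deviation, WSD). The exact error term differs across RCI rules.
    def rci(pre, post, practice_effect, error_term):
        return (post - pre - practice_effect) / error_term

    # Example with hypothetical values: a value beyond -1.96 might flag
    # reliable decline on a test where higher scores are better.
    print(rci(pre=50.0, post=46.0, practice_effect=2.0, error_term=3.0))  # -2.0
    ```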

  13. The seismic reflection inverse problem

    International Nuclear Information System (INIS)

    Symes, W W

    2009-01-01

    The seismic reflection method seeks to extract maps of the Earth's sedimentary crust from transient near-surface recording of echoes, stimulated by explosions or other controlled sound sources positioned near the surface. Reasonably accurate models of seismic energy propagation take the form of hyperbolic systems of partial differential equations, in which the coefficients represent the spatial distribution of various mechanical characteristics of rock (density, stiffness, etc). Thus the fundamental problem of reflection seismology is an inverse problem in partial differential equations: to find the coefficients (or at least some of their properties) of a linear hyperbolic system, given the values of a family of solutions in some part of their domains. The exploration geophysics community has developed various methods for estimating the Earth's structure from seismic data and is also well aware of the inverse point of view. This article reviews mathematical developments in this subject over the last 25 years, to show how the mathematics has both illuminated innovations of practitioners and led to new directions in practice. Two themes naturally emerge: the importance of single scattering dominance and compensation for spectral incompleteness by spatial redundancy. (topical review)

  14. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated in different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
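
    As a small illustration of point (2), the sketch below pools the within-item variance of replicate measurements to estimate a random error variance; the data layout and values are hypothetical examples, not the session's worked material.

    ```python
    # Minimal sketch: pooled within-item variance from replicate
    # measurements as an estimate of the random error variance.
    import numpy as np

    def random_error_variance(replicates):
        """replicates: list of 1-D arrays, one per item measured repeatedly."""
        groups = [np.asarray(r, dtype=float) for r in replicates]
        ss = sum(((g - g.mean()) ** 2).sum() for g in groups)   # within-item SS
        dof = sum(len(g) - 1 for g in groups)                   # pooled d.o.f.
        return ss / dof

    # Two items, measured three and two times respectively.
    print(random_error_variance([[10.1, 9.9, 10.0], [20.3, 20.1]]))
    ```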

  15. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  16. Stability Analysis of Discontinuous Galerkin Approximations to the Elastodynamics Problem

    KAUST Repository

    Antonietti, Paola F.; Ayuso de Dios, Blanca; Mazzieri, Ilario; Quarteroni, Alfio

    2015-01-01

    We consider semi-discrete discontinuous Galerkin approximations of both displacement and displacement-stress formulations of the elastodynamics problem. We prove stability in the natural energy norm and derive optimal a priori error estimates. For the displacement-stress formulation, schemes preserving the total energy of the system are introduced and discussed. We verify our theoretical estimates on two- and three-dimensional test problems.

  18. Statistically Efficient Methods for Pitch and DOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods...

  19. Robust Covariance Estimators Based on Information Divergences and Riemannian Manifold

    Directory of Open Access Journals (Sweden)

    Xiaoqiang Hua

    2018-03-01

    Full Text Available This paper proposes a class of covariance estimators based on information divergences in heterogeneous environments. In particular, the problem of covariance estimation is reformulated on the Riemannian manifold of Hermitian positive-definite (HPD matrices. The means associated with information divergences are derived and used as the estimators. Without resorting to the complete knowledge of the probability distribution of the sample data, the geometry of the Riemannian manifold of HPD matrices is considered in mean estimators. Moreover, the robustness of mean estimators is analyzed using the influence function. Simulation results indicate the robustness and superiority of an adaptive normalized matched filter with our proposed estimators compared with the existing alternatives.
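
    As one concrete example of a mean on the manifold of HPD matrices, the sketch below implements the classical fixed-point iteration for the Riemannian (Karcher) mean under the affine-invariant metric; the paper derives means for several information divergences, so this is a representative illustration rather than the authors' estimator.

    ```python
    # Minimal sketch: fixed-point iteration for the Riemannian (Karcher)
    # mean of HPD matrices under the affine-invariant metric.
    import numpy as np
    from scipy.linalg import expm, logm, sqrtm, inv

    def karcher_mean(covs, n_iter=50, tol=1e-10):
        M = sum(covs) / len(covs)           # start at the arithmetic mean
        for _ in range(n_iter):
            M_half = sqrtm(M)
            M_ihalf = inv(M_half)
            # Average the matrix logs in the tangent space at the current mean.
            T = sum(logm(M_ihalf @ C @ M_ihalf) for C in covs) / len(covs)
            M = M_half @ expm(T) @ M_half   # map back to the manifold
            if np.linalg.norm(T) < tol:
                break
        return M
    ```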

  20. Indoor air problems in Asia

    International Nuclear Information System (INIS)

    Leslie, G.B.

    1995-01-01

    Respiratory disease and mortality due to indoor air pollution are amongst the greatest environmental threats to health in the developing countries of Asia. World-wide, acute respiratory infection is the cause of death of at least 5 million children under the age of 5 every year. The World Bank has claimed that smoke from biomass fuels results in an estimated 4 million deaths annually amongst infants and children. Most of these deaths occur in developing countries. Combustion in its various forms must head the list of pollution sources in Asia. Combustion of various fuels for domestic heating, lighting and cooking comprises the major source of internally generated pollutants, and combustion in industrial plants, power generation and transportation is the major cause of externally generated pollutants. The products of pyrolysis and combustion include many compounds with well-known adverse health effects. These include gases such as CO, CO2, NOx and SO2, volatile organic compounds such as polynuclear aromatic hydrocarbons and nitrosamines, as well as respirable particulates of variable composition. The nature and magnitude of the health risks posed by these materials vary with season, climate, location, housing, method of ventilation, culture and socio-economic status. The most important cause of lung cancer in non-smokers in Northern Asia is the domestic combustion of smoky coal. Acute carbon monoxide poisoning is common in many Asian countries. Road traffic exhaust pollution is worse in the major cities of South East Asia than almost anywhere else in the world, and this externally generated air pollution forms the indoor air for the urban poor. Despite all these major problems there has been a tendency for international agencies to focus attention and resources on the more trivial problems of indoor air encountered in the affluent countries of the West. Regulatory agencies in Asia have been too frequently persuaded that their problems of indoor air pollution are