WorldWideScience

Sample records for estimation theoretic approach

  1. A theoretical approach to the problem of dose-volume constraint estimation and their impact on the dose-volume histogram selection

    International Nuclear Information System (INIS)

    Schinkel, Colleen; Stavrev, Pavel; Stavreva, Nadia; Fallone, B. Gino

    2006-01-01

    This paper outlines a theoretical approach to the problem of estimating and choosing dose-volume constraints. Following this approach, a method of choosing dose-volume constraints based on biological criteria is proposed. This method is called ''reverse normal tissue complication probability (NTCP) mapping into dose-volume space'' and may be used as a general guidance to the problem of dose-volume constraint estimation. Dose-volume histograms (DVHs) are randomly simulated, and those resulting in clinically acceptable levels of complication, such as NTCP of 5±0.5%, are selected and averaged producing a mean DVH that is proven to result in the same level of NTCP. The points from the averaged DVH are proposed to serve as physical dose-volume constraints. The population-based critical volume and Lyman NTCP models with parameter sets taken from literature sources were used for the NTCP estimation. The impact of the prescribed value of the maximum dose to the organ, D max , on the averaged DVH and the dose-volume constraint points is investigated. Constraint points for 16 organs are calculated. The impact of the number of constraints to be fulfilled based on the likelihood that a DVH satisfying them will result in an acceptable NTCP is also investigated. It is theoretically proven that the radiation treatment optimization based on physical objective functions can sufficiently well restrict the dose to the organs at risk, resulting in sufficiently low NTCP values through the employment of several appropriate dose-volume constraints. At the same time, the pure physical approach to optimization is self-restrictive due to the preassignment of acceptable NTCP levels thus excluding possible better solutions to the problem

  2. Impact of a financial risk-sharing scheme on budget-impact estimations: a game-theoretic approach.

    Science.gov (United States)

    Gavious, Arieh; Greenberg, Dan; Hammerman, Ariel; Segev, Ella

    2014-06-01

    As part of the process of updating the National List of Health Services in Israel, health plans (the 'payers') and manufacturers each provide estimates on the expected number of patients that will utilize a new drug. Currently, payers face major financial consequences when actual utilization is higher than the allocated budget. We suggest a risk-sharing model between the two stakeholders; if the actual number of patients exceeds the manufacturer's prediction, the manufacturer will reimburse the payers by a rebate rate of α from the deficit. In case of under-utilization, payers will refund the government at a rate of γ from the surplus budget. Our study objective was to identify the optimal early estimations of both 'players' prior to and after implementation of the risk-sharing scheme. Using a game-theoretic approach, in which both players' statements are considered simultaneously, we examined the impact of risk-sharing within a given range of rebate proportions, on players' early budget estimations. When increasing manufacturer's rebate α to be over 50 %, then manufacturers will announce a larger number, and health plans will announce a lower number of patients than they would without risk sharing, thus substantially decreasing the gap between their estimates. Increasing γ changes players' estimates only slightly. In reaction to applying a substantial risk-sharing rebate α on the manufacturer, both players are expected to adjust their budget estimates toward an optimal equilibrium. Increasing α is a better vehicle for reaching the desired equilibrium rather than increasing γ, as the manufacturer's rebate α substantially influences both players, whereas γ has little effect on the players behavior.

  3. Validation by theoretical approach to the experimental estimation of efficiency for gamma spectrometry of gas in 100 ml standard flask

    International Nuclear Information System (INIS)

    Mohan, V.; Chudalayandi, K.; Sundaram, M.; Krishnamony, S.

    1996-01-01

    Estimation of gaseous activity forms an important component of air monitoring at Madras Atomic Power Station (MAPS). The gases of importance are argon 41 an air activation product and fission product noble gas xenon 133. For estimating the concentration, the experimental method is used in which a grab sample is collected in a 100 ml volumetric standard flask. The activity of gas is then computed by gamma spectrometry using a predetermined efficiency estimated experimentally. An attempt is made using theoretical approach to validate the experimental method of efficiency estimation. Two analytical models named relative flux model and absolute activity model were developed independently of each other. Attention is focussed on the efficiencies for 41 Ar and 133 Xe. Results show that the present method of sampling and analysis using 100 ml volumetric flask is adequate and acceptable. (author). 5 refs., 2 tabs

  4. A Theoretical Approach

    African Journals Online (AJOL)

    NICO

    L-rhamnose and L-fucose: A Theoretical Approach ... L-ramnose and L-fucose, by means of the Monte Carlo conformational search method. The energy of the conformers ..... which indicates an increased probability for the occurrence of.

  5. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing...... these criticisms by incorporating recent developments in configuration theory, in particular application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically...... methodological guidelines consisting of detailed procedures to systematically apply set theoretic approaches for maturity model research and provides demonstrations of it application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...

  6. The Theoretical and Empirical Approaches to the Definition of Audit Risk

    Directory of Open Access Journals (Sweden)

    Berezhniy Yevgeniy B.

    2017-12-01

    Full Text Available The risk category is one of the key factors in planning the audit and assessing its results. The article is aimed at generalizing the theoretical and empirical approaches to the definition of audit risk and methods of its reduction. The structure of audit risk was analyzed and it has been determined, that each of researchers approached to structuring of audit risk from the subjective point of view. The author’s own model of audit risk has been proposed. The basic methods of assessment of audit risk are generalized, the theoretical and empirical approaches to its definition are allocated, also it is noted, that application of any of the given models can be suitable rather for approximate estimation, than for exact calculation of an audit risk, as it is accompanied by certain shortcomings.

  7. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information This leads to Akaike's Information Criterion (AIC) and various extensions and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...

  8. Theoretical approaches to elections defining

    OpenAIRE

    Natalya V. Lebedeva

    2011-01-01

    Theoretical approaches to elections defining develop the nature, essence and content of elections, help to determine their place and a role as one of the major national law institutions in democratic system.

  9. Theoretical approaches to elections defining

    Directory of Open Access Journals (Sweden)

    Natalya V. Lebedeva

    2011-01-01

    Full Text Available Theoretical approaches to elections defining develop the nature, essence and content of elections, help to determine their place and a role as one of the major national law institutions in democratic system.

  10. Game theoretic approaches for spectrum redistribution

    CERN Document Server

    Wu, Fan

    2014-01-01

    This brief examines issues of spectrum allocation for the limited resources of radio spectrum. It uses a game-theoretic perspective, in which the nodes in the wireless network are rational and always pursue their own objectives. It provides a systematic study of the approaches that can guarantee the system's convergence at an equilibrium state, in which the system performance is optimal or sub-optimal. The author provides a short tutorial on game theory, explains game-theoretic channel allocation in clique and in multi-hop wireless networks and explores challenges in designing game-theoretic m

  11. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Directory of Open Access Journals (Sweden)

    Weiqiang Pan

    2015-03-01

    Full Text Available In this paper we proposed a dada-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicate, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only flat-fading channel. First, based on the training symbols the theoretical received sequence is composed. Next the least square principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least square problem. The proposed approach involves an outer loop and inner loop, which resolve the channel gain and Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB of estimation is also derived. Computer simulations results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  12. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Science.gov (United States)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we proposed a dada-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicate, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only flat-fading channel. First, based on the training symbols the theoretical received sequence is composed. Next the least square principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least square problem. The proposed approach involves an outer loop and inner loop, which resolve the channel gain and Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of estimation is also derived. Computer simulations results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  13. Theoretical Approaches to Political Communication.

    Science.gov (United States)

    Chesebro, James W.

    Political communication appears to be emerging as a theoretical and methodological academic area of research within both speech-communication and political science. Five complimentary approaches to political science (Machiavellian, iconic, ritualistic, confirmational, and dramatistic) may be viewed as a series of variations which emphasize the…

  14. UNCERTAINTY IN NEOCLASSICAL AND KEYNESIAN THEORETICAL APPROACHES: A BEHAVIOURAL PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Sinziana BALTATESCU

    2015-11-01

    Full Text Available The ”mainstream” neoclassical assumptions about human economic behavior are currently challenged by both behavioural researches on human behaviour and other theoretical approaches which, in the context of the recent economic and financial crisis find arguments to reinforce their theoretical statements. The neoclassical “perfect rationality” assumption is most criticized and provokes the mainstream theoretical approach to efforts of revisiting the theoretical framework in order to re-state the economic models validity. Uncertainty seems, in this context, to be the concept that allows other theoretical approaches to take into consideration a more realistic individual from the psychological perspective. This paper is trying to present a comparison between the neoclassical and Keynesian approach of the uncertainty, considering the behavioural arguments and challenges addressed to the mainstream theory.

  15. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration......Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models...

  16. Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium

    Energy Technology Data Exchange (ETDEWEB)

    M Weimar

    1998-12-10

    This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for and estimate of the level of savings that can be" obtained from a fixed-priced contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-priced contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per metric ton of glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal that indicates,that the current approach is still better than the alternative.

  17. Theoretical Approaches to Coping

    Directory of Open Access Journals (Sweden)

    Sofia Zyga

    2013-01-01

    Full Text Available Introduction: Dealing with stress requires conscious effort, it cannot be perceived as equal to individual's spontaneous reactions. The intentional management of stress must not be confused withdefense mechanisms. Coping differs from adjustment in that the latter is more general, has a broader meaning and includes diverse ways of facing a difficulty.Aim: An exploration of the definition of the term "coping", the function of the coping process as well as its differentiation from other similar meanings through a literature review.Methodology: Three theoretical approaches of coping are introduced; the psychoanalytic approach; approaching by characteristics; and the Lazarus and Folkman interactive model.Results: The strategic methods of the coping approaches are described and the article ends with a review of the approaches including the functioning of the stress-coping process , the classificationtypes of coping strategies in stress-inducing situations and with a criticism of coping approaches.Conclusions: The comparison of coping in different situations is difficult, if not impossible. The coping process is a slow process, so an individual may select one method of coping under one set ofcircumstances and a different strategy at some other time. Such selection of strategies takes place as the situation changes.

  18. One-dimensional barcode reading: an information theoretic approach

    Science.gov (United States)

    Houni, Karim; Sawaya, Wadih; Delignon, Yves

    2008-03-01

    In the convergence context of identification technology and information-data transmission, the barcode found its place as the simplest and the most pervasive solution for new uses, especially within mobile commerce, bringing youth to this long-lived technology. From a communication theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.

  19. The dynamics of alliances. A game theoretical approach

    NARCIS (Netherlands)

    Ridder, A. de

    2007-01-01

    In this dissertation, Annelies de Ridder presents a game theoretical approach to strategic alliances. More specifically, the dynamics of and within alliances have been studied. To do so, four new models have been developed in the game theoretical tradition. Both coalition theory and strategic game

  20. Potential benefits of remote sensing: Theoretical framework and empirical estimate

    Science.gov (United States)

    Eisgruber, L. M.

    1972-01-01

    A theoretical framwork is outlined for estimating social returns from research and application of remote sensing. The approximate dollar magnitude is given of a particular application of remote sensing, namely estimates of corn production, soybeans, and wheat. Finally, some comments are made on the limitations of this procedure and on the implications of results.

  1. Theoretical approaches to social innovation – A critical literature review

    NARCIS (Netherlands)

    Butzin, A.; Davis, A.; Domanski, D.; Dhondt, S.; Howaldt, J.; Kaletka, C.; Kesselring, A.; Kopp, R.; Millard, J.; Oeij, P.; Rehfeld, D.; Schaper-Rinkel, P.; Schwartz, M.; Scoppetta, A.; Wagner-Luptacik, P.; Weber, M.

    2014-01-01

    The SI-DRIVE report “Theoretical approaches to Social Innovation – A Critical Literature Review” delivers a comprehensive overview on the state of the art of theoretically relevant building blocks for advancing a theoretical understanding of social innovation. It collects different theoretical

  2. Radiotherapy problem under fuzzy theoretic approach

    International Nuclear Information System (INIS)

    Ammar, E.E.; Hussein, M.L.

    2003-01-01

    A fuzzy set theoretic approach is used for radiotherapy problem. The problem is faced with two goals: the first is to maximize the fraction of surviving normal cells and the second is to minimize the fraction of surviving tumor cells. The theory of fuzzy sets has been employed to formulate and solve the problem. A linguistic variable approach is used for treating the first goal. The solutions obtained by the modified approach are always efficient and best compromise. A sensitivity analysis of the solutions to the differential weights is given

  3. An observer-theoretic approach to estimating neutron flux distribution

    International Nuclear Information System (INIS)

    Park, Young Ho; Cho, Nam Zin

    1989-01-01

    State feedback control provides many advantages such as stabilization and improved transient response. However, when the state feedback control is considered for spatial control of a nuclear reactor, it requires complete knowledge of the distributions of the system state variables. This paper describes a method for estimating the flux spatial distribution using only limited flux measurements. It is based on the Luenberger observer in control theory, extended to the distributed parameter systems such as the space-time reactor dynamics equation. The results of the application of the method to simple reactor models showed that the flux distribution is estimated by the observer very efficiently using information from only a few sensors

  4. Online adaptive approach for a game-theoretic strategy for complete vehicle energy management

    NARCIS (Netherlands)

    Chen, H.; Kessels, J.T.B.A.; Weiland, S.

    2015-01-01

    This paper introduces an adaptive approach for a game-theoretic strategy on Complete Vehicle Energy Management. The proposed method enhances the game-theoretic approach such that the strategy is able to adapt to real driving behavior. The classical game-theoretic approach relies on one probability

  5. A gauge-theoretic approach to gravity.

    Science.gov (United States)

    Krasnov, Kirill

    2012-08-08

    Einstein's general relativity (GR) is a dynamical theory of the space-time metric. We describe an approach in which GR becomes an SU(2) gauge theory. We start at the linearized level and show how a gauge-theoretic Lagrangian for non-interacting massless spin two particles (gravitons) takes a much more simple and compact form than in the standard metric description. Moreover, in contrast to the GR situation, the gauge theory Lagrangian is convex. We then proceed with a formulation of the full nonlinear theory. The equivalence to the metric-based GR holds only at the level of solutions of the field equations, that is, on-shell. The gauge-theoretic approach also makes it clear that GR is not the only interacting theory of massless spin two particles, in spite of the GR uniqueness theorems available in the metric description. Thus, there is an infinite-parameter class of gravity theories all describing just two propagating polarizations of the graviton. We describe how matter can be coupled to gravity in this formulation and, in particular, how both the gravity and Yang-Mills arise as sectors of a general diffeomorphism-invariant gauge theory. We finish by outlining a possible scenario of the ultraviolet completion of quantum gravity within this approach.

  6. The person-oriented approach: A short theoretical and practical guide

    Directory of Open Access Journals (Sweden)

    Lars R. Bergman

    2014-05-01

    Full Text Available A short overview of the person-oriented approach is given as a guide to the researcher interested in carrying out person-oriented research. Theoretical, methodological, and practical considerations of the approach are discussed. First, some historical roots are traced, followed by a description of the holisticinteractionistic research paradigm, which provided the general framework for the development of the modern person-oriented approach. The approach has both a theoretical and a methodological facet and after presenting its key theoretical tenets, an overview is given of some common person-oriented methods. Central to the person-oriented approach is a system view with its components together forming a pattern regarded as indivisible. This pattern should be understood and studied as a whole, not broken up into pieces (variables that are studied as separate entities. Hence, usually methodological tools are used by which whole patterns are analysed (e.g. cluster analysis. An empirical example is given where the pattern development of school grades is studied.

  7. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality

    NARCIS (Netherlands)

    L. Waltman (Ludo)

    2011-01-01

    textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic

  8. A theoretical signal processing framework for linear diffusion MRI: Implications for parameter estimation and experiment design.

    Science.gov (United States)

    Varadarajan, Divya; Haldar, Justin P

    2017-11-01

    The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. SOCIOLOGICAL UNDERSTANDING OF INTERNET: THEORETICAL APPROACHES TO THE NETWORK ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. E. Dobrinskaya

    2016-01-01

    Full Text Available The network is an efficient way of social structure analysis for contemporary sociologists. It gives broad opportunities for detailed and fruitful research of different patterns of ties and social relations by quantitative analytical methods and visualization of network models. The network metaphor is used as the most representative tool for description of a new type of society. This new type is characterized by flexibility, decentralization and individualization. Network organizational form became the dominant form in modern societies. The network is also used as a mode of inquiry. Actually three theoretical network approaches in the Internet research case are the most relevant: social network analysis, “network society” theory and actor-network theory. Every theoretical approach has got its own notion of network. Their special methodological and theoretical features contribute to the Internet studies in different ways. The article represents a brief overview of these network approaches. This overview demonstrates the absence of a unified semantic space of the notion of “network” category. This fact, in turn, points out the need for detailed analysis of these approaches to reveal their theoretical and empirical possibilities in application to the Internet studies. 

  10. Preservation of Newspapers: Theoretical Approaches and Practical Achievements

    Science.gov (United States)

    Hasenay, Damir; Krtalic, Maja

    2010-01-01

    The preservation of newspapers is the main topic of this paper. A theoretical overview of newspaper preservation is given, with an emphasis on the importance of a systematic and comprehensive approach. Efficient newspaper preservation implies understanding the meaning of preservation in general, as well as understanding specific approaches,…

  11. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly

  12. Dramaturgical and Music-Theoretical Approaches to Improvisation Pedagogy

    Science.gov (United States)

    Huovinen, Erkki; Tenkanen, Atte; Kuusinen, Vesa-Pekka

    2011-01-01

    The aim of this article is to assess the relative merits of two approaches to teaching musical improvisation: a music-theoretical approach, focusing on chords and scales, and a "dramaturgical" one, emphasizing questions of balance, variation and tension. Adult students of music pedagogy, with limited previous experience in improvisation,…

  13. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    Science.gov (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with

  14. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf and, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, as analytical, Monte-Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, external and internal absorbers. Open questions for further investigations are indicated. Finally, the authors refer on some special issues and applications, such as bow-tie filter, offset detector, truncated data, and dual-source CT.

  15. New Theoretical Approach Integrated Education and Technology

    Science.gov (United States)

    Ding, Gang

    2010-01-01

    The paper focuses on exploring new theoretical approach in education with development of online learning technology, from e-learning to u-learning and virtual reality technology, and points out possibilities such as constructing a new teaching ecological system, ubiquitous educational awareness with ubiquitous technology, and changing the…

  16. BEHAVIORAL INPUTS TO THE THEORETICAL APPROACH OF THE ECONOMIC CRISIS

    Directory of Open Access Journals (Sweden)

    Sinziana BALTATESCU

    2015-09-01

    Full Text Available The current economic and financial crisis gave room for the theoretical debates to reemerge. The economic reality challenged the mainstream neoclassical approach leaving the opportunity for the Austrian School, Post Keynesianism or Institutionalists to bring in front theories that seem to better explain the economic crisis and thus, leaving space for more efficient economic policies to result. In this context, the main assumptions of the mainstream theoretical approach are challenged and reevaluated, behavioral economics is one of the main challengers. Without developing in an integrated school of thought yet, behavioral economics brings new elements within the framework of economic thinking. How are the main theoretical approaches integrating these new elements and whether this process is going to narrow the theory or enrich it to be more comprehensive are questions to which this paper tries to answer, or, at least, to leave room for an answer.

  17. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    Science.gov (United States)

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing prediction vice observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent to the experiment, model, and numerics. A geometry agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone, impedance-tube method, and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess sensitivity of the method to nuisance parameters.

  18. Partial discharge transients: The field theoretical approach

    DEFF Research Database (Denmark)

    McAllister, Iain Wilson; Crichton, George C

    1998-01-01

    Up until the mid-1980s the theory of partial discharge transients was essentially static. This situation had arisen because of the fixation with the concept of void capacitance and the use of circuit theory to address what is in essence a field problem. Pedersen rejected this approach and instead...... began to apply field theory to the problem of partial discharge transients. In the present paper, the contributions of Pedersen using the field theoretical approach will be reviewed and discussed....

  19. Theoretical and experimental estimates of the Peierls stress

    CSIR Research Space (South Africa)

    Nabarro, FRN

    1997-03-01

    Full Text Available - sidered in its original derivation. It is argued that the conditions of each type of experiment determine whether the P-N or the H formula is appropriate. ? 2. THEORETICAL Peierls's original estimate was based on a simple cubic lattice... with elastic isotropy and Poisson's ratio v. The result was (T z 20p exp [-47r/( 1 - v)]. (1) This value is so small that a detailed discussion of its accuracy would be point- Nabarro (1947) corrected an algebraic error in Peierls's calculation...

  20. Twistor-theoretic approach to topological field theories

    International Nuclear Information System (INIS)

    Ito, Kei.

    1991-12-01

    The two-dimensional topological field theory which describes a four-dimensional self-dual space-time (gravitational instanton) as a target space, which we constructed before, is shown to be deeply connected with Penrose's 'twistor theory'. The relations are presented in detail. Thus our theory offers a 'twistor theoretic' approach to topological field theories. (author)

  1. Child education and management: theoretical approaches on legislation

    Directory of Open Access Journals (Sweden)

    Rúbia Borges

    2017-11-01

    Full Text Available The aim of this work was to investigate theoretical approaches regarding to daycare centers and management, considering childhood education for different audiences, such children and babies on the childhood perspective. On qualitative approach, this research is bibliographical and reflects on official documents about the theme. The development of this research occurred through analysis on educational Brazilian laws, starting by the Federal Constitution (FC, Law of Guidelines and Bases for National Education (LGB, National Curriculum Guidelines and the Education National Plan (ENP. The results point to a generalist legislation that allow certain autonomy on the education. However, there is the need to deepen theoretical and practical studies on the reality of institutions which have the education as the paramount purpose, in order to offer education with quality and attending to the needs from the audience in these institutions.

  2. An Information-Theoretic Approach to PMU Placement in Electric Power Systems

    OpenAIRE

    Li, Qiao; Cui, Tao; Weng, Yang; Negi, Rohit; Franchetti, Franz; Ilic, Marija D.

    2012-01-01

    This paper presents an information-theoretic approach to address the phasor measurement unit (PMU) placement problem in electric power systems. Different from the conventional 'topological observability' based approaches, this paper advocates a much more refined, information-theoretic criterion, namely the mutual information (MI) between the PMU measurements and the power system states. The proposed MI criterion can not only include the full system observability as a special case, but also ca...

  3. A queer-theoretical approach to community health psychology.

    Science.gov (United States)

    Easpaig, Bróna R Nic Giolla; Fryer, David M; Linn, Seònaid E; Humphrey, Rhianna H

    2014-01-01

    Queer-theoretical resources offer ways of productively rethinking how central concepts such as 'person-context', 'identity' and 'difference' may be understood for community health psychologists. This would require going beyond consideration of the problems with which queer theory is popularly associated to cautiously engage with the aspects of this work relevant to the promotion of collective practice and engaging with processes of marginalisation. In this article, we will draw upon and illustrate the queer-theoretical concepts of 'performativity' and 'cultural intelligibility' before moving towards a preliminary mapping of what a queer-informed approach to community health psychology might involve.

  4. Theoretical Approaches in Evolutionary Ecology: Environmental Feedback as a Unifying Perspective.

    Science.gov (United States)

    Lion, Sébastien

    2018-01-01

    Evolutionary biology and ecology have a strong theoretical underpinning, and this has fostered a variety of modeling approaches. A major challenge of this theoretical work has been to unravel the tangled feedback loop between ecology and evolution. This has prompted the development of two main classes of models. While quantitative genetics models jointly consider the ecological and evolutionary dynamics of a focal population, a separation of timescales between ecology and evolution is assumed by evolutionary game theory, adaptive dynamics, and inclusive fitness theory. As a result, theoretical evolutionary ecology tends to be divided among different schools of thought, with different toolboxes and motivations. My aim in this synthesis is to highlight the connections between these different approaches and clarify the current state of theory in evolutionary ecology. Central to this approach is to make explicit the dependence on environmental dynamics of the population and evolutionary dynamics, thereby materializing the eco-evolutionary feedback loop. This perspective sheds light on the interplay between environmental feedback and the timescales of ecological and evolutionary processes. I conclude by discussing some potential extensions and challenges to our current theoretical understanding of eco-evolutionary dynamics.

  5. A relevance theoretic approach to intertextuality in print advertising

    African Journals Online (AJOL)

    Anonymous vs. acknowledged intertexts: A relevance theoretic approach to intertextuality in print advertising. ... make intertextual references to texts from mass media genres other than advertising as part of an ... AJOL African Journals Online.

  6. Systematic Approach for Decommissioning Planning and Estimating

    International Nuclear Information System (INIS)

    Dam, A. S.

    2002-01-01

    Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to the planning, estimating, and documenting the work. High quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by a experienced team of decommissioning planners, cost estimators, schedulers, and facility knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises

  7. Theoretical and Experimental Estimations of Volumetric Inductive Phase Shift in Breast Cancer Tissue

    Science.gov (United States)

    González, C. A.; Lozano, L. M.; Uscanga, M. C.; Silva, J. G.; Polo, S. M.

    2013-04-01

    Impedance measurements based on magnetic induction for breast cancer detection has been proposed in some studies. This study evaluates theoretical and experimentally the use of a non-invasive technique based on magnetic induction for detection of patho-physiological conditions in breast cancer tissue associated to its volumetric electrical conductivity changes through inductive phase shift measurements. An induction coils-breast 3D pixel model was designed and tested. The model involves two circular coils coaxially centered and a human breast volume centrally placed with respect to the coils. A time-harmonic numerical simulation study addressed the effects of frequency-dependent electrical properties of tumoral tissue on the volumetric inductive phase shift of the breast model measured with the circular coils as inductor and sensor elements. Experimentally; five female volunteer patients with infiltrating ductal carcinoma previously diagnosed by the radiology and oncology departments of the Specialty Clinic for Women of the Mexican Army were measured by an experimental inductive spectrometer and the use of an ergonomic inductor-sensor coil designed to estimate the volumetric inductive phase shift in human breast tissue. Theoretical and experimental inductive phase shift estimations were developed at four frequencies: 0.01, 0.1, 1 and 10 MHz. The theoretical estimations were qualitatively in agreement with the experimental findings. Important increments in volumetric inductive phase shift measurements were evident at 0.01MHz in theoretical and experimental observations. The results suggest that the tested technique has the potential to detect pathological conditions in breast tissue associated to cancer by non-invasive monitoring. Further complementary studies are warranted to confirm the observations.

  8. A new theoretical approach to adsorption desorption behavior of Ga on GaAs surfaces

    Science.gov (United States)

    Kangawa, Y.; Ito, T.; Taguchi, A.; Shiraishi, K.; Ohachi, T.

    2001-11-01

    We propose a new theoretical approach for studying adsorption-desorption behavior of atoms on semiconductor surfaces. The new theoretical approach based on the ab initio calculations incorporates the free energy of gas phase; therefore we can calculate how adsorption and desorption depends on growth temperature and beam equivalent pressure (BEP). The versatility of the new theoretical approach was confirmed by the calculation of Ga adsorption-desorption transition temperatures and transition BEPs on the GaAs(0 0 1)-(4×2)β2 Ga-rich surface. This new approach is feasible to predict how adsorption and desorption depend on the growth conditions.

  9. Field-theoretic approach to gravity in the flat space-time

    Energy Technology Data Exchange (ETDEWEB)

    Cavalleri, G [Centro Informazioni Studi Esperienze, Milan (Italy); Milan Univ. (Italy). Ist. di Fisica); Spinelli, G [Istituto di Matematica del Politecnico di Milano, Milano (Italy)

    1980-01-01

    In this paper it is discussed how the field-theoretical approach to gravity starting from the flat space-time is wider than the Einstein approach. The flat approach is able to predict the structure of the observable space as a consequence of the behaviour of the particle proper masses. The field equations are formally equal to Einstein's equations without the cosmological term.

  10. Theoretical and methodological approaches in discourse analysis.

    Science.gov (United States)

    Stevenson, Chris

    2004-01-01

    Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power Embodiment and materialism may be especially relevant for researchers of nursing where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a framework for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  11. Theoretical and methodological approaches in discourse analysis.

    Science.gov (United States)

    Stevenson, Chris

    2004-10-01

    Discourse analysis (DA) embodies two main approaches: Foucauldian DA and radical social constructionist DA. Both are underpinned by social constructionism to a lesser or greater extent. Social constructionism has contested areas in relation to power, embodiment, and materialism, although Foucauldian DA does focus on the issue of power. Embodiment and materialism may be especially relevant for researchers of nursing where the physical body is prominent. However, the contested nature of social constructionism allows a fusion of theoretical and methodological approaches tailored to a specific research interest. In this paper, Chris Stevenson suggests a frame- work for working out and declaring the DA approach to be taken in relation to a research area, as well as to aid anticipating methodological critique. Method, validity, reliability and scholarship are discussed from within a discourse analytic frame of reference.

  12. Rock mechanics site descriptive model-theoretical approach. Preliminary site description Forsmark area - version 1.2

    Energy Technology Data Exchange (ETDEWEB)

    Fredriksson, Anders; Olofsson, Isabelle [Golder Associates AB, Uppsala (Sweden)

    2005-12-15

    The present report summarises the theoretical approach to estimate the mechanical properties of the rock mass in relation to the Preliminary Site Descriptive Modelling, version 1.2 Forsmark. The theoretical approach is based on a discrete fracture network (DFN) description of the fracture system in the rock mass and on the results of mechanical testing of intact rock and on rock fractures. To estimate the mechanical properties of the rock mass a load test on a rock block with fractures is simulated with the numerical code 3DEC. The location and size of the fractures are given by DFN-realisations. The rock block was loaded in plain strain condition. From the calculated relationship between stresses and deformations the mechanical properties of the rock mass were determined. The influence of the geometrical properties of the fracture system on the mechanical properties of the rock mass was analysed by loading 20 blocks based on different DFN-realisations. The material properties of the intact rock and the fractures were kept constant. The properties are set equal to the mean value of each measured material property. The influence of the variation of the properties of the intact rock and variation of the mechanical properties of the fractures are estimated by analysing numerical load tests on one specific block (one DFN-realisation) with combinations of properties for intact rock and fractures. Each parameter varies from its lowest values to its highest values while the rest of the parameters are held constant, equal to the mean value. The resulting distribution was expressed as a variation around the value determined with mean values on all parameters. To estimate the resulting distribution of the mechanical properties of the rock mass a Monte-Carlo simulation was performed by generating values from the two distributions independent of each other. The two values were added and the statistical properties of the resulting distribution were determined.

  13. Rock mechanics site descriptive model-theoretical approach. Preliminary site description Forsmark area - version 1.2

    International Nuclear Information System (INIS)

    Fredriksson, Anders; Olofsson, Isabelle

    2005-12-01

    The present report summarises the theoretical approach to estimate the mechanical properties of the rock mass in relation to the Preliminary Site Descriptive Modelling, version 1.2 Forsmark. The theoretical approach is based on a discrete fracture network (DFN) description of the fracture system in the rock mass and on the results of mechanical testing of intact rock and on rock fractures. To estimate the mechanical properties of the rock mass a load test on a rock block with fractures is simulated with the numerical code 3DEC. The location and size of the fractures are given by DFN-realisations. The rock block was loaded in plain strain condition. From the calculated relationship between stresses and deformations the mechanical properties of the rock mass were determined. The influence of the geometrical properties of the fracture system on the mechanical properties of the rock mass was analysed by loading 20 blocks based on different DFN-realisations. The material properties of the intact rock and the fractures were kept constant. The properties are set equal to the mean value of each measured material property. The influence of the variation of the properties of the intact rock and variation of the mechanical properties of the fractures are estimated by analysing numerical load tests on one specific block (one DFN-realisation) with combinations of properties for intact rock and fractures. Each parameter varies from its lowest values to its highest values while the rest of the parameters are held constant, equal to the mean value. The resulting distribution was expressed as a variation around the value determined with mean values on all parameters. To estimate the resulting distribution of the mechanical properties of the rock mass a Monte-Carlo simulation was performed by generating values from the two distributions independent of each other. The two values were added and the statistical properties of the resulting distribution were determined

  14. An attempt of classification of theoretical approaches to national identity

    Directory of Open Access Journals (Sweden)

    Milošević-Đorđević Jasna S.

    2003-01-01

    Full Text Available It is compulsory that complex social concepts should be defined in different ways and approached from the perspective of different science disciplines. Therefore, it is difficult to precisely define them without overlapping of meaning with other similar concepts. This paper has made an attempt towards theoretical classification of the national identity and differentiate that concept in comparison to the other related concepts (race, ethnic group, nation, national background, authoritativeness, patriarchy. Theoretical assessments are classified into two groups: ones that are dealing with nature of national identity and others that are stating one or more dimensions of national identity, crucial for its determination. On the contrary to the primordialistic concept of national identity, describing it as a fundamental, deeply rooted human feature, there are many numerous contemporary theoretical approaches (instrumentalist, constructivist, functionalistic, emphasizing changeable, fluid, instrumentalist function of the national identity. Fundamental determinants of national identity are: language, culture (music, traditional myths, state symbols (territory, citizenship, self-categorization, religion, set of personal characteristics and values.

  15. A filtering approach to edge preserving MAP estimation of images.

    Science.gov (United States)

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a WSS Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and a means of determining the segmentation and refining it are described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and Inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.

  16. Modeling the economic impact of medication adherence in type 2 diabetes: a theoretical approach.

    Science.gov (United States)

    Cobden, David S; Niessen, Louis W; Rutten, Frans Fh; Redekop, W Ken

    2010-09-07

    While strong correlations exist between medication adherence and health economic outcomes in type 2 diabetes, current economic analyses do not adequately consider them. We propose a new approach to incorporate adherence in cost-effectiveness analysis. We describe a theoretical approach to incorporating the effect of adherence when estimating the long-term costs and effectiveness of an antidiabetic medication. This approach was applied in a Markov model which includes common diabetic health states. We compared two treatments using hypothetical patient cohorts: injectable insulin (IDM) and oral (OAD) medications. Two analyses were performed, one which ignored adherence (analysis 1) and one which incorporated it (analysis 2). Results from the two analyses were then compared to explore the extent to which adherence may impact incremental cost-effectiveness ratios. In both analyses, IDM was more costly and more effective than OAD. When adherence was ignored, IDM generated an incremental cost-effectiveness of $12,097 per quality-adjusted life-year (QALY) gained versus OAD. Incorporation of adherence resulted in a slightly higher ratio ($16,241/QALY). This increase was primarily due to better adherence with OAD than with IDM, and the higher direct medical costs for IDM. Incorporating medication adherence into economic analyses can meaningfully influence the estimated cost-effectiveness of type 2 diabetes treatments, and should therefore be considered in health care decision-making. Future work on the impact of adherence on health economic outcomes, and validation of different approaches to modeling adherence, is warranted.

  17. Multiple stakeholders in road pricing: A game theoretic approach

    NARCIS (Netherlands)

    Ohazulike, Anthony; Still, Georg J.; Kern, Walter; van Berkum, Eric C.; Hausken, Kjell; Zhuang, Jun

    2015-01-01

    We investigate a game theoretic approach as an alternative to the standard multi-objective optimization models for road pricing. Assuming that various, partly conflicting traffic externalities (congestion, air pollution, noise, safety, etcetera) are represented by corresponding players acting on a

  18. Consistency of extreme flood estimation approaches

    Science.gov (United States)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimations of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, the "true value" of an extreme flood being not observable. Anyway, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in different methods. In this study, the following three different approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested for two different Swiss catchments. The results and some intermediate variables are used for assessing potential strengths and weaknesses of each method, as well as for evaluating the consistency of these methods.

  19. Theoretical Approaches to Nuclear Proliferation

    Directory of Open Access Journals (Sweden)

    Konstantin S. Tarasov

    2015-01-01

    Full Text Available This article analyses discussions between representatives of three schools in the theory of international relations - realism, liberalism and constructivism - on the driving factors of nuclear proliferation. The paper examines major theoretical approaches, outlined in the studies of Russian and foreign scientists, to the causes of nuclear weapons development, while unveiling their advantages and limitations. Much of the article has been devoted to alternative approaches, particularly, the role of mathematical modeling in assessing proliferation risks. The analysis also reveals a variety of different approaches to nuclear weapons acquisition, as well as the absence of a comprehensive proliferation theory. Based on the research results the study uncovers major factors both favoring and impeding nuclear proliferation. The author shows that the lack of consensus between realists, liberals and constructivists on the nature of proliferation led a number of scientists to an attempt to explain nuclear rationale by drawing from the insights of more than one school in the theory of IR. Detailed study of the proliferation puzzle contributes to a greater understating of contemporary international realities, helps to identify mechanisms that are most likely to deter states from obtaining nuclear weapons and is of the outmost importance in predicting short- and long-term security environment. Furthermore, analysis of the existing scientific literature on nuclear proliferation helps to determine future research agenda of the subject at hand.

  20. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    Science.gov (United States)

    Siddharth, S.; Ali, A. S.; El-Sheimy, N.; Goodall, C. L.; Syed, Z. F.

    2012-02-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, the heading observability considerably degrades in low-speed mode of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning and to improve the heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low cost, small size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality. This signal is used later in the heading estimation. In most of the magnetometer-calibration approaches, the motion modes are seldom described to achieve a robust calibration. Also, suitable calibration approaches fail to discuss the stopping criteria for calibration. In this paper, the following three topics are discussed in detail that are important to achieve proper magnetometer-calibration results and in turn the most robust heading solution for the user while taking care of the device misalignment with respect to the user: (a) game-theoretic concepts to attain better filter parameter tuning and robustness in noise uncertainty, (b) best maneuvers with focus on 3D and 2D motion modes and related challenges and (c) investigation of the calibration termination criteria leveraging the calibration robustness and efficiency.

  1. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    International Nuclear Information System (INIS)

    Siddharth, S; Ali, A S; El-Sheimy, N; Goodall, C L; Syed, Z F

    2012-01-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, the heading observability considerably degrades in low-speed mode of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning and to improve the heading estimation in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low cost, small size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the users navigate with the phone, a calibration is invoked to ensure an improved signal quality. This signal is used later in the heading estimation. In most of the magnetometer-calibration approaches, the motion modes are seldom described to achieve a robust calibration. Also, suitable calibration approaches fail to discuss the stopping criteria for calibration. In this paper, the following three topics are discussed in detail that are important to achieve proper magnetometer-calibration results and in turn the most robust heading solution for the user while taking care of the device misalignment with respect to the user: (a) game-theoretic concepts to attain better filter parameter tuning and robustness in noise uncertainty, (b) best maneuvers with focus on 3D and 2D motion modes and related challenges and (c) investigation of the calibration termination criteria leveraging the calibration robustness and efficiency. (paper)

  2. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems than dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, on of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, where we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, the Bounded Relative Error one. Some examples illustrate the results.

  3. A sign-theoretic approach to biotechnology

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio

    ” semiotic networks across hierarchical levels and for relating the different emergent codes in living systems. I consider this an important part of the work because there I define some of the main concepts that will help me to analyse different codes and semiotic processes in living systems in order...... to exemplify what is the relevance of a sign-theoretic approach to biotechnology. In particular, I introduce the notion of digital-analogical consensus as a semiotic pattern for the creation of complex logical products that constitute specific signs. The chapter ends with some examples of conspicuous semiotic...... to exemplify how a semiotic approach can be of help when organising the knowledge that can lead us to understanding the relevance, the role and the position of signal transduction networks in relation to the larger semiotic networks in which they function, i.e.: in the hierarchical formal processes of mapping...

  4. A System Theoretical Inspired Approach to Knowledge Construction

    DEFF Research Database (Denmark)

    Mathiasen, Helle

    2008-01-01

    student's knowledge construction, in the light of operative constructivism, inspired by the German sociologist N. Luhmann's system theoretical approach to epistemology. Taking observations as operations based on distinction and indication (selection) contingency becomes a fundamental condition in learning......  Abstract The aim of this paper is to discuss the relation between teaching and learning. The point of departure is that teaching environments (communication forums) is a potential facilitator for learning processes and knowledge construction. The paper present a theoretical frame work, to discuss...... processes, and a condition which teaching must address as far as teaching strives to stimulate non-random learning outcomes. Thus learning outcomes understood as the individual learner's knowledge construction cannot be directly predicted from events and characteristics in the environment. This has...

  5. A theoretical approach to artificial intelligence systems in medicine.

    Science.gov (United States)

    Spyropoulos, B; Papagounos, G

    1995-10-01

    The various theoretical models of disease, the nosology which is accepted by the medical community and the prevalent logic of diagnosis determine both the medical approach as well as the development of the relevant technology including the structure and function of the A.I. systems involved. A.I. systems in medicine, in addition to the specific parameters which enable them to reach a diagnostic and/or therapeutic proposal, entail implicitly theoretical assumptions and socio-cultural attitudes which prejudice the orientation and the final outcome of the procedure. The various models -causal, probabilistic, case-based etc. -are critically examined and their ethical and methodological limitations are brought to light. The lack of a self-consistent theoretical framework in medicine, the multi-faceted character of the human organism as well as the non-explicit nature of the theoretical assumptions involved in A.I. systems restrict them to the role of decision supporting "instruments" rather than regarding them as decision making "devices". This supporting role and, especially, the important function which A.I. systems should have in the structure, the methods and the content of medical education underscore the need of further research in the theoretical aspects and the actual development of such systems.

  6. A game theoretic approach to a finite-time disturbance attenuation problem

    Science.gov (United States)

    Rhee, Ihnseok; Speyer, Jason L.

    1991-01-01

    A disturbance attenuation problem over a finite-time interval is considered by a game theoretic approach where the control, restricted to a function of the measurement history, plays against adversaries composed of the process and measurement disturbances, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus of variation technique. By first maximizing the quadratic cost criterion with respect to the process disturbance and initial state, a full information game between the control and the measurement residual subject to the estimator dynamics results. The resulting solution produces an n-dimensional compensator which expresses the controller as a linear combination of the measurement history. A disturbance attenuation problem is solved based on the results of the game problem. For time-invariant systems it is shown that under certain conditions the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H(infinity) norm bound.

  7. Theoretical analysis of the distribution of isolated particles in totally asymmetric exclusion processes: Application to mRNA translation rate estimation

    Science.gov (United States)

    Dao Duc, Khanh; Saleem, Zain H.; Song, Yun S.

    2018-01-01

    The Totally Asymmetric Exclusion Process (TASEP) is a classical stochastic model for describing the transport of interacting particles, such as ribosomes moving along the messenger ribonucleic acid (mRNA) during translation. Although this model has been widely studied in the past, the extent of collision between particles and the average distance between a particle to its nearest neighbor have not been quantified explicitly. We provide here a theoretical analysis of such quantities via the distribution of isolated particles. In the classical form of the model in which each particle occupies only a single site, we obtain an exact analytic solution using the matrix ansatz. We then employ a refined mean-field approach to extend the analysis to a generalized TASEP with particles of an arbitrary size. Our theoretical study has direct applications in mRNA translation and the interpretation of experimental ribosome profiling data. In particular, our analysis of data from Saccharomyces cerevisiae suggests a potential bias against the detection of nearby ribosomes with a gap distance of less than approximately three codons, which leads to some ambiguity in estimating the initiation rate and protein production flux for a substantial fraction of genes. Despite such ambiguity, however, we demonstrate theoretically that the interference rate associated with collisions can be robustly estimated and show that approximately 1% of the translating ribosomes get obstructed.

  8. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood, Palm likelihood usually suffer from the loss of information due to the ignorance of correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore the estimating function approaches for fitting spatial point process models. These approaches, which are based on the asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estmating function approaches are good alternatives to balance the trade-off between computation complexity and estimating efficiency. First, we propose a new estimating procedure that improves the efficiency of pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likeli-hood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  9. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from forward solution (numerical or analytical) of differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting-up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, it has been never demonstrated. This study extends AGM approach to PDEs and applies it for estimating parameters of Richards equation. Unlike the conventional approach, the AGM approach does not require setting-up of initial and boundary conditions explicitly, which is often difficult in real world application of Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  10. Enhanced diffusion under alpha self-irradiation in spent nuclear fuel: Theoretical approaches

    International Nuclear Information System (INIS)

    Ferry, Cecile; Lovera, Patrick; Poinssot, Christophe; Garcia, Philippe

    2005-01-01

    Various theoretical approaches have been developed in order to estimate the enhanced diffusion coefficient of fission products under alpha self-irradiation in spent nuclear fuel. These simplified models calculate the effects of alpha particles and recoil atoms on mobility of uranium atoms in UO 2 . They lead to a diffusion coefficient which is proportional to the volume alpha activity with a proportionality factor of about 10 -44 (m 5 ). However, the same models applied for fission lead to a radiation-enhanced diffusion coefficient which is approximately two orders of magnitude lower than values reported in literature for U and Pu. Other models are based on an extrapolation of radiation-enhanced diffusion measured either in reactors or under heavy ion bombardment. These models lead to a proportionality factor between the alpha self-irradiation enhanced diffusion coefficient and the volume alpha activity of 2 x 10 -41 (m 5 )

  11. Theoretical triangulation as an approach for revealing the complexity of a classroom discussion

    NARCIS (Netherlands)

    van Drie, J.; Dekker, R.

    2013-01-01

    In this paper we explore the value of theoretical triangulation as a methodological approach for the analysis of classroom interaction. We analyze an excerpt of a whole-class discussion in history from three theoretical perspectives: interactivity of the discourse, conceptual level raising and

  12. Monoenergetic approximation of a polyenergetic beam: a theoretical approach

    International Nuclear Information System (INIS)

    Robinson, D.M.; Scrimger, J.W.

    1991-01-01

    There exist numerous occasions in which it is desirable to approximate the polyenergetic beams employed in radiation therapy by a beam of photons of a single energy. In some instances, commonly used rules of thumb for the selection of an appropriate energy may be valid. A more accurate approximate energy, however, may be determined by an analysis which takes into account both the spectral qualities of the beam and the material through which it passes. The theoretical basis of this method of analysis is presented in this paper. Experimental agreement with theory for a range of materials and beam qualities is also presented and demonstrates the validity of the theoretical approach taken. (author)

  13. Blogging in Higher Education: Theoretical and Practical Approach

    OpenAIRE

    Gulfidan CAN; Devrim OZDEMIR

    2006-01-01

    In this paper the blogging method, which includes new forms of writing, is supported as an alternative approach to address the frequently asserted problems in higher education such as product-oriented assessment and lack of value given to students' writing as contribution to the discourse of the academic disciplines. Both theoretical and research background information is provided to clarify the rationale of using this method in higher education. Furthermore, recommended way of using this met...

  14. Principle-theoretic approach of kondo and construction-theoretic formalism of gauge theories

    International Nuclear Information System (INIS)

    Jain, L.C.

    1986-01-01

    Einstein classified various theories in physics as principle-theories and constructive-theories. In this lecture Kondo's approach to microscopic and macroscopic phenomena is analysed for its principle theoretic pursuit as followed by construction. The fundamentals of his theory may be recalled as Tristimulus principle, Observation principle, Kawaguchi spaces, empirical information, epistemological point of view, unitarity, intrinsicality, and dimensional analysis subject to logical and geometrical achievement. On the other hand, various physicists have evolved constructive gauge theories through the phenomenological point of view, often a collective one. Their synthetic method involves fibre bundles and connections, path integrals as well as other hypothetical structures. They lead towards clarity, completeness and adaptability

  15. Theoretical estimates of spherical and chromatic aberration in photoemission electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Fitzgerald, J.P.S., E-mail: fit@pdx.edu; Word, R.C.; Könenkamp, R.

    2016-01-15

    We present theoretical estimates of the mean coefficients of spherical and chromatic aberration for low energy photoemission electron microscopy (PEEM). Using simple analytic models, we find that the aberration coefficients depend primarily on the difference between the photon energy and the photoemission threshold, as expected. However, the shape of the photoelectron spectral distribution impacts the coefficients by up to 30%. These estimates should allow more precise correction of aberration in PEEM in experimental situations where the aberration coefficients and precise electron energy distribution cannot be readily measured. - Highlights: • Spherical and chromatic aberration coefficients of the accelerating field in PEEM. • Compact, analytic expressions for coefficients depending on two emission parameters. • Effect of an aperture stop on the distribution is also considered.

  16. Theoretical estimation and validation of radiation field in alkaline hydrolysis plant

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Sanjay; Krishnamohanan, T.; Gopalakrishnan, R.K., E-mail: singhs@barc.gov.in [Radiation Safety Systems Division, Bhabha Atomic Research Centre, Mumbai (India); Anand, S. [Health Physics Division, Bhabha Atomic Research Centre, Mumbai (India); Pancholi, K. C. [Waste Management Division, Bhabha Atomic Research Centre, Mumbai (India)

    2014-07-01

    Spent organic solvent (30% TBP + 70% n-Dodecane) from reprocessing facility is treated at ETP in Alkaline Hydrolysis Plant (AHP) and Organic Waste Incineration (ORWIN) Facility. In AHP-ORWIN, there are three horizontal cylindrical tanks having 2.0 m{sup 3} operating capacity used for waste storage and transfer. The three tanks are, Aqueous Waste Tank (AWT), Waste Receiving Tank (WRT) and Dodecane Waste Tank (DWT). These tanks are en-housed in a shielded room in this facility. Monte Carlo N-Particle (MCNP) radiation transport code was used to estimate ambient radiation field levels when the storage tanks are having hold up volumes of desired specific activity levels. In this paper the theoretically estimated values of radiation field is compared with the actual measured dose.

  17. Theoretical and expert system approach to photoionization theories

    Directory of Open Access Journals (Sweden)

    Petrović Ivan D.

    2016-01-01

    Full Text Available The influence of the ponderomotive and the Stark shifts on the tunneling transition rate was observed, for non-relativistic linearly polarized laser field for alkali atoms, with three different theoretical models, the Keldysh theory, the Perelomov, Popov, Terent'ev (PPT theory, and the Ammosov, Delone, Krainov (ADK theory. We showed that aforementioned shifts affect the transition rate differently for different approaches. Finally, we presented a simple expert system for analysis of photoionization theories.

  18. Overview of the Practical and Theoretical Approaches to the Estimation of Mineral Resources. A Financial Perspective

    Directory of Open Access Journals (Sweden)

    Leontina Pavaloaia

    2012-10-01

    Full Text Available Mineral resources represent an important natural resource whose exploitation, unless it is rational, can lead to their exhaustion and the collapse of sustainable development. Given the importance of mineral resources and the uncertainty concerning the estimation of extant reserves, they have been analyzed by several national and international institutions. In this article we shall present a few aspects concerning the ways to approach the reserves of mineral resources at national and international level, by considering both economic aspects and those aspects concerned with the definition, classification and aggregation of the reserves of mineral resources by various specialized institutions. At present there are attempts to homogenize practices concerning these aspects for the purpose of presenting correct and comparable information.

  19. A game-theoretic approach to real-time system testing

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Li, Shuhao

    2008-01-01

    This paper presents a game-theoretic approach to the testing of uncontrollable real-time systems. By modelling the systems with Timed I/O Game Automata and specifying the test purposes as Timed CTL formulas, we employ a recently developed timed game solver UPPAAL-TIGA to synthesize winning...... strategies, and then use these strategies to conduct black-box conformance testing of the systems. The testing process is proved to be sound and complete with respect to the given test purposes. Case study and preliminary experimental results indicate that this is a viable approach to uncontrollable timed...... system testing....

  20. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including so-called the "large p small n " setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse are are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.

  1. Introduction to superfluidity field-theoretical approach and applications

    CERN Document Server

    Schmitt, Andreas

    2015-01-01

    Superfluidity – and closely related to it, superconductivity – are very general phenomena that can occur on vastly different energy scales. Their underlying theoretical mechanism of spontaneous symmetry breaking is even more general and applies to a multitude of physical systems.  In these lecture notes, a pedagogical introduction to the field-theory approach to superfluidity is presented. The connection to more traditional approaches, often formulated in a different language, is carefully explained in order to provide a consistent picture that is useful for students and researchers in all fields of physics. After introducing the basic concepts, such as the two-fluid model and the Goldstone mode, selected topics of current research are addressed, such as the BCS-BEC crossover and Cooper pairing with mismatched Fermi momenta.

  2. A Markov game theoretic data fusion approach for cyber situational awareness

    Science.gov (United States)

    Shen, Dan; Chen, Genshe; Cruz, Jose B., Jr.; Haynes, Leonard; Kruger, Martin; Blasch, Erik

    2007-04-01

    This paper proposes an innovative data-fusion/ data-mining game theoretic situation awareness and impact assessment approach for cyber network defense. Alerts generated by Intrusion Detection Sensors (IDSs) or Intrusion Prevention Sensors (IPSs) are fed into the data refinement (Level 0) and object assessment (L1) data fusion components. High-level situation/threat assessment (L2/L3) data fusion based on Markov game model and Hierarchical Entity Aggregation (HEA) are proposed to refine the primitive prediction generated by adaptive feature/pattern recognition and capture new unknown features. A Markov (Stochastic) game method is used to estimate the belief of each possible cyber attack pattern. Game theory captures the nature of cyber conflicts: determination of the attacking-force strategies is tightly coupled to determination of the defense-force strategies and vice versa. Also, Markov game theory deals with uncertainty and incompleteness of available information. A software tool is developed to demonstrate the performance of the high level information fusion for cyber network defense situation and a simulation example shows the enhanced understating of cyber-network defense.

  3. Nuclear Fermi Dynamics: physical content versus theoretical approach

    International Nuclear Information System (INIS)

    Griffin, J.J.

    1977-01-01

    Those qualitative properties of nuclei, and of their energetic collisions, which seem of most importance for the flow of nuclear matter are listed and briefly discussed. It is suggested that nuclear matter flow is novel among fluid dynamical problems. The name, Nuclear Fermi Dynamics, is proposed as an appropriate unambiguous label. The Principle of Commensurability, which suggests the measurement of the theoretical content of an approach against its expected predictive range is set forth and discussed. Several of the current approaches to the nuclear matter flow problem are listed and subjected to such a test. It is found that the Time-Dependent Hartree-Fock (TDHF) description, alone of all the major theoretical approaches currently in vogue, incorporates each of the major qualitative features within its very concise single mathematical assumption. Some limitations of the conventional TDHF method are noted, and one particular defect is discussed in detail: the Spurious Cross Channel Correlations which arise whenever several asymptotic reaction channels must be simultaneously described by a single determinant. A reformulated Time-Dependent-S-Matrix Hartree-Fock Theory is proposed, which obviates this difficulty. It is noted that the structure of TD-S-HF can be applied to a more general class of non-linear wave mechanical problems than simple TDHF. Physical requirements minimal to assure that TD-S-HF represents a sensible reaction theory are utilized to prescribe the definition of acceptable asymptotic channels. That definition, in turn, defines the physical range of the TD-S-HF theory as the description of collisions of certain mathematically well-defined objects of mixed quantal and classical character, the ''TDHF droplets.''

  4. Understanding employee motivation and organizational performance: Arguments for a set-theoretic approach

    Directory of Open Access Journals (Sweden)

    Michael T. Lee

    2016-09-01

    Full Text Available Empirical evidence demonstrates that motivated employees mean better organizational performance. The objective of this conceptual paper is to articulate the progress that has been made in understanding employee motivation and organizational performance, and to suggest how the theory concerning employee motivation and organizational performance may be advanced. We acknowledge the existing limitations of theory development and suggest an alternative research approach. Current motivation theory development is based on conventional quantitative analysis (e.g., multiple regression analysis, structural equation modeling. Since researchers are interested in context and understanding of this social phenomena holistically, they think in terms of combinations and configurations of a set of pertinent variables. We suggest that researchers take a set-theoretic approach to complement existing conventional quantitative analysis. To advance current thinking, we propose a set-theoretic approach to leverage employee motivation for organizational performance.

  5. A new theoretical approach to analyze complex processes in cytoskeleton proteins.

    Science.gov (United States)

    Li, Xin; Kolomeisky, Anatoly B

    2014-03-20

    Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins. However, the description is only qualitative under biologically relevant conditions because utilized theoretical mean-field models neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on analysis of probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.

  6. Model-free approach to the estimation of radiation hazards. I. Theory

    International Nuclear Information System (INIS)

    Zaider, M.; Brenner, D.J.

    1986-01-01

    The experience of the Japanese atomic bomb survivors constitutes to date the major data base for evaluating the effects of low doses of ionizing radiation on human populations. Although numerous analyses have been performed and published concerning this experience, it is clear that no consensus has emerged as to the conclusions that may be drawn to assist in setting realistic radiation protection guidelines. In part this is an inherent consequences of the rather limited amount of data available. In this paper the authors address an equally important problem; namely, the use of arbitrary parametric risk models which have little theoretical foundation, yet almost totally determine the final conclusions drawn. They propose the use of a model-free approach to the estimation of radiation hazards

  7. SOCIOLOGICAL UNDERSTANDING OF INTERNET: THEORETICAL APPROACHES TO THE NETWORK ANALYSIS

    Directory of Open Access Journals (Sweden)

    D. E. Dobrinskaya

    2016-01-01

    Full Text Available Internet studies are carried out by various scientific disciplines and in different research perspectives. Sociological studies of the Internet deal with a new technology, a revolutionary means of mass communication and a social space. There is a set of research difficulties associated with the Internet. Firstly, the high speed and wide spread of Internet technologies’ development. Secondly, the collection and filtration of materials concerning with Internet studies. Lastly, the development of new conceptual categories, which are able to reflect the impact of the Internet development in contemporary world. In that regard the question of the “network” category use is essential. Network is the base of Internet functioning, on the one hand. On the other hand, network is the ground for almost all social interactions in modern society. So such society is called network society. Three theoretical network approaches in the Internet research case are the most relevant: network society theory, social network analysis and actor-network theory. Each of these theoretical approaches contributes to the study of the Internet. They shape various images of interactions between human beings in their entity and dynamics. All these approaches also provide information about the nature of these interactions. 

  8. Estimation of strong ground motion

    International Nuclear Information System (INIS)

    Watabe, Makoto

    1993-01-01

    Fault model has been developed to estimate a strong ground motion in consideration of characteristics of seismic source and propagation path of seismic waves. There are two different approaches in the model. The first one is a theoretical approach, while the second approach is a semi-empirical approach. Though the latter is more practical than the former to be applied to the estimation of input motions, it needs at least the small-event records, the value of the seismic moment of the small event and the fault model of the large event

  9. Strategy for a numerical Rock Mechanics Site Descriptive Model. Further development of the theoretical/numerical approach

    International Nuclear Information System (INIS)

    Olofsson, Isabelle; Fredriksson, Anders

    2005-05-01

    The Swedish Nuclear and Fuel Management Company (SKB) is conducting Preliminary Site Investigations at two different locations in Sweden in order to study the possibility of a Deep Repository for spent fuel. In the frame of these Site Investigations, Site Descriptive Models are achieved. These products are the result of an interaction of several disciplines such as geology, hydrogeology, and meteorology. The Rock Mechanics Site Descriptive Model constitutes one of these models. Before the start of the Site Investigations a numerical method using Discrete Fracture Network (DFN) models and the 2D numerical software UDEC was developed. Numerical simulations were the tool chosen for applying the theoretical approach for characterising the mechanical rock mass properties. Some shortcomings were identified when developing the methodology. Their impacts on the modelling (in term of time and quality assurance of results) were estimated to be so important that the improvement of the methodology with another numerical tool was investigated. The theoretical approach is still based on DFN models but the numerical software used is 3DEC. The main assets of the programme compared to UDEC are an optimised algorithm for the generation of fractures in the model and for the assignment of mechanical fracture properties. Due to some numerical constraints the test conditions were set-up in order to simulate 2D plane strain tests. Numerical simulations were conducted on the same data set as used previously for the UDEC modelling in order to estimate and validate the results from the new methodology. A real 3D simulation was also conducted in order to assess the effect of the '2D' conditions in the 3DEC model. Based on the quality of the results it was decided to update the theoretical model and introduce the new methodology based on DFN models and 3DEC simulations for the establishment of the Rock Mechanics Site Descriptive Model. By separating the spatial variability into two parts, one

  10. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including so-called the “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse are are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  11. An Activity Theoretical Approach to Social Interaction during Study Abroad

    Science.gov (United States)

    Shively, Rachel L.

    2016-01-01

    This case study examines how one study abroad student oriented to social interaction during a semester in Spain. Using an activity theoretical approach, the findings indicate that the student not only viewed social interaction with his Spanish host family and an expert-Spanish-speaking age peer as an opportunity for second language (L2) learning,…

  12. A novel approach for absolute radar calibration: formulation and theoretical validation

    Directory of Open Access Journals (Sweden)

    C. Merker

    2015-06-01

    Full Text Available The theoretical framework of a novel approach for absolute radar calibration is presented and its potential analysed by means of synthetic data to lay out a solid basis for future practical application. The method presents the advantage of an absolute calibration with respect to the directly measured reflectivity, without needing a previously calibrated reference device. It requires a setup comprising three radars: two devices oriented towards each other, measuring reflectivity along the same horizontal beam and operating within a strongly attenuated frequency range (e.g. K or X band, and one vertical reflectivity and drop size distribution (DSD profiler below this connecting line, which is to be calibrated. The absolute determination of the calibration factor is based on attenuation estimates. Using synthetic, smooth and geometrically idealised data, calibration is found to perform best using homogeneous precipitation events with rain rates high enough to ensure a distinct attenuation signal (reflectivity above ca. 30 dBZ. Furthermore, the choice of the interval width (in measuring range gates around the vertically pointing radar, needed for attenuation estimation, is found to have an impact on the calibration results. Further analysis is done by means of synthetic data with realistic, inhomogeneous precipitation fields taken from measurements. A calibration factor is calculated for each considered case using the presented method. Based on the distribution of the calculated calibration factors, the most probable value is determined by estimating the mode of a fitted shifted logarithmic normal distribution function. After filtering the data set with respect to rain rate and inhomogeneity and choosing an appropriate length of the considered attenuation path, the estimated uncertainty of the calibration factor is of the order of 1 to 11 %, depending on the chosen interval width. Considering stability and accuracy of the method, an interval of

  13. Towards a Set Theoretical Approach to Big Data Analytics

    DEFF Research Database (Denmark)

    Mukkamala, Raghava Rao; Hussain, Abid; Vatrapu, Ravi

    2014-01-01

    Formal methods, models and tools for social big data analytics are largely limited to graph theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal...... this technique to the data analysis of big social data collected from Facebook page of the fast fashion company, H&M....... and software realms. In this paper, we first present and discuss a theory and conceptual model of social data. Second, we outline a formal model based on set theory and discuss the semantics of the formal model with a real-world social data example from Facebook. Third, we briefly present and discuss...

  14. Decision support models for solid waste management: Review and game-theoretic approaches

    International Nuclear Information System (INIS)

    Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios

    2013-01-01

    Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed

  15. Decision support models for solid waste management: Review and game-theoretic approaches

    Energy Technology Data Exchange (ETDEWEB)

    Karmperis, Athanasios C., E-mail: athkarmp@mail.ntua.gr [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece); Army Corps of Engineers, Hellenic Army General Staff, Ministry of Defence (Greece); Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece)

    2013-05-15

    Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed.

  16. Analytic game—theoretic approach to ground-water extraction

    Science.gov (United States)

    Loáiciga, Hugo A.

    2004-09-01

    The roles of cooperation and non-cooperation in the sustainable exploitation of a jointly used groundwater resource have been quantified mathematically using an analytical game-theoretic formulation. Cooperative equilibrium arises when ground-water users respect water-level constraints and consider mutual impacts, which allows them to derive economic benefits from ground-water indefinitely, that is, to achieve sustainability. This work shows that cooperative equilibrium can be obtained from the solution of a quadratic programming problem. For cooperative equilibrium to hold, however, enforcement must be effective. Otherwise, according to the commonized costs-privatized profits paradox, there is a natural tendency towards non-cooperation and non-sustainable aquifer mining, of which overdraft is a typical symptom. Non-cooperative behavior arises when at least one ground-water user neglects the externalities of his adopted ground-water pumping strategy. In this instance, water-level constraints may be violated in a relatively short time and the economic benefits from ground-water extraction fall below those obtained with cooperative aquifer use. One example illustrates the game theoretic approach of this work.

  17. Theoretical approaches to determining the financial provision of public transportation

    Directory of Open Access Journals (Sweden)

    O.A. Vygovska

    2018-03-01

    Full Text Available The work is devoted to the improvement of theoretical approaches in determining the financial provision of transportation by public transport at the regional level. The author summarizes the concept of the «financial security» and defines the main difference from the term «financing». The systematization of key differences in the financial provision of a transport company from other financial entities of the economic sector at the national and regional levels is carried out. The disadvantages and advantages of sources of financial support are analyzed. The purpose of the article is to study theoretical approaches in determining the financial provision of transportation by public transport at the regional level. The prospects for further scientific research are the need to identify new scientific approaches and techniques to substantiate and elaborate the concept of the «financial provision of transportation by public transport». The practical application of the research should be formed in a detailed analysis of cash flow streams in the system of «state – regional authority – economic entity». The financial provision of transportation by public transport at the regional level has not been given the sufficient attention in the scientific research within the country. This fact confirms the need for a thorough analysis of the transport industry as a whole.

  18. Vanadium supersaturated silicon system: a theoretical and experimental approach

    Science.gov (United States)

    Garcia-Hemme, Eric; García, Gregorio; Palacios, Pablo; Montero, Daniel; García-Hernansanz, Rodrigo; Gonzalez-Diaz, Germán; Wahnon, Perla

    2017-12-01

    The effect of high dose vanadium ion implantation and pulsed laser annealing on the crystal structure and sub-bandgap optical absorption features of V-supersaturated silicon samples has been studied through the combination of experimental and theoretical approaches. Interest in V-supersaturated Si focusses on its potential as a material having a new band within the Si bandgap. Rutherford backscattering spectrometry measurements and formation energies computed through quantum calculations provide evidence that V atoms are mainly located at interstitial positions. The response of sub-bandgap spectral photoconductance is extended far into the infrared region of the spectrum. Theoretical simulations (based on density functional theory and many-body perturbation in GW approximation) bring to light that, in addition to V atoms at interstitial positions, Si defects should also be taken into account in explaining the experimental profile of the spectral photoconductance. The combination of experimental and theoretical methods provides evidence that the improved spectral photoconductance up to 6.2 µm (0.2 eV) is due to new sub-bandgap transitions, for which the new band due to V atoms within the Si bandgap plays an essential role. This enables the use of V-supersaturated silicon in the third generation of photovoltaic devices.

  19. How cells engulf: a review of theoretical approaches to phagocytosis

    Science.gov (United States)

    Richards, David M.; Endres, Robert G.

    2017-12-01

    Phagocytosis is a fascinating process whereby a cell surrounds and engulfs particles such as bacteria and dead cells. This is crucial both for single-cell organisms (as a way of acquiring nutrients) and as part of the immune system (to destroy foreign invaders). This whole process is hugely complex and involves multiple coordinated events such as membrane remodelling, receptor motion, cytoskeleton reorganisation and intracellular signalling. Because of this, phagocytosis is an excellent system for theoretical study, benefiting from biophysical approaches combined with mathematical modelling. Here, we review these theoretical approaches and discuss the recent mathematical and computational models, including models based on receptors, models focusing on the forces involved, and models employing energetic considerations. Along the way, we highlight a beautiful connection to the physics of phase transitions, consider the role of stochasticity, and examine links between phagocytosis and other types of endocytosis. We cover the recently discovered multistage nature of phagocytosis, showing that the size of the phagocytic cup grows in distinct stages, with an initial slow stage followed by a much quicker second stage starting around half engulfment. We also address the issue of target shape dependence, which is relevant to both pathogen infection and drug delivery, covering both one-dimensional and two-dimensional results. Throughout, we pay particular attention to recent experimental techniques that continue to inform the theoretical studies and provide a means to test model predictions. Finally, we discuss population models, connections to other biological processes, and how physics and modelling will continue to play a key role in future work in this area.

  20. Supply chain collaboration: A Game-theoretic approach to profit allocation

    Energy Technology Data Exchange (ETDEWEB)

    Ponte, B.; Fernández, I.; Rosillo, R.; Parreño, J.; García, N.

    2016-07-01

    Purpose: This paper aims to develop a theoretical framework for profit allocation, as a mechanism for aligning incentives, in collaborative supply chains. Design/methodology/approach: The issue of profit distribution is approached from a game-theoretic perspective. We use the nucleolus concept. The framework is illustrated through a numerical example based on the Beer Game scenario. Findings: The nucleolus offers a powerful perspective to tackle this problem, as it takes into consideration the bargaining power of the different echelons. We show that this framework outperforms classical alternatives. Research limitations/implications: The allocation of the overall supply chain profit is analyzed from a static perspective. Considering the dynamic nature of the problem would be an interesting next step. Practical implications: We provide evidence of drawbacks derived from classical solutions to the profit allocation problem. Real-world collaborative supply chains need of robust mechanisms like the one tackled in this work to align incentives from the various actors. Originality/value: Adopting an efficient collaborative solution is a major challenge for supply chains, since it is a wide and complex process that requires an appropriate scheme. Within this framework, profit allocation is essential.

  1. Supply chain collaboration: A Game-theoretic approach to profit allocation

    International Nuclear Information System (INIS)

    Ponte, B.; Fernández, I.; Rosillo, R.; Parreño, J.; García, N.

    2016-01-01

    Purpose: This paper aims to develop a theoretical framework for profit allocation, as a mechanism for aligning incentives, in collaborative supply chains. Design/methodology/approach: The issue of profit distribution is approached from a game-theoretic perspective. We use the nucleolus concept. The framework is illustrated through a numerical example based on the Beer Game scenario. Findings: The nucleolus offers a powerful perspective to tackle this problem, as it takes into consideration the bargaining power of the different echelons. We show that this framework outperforms classical alternatives. Research limitations/implications: The allocation of the overall supply chain profit is analyzed from a static perspective. Considering the dynamic nature of the problem would be an interesting next step. Practical implications: We provide evidence of drawbacks derived from classical solutions to the profit allocation problem. Real-world collaborative supply chains need of robust mechanisms like the one tackled in this work to align incentives from the various actors. Originality/value: Adopting an efficient collaborative solution is a major challenge for supply chains, since it is a wide and complex process that requires an appropriate scheme. Within this framework, profit allocation is essential.

  2. Reflections on the conceptualization and operationalization of a set-theoretic approach to employee motivation and performance research

    Directory of Open Access Journals (Sweden)

    James Christopher Ryan

    2017-01-01

    Full Text Available The current commentary offers a reflection on the conceptualizations of Lee and Raschke's (2016 proposal for a set-theoretic approach to employee motivation and organizational performance. The commentary is informed by the current author's operationalization of set-theoretic research on employee motivation which occurred contemporaneously to the work of Lee and Raschke. Observations on the state of current research on employee motivation, development of motivation theory and future directions of set-theoretic approaches to employee motivation and performance are offered.

  3. A reduced theoretical model for estimating condensation effects in combustion-heated hypersonic tunnel

    Science.gov (United States)

    Lin, L.; Luo, X.; Qin, F.; Yang, J.

    2018-03-01

    As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.

  4. Modeling the economic impact of medication adherence in type 2 diabetes: a theoretical approach

    Directory of Open Access Journals (Sweden)

    David S Cobden

    2010-08-01

    Full Text Available David S Cobden1, Louis W Niessen2, Frans FH Rutten1, W Ken Redekop11Department of Health Policy and Management, Section of Health Economics – Medical Technology Assessment (HE-MTA, Erasmus MC, Erasmus University Rotterdam, The Netherlands; 2Department of International Health, Johns Hopkins University School of Public Health, Johns Hopkins Medical Institutions, Baltimore, MD, USAAims: While strong correlations exist between medication adherence and health economic outcomes in type 2 diabetes, current economic analyses do not adequately consider them. We propose a new approach to incorporate adherence in cost-effectiveness analysis.Methods: We describe a theoretical approach to incorporating the effect of adherence when estimating the long-term costs and effectiveness of an antidiabetic medication. This approach was applied in a Markov model which includes common diabetic health states. We compared two treatments using hypothetical patient cohorts: injectable insulin (IDM and oral (OAD medications. Two analyses were performed, one which ignored adherence (analysis 1 and one which incorporated it (analysis 2. Results from the two analyses were then compared to explore the extent to which adherence may impact incremental cost-effectiveness ratios.Results: In both analyses, IDM was more costly and more effective than OAD. When adherence was ignored, IDM generated an incremental cost-effectiveness of $12,097 per quality-adjusted life-year (QALY gained versus OAD. Incorporation of adherence resulted in a slightly higher ratio ($16,241/QALY. This increase was primarily due to better adherence with OAD than with IDM, and the higher direct medical costs for IDM.Conclusions: Incorporating medication adherence into economic analyses can meaningfully influence the estimated cost-effectiveness of type 2 diabetes treatments, and should therefore be ­considered in health care decision-making. Future work on the impact of adherence on health

  5. Quantum noise in the mirror–field system: A field theoretic approach

    International Nuclear Information System (INIS)

    Hsiang, Jen-Tsung; Wu, Tai-Hung; Lee, Da-Shin; King, Sun-Kun; Wu, Chun-Hsien

    2013-01-01

    We revisit the quantum noise problem in the mirror–field system by a field-theoretic approach. Here a perfectly reflecting mirror is illuminated by a single-mode coherent state of the massless scalar field. The associated radiation pressure is described by a surface integral of the stress-tensor of the field. The read-out field is measured by a monopole detector, from which the effective distance between the detector and mirror can be obtained. In the slow-motion limit of the mirror, this field-theoretic approach allows to identify various sources of quantum noise that all in all leads to uncertainty of the read-out measurement. In addition to well-known sources from shot noise and radiation pressure fluctuations, a new source of noise is found from field fluctuations modified by the mirror’s displacement. Correlation between different sources of noise can be established in the read-out measurement as the consequence of interference between the incident field and the field reflected off the mirror. In the case of negative correlation, we found that the uncertainty can be lowered than the value predicted by the standard quantum limit. Since the particle-number approach is often used in quantum optics, we compared results obtained by both approaches and examine its validity. We also derive a Langevin equation that describes the stochastic dynamics of the mirror. The underlying fluctuation–dissipation relation is briefly mentioned. Finally we discuss the backreaction induced by the radiation pressure. It will alter the mean displacement of the mirror, but we argue this backreaction can be ignored for a slowly moving mirror. - Highlights: ► The quantum noise problem in the mirror–field system is re-visited by a field-theoretic approach. ► Other than the shot noise and radiation pressure noise, we show there are new sources of noise and correlation between them. ► The noise correlations can be used to suppress the overall quantum noise on the mirror.

  6. Quantum noise in the mirror-field system: A field theoretic approach

    Energy Technology Data Exchange (ETDEWEB)

    Hsiang, Jen-Tsung, E-mail: cosmology@gmail.com [Department of Physics, National Dong-Hwa University, Hua-lien, Taiwan, ROC (China); Wu, Tai-Hung [Department of Physics, National Dong-Hwa University, Hua-lien, Taiwan, ROC (China); Lee, Da-Shin, E-mail: dslee@mail.ndhu.edu.tw [Department of Physics, National Dong-Hwa University, Hua-lien, Taiwan, ROC (China); King, Sun-Kun [Institutes of Astronomy and Astrophysics, Academia Sinica, Taipei, Taiwan, ROC (China); Wu, Chun-Hsien [Department of Physics, Soochow University, Taipei, Taiwan, ROC (China)

    2013-02-15

    We revisit the quantum noise problem in the mirror-field system by a field-theoretic approach. Here a perfectly reflecting mirror is illuminated by a single-mode coherent state of the massless scalar field. The associated radiation pressure is described by a surface integral of the stress-tensor of the field. The read-out field is measured by a monopole detector, from which the effective distance between the detector and mirror can be obtained. In the slow-motion limit of the mirror, this field-theoretic approach allows to identify various sources of quantum noise that all in all leads to uncertainty of the read-out measurement. In addition to well-known sources from shot noise and radiation pressure fluctuations, a new source of noise is found from field fluctuations modified by the mirror's displacement. Correlation between different sources of noise can be established in the read-out measurement as the consequence of interference between the incident field and the field reflected off the mirror. In the case of negative correlation, we found that the uncertainty can be lowered than the value predicted by the standard quantum limit. Since the particle-number approach is often used in quantum optics, we compared results obtained by both approaches and examine its validity. We also derive a Langevin equation that describes the stochastic dynamics of the mirror. The underlying fluctuation-dissipation relation is briefly mentioned. Finally we discuss the backreaction induced by the radiation pressure. It will alter the mean displacement of the mirror, but we argue this backreaction can be ignored for a slowly moving mirror. - Highlights: Black-Right-Pointing-Pointer The quantum noise problem in the mirror-field system is re-visited by a field-theoretic approach. Black-Right-Pointing-Pointer Other than the shot noise and radiation pressure noise, we show there are new sources of noise and correlation between them. Black-Right-Pointing-Pointer The noise

  7. A Game Theoretic Approach to Nuclear Security Analysis against Insider Threat

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyonam; Kim, So Young; Yim, Mansung [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Schneider, Erich [Univ. of Texas at Austin, Texas (United States)

    2014-05-15

    As individuals with authorized access to a facility and system who use their trusted position for unauthorized purposes, insiders are able to take advantage of their access rights and knowledge of a facility to bypass dedicated security measures. They can also capitalize on their knowledge to exploit any vulnerabilities in safety-related systems, with cyber security of safety-critical information technology systems offering an important example of the 3S interface. While this Probabilistic Risk Assessment (PRA) approach is appropriate for describing fundamentally random events like component failure of a safety system, it does not capture the adversary's intentions, nor does it account for adversarial response and adaptation to defensive investments. To address these issues of intentionality and interactions, this study adopts a game theoretic approach. The interaction between defender and adversary is modeled as a two-person Stackelberg game. The optimal strategy of both players is found from the equilibrium of this game. A defender strategy consists of a set of design modifications and/or post-construction security upgrades. An attacker strategy involves selection of a target as well as a pathway to that target. In this study, application of the game theoretic approach is demonstrated using a simplified test case problem. Novel to our approach is the modeling of insider threat that affects the non-detection probability of an adversary. The game-theoretic approach has the advantage of modelling an intelligent adversary who has an intention and complete knowledge of the facility. In this study, we analyzed the expected adversarial path and security upgrades with a limited budget with insider threat modeled as increasing the non-detection probability. Our test case problem categorized three groups of adversary paths assisted by insiders and derived the largest insider threat in terms of the budget for security upgrades. Certainly more work needs to be done to

  8. A Game Theoretic Approach to Nuclear Security Analysis against Insider Threat

    International Nuclear Information System (INIS)

    Kim, Kyonam; Kim, So Young; Yim, Mansung; Schneider, Erich

    2014-01-01

    As individuals with authorized access to a facility and system who use their trusted position for unauthorized purposes, insiders are able to take advantage of their access rights and knowledge of a facility to bypass dedicated security measures. They can also capitalize on their knowledge to exploit any vulnerabilities in safety-related systems, with cyber security of safety-critical information technology systems offering an important example of the 3S interface. While this Probabilistic Risk Assessment (PRA) approach is appropriate for describing fundamentally random events like component failure of a safety system, it does not capture the adversary's intentions, nor does it account for adversarial response and adaptation to defensive investments. To address these issues of intentionality and interactions, this study adopts a game theoretic approach. The interaction between defender and adversary is modeled as a two-person Stackelberg game. The optimal strategy of both players is found from the equilibrium of this game. A defender strategy consists of a set of design modifications and/or post-construction security upgrades. An attacker strategy involves selection of a target as well as a pathway to that target. In this study, application of the game theoretic approach is demonstrated using a simplified test case problem. Novel to our approach is the modeling of insider threat that affects the non-detection probability of an adversary. The game-theoretic approach has the advantage of modelling an intelligent adversary who has an intention and complete knowledge of the facility. In this study, we analyzed the expected adversarial path and security upgrades with a limited budget with insider threat modeled as increasing the non-detection probability. Our test case problem categorized three groups of adversary paths assisted by insiders and derived the largest insider threat in terms of the budget for security upgrades. Certainly more work needs to be done to

  9. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy

  10. Bootstrap consistency for general semiparametric M-estimation

    KAUST Repository

    Cheng, Guang

    2010-10-01

    Consider M-estimation in a semiparametric model that is characterized by a Euclidean parameter of interest and an infinite-dimensional nuisance parameter. As a general purpose approach to statistical inferences, the bootstrap has found wide applications in semiparametric M-estimation and, because of its simplicity, provides an attractive alternative to the inference approach based on the asymptotic distribution theory. The purpose of this paper is to provide theoretical justifications for the use of bootstrap as a semiparametric inferential tool. We show that, under general conditions, the bootstrap is asymptotically consistent in estimating the distribution of the M-estimate of Euclidean parameter; that is, the bootstrap distribution asymptotically imitates the distribution of the M-estimate. We also show that the bootstrap confidence set has the asymptotically correct coverage probability. These general onclusions hold, in particular, when the nuisance parameter is not estimable at root-n rate, and apply to a broad class of bootstrap methods with exchangeable ootstrap weights. This paper provides a first general theoretical study of the bootstrap in semiparametric models. © Institute of Mathematical Statistics, 2010.

  11. Methodology for estimating biomass energy potential and its application to Colombia

    International Nuclear Information System (INIS)

    Gonzalez-Salazar, Miguel Angel; Morini, Mirko; Pinelli, Michele; Spina, Pier Ruggero; Venturini, Mauro; Finkenrath, Matthias; Poganietz, Witold-Roger

    2014-01-01

    Highlights: • Methodology to estimate the biomass energy potential and its uncertainty at a country level. • Harmonization of approaches and assumptions in existing assessment studies. • The theoretical and technical biomass energy potential in Colombia are estimated in 2010. - Abstract: This paper presents a methodology to estimate the biomass energy potential and its associated uncertainty at a country level when quality and availability of data are limited. The current biomass energy potential in Colombia is assessed following the proposed methodology and results are compared to existing assessment studies. The proposed methodology is a bottom-up resource-focused approach with statistical analysis that uses a Monte Carlo algorithm to stochastically estimate the theoretical and the technical biomass energy potential. The paper also includes a proposed approach to quantify uncertainty combining a probabilistic propagation of uncertainty, a sensitivity analysis and a set of disaggregated sub-models to estimate reliability of predictions and reduce the associated uncertainty. Results predict a theoretical energy potential of 0.744 EJ and a technical potential of 0.059 EJ in 2010, which might account for 1.2% of the annual primary energy production (4.93 EJ)

  12. Online Estimation of Peak Power Capability of Li-Ion Batteries in Electric Vehicles by a Hardware-in-Loop Approach

    Directory of Open Access Journals (Sweden)

    Fengchun Sun

    2012-05-01

    Full Text Available Battery peak power capability estimations play an important theoretical role for the proper use of the battery in electric vehicles. To address the failures in relaxation effects and real-time ability performance, neglecting the battery’s design limits and other issues of the traditional peak power capability calculation methods, a new approach based on the dynamic electrochemical-polarization (EP battery model, taking into consideration constraints of current, voltage, state of charge (SoC and power is proposed. A hardware-in-the-loop (HIL system is built for validating the online model-based peak power capability estimation approach of batteries used in hybrid electric vehicles (HEVs and a HIL test based on the Federal Urban Driving Schedules (FUDS is used to verify and evaluate its real-time computation performance, reliability and robustness. The results show the proposed approach gives a more accurate estimate compared with the hybrid pulse power characterization (HPPC method, avoiding over-charging or over-discharging and providing a powerful guarantee for the optimization of HEVs power systems. Furthermore, the HIL test provides valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms.

  13. Theoretical orientations in environmental planning: An inquiry into alternative approaches

    Science.gov (United States)

    Briassoulis, Helen

    1989-07-01

    In the process of devising courses of action to resolve problems arising at the society-environment interface, a variety of planning approaches are followed, whose adoption is influenced by—among other things—the characteristics of environmental problems, the nature of the decision-making context, and the intellectual traditions of the disciplines contributing to the study of these problems. This article provides a systematic analysis of six alternative environmental planning approaches—comprehensive/rational, incremental, adaptive, contingency, advocacy, and participatory/consensual. The relative influence of the abovementioned factors is examined, the occurrence of these approaches in real-world situations is noted, and their environmental soundness and political realism is evaluated. Because of the disparity between plan formulation and implementation and between theoretical form and empirical reality, a synthetic view of environmental planning approaches is taken and approaches in action are identified, which characterize the totality of the planning process from problem definition to plan implementation, as well as approaches in the becoming, which may be on the horizon of environmental planning of tomorrow. The suggested future research directions include case studies to verify and detail the presence of the approaches discussed, developing measures of success of a given approach in a given decision setting, and an intertemporal analysis of environmental planning approaches.

  14. Algorithms and programs of dynamic mixture estimation unified approach to different types of components

    CERN Document Server

    Nagy, Ivan

    2017-01-01

    This book provides a general theoretical background for constructing the recursive Bayesian estimation algorithms for mixture models. It collects the recursive algorithms for estimating dynamic mixtures of various distributions and brings them in the unified form, providing a scheme for constructing the estimation algorithm for a mixture of components modeled by distributions with reproducible statistics. It offers the recursive estimation of dynamic mixtures, which are free of iterative processes and close to analytical solutions as much as possible. In addition, these methods can be used online and simultaneously perform learning, which improves their efficiency during estimation. The book includes detailed program codes for solving the presented theoretical tasks. Codes are implemented in the open source platform for engineering computations. The program codes given serve to illustrate the theory and demonstrate the work of the included algorithms.

  15. Theoretical and experimental estimation of the lead equivalent for some materials used in finishing of diagnostic x-ray rooms in Syria

    International Nuclear Information System (INIS)

    Shwekani, R.; Suman, H.; Takeyeddin, M.; Suleiman, J.

    2003-11-01

    This work aimed at estimating the lead equivalent values for finishing materials, which are frequently used in Syria. These materials are ceramic and marble. In the past, many studies were performed to estimate the lead equivalent values for different types of bricks, which are widely used in Syria. Therefore, this work could be considered as a follow up in order to be able to estimate the structural shielding of diagnostic X-ray rooms and accurately perform the shielding calculations to reduce unnecessary added shields. The work was done in two ways, theoretical using MCNP computer code and experimental in the secondary standard laboratory. The theoretical work was focused on generalizing the results scope to cover the real existing variations in the structure of the materials used in the finishing or the variations in the X-ray machines. Therefore, quantifying different sources of errors were strongly focused on using the methodology of sensitivity analysis. While, the experiment measurements were performed to make sure that their results will be within the error range produced by the theoretical study. The obtained results showed a strong correlation between theoretical and experimental data. (author)

  16. A Theoretical Model for Estimation of Yield Strength of Fiber Metal Laminate

    Science.gov (United States)

    Bhat, Sunil; Nagesh, Suresh; Umesh, C. K.; Narayanan, S.

    2017-08-01

    The paper presents a theoretical model for estimation of yield strength of fiber metal laminate. Principles of elasticity and formulation of residual stress are employed to determine the stress state in metal layer of the laminate that is found to be higher than the stress applied over the laminate resulting in reduced yield strength of the laminate in comparison with that of the metal layer. The model is tested over 4A-3/2 Glare laminate comprising three thin aerospace 2014-T6 aluminum alloy layers alternately bonded adhesively with two prepregs, each prepreg built up of three uni-directional glass fiber layers laid in longitudinal and transverse directions. Laminates with prepregs of E-Glass and S-Glass fibers are investigated separately under uni-axial tension. Yield strengths of both the Glare variants are found to be less than that of aluminum alloy with use of S-Glass fiber resulting in higher laminate yield strength than with the use of E-Glass fiber. Results from finite element analysis and tensile tests conducted over the laminates substantiate the theoretical model.

  17. Decommissioning Cost Estimating -The ''Price'' Approach

    International Nuclear Information System (INIS)

    Manning, R.; Gilmour, J.

    2002-01-01

    Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects from contaminated car parks to nuclear reactors; provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs

  18. Information Ergonomics A theoretical approach and practical experience in transportation

    CERN Document Server

    Sandl, Peter

    2012-01-01

    The variety and increasing availability of hypermedia information systems, which are used in stationary applications like operators’ consoles as well as mobile systems, e.g. driver information and navigation systems in automobiles form a foundation for the mediatization of the society. From the human engineering point of view this development and the ensuing increased importance of information systems for economic and private needs require careful deliberation of the derivation and application of ergonomics methods particularly in the field of information systems. This book consists of two closely intertwined parts. The first, theoretical part defines the concept of an information system, followed by an explanation of action regulation as well as cognitive theories to describe man information system interaction. A comprehensive description of information ergonomics concludes the theoretical approach. In the second, practically oriented part of this book authors from industry as well as from academic institu...

  19. Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach

    Science.gov (United States)

    Setthawong, Pisal; Vannija, Vajirasak

    This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose estimation is considered as an important non-verbal form of communication and could also be used in the area of Human-Computer Interface. A major improvement of the proposed approach is that it allows estimation of head poses at a high yaw/pitch angle when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.

  20. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

    Science.gov (United States)

    Rodrigo, Marianito R

    2016-01-01

    The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.

  1. A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection

    KAUST Repository

    Park, Chiwoo; Huang, Jianhua Z.; Ding, Yu

    2010-01-01

    A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications-a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.

  2. A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection

    KAUST Repository

    Park, Chiwoo

    2010-10-01

    A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications-a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.

  3. EVOLUTION OF THEORETICAL APPROACHES TO THE DEFINITION OF THE CATEGORY “PERSONNEL POTENTIAL”

    Directory of Open Access Journals (Sweden)

    Аlexandra Deshchenko

    2016-02-01

    Full Text Available The article describes the evolution of theoretical approaches to definition of the category «personnel potential» based on the analysis of approaches to definition of the conceptual apparatus of labor Economics, including such categories as: labor force, labor resources, labor potential, human resources, human capital, human capital different authors. The analysis of the evolution of the terms in accordance with the stages of development of a society.

  4. Theoretical estimates of maximum fields in superconducting resonant radio frequency cavities: stability theory, disorder, and laminates

    Science.gov (United States)

    Liarte, Danilo B.; Posen, Sam; Transtrum, Mark K.; Catelani, Gianluigi; Liepe, Matthias; Sethna, James P.

    2017-03-01

    Theoretical limits to the performance of superconductors in high magnetic fields parallel to their surfaces are of key relevance to current and future accelerating cavities, especially those made of new higher-T c materials such as Nb3Sn, NbN, and MgB2. Indeed, beyond the so-called superheating field {H}{sh}, flux will spontaneously penetrate even a perfect superconducting surface and ruin the performance. We present intuitive arguments and simple estimates for {H}{sh}, and combine them with our previous rigorous calculations, which we summarize. We briefly discuss experimental measurements of the superheating field, comparing to our estimates. We explore the effects of materials anisotropy and the danger of disorder in nucleating vortex entry. Will we need to control surface orientation in the layered compound MgB2? Can we estimate theoretically whether dirt and defects make these new materials fundamentally more challenging to optimize than niobium? Finally, we discuss and analyze recent proposals to use thin superconducting layers or laminates to enhance the performance of superconducting cavities. Flux entering a laminate can lead to so-called pancake vortices; we consider the physics of the dislocation motion and potential re-annihilation or stabilization of these vortices after their entry.

  5. [Systemic inflammation: theoretical and methodological approaches to description of general pathological process model. Part 3. Backgroung for nonsyndromic approach].

    Science.gov (United States)

    Gusev, E Yu; Chereshnev, V A

    2013-01-01

    Theoretical and methodological approaches to description of systemic inflammation as general pathological process are discussed. It is shown, that there is a need of integration of wide range of types of researches to develop a model of systemic inflammation.

  6. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    OpenAIRE

    de-Graft Acquah, Henry

    2014-01-01

    This paper highlights the sensitivity of technical efficiency estimates to estimation approaches using empirical data. Firm specific technical efficiency and mean technical efficiency are estimated using the non parametric Data Envelope Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey’s test sugge...

  7. An Inequality Constrained Least-Squares Approach as an Alternative Estimation Procedure for Atmospheric Parameters from VLBI Observations

    Science.gov (United States)

    Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel

    2016-12-01

    On its way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and does, of course, not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.

  8. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    Full Text Available To assess the risk of (zoonotic disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when spatial high-resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples using spatial modelling based on regression tree forest models, developed using subsets of the Uganda 2008 Livestock Census data, and several covariates. The accuracy of these spatial models as well as the accuracy of an ensemble of a spatial model and direct estimate was compared to direct estimates and “true” livestock figures based on the entire dataset. The new approach is shown to effectively increase the livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively. This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes and the multimodel approach presented here, indicating a more efficient use of financial resources.

  9. What is the optimal value of the g-ratio for myelinated fibers in the rat CNS? A theoretical approach.

    Directory of Open Access Journals (Sweden)

    Taylor Chomiak

    2009-11-01

    Full Text Available The biological process underlying axonal myelination is complex and often prone to injury and disease. The ratio of the inner axonal diameter to the total outer diameter or g-ratio is widely utilized as a functional and structural index of optimal axonal myelination. Based on the speed of fiber conduction, Rushton was the first to derive a theoretical estimate of the optimal g-ratio of 0.6 [1]. This theoretical limit nicely explains the experimental data for myelinated axons obtained for some peripheral fibers but appears significantly lower than that found for CNS fibers. This is, however, hardly surprising given that in the CNS, axonal myelination must achieve multiple goals including reducing conduction delays, promoting conduction fidelity, lowering energy costs, and saving space.In this study we explore the notion that a balanced set-point can be achieved at a functional level as the micro-structure of individual axons becomes optimized, particularly for the central system where axons tend to be smaller and their myelin sheath thinner. We used an intuitive yet novel theoretical approach based on the fundamental biophysical properties describing axonal structure and function to show that an optimal g-ratio can be defined for the central nervous system (approximately 0.77. Furthermore, by reducing the influence of volume constraints on structural design by about 40%, this approach can also predict the g-ratio observed in some peripheral fibers (approximately 0.6.These results support the notion of optimization theory in nervous system design and construction and may also help explain why the central and peripheral systems have evolved different g-ratios as a result of volume constraints.

  10. What is the optimal value of the g-ratio for myelinated fibers in the rat CNS? A theoretical approach.

    Science.gov (United States)

    Chomiak, Taylor; Hu, Bin

    2009-11-13

    The biological process underlying axonal myelination is complex and often prone to injury and disease. The ratio of the inner axonal diameter to the total outer diameter or g-ratio is widely utilized as a functional and structural index of optimal axonal myelination. Based on the speed of fiber conduction, Rushton was the first to derive a theoretical estimate of the optimal g-ratio of 0.6 [1]. This theoretical limit nicely explains the experimental data for myelinated axons obtained for some peripheral fibers but appears significantly lower than that found for CNS fibers. This is, however, hardly surprising given that in the CNS, axonal myelination must achieve multiple goals including reducing conduction delays, promoting conduction fidelity, lowering energy costs, and saving space. In this study we explore the notion that a balanced set-point can be achieved at a functional level as the micro-structure of individual axons becomes optimized, particularly for the central system where axons tend to be smaller and their myelin sheath thinner. We used an intuitive yet novel theoretical approach based on the fundamental biophysical properties describing axonal structure and function to show that an optimal g-ratio can be defined for the central nervous system (approximately 0.77). Furthermore, by reducing the influence of volume constraints on structural design by about 40%, this approach can also predict the g-ratio observed in some peripheral fibers (approximately 0.6). These results support the notion of optimization theory in nervous system design and construction and may also help explain why the central and peripheral systems have evolved different g-ratios as a result of volume constraints.

  11. A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems

    Science.gov (United States)

    2005-05-01

    Tabu Search. Mathematical and Computer Modeling 39: 599-616. 107 Daskin , M.S., E. Stern. 1981. A Hierarchical Objective Set Covering Model for EMS... A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems by Gary W. Kinney Jr., B.G.S., M.S. Dissertation Presented to the...DISTRIBUTION STATEMENT A Approved for Public Release Distribution Unlimited The University of Texas at Austin May, 2005 20050504 002 REPORT

  12. Evaluations of carbon fluxes estimated by top-down and bottom-up approaches

    Science.gov (United States)

    Murakami, K.; Sasai, T.; Kato, S.; Hiraki, K.; Maksyutov, S. S.; Yokota, T.; Nasahara, K.; Matsunaga, T.

    2013-12-01

    There are two types of estimating carbon fluxes using satellite observation data, and these are referred to as top-down and bottom-up approaches. Many uncertainties are however still remain in these carbon flux estimations, because the true values of carbon flux are still unclear and estimations vary according to the type of the model (e.g. a transport model, a process based model) and input data. The CO2 fluxes in these approaches are estimated by using different satellite data such as the distribution of CO2 concentration in the top-down approach and the land cover information (e.g. leaf area, surface temperature) in the bottom-up approach. The satellite-based CO2 flux estimations with reduced uncertainty can be used efficiently for identifications of large emission area and carbon stocks of forest area. In this study, we evaluated the carbon flux estimates from two approaches by comparing with each other. The Greenhouse gases Observing SATellite (GOSAT) has been observing atmospheric CO2 concentrations since 2009. GOSAT L4A data product is the monthly CO2 flux estimations for 64 sub-continental regions and is estimated by using GOSAT FTS SWIR L2 XCO2 data and atmospheric tracer transport model. We used GOSAT L4A CO2 flux as top-down approach estimations and net ecosystem productions (NEP) estimated by the diagnostic type biosphere model BEAMS as bottom-up approach estimations. BEAMS NEP is only natural land CO2 flux, so we used GOSAT L4A CO2 flux after subtraction of anthropogenic CO2 emissions and oceanic CO2 flux. We compared with two approach in temperate north-east Asia region. This region is covered by grassland and crop land (about 60 %), forest (about 20 %) and bare ground (about 20 %). The temporal variation for one year period was indicated similar trends between two approaches. Furthermore we show the comparison of CO2 flux estimations in other sub-continental regions.

  13. Success Determination by Innovation: A Theoretical Approach in Marketing

    Directory of Open Access Journals (Sweden)

    Raj Kumar Gautam

    2012-10-01

    Full Text Available The paper aims at to identify the main issues in the marketing which needs immediate attention of the marketers. The importance of innovation in the marketing has also been highlighted and marketing mix have been related to innovative and creative ideas. The study is based on the secondary data, various research papers, articles has been studied to develop a innovative approach in the marketing. Marketing innovative ideas relating to business lead generation, product, price, distribution, promotion of product, and revenue generation have been highlighted in the paper. All the suggestions are theoretical and may have relevance and implication to the marketers.

  14. Success Determination by Innovation: A Theoretical Approach in Marketing

    Directory of Open Access Journals (Sweden)

    Raj Kumar Gautam

    2012-11-01

    Full Text Available The paper aims at to identify the main issues in the marketing which needs immediate attention of the marketers. The importance of innovation in the marketing has also been highlighted and marketing mix have been related to innovative and creative ideas. The study is based on the secondary data, various research papers, articles has been studied to develop a innovative approach in the marketing. Marketing innovative ideas relating to business lead generation, product, price, distribution, promotion of product, and revenue generation have been highlighted in the paper. All the suggestions are theoretical and may have relevance and implication to the marketers.

  15. A Data-Driven Reliability Estimation Approach for Phased-Mission Systems

    Directory of Open Access Journals (Sweden)

    Hua-Feng He

    2014-01-01

    Full Text Available We attempt to address the issues associated with reliability estimation for phased-mission systems (PMS and present a novel data-driven approach to achieve reliability estimation for PMS using the condition monitoring information and degradation data of such system under dynamic operating scenario. In this sense, this paper differs from the existing methods only considering the static scenario without using the real-time information, which aims to estimate the reliability for a population but not for an individual. In the presented approach, to establish a linkage between the historical data and real-time information of the individual PMS, we adopt a stochastic filtering model to model the phase duration and obtain the updated estimation of the mission time by Bayesian law at each phase. At the meanwhile, the lifetime of PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be real time obtained through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.

  16. MEDIATIC NARRATIVES AND IDENTIFICATION PROCESSES. A THEORETICAL AND METHODOLOGICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Salomé Sola Morales

    2013-04-01

    Full Text Available This article, theoretical and argumentative, lays the conceptual and methodological basis for the study of the link between identity and narrative media identification processes undertaken by individuals and groups. Thus, the setting national identifications, professional, religious or gender is here proposed as the result of the dialectic between the 'media narrative identity', which the media produce and convey, and identification processes that individuals and groups perform. Furthermore we propose the use of the biographical method as a form of empirical approach to psycho-social phenomenon

  17. A Model-Driven Approach for Hybrid Power Estimation in Embedded Systems Design

    Directory of Open Access Journals (Sweden)

    Ben Atitallah Rabie

    2011-01-01

    Full Text Available As technology scales for increased circuit density and performance, the management of power consumption in systems-on-chip (SoC) is becoming critical. Today, having appropriate electronic system level (ESL) tools for power estimation in the design flow is mandatory. The main challenge for the design of such dedicated tools is to achieve a better tradeoff between accuracy and speed. This paper presents a consumption estimation approach that takes the power criterion into account early in the design flow, during system cosimulation. The originality of the approach is that it allows power estimation for both white-box intellectual properties (IPs), using annotated power models, and black-box IPs, using standalone power estimators. In order to obtain accurate power estimates, our simulations were performed at the cycle-accurate bit-accurate (CABA) level, using SystemC. To make the approach fast and not tedious for users, the simulated architectures, including the standalone power estimators, were generated automatically using a model-driven engineering (MDE) approach. Annotated power models and standalone power estimators can be used together to estimate the consumption of the same architecture, which makes them complementary. The simulation results showed that the power estimates given by the two techniques for a hardware component are very close, with a difference that does not exceed 0.3%. This proves that, even when the IP code is not accessible or not modifiable, our approach obtains quite accurate power estimates early in the design flow, thanks to the automation offered by the MDE approach.

  18. Estimating variability in functional images using a synthetic resampling approach

    International Nuclear Information System (INIS)

    Maitra, R.; O'Sullivan, F.

    1996-01-01

    Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort of simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate the functional images; however, the suggested approach is general enough to extend to other parametric imaging methods.
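
    As a toy illustration of the synthetic resampling idea on a one-dimensional problem (the paper's covariance model and functional-imaging step are not given in this record, so the kernel, functional, and numbers below are all assumed for illustration):

```python
import numpy as np

def synthetic_resample_variance(mean_img, cov, functional, n_rep=500, seed=0):
    """Approximate the distribution of reconstructed images by a Gaussian
    random field N(mean_img, cov), draw synthetic realizations directly in
    the image domain (skipping re-reconstruction), and push each one
    through the (nonlinear) functional-imaging step."""
    rng = np.random.default_rng(seed)
    n = len(mean_img)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # factor the field once
    outputs = np.array([functional(mean_img + L @ rng.standard_normal(n))
                        for _ in range(n_rep)])
    return outputs.var(axis=0)  # pixel-wise variance of the functional image

# Toy 1-D "image": 64 pixels, exponentially decaying spatial covariance,
# and a deliberately nonlinear parametric-imaging step.
n = 64
x = np.arange(n)
mean_img = 5.0 + np.sin(x / 8.0)
cov = 0.2 * np.exp(-np.abs(x[:, None] - x[None, :]) / 4.0)
var_map = synthetic_resample_variance(mean_img, cov, lambda img: np.log1p(img**2))
print(var_map[:5])
```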

  19. THE REPURCHASE OF SHARES - ANOTHER FORM OF REWARDING INVESTORS - A THEORETICAL APPROACH

    Directory of Open Access Journals (Sweden)

    PRISACARIU Maria

    2013-06-01

    Full Text Available Among shareholder remuneration policies, share repurchases have gained more and more ground in recent years. Like any other financial phenomenon or practice, repurchases have not lacked theories to explain their motivations, effects, and controversies. This paper offers a theoretical approach to the subject by summarizing relevant research in order to highlight the motivations behind the repurchase decision and its implications.

  20. Evolution of the Theoretical Approaches to Disclosing the Economic Substance of Accumulation of Capital

    Directory of Open Access Journals (Sweden)

    Yemets Vadym V.

    2016-05-01

    Full Text Available The article proposes a classification of the periods in the evolution of theoretical approaches to disclosing the economic substance of accumulation of capital, taking into account the civilizational approach to the development of society. The author proposes five stages in this evolution, each closely related to the development of the economy and to the dominance of a certain form of accumulation of capital. The first stage (B.C. to the 5th century) is referred to as individual-social significance of accumulation of capital; the second stage (from the 6th century to the 16th century) as accumulation of monetary capital; the third stage (from the mid-17th century until the end of the 18th century) as industrial-production accumulation of capital; the fourth stage (from the mid-19th century until the 1970s) as investment-oriented accumulation of capital; and the fifth stage (from the 1970s up to the current period) as globally-intensive accumulation of capital.

  1. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives: Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods: The lowest population sensitivity that is most likely to yield a sample sensitivity of 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in StatXact, SAS, and the R PropCI package), and (4) the bootstrapping technique. Results: The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease and specificity of 60%, the lowest population sensitivity with median sample sensitivity of 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0, 0.073), StatXact (0, 0.064), SAS Score method (0, 0.057), R PropCI (0, 0.062), and bootstrap (0, 0.048). Similar trends were observed for other sample sizes. Conclusions: When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and an accompanying free open-source R package were developed to yield realistic estimates easily.
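
    The one formula implied by the numbers in the abstract is that the lowest population sensitivity whose median sample sensitivity is still 100% satisfies p^n = 0.5, i.e. p = 0.5^(1/n) (0.5^(1/100) ≈ 99.31%, matching the example above). The sketch below is a simplified percentile-bootstrap reading of that procedure in Python, not the bootLR package itself; the helper names are ours.

```python
import numpy as np

def neg_lr_ci(n_dis, n_nodis, spec_hat, n_boot=20000, alpha=0.05, seed=1):
    """Bootstrap CI for the negative likelihood ratio when observed
    sensitivity is 100%. Samples are generated at the lowest population
    sensitivity whose median sample sensitivity is 100%: p = 0.5**(1/n)."""
    rng = np.random.default_rng(seed)
    p_sens = 0.5 ** (1.0 / n_dis)                  # e.g. 0.9931 for n_dis=100
    sens = rng.binomial(n_dis, p_sens, n_boot) / n_dis
    spec = rng.binomial(n_nodis, spec_hat, n_boot) / n_nodis
    spec = np.clip(spec, 1e-12, None)              # avoid division by zero
    neg_lr = (1.0 - sens) / spec                   # LR- = (1 - sens) / spec
    return np.quantile(neg_lr, [alpha / 2, 1 - alpha / 2])

# The abstract's example: 100 diseased, 100 non-diseased, specificity 60%.
print(neg_lr_ci(n_dis=100, n_nodis=100, spec_hat=0.60))
```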

  2. Merged ontology for engineering design: Contrasting empirical and theoretical approaches to develop engineering ontologies

    DEFF Research Database (Denmark)

    Ahmed, Saeema; Storga, M

    2009-01-01

    This paper presents a comparison of two previous and separate efforts to develop an ontology in the engineering design domain, together with an ontology proposal from which ontologies for a specific application may be derived. The research contrasts an empirical, user-centered approach to developing the ontology engineering design integrated taxonomies (EDIT) with a theoretical approach in which concepts and relations are elicited from engineering design theories ontology (DO). The limitations and advantages of each approach are discussed. The research methodology adopted is to map...

  3. Estimating Potential GDP for the Romanian Economy and Assessing the Sustainability of Economic Growth: A Multivariate Filter Approach

    Directory of Open Access Journals (Sweden)

    Dan Armeanu

    2015-03-01

    Full Text Available In the current context of economic recovery and rebalancing, there is a need to model and estimate potential output and the output gap in order to assess the quality and sustainability of economic growth, the monetary and fiscal policies, and the impact of business cycles. Despite the importance of potential GDP and the output gap, they are difficult to estimate reliably, as many of the models proposed in the economic literature are calibrated for developed economies and are based on complex macroeconomic relationships and a long history of robust data, while emerging economies exhibit high volatility. The objective of this study is to develop a model to estimate potential GDP and the output gap and to assess the sustainability of projected growth using a multivariate filter approach. This trend-estimation technique is the newest approach proposed in the economic literature and has gained wide acceptance with researchers and practitioners alike, while also being used by the IMF for Romania. The paper is structured as follows. We first discuss the theoretical background of the model. The second section focuses on an analysis of the Romanian economy for the 1995–2013 time frame, while also providing a forecast for 2014–2017 and an assessment of the sustainability of Romania’s economic growth. The third section sums up the results and concludes.

  4. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not merely theoretical: it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimating the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e., the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components, and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city, whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5 these two assumptions are far from being fulfilled for many large and medium-sized cities. For such cities, urban increments largely underestimate city impacts. Although results are in better agreement for NO2, similar issues arise. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues of interpretation when these increments are used to define strategic options for air quality planning. We finally illustrate the value of comparing modelled and measured increments to improve confidence in the model results.
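
    In a notation introduced here for illustration (the abstract does not name the two extra components), the decomposition can be written as an algebraic identity:

```latex
% Notation ours, not the paper's:
%   C_u, C_r      concentrations at the urban and rural background sites
%   C_u^0, C_r^0  the same concentrations recomputed with city emissions set to zero
\begin{align*}
\underbrace{C_u - C_u^{0}}_{\text{city impact}}
  = \underbrace{(C_u - C_r)}_{\text{urban increment}}
  + \underbrace{(C_r - C_r^{0})}_{\text{city influence at rural site}}
  + \underbrace{(C_r^{0} - C_u^{0})}_{\text{background difference}}
\end{align*}
```

    The urban increment (first term) equals the city impact exactly when the last two terms vanish, which is what the two stated assumptions require.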

  5. Estimating the elasticity of trade: the trade share approach

    OpenAIRE

    Mauro Lanati

    2013-01-01

    Recent theoretical work on international trade emphasizes the importance of the trade elasticity as the fundamental statistic needed to conduct welfare analysis. Eaton and Kortum (2002) proposed a two-step method to estimate this parameter, in which exporter fixed effects are regressed on proxies for technology and wages. Within the same Ricardian model of trade, the trade share provides an alternative source of identification for the elasticity of trade. Following Santos Silva and Tenreyro (2006) bot...

  6. Topological charge on the lattice: a field theoretical view of the geometrical approach

    International Nuclear Information System (INIS)

    Rastelli, L.; Rossi, P.; Vicari, E.

    1997-01-01

    We construct sequences of "field theoretical" lattice topological charge density operators which formally approach geometrical definitions in 2D CP(N-1) models and 4D SU(N) Yang-Mills theories. The analysis of these sequences of operators suggests a new way of looking at the geometrical method, showing that geometrical charges can be interpreted as limits of sequences of field theoretical (analytical) operators. In perturbation theory, renormalization effects formally tend to vanish along such sequences. But, since the perturbative expansion is asymptotic, this does not necessarily lead to well-behaved geometrical limits. It indeed leaves open the possibility that non-perturbative renormalizations survive. (orig.)

  7. Factors determining early internationalization of entrepreneurial SMEs: Theoretical approach

    Directory of Open Access Journals (Sweden)

    Agne Matiusinaite

    2015-12-01

    Full Text Available Purpose – This study extends the scientific discussion of the early internationalization of SMEs. The main purpose of this paper is to develop a theoretical framework for investigating the factors that determine the early internationalization of international new ventures. Design/methodology/approach – The conceptual framework is built on the analysis and synthesis of the scientific literature. Findings – This paper presents the different factors that determine the early internationalization of international new ventures. These factors are divided into entrepreneurial, organizational, and contextual factors. We argue that the early internationalization of international new ventures is defined by the entrepreneurial characteristics and previous experience of the entrepreneur, opportunity recognition and exploitation, risk tolerance, the specifics of the organization, involvement in networks, and contextual factors. The study shows that only the interaction between factors and categories affects business development and the successful implementation of early internationalization. Research limitations/implications – The research was conducted on the theoretical basis of the scientific literature. Future studies could include practical confirmation or refutation of this allocation of factors. Originality/value – The originality of this study lies in the finding that a factor by itself has a limited effect on early internationalization. Only the interplay of categories and factors has a positive impact on the early internationalization of entrepreneurial SMEs.

  8. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov Chain Monte Carlo (MCMC) methods. MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation when it comes to accelerating convergence.
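
    For readers unfamiliar with the method, a minimal random-walk Metropolis-Hastings sampler looks as follows. The target posterior and step size are placeholders, and tuning `step` is precisely the proposal-calibration issue the abstract mentions.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=10000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose a Gaussian move around the
    current state, accept with probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject in log space
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy target: posterior of a mean with N(0, 10^2) prior and y ~ N(theta, 1).
y = np.array([1.2, 0.8, 1.5, 0.9])
log_post = lambda t: -0.5 * (t[0] / 10.0) ** 2 - 0.5 * np.sum((y - t[0]) ** 2)
samples = metropolis_hastings(log_post, theta0=[0.0])
print(samples[2000:].mean(), samples[2000:].std())  # discard burn-in
```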

  9. Error estimates for ice discharge calculated using the flux gate approach

    Science.gov (United States)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to obtain the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
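
    A minimal sketch of a single flux gate calculation with first-order error propagation, assuming a parabolic cross-section and uncorrelated area and velocity errors (all numbers are hypothetical; this is not the authors' code):

```python
import numpy as np

def parabolic_cross_section(width, max_depth):
    """Area of a parabolic valley cross-section: A = (2/3) * width * depth."""
    return 2.0 / 3.0 * width * max_depth

def discharge_with_error(area, v, sigma_area, sigma_v, rho_ice=917.0):
    """Ice discharge Q = rho * A * v through one gate, with uncorrelated
    error propagation sigma_Q = rho * sqrt((v*sA)^2 + (A*sv)^2)."""
    q = rho_ice * area * v
    sigma_q = rho_ice * np.hypot(v * sigma_area, area * sigma_v)
    return q, sigma_q

# Hypothetical gate: 3 km wide, 250 m deep parabola, 120 m/yr mean speed,
# 10% area error and 10 m/yr velocity error.
A = parabolic_cross_section(width=3000.0, max_depth=250.0)  # m^2
q, sq = discharge_with_error(A, v=120.0, sigma_area=0.10 * A, sigma_v=10.0)
print(f"Q = {q / 1e12:.3f} +/- {sq / 1e12:.3f} Gt/yr")
```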

  10. Theoretical and numerical investigations of TAP experiments. New approaches for variable pressure conditions

    Energy Technology Data Exchange (ETDEWEB)

    Senechal, U.; Breitkopf, C. [Technische Univ. Dresden (Germany). Inst. fuer Energietechnik

    2011-07-01

    Temporal analysis of products (TAP) is a valuable tool for the characterization of porous catalytic structures. Established TAP modeling requires a spatially constant diffusion coefficient and neglects convective flows, which is only valid in the Knudsen diffusion regime. In experiments, the number of molecules per pulse must therefore be chosen accordingly, and new approaches for variable process conditions are highly desirable. Thus, a new theoretical model is developed for estimating the number of molecules per pulse that meets these requirements under any conditions and at any time. The void volume is calculated as the biggest sphere fitting between three pellets, and the total number of pulsed molecules is assumed to fill the first void volume at the inlet immediately. Molecule numbers from these calculations can be understood as the maximum number of molecules possible at any time in the reactor while remaining in the Knudsen diffusion regime, i.e., above a Knudsen number of 2. Moreover, a new methodology for generating a full three-dimensional geometrical representation of beds is presented and used for numerical simulations to investigate spatial effects. Based on a freely available open-source game physics engine library (BULLET), beds of arbitrarily sized pellets can be generated and transformed into CFD-usable geometry. In the CFD software (ANSYS CFX) a transient diffusive transport equation with time-dependent inlet boundary conditions is solved. Three different pellet diameters were investigated with 1e18 molecules per pulse, which is higher than the limit from the theoretical calculation. Spatial and temporal distributions of the transported species show regions inside the reactor where non-Knudsen conditions exist. From these results, the distance from the inlet can be calculated beyond which the theoretical pressure limit (Knudsen number equal to 2) is met, i.e., from this point to the end of the reactor the Knudsen regime can be assumed. Due to the linear dependency of pressure and concentration (assuming ideal
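
    The record gives no formulas, but a plausible back-of-envelope version of the pulse-size limit can be assembled from the stated ingredients: the hard-sphere mean free path, the void sphere between three touching pellets, and the ideal gas law. Treat every expression and number below as an assumption for illustration, not as the authors' model.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_pressure_limit(pore_diameter, molecule_diameter, T=300.0, kn_min=2.0):
    """Highest pressure keeping Kn = lambda / d_pore >= kn_min, using the
    hard-sphere mean free path lambda = kB*T / (sqrt(2)*pi*dm^2*p)."""
    return K_B * T / (np.sqrt(2) * np.pi * molecule_diameter**2 * kn_min * pore_diameter)

def max_molecules_per_pulse(pellet_radius, molecule_diameter, T=300.0):
    """Upper bound on pulse size: fill the first inlet void (largest sphere
    between three touching pellets, r = R*(2/sqrt(3) - 1)) at the
    Knudsen-limit pressure, n = p*V / (kB*T)."""
    r_void = pellet_radius * (2.0 / np.sqrt(3.0) - 1.0)
    v_void = 4.0 / 3.0 * np.pi * r_void**3
    p_max = knudsen_pressure_limit(2.0 * r_void, molecule_diameter, T)
    return p_max * v_void / (K_B * T)

# Hypothetical numbers: 0.25 mm pellet radius, argon-sized molecules (~3.4 A).
print(f"{max_molecules_per_pulse(250e-6, 3.4e-10):.2e} molecules")
```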

  11. Principal component approach in variance component estimation for international sire evaluation

    Directory of Open Access Journals (Sweden)

    Jakobsen Jette

    2011-05-01

    Full Text Available Background: The dairy cattle breeding industry is a highly globalized business, which needs internationally comparable and reliable breeding values for sires. The international Bull Evaluation Service, Interbull, was established in 1983 to respond to this need. Currently, Interbull performs multiple-trait across country evaluations (MACE) for several traits and breeds in dairy cattle and provides international breeding values to its member countries. Estimating parameters for MACE is challenging, since the structure of the datasets and the conventional use of multiple-trait models easily result in over-parameterized genetic covariance matrices. The number of parameters to be estimated can be reduced by taking into account only the leading principal components of the traits considered. For MACE, this is readily implemented in a random regression model. Methods: This article compares two principal component approaches to estimating variance components for MACE using real datasets. The methods tested were a REML approach that directly estimates the genetic principal components (direct PC) and the so-called bottom-up REML approach (bottom-up PC), in which traits are sequentially added to the analysis and the statistically significant genetic principal components are retained. Furthermore, this article evaluates the utility of the bottom-up PC approach for determining the appropriate rank of the (co)variance matrix. Results: Our study demonstrates the usefulness of both approaches and shows that they can be applied to large multi-country models considering all concerned countries simultaneously. These strategies can thus replace the current practice of estimating the covariance components required through a series of analyses involving selected subsets of traits. Our results support the importance of using the appropriate rank in the genetic (co)variance matrix. Using too low a rank resulted in biased parameter estimates, whereas too high a rank did not result in

  12. Theoretical Approaches to Lignin Chemistry

    OpenAIRE

    Shevchenko, Sergey M.

    1994-01-01

    A critical review is presented of the applications of theoretical methods to the studies of the structure and chemical reactivity of lignin, including simulation of macromolecular properties, conformational calculations, quantum chemical analyses of electronic structure, spectra and chemical reactivity. Modern concepts of spatial organization and chemical reactivity of lignins are discussed.

  13. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.
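
    A minimal sketch of such a mixture estimator, assuming an Emax-type parametric model, a Nadaraya-Watson kernel smoother as the nonparametric part, and a simple residual-based weight (the paper's actual weighting scheme is more principled than this; all names and numbers are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def semiparametric_curve(dose, resp, grid, bandwidth=0.5):
    """Mixture estimator: weighted average of a parametric (Emax-type) fit
    and a nonparametric (kernel regression) fit, with more weight on the
    component that fits the data better."""
    emax = lambda d, e0, em, ed50: e0 + em * d / (ed50 + d)  # parametric model
    par, _ = curve_fit(emax, dose, resp,
                       p0=[resp.min(), np.ptp(resp), np.median(dose)],
                       maxfev=10000)
    f_par = lambda d: emax(d, *par)

    def f_np(d):  # Nadaraya-Watson kernel regression
        w = np.exp(-0.5 * ((d[:, None] - dose[None, :]) / bandwidth) ** 2)
        return (w * resp).sum(axis=1) / w.sum(axis=1)

    # Better in-sample fit (smaller RSS) -> larger weight on that component.
    rss_par = np.sum((resp - f_par(dose)) ** 2)
    rss_np = np.sum((resp - f_np(dose)) ** 2)
    lam = rss_np / (rss_np + rss_par)  # weight on the parametric part
    return lam * f_par(grid) + (1 - lam) * f_np(grid)

rng = np.random.default_rng(2)
dose = np.sort(rng.uniform(0, 10, 80))
resp = 1 + 3 * dose / (2 + dose) + rng.normal(0, 0.3, 80)
print(semiparametric_curve(dose, resp, np.linspace(0, 10, 5)))
```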

  14. Bioinspired Computational Approach to Missing Value Estimation

    Directory of Open Access Journals (Sweden)

    Israel Edem Agbehadji

    2018-01-01

    Full Text Available Missing data occur when values of variables in a dataset are not stored. Estimating these missing values is a significant step during the data cleansing phase of a big data management approach. Missing data may be due to nonresponse or omitted entries, and if they are not handled properly, they may produce inaccurate results during data analysis. Although traditional methods such as the maximum likelihood method can extrapolate missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel bird. The paper describes the behavior and characteristics of the Kestrel bird and uses this bioinspired approach to model an algorithm for estimating missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms. The results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to assess the performance of the algorithms. The results of the Wilcoxon test indicate that time does not have a significant effect on performance and that the quality of estimation between the paired algorithms differed significantly; the results of the Friedman test ranked KSA as the best evolutionary algorithm.

  15. Recent Theoretical Approaches to Minimal Artificial Cells

    Directory of Open Access Journals (Sweden)

    Fabio Mavelli

    2014-05-01

    Full Text Available Minimal artificial cells (MACs) are self-assembled chemical systems able to mimic the behavior of living cells at a minimal level, i.e., to exhibit self-maintenance, self-reproduction, and the capability of evolution. The bottom-up approach to the construction of MACs is mainly based on the encapsulation of chemically reacting systems inside lipid vesicles, i.e., chemical systems enclosed (compartmentalized) by a double-layered lipid membrane. Several researchers are currently interested in synthesizing such simple cellular models for biotechnological purposes or for investigating origin-of-life scenarios. Within this context, the properties of lipid vesicles (e.g., their stability, permeability, growth dynamics, and potential to host reactions or undergo division processes) play a central role, in combination with the dynamics of the encapsulated chemical or biochemical networks. Thus, from a theoretical standpoint, it is very important to develop kinetic equations in order to explore first, and specify later, the conditions that allow the robust implementation of these complex chemically reacting systems, as well as their controlled reproduction. Because they are compartmentalized in small volumes, the populations of reacting molecules can be very low in terms of the number of molecules, and their behavior therefore becomes highly affected by stochastic effects, both in the time course of reactions and in the occupancy distribution among the vesicle population. In this short review we report our mathematical approaches to modeling artificial cell systems in this complex scenario by giving a summary of three recent simulation studies on the topic of primitive cell (protocell) systems.

  16. Field theoretical approach to proton-nucleus reactions: II-Multiple-step excitation process

    International Nuclear Information System (INIS)

    Eiras, A.; Kodama, T.; Nemes, M.

    1989-01-01

    A field theoretical formulation of the multiple-step excitation process in proton-nucleus collisions is presented within the context of a relativistic eikonal approach. A closed-form expression for the double differential cross section is obtained, whose structure is very simple and makes the physics transparent. Glauber's formulation of the same process is obtained as a limit of ours, and the necessary approximations are studied and discussed. (author) [pt]

  17. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    Science.gov (United States)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. With multiple simulations, the dispersion variances of blocks can be thought to capture technical uncertainties. However, the dispersion variance cannot handle the uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the drilling configuration that minimizes grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of the blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations, with minimization of interpolation variance, and drill hole simulations, with maximization of interpolation variance. The two spaces interact to find a minmax solution.

  18. Annual Gross Primary Production from Vegetation Indices: A Theoretically Sound Approach

    Directory of Open Access Journals (Sweden)

    María Amparo Gilabert

    2017-02-01

    Full Text Available A linear relationship between the annual gross primary production (GPP) and a PAR-weighted vegetation index is theoretically derived from the Monteith equation. A semi-empirical model is then proposed to estimate the annual GPP from commonly available vegetation index images and a representative PAR, which does not require actual meteorological data. A cross-validation procedure is used to calibrate and validate the model predictions against reference data. As the calibration/validation process depends on the reference GPP product, the higher the quality of the reference GPP, the better the performance of the semi-empirical model. The annual GPP has been estimated at 1-km scale from MODIS NDVI and EVI images for eight years. Two reference datasets have been used: an optimized GPP product for the study area obtained previously, and the MOD17A3 product. Different statistics show a good agreement between the estimates and the reference GPP data, with a correlation coefficient around 0.9 and a relative RMSE around 20%. The annual GPP is overestimated in semiarid areas and slightly underestimated in dense forest areas. With the above limitations, the model provides an excellent compromise between simplicity and accuracy for the calculation of long time series of annual GPP.
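
    The derivation the abstract alludes to is short. Writing the Monteith equation with a linear VI-fAPAR relation (notation ours, not the authors'):

```latex
% Monteith light-use-efficiency model summed over a year.
%   GPP_d: daily GPP, eps: light-use efficiency, fAPAR: fraction of absorbed PAR
\begin{align*}
\mathrm{GPP}_d &= \varepsilon \, f_{\mathrm{APAR}}(d) \, \mathrm{PAR}(d),
  \qquad f_{\mathrm{APAR}}(d) \approx a\,\mathrm{VI}(d) + b,\\
\mathrm{GPP}_{\mathrm{annual}} &= \sum_{d} \mathrm{GPP}_d
  \approx \varepsilon a \underbrace{\sum_{d} \mathrm{VI}(d)\,\mathrm{PAR}(d)}_{\text{PAR-weighted VI}}
  + \varepsilon b \sum_{d} \mathrm{PAR}(d).
\end{align*}
```

    The annual sum is thus linear in the PAR-weighted vegetation index, with the second term absorbed into the regression intercept when a representative PAR is used.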

  19. Model-Assisted Estimation of Tropical Forest Biomass Change: A Comparison of Approaches

    Directory of Open Access Journals (Sweden)

    Nikolai Knapp

    2018-05-01

    Full Text Available Monitoring of changes in forest biomass requires accurate transfer functions between remote sensing-derived changes in canopy height (ΔH) and the actual changes in aboveground biomass (ΔAGB). Different approaches can be used to accomplish this task: direct approaches link ΔH directly to ΔAGB, while indirect approaches derive AGB stock estimates for two points in time and calculate the difference. In some studies, direct approaches led to more accurate estimations, while in others, indirect approaches did. It is unknown how each approach performs under different conditions and over the full range of possible changes. Here, we used a forest model (FORMIND) to generate a large dataset (>28,000 ha) of natural and disturbed forest stands over time. Remote sensing of forest height was simulated on these stands to derive canopy height models for each time step. Three approaches for estimating ΔAGB were compared: (i) the direct approach; (ii) the indirect approach; and (iii) an enhanced direct approach (dir+tex), using ΔH in combination with canopy texture. Total prediction accuracies of the three approaches measured as root mean squared errors (RMSE) were RMSEdirect = 18.7 t ha−1, RMSEindirect = 12.6 t ha−1 and RMSEdir+tex = 12.4 t ha−1. Further analyses revealed height-dependent biases in the ΔAGB estimates of the direct approach, which did not occur with the other approaches. Finally, the three approaches were applied to radar-derived (TanDEM-X) canopy height changes on Barro Colorado Island (Panama). The study demonstrates the potential of forest modeling for improving the interpretation of changes observed in remote sensing data and for comparing different methodologies.
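
    The contrast between the two approaches can be made concrete with a toy height-biomass allometry; the coefficients below are placeholders, not FORMIND's or the study's.

```python
import numpy as np

def agb_from_height(h, a=0.08, b=1.6):
    """Stand-level height-to-biomass allometry AGB = a * H^b (t/ha);
    purely illustrative coefficients."""
    return a * np.power(h, b)

def delta_agb_indirect(h1, h2):
    """Indirect approach: estimate stocks at both dates, then difference."""
    return agb_from_height(h2) - agb_from_height(h1)

def delta_agb_direct(h1, h2, c=3.0):
    """Direct approach: regress biomass change on height change alone;
    a linear transfer function ΔAGB = c * ΔH is the simplest case."""
    return c * (h2 - h1)

h1, h2 = 30.0, 25.0  # canopy height drops 5 m (e.g., after disturbance)
print(delta_agb_indirect(h1, h2), delta_agb_direct(h1, h2))
```

    Because the allometry is nonlinear, the indirect estimate of a 5 m height loss depends on the starting height, whereas the purely ΔH-based direct estimate does not, which is one way to read the height-dependent bias reported for the direct approach.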

  20. A Game-theoretical Approach for Distributed Cooperative Control of Autonomous Underwater Vehicles

    KAUST Repository

    Lu, Yimeng

    2018-05-01

    This thesis explores a game-theoretical approach for underwater environmental monitoring applications. We first apply a game-theoretical algorithm to the multi-agent resource coverage problem in drifting environments. Furthermore, the existing utility design and learning process of the algorithm are modified to fit the specific constraints of underwater exploration/monitoring tasks. The revised approach can take into account the real conditions of underwater monitoring applications, such as the effect of sea currents, previous knowledge of the resource, and occasional communications between agents, and adapt to them to reach better performance. As the motivation of this thesis comes from real applications, we place strong emphasis on the implementation phase. A ROS-Gazebo simulation environment was created in preparation for actual tests, and the algorithms were implemented simulating both the dynamics of the vehicles and the environment. After that, a multi-agent underwater autonomous robotic system was developed for hardware tests in real settings, with local controllers making their own decisions. These systems are used for testing the above-mentioned algorithms and for future development of other underwater projects. Finally, other robotics work carried out during this thesis is briefly mentioned, including contributions to the MBZIRC robotics competition and distributed control of UAVs in an adversarial environment.

  1. An estimation of crude oil import demand in Turkey: Evidence from time-varying parameters approach

    International Nuclear Information System (INIS)

    Ozturk, Ilhan; Arisoy, Ibrahim

    2016-01-01

    The aim of this study is to model crude oil import demand and estimate the price and income elasticities of imported crude oil in Turkey based on a time-varying parameters (TVP) approach, with the aim of obtaining accurate and more robust estimates of the price and income elasticities. The study employs annual time series data on domestic oil consumption, real GDP, and the oil price for the period 1966–2012. The empirical results indicate that both the income and price elasticities are in line with theoretical expectations. However, the income elasticity is statistically significant while the price elasticity is statistically insignificant. The relatively high value of the income elasticity (1.182) suggests that crude oil imports in Turkey are highly responsive to changes in the income level. This result indicates that imported crude oil is a normal good and that rising income levels will foster higher consumption of oil-based equipment, vehicles, and services by economic agents. The estimated income elasticity of 1.182 implies that imported crude oil consumption grows at a higher rate than income, which in turn reduces oil intensity over time. Crude oil imports during the estimation period were therefore substantially driven by income. - Highlights: • We estimated the price and income elasticities of imported crude oil in Turkey. • The income elasticity is statistically significant at 1.182. • The price elasticity is statistically insignificant. • Crude oil imports in Turkey are more responsive to changes in income level. • Crude oil imports during the estimation period were substantially driven by income.
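
    A TVP regression of this kind is typically estimated with a Kalman filter over random-walk coefficients. The sketch below is a generic filter of that type on simulated data, not the authors' specification; the initialization, variances, and series are all placeholders.

```python
import numpy as np

def tvp_regression(y, X, state_var=1e-4, obs_var=1e-2):
    """Kalman filter for a regression with random-walk coefficients:
        y_t = x_t' beta_t + e_t,   beta_t = beta_{t-1} + w_t.
    Returns the filtered coefficient paths (the time-varying elasticities
    when y and the regressors are in logs)."""
    n, k = X.shape
    beta = np.zeros(k)            # diffuse-ish initialization
    P = np.eye(k) * 10.0
    Q = np.eye(k) * state_var
    path = np.empty((n, k))
    for t in range(n):
        P = P + Q                 # predict step (random-walk state)
        x = X[t]
        S = x @ P @ x + obs_var   # innovation variance
        K = P @ x / S             # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x @ P)
        path[t] = beta
    return path

# Toy demand system: log imports on log income and log price (simulated,
# 47 annual observations to mirror 1966-2012).
rng = np.random.default_rng(3)
T = 47
ln_gdp = np.cumsum(rng.normal(0.03, 0.02, T))
ln_price = np.cumsum(rng.normal(0.0, 0.1, T))
X = np.column_stack([np.ones(T), ln_gdp, ln_price])
y = 0.5 + 1.2 * ln_gdp - 0.1 * ln_price + rng.normal(0, 0.05, T)
print(tvp_regression(y, X)[-1])   # final [intercept, income, price] estimates
```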

  2. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.

  3. A bias correction for covariance estimators to improve inference with generalized estimating equations that use an unstructured correlation matrix.

    Science.gov (United States)

    Westgate, Philip M

    2013-07-20

    Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference. Copyright © 2012 John Wiley & Sons, Ltd.

  4. The decays Psub(c) -> VP in the group theoretical and quark diagrammatic approaches

    International Nuclear Information System (INIS)

    Tuan, S.F.; Xiaoyuan Li.

    1983-08-01

    Decays of charmed mesons into one vector meson and one pseudoscalar meson, Psub(c) -> VP, are considered in both the group theoretical and quark diagrammatic approaches. A complete decay amplitude analysis is given. The presently available experimental data can be accommodated if the contributions from exotic final states and the exotic piece of the weak Hamiltonian are also taken into account. (orig.)

  5. Passing Decisions in Football: Introducing an Empirical Approach to Estimating the Effects of Perceptual Information and Associative Knowledge.

    Science.gov (United States)

    Steiner, Silvan

    2018-01-01

    The importance of various information sources in decision-making in interactive team sports is debated. While some highlight the role of the perceptual information provided by the current game context, others point to the role of knowledge-based information that athletes have regarding their team environment. Recently, an integrative perspective considering the simultaneous involvement of both of these information sources in decision-making in interactive team sports has been presented. In a theoretical example concerning passing decisions, the simultaneous involvement of perceptual and knowledge-based information has been illustrated. However, no precast method of determining the contribution of these two information sources empirically has been provided. The aim of this article is to bridge this gap and present a statistical approach to estimating the effects of perceptual information and associative knowledge on passing decisions. To this end, a sample dataset of scenario-based passing decisions is analyzed. This article shows how the effects of perceivable team positionings and athletes' knowledge about their fellow team members on passing decisions can be estimated. Ways of transferring this approach to real-world situations and implications for future research using more representative designs are presented.

  6. Passing Decisions in Football: Introducing an Empirical Approach to Estimating the Effects of Perceptual Information and Associative Knowledge

    Directory of Open Access Journals (Sweden)

    Silvan Steiner

    2018-03-01

    Full Text Available The importance of various information sources in decision-making in interactive team sports is debated. While some highlight the role of the perceptual information provided by the current game context, others point to the role of knowledge-based information that athletes have regarding their team environment. Recently, an integrative perspective considering the simultaneous involvement of both of these information sources in decision-making in interactive team sports has been presented. In a theoretical example concerning passing decisions, the simultaneous involvement of perceptual and knowledge-based information has been illustrated. However, no precast method of determining the contribution of these two information sources empirically has been provided. The aim of this article is to bridge this gap and present a statistical approach to estimating the effects of perceptual information and associative knowledge on passing decisions. To this end, a sample dataset of scenario-based passing decisions is analyzed. This article shows how the effects of perceivable team positionings and athletes' knowledge about their fellow team members on passing decisions can be estimated. Ways of transferring this approach to real-world situations and implications for future research using more representative designs are presented.

  7. Theoretical approach to the destruction or sterilization of drugs in aqueous solution

    International Nuclear Information System (INIS)

    Slegers, Catherine; Tilquin, Bernard

    2005-01-01

    Two novel applications in the radiation processing of aqueous solutions of drugs are the sterilization of injectable drugs and the decontamination of hospital wastewaters by ionizing radiation. The parameters influencing the destruction of the drug in aqueous solutions are studied with a computer simulation program. This theoretical approach has revealed that the dose rate is the most important parameter that can be easily varied in order to optimize the destruction or the protection of the drug

  8. Comparative study of approaches to estimate pipe break frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K.; Pulkkinen, U.; Talja, H.; Saarenheimo, A.; Karjalainen-Roikonen, P. [VTT Industrial Systems (Finland)

    2002-12-01

    The report describes a comparative study of two approaches to estimating pipe leak and rupture frequencies. One method is based on a probabilistic fracture mechanics (PFM) model, while the other is based on statistical estimation of rupture frequencies from a large database. In order to compare the approaches and their results, the rupture frequencies of some selected welds were estimated using both methods. This paper highlights the differences in the methods, the input data, the need for and use of plant-specific information, and the need for expert judgement. The study focuses on one specific degradation mechanism, namely intergranular stress corrosion cracking (IGSCC). This is the major degradation mechanism in old stainless steel piping in a BWR environment, and its growth is influenced by material properties, stresses, and water chemistry. (au)

  9. Towards a capability approach to child growth: A theoretical framework.

    Science.gov (United States)

    Haisma, Hinke; Yousefzadeh, Sepideh; Boele Van Hensbroek, Pieter

    2018-04-01

    Child malnutrition is an important cause of under-5 mortality and morbidity around the globe. Despite the partial success of (inter)national efforts to reduce child mortality, under-5 mortality rates continue to be high. The multidimensional approaches of the Sustainable Development Goals may suggest new directions for rethinking strategies for reducing child mortality and malnutrition. We propose a theoretical framework for developing a "capability" approach to child growth. The current child growth monitoring practices are based on 2 assumptions: (a) that anthropometric and motor development measures are the appropriate indicators; and (b) that child growth can be assessed using a single universal standard that is applicable around the world. These practices may be further advanced by applying a capability approach to child growth, whereby growth is redefined as the achievement of certain capabilities (of society, parents, and children). This framework is similar to the multidimensional approach to societal development presented in the seminal work of Amartya Sen. To identify the dimensions of healthy child growth, we draw upon theories from the social sciences and evolutionary biology. Conceptually, we consider growth as a plural space and propose assessing growth by means of a child growth matrix in which the context is embedded in the assessment. This approach will better address the diversities and the inequalities in child growth. Such a multidimensional measure will have implications for interventions and policy, including prevention and counselling, and could have an impact on child malnutrition and mortality. © 2017 The Authors. Maternal and Child Nutrition Published by John Wiley & Sons, Ltd.

  10. Migration of antioxidants from polylactic acid films: A parameter estimation approach and an overview of the current mass transfer models.

    Science.gov (United States)

    Samsudin, Hayati; Auras, Rafael; Mishra, Dharmendra; Dolan, Kirk; Burgess, Gary; Rubino, Maria; Selke, Susan; Soto-Valdez, Herlinda

    2018-01-01

    Migration studies of chemicals from contact materials have been widely conducted because of their importance in determining the safety and shelf life of a food product in its package. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) require this safety assessment for food contact materials. Migration experiments are therefore theoretically designed and experimentally conducted to obtain data that can be used to assess the kinetics of chemical release. In this work, a parameter estimation approach was used to review and determine the mass transfer partition and diffusion coefficients governing the migration of eight antioxidants from poly(lactic acid) (PLA)-based films into water/ethanol solutions at temperatures between 20 and 50 °C. Scaled sensitivity coefficients were calculated to assess the simultaneous estimation of several mass transfer parameters. An optimal experimental design approach was performed to show the importance of properly designing a migration experiment. The additional parameters also provide better insights into the migration of the antioxidants. For example, the partition coefficients could be better estimated using data from the early part of the experiment instead of the end, so experiments could be conducted for shorter periods of time, saving time and resources. Diffusion coefficients of the eight antioxidants from PLA films were between 0.2 × 10⁻¹⁴ and 19 × 10⁻¹⁴ m²/s at ~40 °C. The use of a parameter estimation approach provided additional and useful insights into the migration of antioxidants from PLA films. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Information-Theoretic Approaches for Evaluating Complex Adaptive Social Simulation Systems

    Energy Technology Data Exchange (ETDEWEB)

    Omitaomu, Olufemi A [ORNL; Ganguly, Auroop R [ORNL; Jiao, Yu [ORNL

    2009-01-01

    In this paper, we propose information-theoretic approaches for comparing and evaluating complex agent-based models. In information-theoretic terms, entropy and mutual information are two measures of system complexity. We used entropy as a measure of the regularity of the number of agents in a social class, and mutual information as a measure of the information shared by two social classes. Using our approaches, we compared two analogous agent-based (AB) models developed for a regional-scale social-simulation system. The first AB model, ABM-1, is a complex AB model built with 10,000 agents in a desktop environment using aggregate data; the second, ABM-2, was built with 31 million agents on a high-performance computing framework located at Oak Ridge National Laboratory, using fine-resolution data from the LandScan Global Population Database. The initializations were slightly different, with ABM-1 using samples from a probability distribution and ABM-2 using polling data from Gallup for a deterministic initialization. The geographical and temporal domain was present-day Afghanistan, and the end result was the number of agents with one of three behavioral modes (pro-insurgent, neutral, and pro-government) corresponding to the population mindshare. The theories embedded in each model were identical, and the test simulations focused on tests of three leadership theories - legitimacy, coercion, and representative - and two social mobilization theories - social influence and repression. The theories are tied together using the Cobb-Douglas utility function. Based on our results, the hypothesis that performance measures can be developed to compare and contrast AB models appears to be supported. Furthermore, we observed significant bias in the two models. Even so, further tests and investigations are required, not only with a wider class of theories and AB models, but also with additional observed or simulated data and more comprehensive performance measures.

  12. NON-TERRITORIAL AUTONOMY IN RUSSIA: PRACTICAL IMPLICATIONS OF THEORETICAL APPROACHES

    Directory of Open Access Journals (Sweden)

    Tatiana RUDNEVA

    2012-06-01

    Full Text Available Despite the theoretical possibility of using non-territorial autonomy (NTA) as a mechanism through which ethnic groups can fulfil their right to self-determination along with other minority rights, not many states have been willing to put the theory into practice. The article offers an explanation of why wider applicability of NTA is problematic by arguing that the theory itself is not yet polished enough to be implemented. The study includes an examination of both theoretical approaches and empirical data from a case study of an attempt to establish NTAs in the Russian Federation. The findings suggest that inconsistencies and ambiguities in the theory correlate with practical flaws of NTAs, which suggests that when the theory is tested empirically, reality reveals the flaws of the theory. The results indicate that the concept of NTA needs further refinement and development to make it more practice-oriented and applicable. As the problem of minority rights still has to be dealt with, we also propose a model of a global union of NTAs in which each ethnic group is represented by a non-governmental organisation, which seems more applicable than the alternatives, alongside a number of other mechanisms that are even more essential and universal and focus on defending basic human rights.

  13. Investigations on Actuator Dynamics through Theoretical and Finite Element Approach

    Directory of Open Access Journals (Sweden)

    Somashekhar S. Hiremath

    2010-01-01

    Full Text Available This paper gives a new approach for modeling the fluid-structure interaction of the servovalve-actuator system. The analyzed valve is a precision flow control valve: a jet-pipe electrohydraulic servovalve. The positioning of the actuator depends upon the flow rate from the control ports, which in turn depends on the spool position. Theoretical investigations are made for the no-load and load conditions of the actuator, and these are used in the finite element modeling of the actuator. The fluid-structure interaction (FSI) is established between the piston and the fluid cavities at the piston end. The fluid cavities were modeled with special-purpose hydrostatic fluid elements, while the piston was modeled with brick elements. The finite element method is used to simulate the variation of cavity pressure, cavity volume, mass flow rate, and actuator velocity. The finite element analysis is extended to study the system's linearized response to harmonic excitation using direct-solution steady-state dynamics. It was observed from the analysis that the natural frequency of the actuator depends upon the position of the piston in the cylinder, in close agreement between the theoretical and simulation results. The effect of bulk modulus is also presented in the paper.

  14. Theoretical approach in optimization of stability of the multicomponent solid waste form

    International Nuclear Information System (INIS)

    Raicevic, S.; Plecas, I.; Mandic, M.

    1998-01-01

    Chemical precipitation of radionuclides and their immobilization in a solid matrix represent an important approach in radioactive wastewater treatment. Unfortunately, because of the complexity of the system, optimization of this process in terms of efficacy and safety poses a serious practical problem, even in the treatment of monocomponent nuclear waste. The situation is further complicated in the case of polycomponent nuclear waste because of synergistic interactions between the radioactive components and the solid matrix. Recently, we proposed a general theoretical approach for optimizing the process of precipitation and immobilization of metal impurities by a solid matrix. One of the main advantages of this approach is the possibility of treating multicomponent liquid waste immobilized by the solid matrix. This approach is used here to investigate the stability of the hydroxyapatite (HAP) - Pb/Cd system, which was selected as a model multicomponent waste system. In this analysis, we used the structurally dependent term of the cohesive energy as the stability criterion. (author)

  15. A Game Theoretical Approach to Hacktivism: Is Attack Likelihood a Product of Risks and Payoffs?

    Science.gov (United States)

    Bodford, Jessica E; Kwan, Virginia S Y

    2018-02-01

    The current study examines hacktivism (i.e., hacking to convey a moral, ethical, or social justice message) through a general game-theoretic framework, that is, as a product of costs and benefits. Given the inherent risk of carrying out a hacktivist attack (e.g., legal action, imprisonment), it would be rational for the user to weigh these risks against the perceived benefits of carrying out the attack. As such, we examined computer science students' estimations of risks, payoffs, and attack likelihood through a game-theoretic design. Furthermore, the study constructs a descriptive profile of potential hacktivists, exploring two predicted covariates of attack decision making, namely the peer prevalence of hacking and sex differences. Contrary to expectations, the results suggest that participants' estimations of attack likelihood stemmed solely from expected payoffs, rather than from subjective risks. Peer prevalence significantly predicted increased payoffs and attack likelihood, suggesting an underlying descriptive norm in social networks. Notably, we observed no sex differences in the decision to attack, nor in the factors predicting attack likelihood. Implications for policymakers and for the understanding and prevention of hacktivism are discussed, as are the possible ramifications of widely communicated payoffs over potential risks in hacking communities.

  16. A Latent Class Approach to Estimating Test-Score Reliability

    Science.gov (United States)

    van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas

    2011-01-01

    This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…

  17. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    Science.gov (United States)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost effective design values. However, previous cost-benefit based extreme flood estimation is based on stationary assumptions and analyze dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore time changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54-year observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between maximum probability of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.

  18. An integrated theoretical and practical approach for teaching hydrogeology

    Science.gov (United States)

    Bonomi, Tullia; Fumagalli, Letizia; Cavallin, Angelo

    2013-04-01

    Hydrogeology as an earth science intersects the broader disciplines of geology, engineering, and environmental studies but it does not overlap fully with any of them. It is focused on its own range of problems and over time has developed a rich variety of methods and approaches. The resolution of many hydrogeological problems requires knowledge of elements of geology, hydraulics, physics and chemistry; moreover in recent years the knowledge of modelling techniques has become a necessary ability. Successful transfer of all this knowledge to the students depends on the breadth of material taught in courses, the natural skills of the students and any practical experience the students can obtain. In the Department of Earth and Environmental Sciences of the University of Milano-Bicocca, the teaching of hydrogeology is developed in three inter-related courses: 1) general hydrogeology, 2) applied hydrogeology, 3) groundwater pollution and remediation. The sequence focuses on both groundwater flux and contaminant transport, supplemented by workshops involving case studies and computer labs, which provide the students with practical translation of the theoretical aspects of the science into the world of work. A second key aspect of the program utilizes the students' skill at learning through online approaches, and this is done through three approaches: A) by developing the courses on a University e-learning platform that allows the students to download lectures, articles, and teacher comments, and to participate in online forums; B) by carring out exercises through computer labs where the student analyze and process hydrogeological data by means of different numerical codes, that in turn enable them to manage databases and to perform aquifer test analysis, geostatistical analysis, and flux and transport modelling both in the unsaturated and saturated zone. These exercises are of course preceded by theoretical lectures on codes and software, highlighting their features and

  19. Theoretical estimation of Photons flow rate Production in quark gluon interaction at high energies

    Science.gov (United States)

    Al-Agealy, Hadi J. M.; Hamza Hussein, Hyder; Mustafa Hussein, Saba

    2018-05-01

    photons emitted from higher energetic collisions in quark-gluon system have been theoretical studied depending on color quantum theory. A simple model for photons emission at quark-gluon system have been investigated. In this model, we use a quantum consideration which enhances to describing the quark system. The photons current rate are estimation for two system at different fugacity coefficient. We discussion the behavior of photons rate and quark gluon system properties in different photons energies with Boltzmann model. The photons rate depending on anisotropic coefficient : strong constant, photons energy, color number, fugacity parameter, thermal energy and critical energy of system are also discussed.

  20. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment.

    Science.gov (United States)

    Makowska, Joanna; Bagiñska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyñski, Lech; Scheraga, Harold A

    2006-03-09

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assumming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences

  1. A short course in quantum information theory an approach from theoretical physics

    CERN Document Server

    Diosi, Lajos

    2011-01-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. For the second edition, the authors has succeeded in adding many new topics while sticking to the conciseness of the overall approach. A new chapter on qubit thermodynamics has been added, while new sections and subsections have been incorporated in various chapter to deal with weak and time-continuous measurements, period-finding quantum algorithms and quantum error corrections. From the reviews of the first edition...

  2. A variational approach to parameter estimation in ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Kaschek Daniel

    2012-08-01

    Full Text Available Abstract Background Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. Conclusions The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  3. A variational approach to parameter estimation in ordinary differential equations.

    Science.gov (United States)

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  4. A theoretical approach to medication adherence for children and youth with psychiatric disorders.

    Science.gov (United States)

    Charach, Alice; Volpe, Tiziana; Boydell, Katherine M; Gearing, Robin E

    2008-01-01

    This article provides a theoretical review of treatment adherence for children and youth with psychiatric disorders where pharmacological agents are first-line interventions. Four empirically based models of health behavior are reviewed and applied to the sparse literature about medication adherence for children with attention-deficit/hyperactivity disorder and young people with first-episode psychosis. Three qualitative studies of medication use are summarized, and details from the first-person narratives are used to illustrate the theoretical models. These studies indicate, when taken together, that the clinical approach to addressing poor medication adherence in children and youth with psychiatric disorders should be guided by more than one theoretical model. Mental health experts should clarify beliefs, address misconceptions, and support exploration of alternative treatment options unless contraindicated. Recognizing the larger context of the family, allowing time for parents and children to change their attitudes, and offering opportunities for easy access to medication in the future are important ways of respecting patient preferences, while steering them toward best-evidence interventions. Future research using qualitative methods of inquiry to investigate parent, child, and youth experiences of mental health interventions should identify effective ways to improve treatment adherence.

  5. A review of the nurtured heart approach to parenting: evaluation of its theoretical and empirical foundations.

    Science.gov (United States)

    Hektner, Joel M; Brennan, Alison L; Brotherson, Sean E

    2013-09-01

    The Nurtured Heart Approach to parenting (NHA; Glasser & Easley, 2008) is summarized and evaluated in terms of its alignment with current theoretical perspectives and empirical evidence in family studies and developmental science. Originally conceived and promoted as a behavior management approach for parents of difficult children (i.e., with behavior disorders), NHA is increasingly offered as a valuable strategy for parents of any children, despite a lack of published empirical support. Parents using NHA are trained to minimize attention to undesired behaviors, provide positive attention and praise for compliance with rules, help children be successful by scaffolding and shaping desired behavior, and establish a set of clear rules and consequences. Many elements of the approach have strong support in the theoretical and empirical literature; however, some of the assumptions are more questionable, such as that negative child behavior can always be attributed to unintentional positive reinforcement by parents responding with negative attention. On balance, NHA appears to promote effective and validated parenting practices, but its effectiveness now needs to be tested empirically. © FPI, Inc.

  6. Child Language Acquisition: Contrasting Theoretical Approaches

    Science.gov (United States)

    Ambridge, Ben; Lieven, Elena V. M.

    2011-01-01

    Is children's language acquisition based on innate linguistic structures or built from cognitive and communicative skills? This book summarises the major theoretical debates in all of the core domains of child language acquisition research (phonology, word-learning, inflectional morphology, syntax and binding) and includes a complete introduction…

  7. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

    Science.gov (United States)

    Zorzetto, E.; Marani, M.

    2017-12-01

    The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a samples of annual maxima (AM) or with a peaks over threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remote-sensed rainfall datasets. We use here, and tailor it to remotely-sensed rainfall estimates, an alternative approach, based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured `ordinary' rainfall event. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall, while taking full advantage of its high spatial resolution and quasi-global coverage. Accuracy of TRMM precipitation estimates and scale issues are here investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauges networks.

  8. Optimization of rootkit revealing system resources – A game theoretic approach

    Directory of Open Access Journals (Sweden)

    K. Muthumanickam

    2015-10-01

    Full Text Available Malicious rootkit is a collection of programs designed with the intent of infecting and monitoring the victim computer without the user’s permission. After the victim has been compromised, the remote attacker can easily cause further damage. In order to infect, compromise and monitor, rootkits adopt Native Application Programming Interface (API hooking technique. To reveal the hidden rootkits, current rootkit detection techniques check different data structures which hold reference to Native APIs. To verify these data structures, a large amount of system resources are required. This is because of the number of APIs in these data structures being quite large. Game theoretic approach is a useful mathematical tool to simulate network attacks. In this paper, a mathematical model is framed to optimize resource consumption using game-theory. To the best of our knowledge, this is the first work to be proposed for optimizing resource consumption while revealing rootkit presence using game theory. Non-cooperative game model is taken to discuss the problem. Analysis and simulation results show that our game theoretic model can effectively reduce the resource consumption by selectively monitoring the number of APIs in windows platform.

  9. A theoretical approach to photosynthetically active radiation silicon sensor

    International Nuclear Information System (INIS)

    Tamasi, M.J.L.; Martínez Bogado, M.G.

    2013-01-01

    This paper presents a theoretical approach for the development of low cost radiometers to measure photosynthetically active radiation (PAR). Two alternatives are considered: a) glass optical filters attached to a silicon sensor, and b) dielectric coating on a silicon sensor. The devices proposed are based on radiometers previously developed by the Argentine National Atomic Energy Commission. The objective of this work is to adapt these low cost radiometers to construct reliable instruments for measuring PAR. The transmittance of optical filters and sensor response have been analyzed for different dielectric materials, number of layers deposited, and incidence angles. Uncertainties in thickness of layer deposition were evaluated. - Highlights: • Design of radiometers to measure photosynthetically active radiation • The study has used a filter and a Si sensor to modify spectral response. • Dielectric multilayers on glass and silicon sensor • Spectral response related to different incidence angles, materials and spectra

  10. A study of brain networks associated with swallowing using graph-theoretical approaches.

    Directory of Open Access Journals (Sweden)

    Bo Luan

    Full Text Available Functional connectivity between brain regions during swallowing tasks is still not well understood. Understanding these complex interactions is of great interest from both a scientific and a clinical perspective. In this study, functional magnetic resonance imaging (fMRI was utilized to study brain functional networks during voluntary saliva swallowing in twenty-two adult healthy subjects (all females, [Formula: see text] years of age. To construct these functional connections, we computed mean partial correlation matrices over ninety brain regions for each participant. Two regions were determined to be functionally connected if their correlation was above a certain threshold. These correlation matrices were then analyzed using graph-theoretical approaches. In particular, we considered several network measures for the whole brain and for swallowing-related brain regions. The results have shown that significant pairwise functional connections were, mostly, either local and intra-hemispheric or symmetrically inter-hemispheric. Furthermore, we showed that all human brain functional network, although varying in some degree, had typical small-world properties as compared to regular networks and random networks. These properties allow information transfer within the network at a relatively high efficiency. Swallowing-related brain regions also had higher values for some of the network measures in comparison to when these measures were calculated for the whole brain. The current results warrant further investigation of graph-theoretical approaches as a potential tool for understanding the neural basis of dysphagia.

  11. Bioactivity of Isoflavones: Assessment through a Theoretical Model as a Way to Obtain a “Theoretical Efficacy Related to Estradiol (TERE)”

    Science.gov (United States)

    Campos, Maria da Graça R.; Matos, Miguel Pires

    2010-01-01

    The increase of human life span will have profound implications in Public Health in decades to come. By 2030, there will be an estimated 1.2 billion women in post-menopause. Hormone Replacement Therapy with synthetic hormones is still full of risks and according to the latest developments, should be used for the shortest time possible. Searching for alternative drugs is inevitable in this scenario and science must provide physicians with other substances that can be used to treat the same symptoms with less side effects. Systematic research carried out on this field of study is focusing now on isoflavones but the randomised controlled trials and reviews of meta-analysis concerning post-menopause therapy, that could have an important impact on human health, are very controversial. The aim of the present work was to establish a theoretical calculation suitable for use as a way to estimate the “Theoretical Efficacy (TE)” of a mixture with different bioactive compounds as a way to obtain a “Theoretical Efficacy Related to Estradiol (TERE)”. The theoretical calculation that we propose in this paper integrates different knowledge about this subject and sets methodological boundaries that can be used to analyse already published data. The outcome should set some consensus for new clinical trials using isoflavones (isolated or included in mixtures) that will be evaluated to assess their therapeutically activity. This theoretical method for evaluation of a possible efficacy could probably also be applied to other herbal drug extracts when a synergistic or contradictory bio-effect is not verified. In this way, it we may contribute to enlighten and to the development of new therapeutic approaches. PMID:20386649

  12. A Bayesian Decision-Theoretic Approach to Logically-Consistent Hypothesis Testing

    Directory of Open Access Journals (Sweden)

    Gustavo Miranda da Silva

    2015-09-01

    Full Text Available This work addresses an important issue regarding the performance of simultaneous test procedures: the construction of multiple tests that at the same time are optimal from a statistical perspective and that also yield logically-consistent results that are easy to communicate to practitioners of statistical methods. For instance, if hypothesis A implies hypothesis B, is it possible to create optimal testing procedures that reject A whenever they reject B? Unfortunately, several standard testing procedures fail in having such logical consistency. Although this has been deeply investigated under a frequentist perspective, the literature lacks analyses under a Bayesian paradigm. In this work, we contribute to the discussion by investigating three rational relationships under a Bayesian decision-theoretic standpoint: coherence, invertibility and union consonance. We characterize and illustrate through simple examples optimal Bayes tests that fulfill each of these requisites separately. We also explore how far one can go by putting these requirements together. We show that although fairly intuitive tests satisfy both coherence and invertibility, no Bayesian testing scheme meets the desiderata as a whole, strengthening the understanding that logical consistency cannot be combined with statistical optimality in general. Finally, we associate Bayesian hypothesis testing with Bayes point estimation procedures. We prove the performance of logically-consistent hypothesis testing by means of a Bayes point estimator to be optimal only under very restrictive conditions.

  13. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis...... problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation, (FE)) for systems with model uncertainties; FE for systems with parametric faults, and FE for a class of nonlinear systems. Copyright...

  14. Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.

    Science.gov (United States)

    Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin

    2017-08-10

    Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.

  15. Methodological Framework for Estimating the Correlation Dimension in HRV Signals

    Directory of Open Access Journals (Sweden)

    Juan Bolea

    2014-01-01

    Full Text Available This paper presents a methodological framework for robust estimation of the correlation dimension in HRV signals. It includes (i a fast algorithm for on-line computation of correlation sums; (ii log-log curves fitting to a sigmoidal function for robust maximum slope estimation discarding the estimation according to fitting requirements; (iii three different approaches for linear region slope estimation based on latter point; and (iv exponential fitting for robust estimation of saturation level of slope series with increasing embedded dimension to finally obtain the correlation dimension estimate. Each approach for slope estimation leads to a correlation dimension estimate, called D^2, D^2⊥, and D^2max. D^2 and D^2max estimate the theoretical value of correlation dimension for the Lorenz attractor with relative error of 4%, and D^2⊥ with 1%. The three approaches are applied to HRV signals of pregnant women before spinal anesthesia for cesarean delivery in order to identify patients at risk for hypotension. D^2 keeps the 81% of accuracy previously described in the literature while D^2⊥ and D^2max approaches reach 91% of accuracy in the same database.

  16. Machine learning a theoretical approach

    CERN Document Server

    Natarajan, Balas K

    2014-01-01

    This is the first comprehensive introduction to computational learning theory. The author's uniform presentation of fundamental results and their applications offers AI researchers a theoretical perspective on the problems they study. The book presents tools for the analysis of probabilistic models of learning, tools that crisply classify what is and is not efficiently learnable. After a general introduction to Valiant's PAC paradigm and the important notion of the Vapnik-Chervonenkis dimension, the author explores specific topics such as finite automata and neural networks. The presentation

  17. A theoretical model for estimating the vacancies produced in graphene by irradiation

    International Nuclear Information System (INIS)

    Codorniu Pujals, Daniel; Aguilera Corrales, Yuri

    2011-01-01

    The award of the Nobel Prize of Physics 2010 to the scientists that isolated graphene is a clear evidence of the great interest that this system has raised among the physicists. This quasi-two-dimensional material, whose electrons behave as massless Dirac particles, presents sui generis properties that seem very promising for diverse practical applications. At the same time, the system poses new theoretical challenges for the scientists of very different branches, from Material Science to Relativistic Quantum Mechanics. A topic of great actuality in graphene researches is the search of ways to control the number and distribution of the defects in its crystal lattice, in order to achieve certain physical properties. One of these ways can be the irradiation with different kind of particles. However, the irradiation processes in two-dimensional systems have been insufficiently studied. The classic models of interaction of the radiation with solids are based on three-dimensional structures, for what they should be modified to apply them to graphene. In the present work we discuss, from the theoretical point of view, the features of the processes that happen in the two-dimensional structure of monolayer graphene under irradiation with different kinds of particles. In that context, some mathematical expressions that allow to estimate the concentration of the vacancies created during these processes are presented. We also discuss the possible use of the information obtained from the model to design structures of topological defects with certain elastic deformation fields, as well as their influence in the electronic properties. (Author)

  18. Theoretical Estimation of Thermal Effects in Drilling of Woven Carbon Fiber Composite

    Directory of Open Access Journals (Sweden)

    José Díaz-Álvarez

    2014-06-01

    Full Text Available Carbon Fiber Reinforced Polymer (CFRPs composites are extensively used in structural applications due to their attractive properties. Although the components are usually made near net shape, machining processes are needed to achieve dimensional tolerance and assembly requirements. Drilling is a common operation required for further mechanical joining of the components. CFRPs are vulnerable to processing induced damage; mainly delamination, fiber pull-out, and thermal degradation, drilling induced defects being one of the main causes of component rejection during manufacturing processes. Despite the importance of analyzing thermal phenomena involved in the machining of composites, only few authors have focused their attention on this problem, most of them using an experimental approach. The temperature at the workpiece could affect surface quality of the component and its measurement during processing is difficult. The estimation of the amount of heat generated during drilling is important; however, numerical modeling of drilling processes involves a high computational cost. This paper presents a combined approach to thermal analysis of composite drilling, using both an analytical estimation of heat generated during drilling and numerical modeling for heat propagation. Promising results for indirect detection of risk of thermal damage, through the measurement of thrust force and cutting torque, are obtained.

  19. Estimates for the parameters of the heavy quark expansion

    Energy Technology Data Exchange (ETDEWEB)

    Heinonen, Johannes; Mannel, Thomas [Universitaet Siegen (Germany)

    2015-07-01

    We give improved estimates for the non-perturbative parameters appearing in the heavy quark expansion for inclusive decays. While the parameters appearing in low orders of this expansion can be extracted from data, the number of parameters in higher orders proliferates strongly, making a determination of these parameters from data impossible. Thus, one has to rely on theoretical estimates which may be obtained from an insertion of intermediate states. We refine this method and attempt to estimate the uncertainties of this approach.

  20. A Note on the Effect of Data Clustering on the Multiple-Imputation Variance Estimator: A Theoretical Addendum to the Lewis et al. article in JOS 2014

    Directory of Open Access Journals (Sweden)

    He Yulei

    2016-03-01

    Full Text Available Multiple imputation is a popular approach to handling missing data. Although it was originally motivated by survey nonresponse problems, it has been readily applied to other data settings. However, its general behavior still remains unclear when applied to survey data with complex sample designs, including clustering. Recently, Lewis et al. (2014 compared single- and multiple-imputation analyses for certain incomplete variables in the 2008 National Ambulatory Medicare Care Survey, which has a nationally representative, multistage, and clustered sampling design. Their study results suggested that the increase of the variance estimate due to multiple imputation compared with single imputation largely disappears for estimates with large design effects. We complement their empirical research by providing some theoretical reasoning. We consider data sampled from an equally weighted, single-stage cluster design and characterize the process using a balanced, one-way normal random-effects model. Assuming that the missingness is completely at random, we derive analytic expressions for the within- and between-multiple-imputation variance estimators for the mean estimator, and thus conveniently reveal the impact of design effects on these variance estimators. We propose approximations for the fraction of missing information in clustered samples, extending previous results for simple random samples. We discuss some generalizations of this research and its practical implications for data release by statistical agencies.

  1. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    Science.gov (United States)

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.

  2. Reflective practice and vocational training: theoretical approaches in the field of Health and Nursing

    Directory of Open Access Journals (Sweden)

    Luciana Netto

    2018-02-01

    Full Text Available Abstract Objective: Theoretical reflection that uses Reflexivity as a theoretical reference and its objective is to approach Donald Schön's reflective thinking, interrelating it with the innovative curriculum. Method: The writings of Schön and other authors who addressed the themes in their works were used. Results: The innovative curriculum as an expression of dissatisfaction with the fragmentation paradigm may favor reflective practice, since it is necessary to mobilize reflexivity for actions and contexts that are unpredictable in the field of health promotion. Conclusions: The innovative curriculum favors and is favored by a reflective practice and the development of competencies for the promotion of health. Implications for practice: The findings apply to the practice of nurses to deal with the conditioning and determinants of the health-disease process.

  3. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    Science.gov (United States)

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-05-23

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cram e ´ r-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.

  4. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes a......, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.......A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes...... an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially...

  5. Forming Limits in Sheet Metal Forming for Non-Proportional Loading Conditions - Experimental and Theoretical Approach

    International Nuclear Information System (INIS)

    Ofenheimer, Aldo; Buchmayr, Bruno; Kolleck, Ralf; Merklein, Marion

    2005-01-01

    The influence of strain paths (loading history) on material formability is well known in sheet forming processes. Sophisticated experimental methods are used to determine the entire shape of strain paths of forming limits for aluminum AA6016-T4 alloy. Forming limits for sheet metal in as-received condition as well as for different pre-deformation are presented. A theoretical approach based on Arrieux's intrinsic Forming Limit Stress Curve (FLSC) concept is employed to numerically predict the influence of loading history on forming severity. The detailed experimental strain paths are used in the theoretical study instead of any linear or bilinear simplified loading histories to demonstrate the predictive quality of forming limits in the state of stress

  6. Molecular approach of uranyl/mineral surfaces: theoretical approach

    International Nuclear Information System (INIS)

    Roques, J.

    2009-01-01

    As migration of radio-toxic elements through the geosphere is one of the processes which may affect the safety of a radioactive waste storage site, the author shows that numerical modelling is a support to experimental result exploitation, and allows the development of new interpretation and prediction codes. He shows that molecular modelling can be used to study processes of interaction between an actinide ion (notably a uranyl ion) and a mineral surface (a TiO 2 substrate). He also reports the predictive theoretical study of the interaction between an uranyl ion and a gibbsite substrate

  7. THEORETICAL AND METHODOLOGICAL APPROACHES TO THE STUDY OF PROTEST ACTIVITY IN THE WESTERN SOCIOLOGICAL THOUGHT

    OpenAIRE

    Купрєєва, Ю. О.

    2015-01-01

    In this article the author discusses the main theoretical and methodological approaches to the study of protest activity. Among them - the theory of collective behavior, the relative deprivation theory, the new social movements theory and the resource mobilization theory. Highlighted their strengths and weaknesses. Focused on the new direction of protest studies connected with the development of the Internet.

  8. Estimation of net greenhouse gas balance using crop- and soil-based approaches: Two case studies

    International Nuclear Information System (INIS)

    Huang, Jianxiong; Chen, Yuanquan; Sui, Peng; Gao, Wansheng

    2013-01-01

    The net greenhouse gas balance (NGHGB), estimated by combining direct and indirect greenhouse gas (GHG) emissions, can reveal whether an agricultural system is a sink or source of GHGs. Currently, two types of methods, referred to here as crop-based and soil-based approaches, are widely used to estimate the NGHGB of agricultural systems on annual and seasonal crop timescales. However, the two approaches may produce contradictory results, and few studies have tested which approach is more reliable. In this study, we examined the two approaches using experimental data from an intercropping trial with straw removal and a tillage trial with straw return. The results of the two approaches provided different views of the two trials. In the intercropping trial, NGHGB estimated by the crop-based approach indicated that monocultured maize (M) was a source of GHGs (− 1315 kg CO 2 −eq ha −1 ), whereas maize–soybean intercropping (MS) was a sink (107 kg CO 2 −eq ha −1 ). When estimated by the soil-based approach, both cropping systems were sources (− 3410 for M and − 2638 kg CO 2 −eq ha −1 for MS). In the tillage trial, mouldboard ploughing (MP) and rotary tillage (RT) mitigated GHG emissions by 22,451 and 21,500 kg CO 2 −eq ha −1 , respectively, as estimated by the crop-based approach. However, by the soil-based approach, both tillage methods were sources of GHGs: − 3533 for MP and − 2241 kg CO 2 −eq ha −1 for RT. The crop-based approach calculates a GHG sink on the basis of the returned crop biomass (and other organic matter input) and estimates considerably more GHG mitigation potential than that calculated from the variations in soil organic carbon storage by the soil-based approach. These results indicate that the crop-based approach estimates higher GHG mitigation benefits compared to the soil-based approach and may overestimate the potential of GHG mitigation in agricultural systems. - Highlights: • Net greenhouse gas balance (NGHGB) of

  9. A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process

    Science.gov (United States)

    Wang, Yi; Tamai, Tetsuo

    2009-01-01

    Since the complexity of software systems continues to grow, most engineers face two serious problems: the state space explosion problem and the problem of how to debug systems. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. The three-valued models and logics provide successful abstraction that overcomes the state space explosion problem. The game style model checking that generates counter-examples can guide refinement or identify validated formulas, which solves the system debugging problem. Furthermore, output of our game style method will give significant information to engineers in detecting where errors have occurred and what the causes of the errors are.

  10. The heat of formation of the acetyl cation: a theoretical evaluation

    Science.gov (United States)

    Smith, Brian J.; Radom, Leo

    1990-12-01

    Ab initio molecular orbital calculations have been used to obtain the heat of formation of the acetyl cation. In one set of calculations, the reverse activation barrier for the production of acetyl cation from acetaldehyde has been shown to be significantly different zero and the value obtained (9.8 kJ mol-1 at 298 K) has been used to correct the [Delta]Hof298 (CH3CO+) value derived from appearance energy measurements. In a second set of calculations, [Delta]H°f298 (CH3CO+) has been obtained from the calculated heats of a number of reactions involving the acetyl cation together with experimental heats of formation for the species involved. The best theoretical estimate for [Delta]H°f298 (CH3CO+), obtained as a mean of results from the two approaches, is 658 kJ mol-1. The best theoretical estimate for [Delta]H°f0(CH3CO+), obtained in a similar manner, is 665 kJ mol-1.

  11. A fuel-based approach to estimating motor vehicle exhaust emissions

    Science.gov (United States)

    Singer, Brett Craig

    Motor vehicles contribute significantly to air pollution problems; accurate motor vehicle emission inventories are therefore essential to air quality planning. Current travel-based inventory models use emission factors measured from potentially biased vehicle samples and predict fleet-average emissions which are often inconsistent with on-road measurements. This thesis presents a fuel-based inventory approach which uses emission factors derived from remote sensing or tunnel-based measurements of on-road vehicles. Vehicle activity is quantified by statewide monthly fuel sales data resolved to the air basin level. Development of the fuel-based approach includes (1) a method for estimating cold start emission factors, (2) an analysis showing that fuel-normalized emission factors are consistent over a range of positive vehicle loads and that most fuel use occurs during loaded-mode driving, (3) scaling factors relating infrared hydrocarbon measurements to total exhaust volatile organic compound (VOC) concentrations, and (4) an analysis showing that economic factors should be considered when selecting on-road sampling sites. The fuel-based approach was applied to estimate carbon monoxide (CO) emissions from warmed-up vehicles in the Los Angeles area in 1991, and CO and VOC exhaust emissions for Los Angeles in 1997. The fuel-based CO estimate for 1991 was higher by a factor of 2.3 +/- 0.5 than emissions predicted by California's MVEI 7F model. Fuel-based inventory estimates for 1997 were higher than those of California's updated MVEI 7G model by factors of 2.4 +/- 0.2 for CO and 3.5 +/- 0.6 for VOC. Fuel-based estimates indicate a 20% decrease in the mass of CO emitted, despite an 8% increase in fuel use between 1991 and 1997; official inventory models predict a 50% decrease in CO mass emissions during the same period. Cold start CO and VOC emission factors derived from parking garage measurements were lower than those predicted by the MVEI 7G model. Current inventories

  12. An effectiveness analysis of healthcare systems using a systems theoretic approach

    Directory of Open Access Journals (Sweden)

    Inder Kerry

    2009-10-01

    surveyors is developed that provides a systematic search for improving the impact of accreditation on quality of care and hence on the accreditation/performance correlation. Conclusion There is clear value in developing a theoretical systems approach to achieving quality in health care. The introduction of the systematic surveyor-based search for improvements creates an adaptive-control system to optimize health care quality. It is hoped that these outcomes will stimulate further research in the development of strategic planning using systems theoretic approach for the improvement of quality in health care.

  13. An effectiveness analysis of healthcare systems using a systems theoretic approach.

    Science.gov (United States)

    Chuang, Sheuwen; Inder, Kerry

    2009-10-24

    improving the impact of accreditation on quality of care and hence on the accreditation/performance correlation. There is clear value in developing a theoretical systems approach to achieving quality in health care. The introduction of the systematic surveyor-based search for improvements creates an adaptive-control system to optimize health care quality. It is hoped that these outcomes will stimulate further research in the development of strategic planning using systems theoretic approach for the improvement of quality in health care.

  14. Theoretical estimation of "6"4Cu production with neutrons emitted during "1"8F production with a 30 MeV medical cyclotron

    International Nuclear Information System (INIS)

    Auditore, Lucrezia; Amato, Ernesto; Baldari, Sergio

    2017-01-01

    Purpose: This work presents the theoretical estimation of a combined production of "1"8F and "6"4Cu isotopes for PET applications. "6"4Cu production is induced in a secondary target by neutrons emitted during a routine "1"8F production with a 30 MeV cyclotron: protons are used to produce "1"8F by means of the "1"8O(p,n)"1"8F reaction on a ["1"8O]-H_2O target (primary target) and the emitted neutrons are used to produce "6"4Cu by means of the "6"4Zn(n,p)"6"4Cu reaction on enriched zinc target (secondary target). Methods: Monte Carlo simulations were carried out using Monte Carlo N Particle eXtended (MCNPX) code to evaluate flux and energy spectra of neutrons produced in the primary (Be+["1"8O]-H_2O) target by protons and the attenuation of neutron flux in the secondary target. "6"4Cu yield was estimated using an analytical approach based on both TENDL-2015 data library and experimental data selected from EXFOR database. Results: Theoretical evaluations indicate that about 3.8 MBq/μA of "6"4Cu can be obtained as a secondary, ‘side’ production with a 30 MeV cyclotron, for 2 h of irradiation of a proper designed zinc target. Irradiating for 2 h with a proton current of 120 μA, a yield of about 457 MBq is expected. Moreover, the most relevant contaminants result to be "6"3","6"5Zn, which can be chemically separated from "6"4Cu contrarily to what happens with proton irradiation of an enriched "6"4Ni target, which provides "6"4Cu mixed to other copper isotopes as contaminants. Conclusions: The theoretical study discussed in this paper evaluates the potential of the combined production of "1"8F and "6"4Cu for medical purposes, irradiating a properly designed target with 30 MeV protons. Interesting yields of "6"4Cu are obtainable and the estimation of contaminants in the irradiated zinc target is discussed. - Highlights: • "6"4Cu production with secondary neutrons from "1"8F production with protons was investigated. • Neutron reactions induced in enriched "6"4Zn

  15. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate average level spacing from a set of resolved resonance parameters by using Bayesian approach is given. Using the information given in the distributions of both levels spacing and neutron width, the level missing in measured sample can be corrected more precisely so that better estimate for average level spacing can be obtained by this method. The calculation of s-wave resonance has been done and comparison with other work was carried out

  16. Top-down and bottom-up approaches for cost estimating new reactor designs

    International Nuclear Information System (INIS)

    Berbey, P.; Gautier, G.M.; Duflo, D.; Rouyer, J.L.

    2007-01-01

    For several years, Generation-4 designs will be 'pre-conceptual' for the less mature concepts and 'preliminary' for the more mature concepts. In this situation, appropriate data for some of the plant systems may be lacking to develop a bottom-up cost estimate. Therefore, a more global approach, the Top-Down Approach (TDA), is needed to help the designers and decision makers in comparing design options. It utilizes more or less simple models for cost estimating the different parts of a design. TDA cost estimating effort applies to a whole functional element whose cost is approached by similar estimations coming from existing data, ratios and models, for a given range of variation of parameters. Modeling is used when direct analogy is not possible. There are two types of models, global and specific ones. Global models are applied to cost modules related to Code Of Account. Exponential formulae such as Ci = Ai + (Bi x Pi n ) are used when there are cost data for comparable modules in nuclear or other industries. Specific cost models are developed for major specific components of the plant: - process equipment such as reactor vessel, steam generators or large heat exchangers. - buildings, with formulae estimating the construction cost from base cost of m3 of building volume. - systems, when unit costs, cost ratios and models are used, depending on the level of detail of the design. Bottom Up Approach (BUA), which is based on unit prices coming from similar equipment or from manufacturer consulting, is very valuable and gives better cost estimations than TDA when it can be applied, that is at a rather late stage of the design. Both approaches are complementary when some parts of the design are detailed enough to be estimated by BUA, and when BUA results are used to check TDA results and to improve TDA models. This methodology is applied to the HTR (High Temperature Reactor) concept and to an advanced PWR design

  17. Theoretical nuclear physics

    CERN Document Server

    Blatt, John M

    1979-01-01

    A classic work by two leading physicists and scientific educators endures as an uncommonly clear and cogent investigation and correlation of key aspects of theoretical nuclear physics. It is probably the most widely adopted book on the subject. The authors approach the subject as ""the theoretical concepts, methods, and considerations which have been devised in order to interpret the experimental material and to advance our ability to predict and control nuclear phenomena.""The present volume does not pretend to cover all aspects of theoretical nuclear physics. Its coverage is restricted to

  18. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  19. A comparison of the Bayesian and frequentist approaches to estimation

    CERN Document Server

    Samaniego, Francisco J

    2010-01-01

    This monograph contributes to the area of comparative statistical inference. Attention is restricted to the important subfield of statistical estimation. The book is intended for an audience having a solid grounding in probability and statistics at the level of the year-long undergraduate course taken by statistics and mathematics majors. The necessary background on Decision Theory and the frequentist and Bayesian approaches to estimation is presented and carefully discussed in Chapters 1-3. The 'threshold problem' - identifying the boundary between Bayes estimators which tend to outperform st

  20. Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.

    Science.gov (United States)

    Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A

    2018-01-18

    Leslie matrix models are an important analysis tool in conservation biology that are applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances, an estimate of λ is available, but the vital rates are poorly understood and can be solved for using an inverse matrix approach. However, these approaches are rarely attempted due to prerequisites of information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods have previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious error in the demographic parameters for the PNG population. This approach provides opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking. © 2018 by the Ecological Society of America.

  1. A theoretical framework for Ångström equation. Its virtues and liabilities in solar energy estimation

    International Nuclear Information System (INIS)

    Stefu, Nicoleta; Paulescu, Marius; Blaga, Robert; Calinoiu, Delia; Pop, Nicolina; Boata, Remus; Paulescu, Eugenia

    2016-01-01

    Highlights: • A self-consistent derivation of the Ångström equation is carried out. • The theoretical assessment on its performance is well supported by the measured data. • The variability in cloud transmittance is a major source of uncertainty for estimates. • The degradation in time and space of the empirical equations calibration is assessed. - Abstract: The relation between solar irradiation and sunshine duration was investigated from the very beginning of solar radiation measurements. Many studies were devoted to this topic aiming to include the complex influence of clouds on solar irradiation into equations. This study is focused on the linear relationship between the clear sky index and the relative sunshine proposed by the pioneering work of Ångström. A full semi-empirical derivation of the equation, highlighting its virtues and liabilities, is presented. Specific Ångström – type equations for beam and diffuse solar irradiation were derived separately. The sum of the two components recovers the traditional form of the Ångström equation. The physical meaning of the Ångström parameter, as the average of the clouds transmittance, emerges naturally. The theoretical results on the Ångström equation performance are well supported by the tests against measured data. Using long-term records of global solar irradiation and sunshine duration from thirteen European radiometric stations, the influence of the Ångström constraint (slope equals one minus intercept) on the accuracy of the estimates is analyzed. Another focus is on the assessment of the degradation of the equation calibration. The temporal variability in cloud transmittance (both long-term trend and fluctuations) is a major source of uncertainty for Ångström equation estimates.

  2. Optimization of Investment Planning Based on Game-Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Elena Vladimirovna Butsenko

    2018-03-01

    Full Text Available The game-theoretic approach has a vast potential in solving economic problems. On the other hand, the theory of games itself can be enriched by the studies of real problems of decision-making. Hence, this study is aimed at developing and testing the game-theoretic technique to optimize the management of investment planning. This technique enables to forecast the results and manage the processes of investment planning. The proposed method of optimizing the management of investment planning allows to choose the best development strategy of an enterprise. This technique uses the “game with nature” model, and the Wald criterion, the maximum criterion and the Hurwitz criterion as criteria. The article presents a new algorithm for constructing the proposed econometric method to optimize investment project management. This algorithm combines the methods of matrix games. Furthermore, I show the implementation of this technique in a block diagram. The algorithm includes the formation of initial data, the elements of the payment matrix, as well as the definition of maximin, maximal, compromise and optimal management strategies. The methodology is tested on the example of the passenger transportation enterprise of the Sverdlovsk Railway in Ekaterinburg. The application of the proposed methodology and the corresponding algorithm allowed to obtain an optimal price strategy for transporting passengers for one direction of traffic. This price strategy contributes to an increase in the company’s income with minimal risk from the launch of this direction. The obtained results and conclusions show the effectiveness of using the developed methodology for optimizing the management of investment processes in the enterprise. The results of the research can be used as a basis for the development of an appropriate tool and applied by any economic entity in its investment activities.

  3. A Bayesian approach to estimate sensible and latent heat over vegetated land surface

    Directory of Open Access Journals (Sweden)

    C. van der Tol

    2009-06-01

    Full Text Available Sensible and latent heat fluxes are often calculated from bulk transfer equations combined with the energy balance. For spatial estimates of these fluxes, a combination of remotely sensed and standard meteorological data from weather stations is used. The success of this approach depends on the accuracy of the input data and on the accuracy of two variables in particular: aerodynamic and surface conductance. This paper presents a Bayesian approach to improve estimates of sensible and latent heat fluxes by using a priori estimates of aerodynamic and surface conductance alongside remote measurements of surface temperature. The method is validated for time series of half-hourly measurements in a fully grown maize field, a vineyard and a forest. It is shown that the Bayesian approach yields more accurate estimates of sensible and latent heat flux than traditional methods.

  4. Quantum chemical approach to estimating the thermodynamics of metabolic reactions.

    Science.gov (United States)

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-11-12

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.

  5. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    International Nuclear Information System (INIS)

    Shafii, Mohammad Ali; Meidianti, Rahma; Wildian,; Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-01-01

    Theoretical analysis of integral neutron transport equation using collision probability (CP) method with quadratic flux approach has been carried out. In general, the solution of the neutron transport using the CP method is performed with the flat flux approach. In this research, the CP method is implemented in the cylindrical nuclear fuel cell with the spatial of mesh being conducted into non flat flux approach. It means that the neutron flux at any point in the nuclear fuel cell are considered different each other followed the distribution pattern of quadratic flux. The result is presented here in the form of quadratic flux that is better understanding of the real condition in the cell calculation and as a starting point to be applied in computational calculation

  6. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    Energy Technology Data Exchange (ETDEWEB)

    Shafii, Mohammad Ali, E-mail: mashafii@fmipa.unand.ac.id; Meidianti, Rahma, E-mail: mashafii@fmipa.unand.ac.id; Wildian,, E-mail: mashafii@fmipa.unand.ac.id; Fitriyani, Dian, E-mail: mashafii@fmipa.unand.ac.id [Department of Physics, Andalas University Padang West Sumatera Indonesia (Indonesia); Tongkukut, Seni H. J. [Department of Physics, Sam Ratulangi University Manado North Sulawesi Indonesia (Indonesia); Arkundato, Artoto [Department of Physics, Jember University Jember East Java Indonesia (Indonesia)

    2014-09-30

    Theoretical analysis of integral neutron transport equation using collision probability (CP) method with quadratic flux approach has been carried out. In general, the solution of the neutron transport using the CP method is performed with the flat flux approach. In this research, the CP method is implemented in the cylindrical nuclear fuel cell with the spatial of mesh being conducted into non flat flux approach. It means that the neutron flux at any point in the nuclear fuel cell are considered different each other followed the distribution pattern of quadratic flux. The result is presented here in the form of quadratic flux that is better understanding of the real condition in the cell calculation and as a starting point to be applied in computational calculation.

  7. Two Approaches for Estimating Discharge on Ungauged Basins in Oregon, USA

    Science.gov (United States)

    Wigington, P. J.; Leibowitz, S. G.; Comeleo, R. L.; Ebersole, J. L.; Copeland, E. A.

    2009-12-01

    Detailed information on the hydrologic behavior of streams is available for only a small proportion of all streams. Even in cases where discharge has been monitored, these measurements may not be available for a sufficiently long period to characterize the full behavior of a stream. In this presentation, we discuss two separate approaches for predicting discharge at ungauged locations. The first approach models discharge in the Calapooia Watershed, Oregon based on long-term US Geological Survey gauge stations located in two adjacent watersheds. Since late 2008, we have measured discharge and water level over a range of flow conditions at more than a dozen sites within the Calapooia. Initial results indicate that many of these sites, including the mainstem Calapooia and some of its tributaries, can be predicted by these outside gauge stations and simple landscape factors. This is not a true “ungauged” approach, since measurements are required to characterize the range of flow. However, the approach demonstrates how such measurements and more complete data from similar areas can be used to estimate a detailed record for a longer period. The second approach estimates 30 year average monthly discharge at ungauged locations based on a Hydrologic Landscape Region (HLR) model. We mapped HLR class over the entire state of Oregon using an assessment unit with an average size of 44 km2. We then calculated average statewide moisture surplus values for each HLR class, modified to account for snowpack accumulation and snowmelt. We calculated potential discharge by summing these values for each HLR within a watershed. The resulting monthly hydrograph is then transformed to estimate monthly discharge, based on aquifer and soil permeability and terrain. We hypothesize that these monthly values should provide good estimates of discharge in areas where imports from or exports to the deep groundwater system are not significant. We test the approach by comparing results with

  8. A game-theoretical approach for reciprocal security-related prevention investment decisions

    International Nuclear Information System (INIS)

    Reniers, Genserik; Soudan, Karel

    2010-01-01

    Every company situated within a chemical cluster faces important security risks from neighbouring companies. Investing in reciprocal security preventive measures is therefore necessary to avoid major accidents. These investments do not, however, provide a direct return on investment for the investor-company and thus plants are hesitative to invest. Moreover, there is likelihood that even if a company has fully invested in reciprocal security prevention, its neighbour has not, and as a result the company can experience a major accident caused by an initial (minor or major) accident that occurred in an adjacent chemical enterprise. In this article we employ a game-theoretic approach to interpret and model behaviour of two neighbouring chemical plants while negotiating and deciding on reciprocal security prevention investments.

  9. Theoretical estimation of Z´ boson mass

    International Nuclear Information System (INIS)

    Maji, Priya; Banerjee, Debika; Sahoo, Sukadev

    2016-01-01

    The discovery of Higgs boson at the LHC brings a renewed perspective in particle physics. With the help of Higgs mechanism, standard model (SM) allows the generation of particle mass. The ATLAS and CMS experiments at the LHC have predicted the mass of Higgs boson as m_H=125-126 GeV. Recently, it is claimed that the Higgs boson might interact with dark matter and there exists relation between the Higgs boson and dark matter (DM). Hertzberg has predicted a correlation between the Higgs mass and the abundance of dark matter. His theoretical result is in good agreement with current data. He has predicted the mass of Higgs boson as GeV. The Higgs boson could be coupled to the particle that constitutes all or part of the dark matter in the universe. Light Z´ boson could have important implications in dark matter phenomenology

  10. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach in handling problems with mode uncertainties. The model sequence set adaptation (MSA) is the key to design a better VSMM. However, MSA methods in the literature have big room to improve both theoretically and practically. To this end, we propose a feedback structure based entropy approach that could find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, the full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with particle filter (PF) and the challenge match algorithm are also used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  11. Single-snapshot DOA estimation by using Compressed Sensing

    Science.gov (United States)

    Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin

    2014-12-01

    This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of an array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS), i.e., the classical ℓ 1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ 0 minimization, and the Sparse Iterative Covariance-Based Estimator, SPICE and the Iterative Adaptive Approach for Amplitude and Phase Estimation, IAA-APES algorithms, are analyzed, and their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of the adaptive algorithms (e.g., Capon and MUSIC) even in the single snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.

  12. Theoretical and empirical approaches to using films as a means to increase communication efficiency.

    Directory of Open Access Journals (Sweden)

    Kiselnikova, N.V.

    2016-07-01

    Full Text Available The theoretical framework of this analytic study is based on studies in the field of film perception. Films are considered as a communicative system that is encrypted in an ordered series of shots, and decoding proceeds during perception. The shots are the elements of a cinematic message that must be “read” by viewer. The objective of this work is to analyze the existing theoretical approaches to using films in psychotherapy and education. An original approach to film therapy that is based on teaching clients to use new communicative sets and psychotherapeutic patterns through watching films is presented. The article specifies the main emphasized points in theories of film therapy and education. It considers the specifics of film therapy in the process of increasing the effectiveness of communication. It discusses the advantages and limitations of the proposed method. The contemporary forms of film therapy and the formats of cinema clubs are criticized. The theoretical assumptions and empirical research that could be used as a basis for a method of developing effective communication by means of films are discussed. Our studies demonstrate that the usage of film therapy must include an educational stage for more effective and stable results. This means teaching viewers how to recognize certain psychotherapeutic and communicative patterns in the material of films, to practice the skill of finding as many examples as possible for each pattern and to transfer the acquired schemes of analyzing and recognizing patterns into one’s own life circumstances. The four stages of the film therapeutic process as well as the effects that are achieved at each stage are described in detail. In conclusion, the conditions under which the usage of the film therapy method would be the most effective are observed. Various properties of client groups and psychotherapeutic scenarios for using the method of active film therapy are described.

  13. Beyond the Cognitive and the Virtue Approaches to Moral Education: Some Theoretical Foundations for an Integrated Account of Moral Education.

    Science.gov (United States)

    Roh, Young-Ran

    2000-01-01

    Explores theoretical foundation for integrated approach to moral education; discusses rational choice and moral action within human reflective structure; investigates moral values required for integrative approach to moral education; discusses content of moral motivation, including role of emotion and reason. (Contains 15 references.) (PKP)

  14. Approaches for the direct estimation of lambda, and demographic contributions to lambda, using capture-recapture data

    Science.gov (United States)

    Nichols, James D.; Hines, James E.

    2002-01-01

    We first consider the estimation of the finite rate of population increase or population growth rate, u i , using capture-recapture data from open populations. We review estimation and modelling of u i under three main approaches to modelling openpopulation data: the classic approach of Jolly (1965) and Seber (1965), the superpopulation approach of Crosbie & Manly (1985) and Schwarz & Arnason (1996), and the temporal symmetry approach of Pradel (1996). Next, we consider the contributions of different demographic components to u i using a probabilistic approach based on the composition of the population at time i + 1 (Nichols et al., 2000b). The parameters of interest are identical to the seniority parameters, n i , of Pradel (1996). We review estimation of n i under the classic, superpopulation, and temporal symmetry approaches. We then compare these direct estimation approaches for u i and n i with analogues computed using projection matrix asymptotics. We also discuss various extensions of the estimation approaches to multistate applications and to joint likelihoods involving multiple data types.

  15. On Algebraic Approach for MSD Parametric Estimation

    OpenAIRE

    Oueslati , Marouene; Thiery , Stéphane; Gibaru , Olivier; Béarée , Richard; Moraru , George

    2011-01-01

    This article address the identification problem of the natural frequency and the damping ratio of a second order continuous system where the input is a sinusoidal signal. An algebra based approach for identifying parameters of a Mass Spring Damper (MSD) system is proposed and compared to the Kalman-Bucy filter. The proposed estimator uses the algebraic parametric method in the frequency domain yielding exact formula, when placed in the time domain to identify the unknown parameters. We focus ...

  16. A Theoretical Explanation of Marital Conflicts by Paradigmatic Approach

    Directory of Open Access Journals (Sweden)

    اسماعیل جهانی دولت آباد

    2017-06-01

    Full Text Available Due to the economic, social and cultural changes in recent decades and consequently alterations in the form and duties of families and expectations of individuals from marriage, the institution of the family and marriage are enormously involved with different challenges and conflicts in comparison to past years. Fragile marital relationships, conflicts and divorce are results of such situations in Iran. Accordingly, the present study, which is designed through meta-analysis and deduction based on the concept analysis and reconceptualization of recent studies, has committed to manifest a proper different paradigm to explain marital conflicts. This paradigm is relying on various theoretical approaches, particularly the theory of symbolic interactionism as the main explanatory mean, and also applying the concept of “Marital Paradigm” as the missing information in previous studies of this field. It explains the marital conflicts between couples as paradigmatic conflicts; and its main idea is that marital conflict is not the result of one or more fixed and specified factors, but it is the production of encountering the opposing (or different paradigms.

  17. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    his study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.

  18. A fuzzy regression with support vector machine approach to the estimation of horizontal global solar radiation

    International Nuclear Information System (INIS)

    Baser, Furkan; Demirhan, Haydar

    2017-01-01

    Accurate estimation of the amount of horizontal global solar radiation for a particular field is an important input for decision processes in solar radiation investments. In this article, we focus on the estimation of yearly mean daily horizontal global solar radiation by using an approach that utilizes fuzzy regression functions with support vector machine (FRF-SVM). This approach is not seriously affected by outlier observations and does not suffer from the over-fitting problem. To demonstrate the utility of the FRF-SVM approach in the estimation of horizontal global solar radiation, we conduct an empirical study over a dataset collected in Turkey and applied the FRF-SVM approach with several kernel functions. Then, we compare the estimation accuracy of the FRF-SVM approach to an adaptive neuro-fuzzy system and a coplot supported-genetic programming approach. We observe that the FRF-SVM approach with a Gaussian kernel function is not affected by both outliers and over-fitting problem and gives the most accurate estimates of horizontal global solar radiation among the applied approaches. Consequently, the use of hybrid fuzzy functions and support vector machine approaches is found beneficial in long-term forecasting of horizontal global solar radiation over a region with complex climatic and terrestrial characteristics. - Highlights: • A fuzzy regression functions with support vector machines approach is proposed. • The approach is robust against outlier observations and over-fitting problem. • Estimation accuracy of the model is superior to several existent alternatives. • A new solar radiation estimation model is proposed for the region of Turkey. • The model is useful under complex terrestrial and climatic conditions.

  19. Theoretical relation between halo current-plasma energy displacement/deformation in EAST

    Science.gov (United States)

    Khan, Shahab Ud-Din; Khan, Salah Ud-Din; Song, Yuntao; Dalong, Chen

    2018-04-01

    In this paper, theoretical model for calculating halo current has been developed. This work attained novelty as no theoretical calculations for halo current has been reported so far. This is the first time to use theoretical approach. The research started by calculating points for plasma energy in terms of poloidal and toroidal magnetic field orientations. While calculating these points, it was extended to calculate halo current and to developed theoretical model. Two cases were considered for analyzing the plasma energy when flows down/upward to the diverter. Poloidal as well as toroidal movement of plasma energy was investigated and mathematical formulations were designed as well. Two conducting points with respect to (R, Z) were calculated for halo current calculations and derivations. However, at first, halo current was established on the outer plate in clockwise direction. The maximum generation of halo current was estimated to be about 0.4 times of the plasma current. A Matlab program has been developed to calculate halo current and plasma energy calculation points. The main objective of the research was to establish theoretical relation with experimental results so as to precautionary evaluate the plasma behavior in any Tokamak.

  20. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  1. A game theoretic approach to assignment problems

    NARCIS (Netherlands)

    Klijn, F.

    2000-01-01

    Game theory deals with the mathematical modeling and analysis of conflict and cooperation in the interaction of multiple decision makers. This thesis adopts two game theoretic methods to analyze a range of assignment problems that arise in various economic situations. The first method has as

  2. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method...... is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...

  3. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.

  4. Determination of pKa and the corresponding structures of quinclorac using combined experimental and theoretical approaches

    Science.gov (United States)

    Song, Dean; Sun, Huiqing; Jiang, Xiaohua; Kong, Fanyu; Qiang, Zhimin; Zhang, Aiqian; Liu, Huijuan; Qu, Jiuhui

    2018-01-01

    As an emerging environmental contaminant, the herbicide quinclorac has attracted much attention in recent years. However, a very fundamental issue, the acid dissociation of quinclorac has not yet to be studied in detail. Herein, the pKa value and the corresponding structures of quinclorac were systematically investigated using combined experimental and theoretical approaches. The experimental pKa of quinclorac was determined by the spectrophotometric method to be 2.65 at 25 °C with ionic strength of 0.05 M, and was corrected to be 2.56 at ionic strength of zero. The molecular structures of quinclorac were then located by employing the DFT calculation. The anionic quinclorac was directly located with the carboxylic group perpendicular to the aromatic ring, while neutral quinclorac was found to be the equivalent twin structures. The result was further confirmed by analyzing the UV/Vis and MS-MS2 spectra from both experimental and theoretical viewpoints. By employing the QSPR approach, the theoretical pKa of QCR was determined to be 2.50, which is excellent agreement with the experimental result obtained herein. The protonation of QCR at the carboxylic group instead of the quinoline structure was attributed to the weak electronegative property of nitrogen atom induced by the electron-withdrawing groups. It is anticipated that this work could not only help in gaining a deep insight into the acid dissociation of quinclorac but also offering the key information on its reaction and interaction with others.

  5. Estimation of direction of arrival of a moving target using subspace based approaches

    Science.gov (United States)

    Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2016-05-01

    In this work, array processing techniques based on subspace decomposition of signal have been evaluated for estimation of direction of arrival of moving targets using acoustic signatures. Three subspace based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Square-Estimation of Signal Parameters via Rotation Invariance Techniques (LS-ESPRIT) and Total Least Square- ESPIRIT (TLS-ESPRIT) are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace based approaches over TDE based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.

  6. Fourier-Malliavin volatility estimation theory and practice

    CERN Document Server

    Mancino, Maria Elvira; Sanfelici, Simona

    2017-01-01

    This volume is a user-friendly presentation of the main theoretical properties of the Fourier-Malliavin volatility estimation, allowing the readers to experience the potential of the approach and its application in various financial settings. Readers are given examples and instruments to implement this methodology in various financial settings and applications of real-life data. A detailed bibliographic reference is included to permit an in-depth study. .

  7. Hybrid rocket engine, theoretical model and experiment

    Science.gov (United States)

    Chelaru, Teodor-Viorel; Mingireanu, Florin

    2011-06-01

    The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: the scalability, the stability/controllability of the operating parameters and the increasing of the solid fuel regression rate. At first, we focus on theoretical models for hybrid rocket motor and compare the results with already available experimental data from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors, having different scales and compare them with experimental measurements of those hybrid rocket motors. Next the paper focuses on tribrid rocket motor concept, which by supplementary liquid fuel injection can improve the thrust controllability. A complementary computation model is also presented to estimate regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Liapunov theory. Stability coefficients obtained are dependent on burning parameters while the stability and command matrixes are identified. The paper presents thoroughly the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.

  8. A THEORETICAL APPROACH TO THE TRANSITION FROM A RESOURCE BASED TO A KNOWLEDGE-ECONOMY

    Directory of Open Access Journals (Sweden)

    Diana GIOACASI

    2015-09-01

    Full Text Available Economic development and the emergence of new technologies have changed the optics on the factors that are generating added value. The transition from a resource-dependent economy to one focused on tangible non-financial factors has progressed in a gradual manner and took place under the influence of globalization and of the internet boom. The aim of this article is to provide a theoretical approach to this phenomenon from the perspective of the temporal evolution of enterprise resources.

  9. A Balanced Theoretical and Empirical Approach for the Development of a Design Support Tool

    DEFF Research Database (Denmark)

    Jensen, Thomas Aakjær; Hansen, Claus Thorp

    1996-01-01

    The introduction of a new design support system may change the engineering designer's work situation. Therefore, it may not be possible to derive all the functionalities for a design support system from solely empirical studies of manual design work. Alternatively the design support system could ...... system, indicating a proposal for how to balance a theoretical and empirical approach. The result of this research will be utilized in the development of a Designer's Workbench to support the synthesis activity in mechanical design....

  10. Approaches to estimating the universe of natural history collections data

    Directory of Open Access Journals (Sweden)

    Arturo H. Ariño

    2010-10-01

    Full Text Available This contribution explores the problem of recognizing and measuring the universe of specimen-level data existing in Natural History Collections around the world, in absence of a complete, world-wide census or register. Estimates of size seem necessary to plan for resource allocation for digitization or data capture, and may help represent how many vouchered primary biodiversity data (in terms of collections, specimens or curatorial units might remain to be mobilized. Three general approaches are proposed for further development, and initial estimates are given. Probabilistic models involve crossing data from a set of biodiversity datasets, finding commonalities and estimating the likelihood of totally obscure data from the fraction of known data missing from specific datasets in the set. Distribution models aim to find the underlying distribution of collections’ compositions, figuring out the occult sector of the distributions. Finally, case studies seek to compare digitized data from collections known to the world to the amount of data known to exist in the collection but not generally available or not digitized. Preliminary estimates range from 1.2 to 2.1 gigaunits, of which a mere 3% at most is currently web-accessible through GBIF’s mobilization efforts. However, further data and analyses, along with other approaches relying more heavily on surveys, might change the picture and possibly help narrow the estimate. In particular, unknown collections not having emerged through literature are the major source of uncertainty.

  11. A singular-value decomposition approach to X-ray spectral estimation from attenuation data

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1986-01-01

    A singular-value decomposition (SVD) approach is described for estimating the exposure-rate spectral distributions of X-rays from attenuation data measured withvarious filtrations. This estimation problem with noisy measurements is formulated as the problem of solving a system of linear equations with an ill-conditioned nature. The principle of the SVD approach is that a response matrix, representing the X-ray attenuation effect by filtrations at various energies, can be expanded into summation of inherent component matrices, and thereby the spectral distributions can be represented as a linear combination of some component curves. A criterion function is presented for choosing the components needed to form a reliable estimate. The feasibility of the proposed approach is studied in detail in a computer simulation using a hypothetical X-ray spectrum. The application results of the spectral distributions emitted from a therapeutic X-ray generator are shown. Finally some advantages of this approach are pointed out. (orig.)

  12. An Information-Theoretic Approach for Indirect Train Traffic Monitoring Using Building Vibration

    Directory of Open Access Journals (Sweden)

    Susu Xu

    2017-05-01

    Full Text Available This paper introduces an indirect train traffic monitoring method to detect and infer real-time train events based on the vibration response of a nearby building. Monitoring and characterizing traffic events are important for cities to improve the efficiency of transportation systems (e.g., train passing, heavy trucks, and traffic. Most prior work falls into two categories: (1 methods that require intensive labor to manually record events or (2 systems that require deployment of dedicated sensors. These approaches are difficult and costly to execute and maintain. In addition, most prior work uses dedicated sensors designed for a single purpose, resulting in deployment of multiple sensor systems. This further increases costs. Meanwhile, with the increasing demands of structural health monitoring, many vibration sensors are being deployed in commercial buildings. Traffic events create ground vibration that propagates to nearby building structures inducing noisy vibration responses. We present an information-theoretic method for train event monitoring using commonly existing vibration sensors deployed for building health monitoring. The key idea is to represent the wave propagation in a building induced by train traffic as information conveyed in noisy measurement signals. Our technique first uses wavelet analysis to detect train events. Then, by analyzing information exchange patterns of building vibration signals, we infer the category of the events (i.e., southbound or northbound train. Our algorithm is evaluated with an 11-story building where trains pass by frequently. The results show that the method can robustly achieve a train event detection accuracy of up to a 93% true positive rate and an 80% true negative rate. For direction categorization, compared with the traditional signal processing method, our information-theoretic approach reduces categorization error from 32.1 to 12.1%, which is a 2.5× improvement.

  13. A coherent structure approach for parameter estimation in Lagrangian Data Assimilation

    Science.gov (United States)

    Maclean, John; Santitissadeekorn, Naratip; Jones, Christopher K. R. T.

    2017-12-01

    We introduce a data assimilation method to estimate model parameters with observations of passive tracers by directly assimilating Lagrangian Coherent Structures. Our approach differs from the usual Lagrangian Data Assimilation approach, where parameters are estimated based on tracer trajectories. We employ the Approximate Bayesian Computation (ABC) framework to avoid computing the likelihood function of the coherent structure, which is usually unavailable. We solve the ABC by a Sequential Monte Carlo (SMC) method, and use Principal Component Analysis (PCA) to identify the coherent patterns from tracer trajectory data. Our new method shows remarkably improved results compared to the bootstrap particle filter when the physical model exhibits chaotic advection.

  14. Set-theoretic methods in control

    CERN Document Server

    Blanchini, Franco

    2015-01-01

    The second edition of this monograph describes the set-theoretic approach for the control and analysis of dynamic systems, both from a theoretical and practical standpoint.  This approach is linked to fundamental control problems, such as Lyapunov stability analysis and stabilization, optimal control, control under constraints, persistent disturbance rejection, and uncertain systems analysis and synthesis.  Completely self-contained, this book provides a solid foundation of mathematical techniques and applications, extensive references to the relevant literature, and numerous avenues for further theoretical study. All the material from the first edition has been updated to reflect the most recent developments in the field, and a new chapter on switching systems has been added.  Each chapter contains examples, case studies, and exercises to allow for a better understanding of theoretical concepts by practical application. The mathematical language is kept to the minimum level necessary for the adequate for...

  15. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    Science.gov (United States)

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of mass concentration of tacrolimus in whole blood are commonly used by the clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. So, clinical laboratories must provide results as accurately as possible. Measurement uncertainty can allow ensuring reliability of these results. The aim of this study was to estimate measurement uncertainty of whole blood mass concentration tacrolimus values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated to the intermediate imprecision (using long-term internal quality control data) and the bias (utilizing a certified reference material). Next, we combined them together with the uncertainties related to the calibrators-assigned values to obtain a combined uncertainty for, finally, to calculate the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way that the single laboratory validation approach but considering data from internal and external quality control schemes to estimate the uncertainty related to the bias. The estimated expanded uncertainty for single laboratory validation, proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This fact would confirm that either two approaches could be used to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  16. Strategic exploration of battery waste management: A game-theoretic approach.

    Science.gov (United States)

    Kaushal, Rajendra Kumar; Nema, Arvind K; Chaudhary, Jyoti

    2015-07-01

    Electronic waste or e-waste is the fastest growing stream of solid waste today. It contains both toxic substances as well as valuable resources. The present study uses a non-cooperative game-theoretic approach for efficient management of e-waste, particularly batteries that contribute a major portion of any e-waste stream and further analyses the economic consequences of recycling of these obsolete, discarded batteries. Results suggest that the recycler would prefer to collect the obsolete batteries directly from the consumer rather than from the manufacturer, only if, the incentive return to the consumer is less than 33.92% of the price of the battery, the recycling fee is less than 6.46% of the price of the battery, and the price of the recycled material is more than 31.08% of the price of the battery. The manufacturer's preferred choice of charging a green tax from the consumer can be fruitful for the battery recycling chain. © The Author(s) 2015.

  17. When the Mannequin Dies, Creation and Exploration of a Theoretical Framework Using a Mixed Methods Approach.

    Science.gov (United States)

    Tripathy, Shreepada; Miller, Karen H; Berkenbosch, John W; McKinley, Tara F; Boland, Kimberly A; Brown, Seth A; Calhoun, Aaron W

    2016-06-01

    Controversy exists in the simulation community as to the emotional and educational ramifications of mannequin death due to learner action or inaction. No theoretical framework to guide future investigations of learner actions currently exists. The purpose of our study was to generate a model of the learner experience of mannequin death using a mixed methods approach. The study consisted of an initial focus group phase composed of 11 learners who had previously experienced mannequin death due to action or inaction on the part of learners as defined by Leighton (Clin Simul Nurs. 2009;5(2):e59-e62). Transcripts were analyzed using grounded theory to generate a list of relevant themes that were further organized into a theoretical framework. With the use of this framework, a survey was generated and distributed to additional learners who had experienced mannequin death due to action or inaction. Results were analyzed using a mixed methods approach. Forty-one clinicians completed the survey. A correlation was found between the emotional experience of mannequin death and degree of presession anxiety (P framework. Using the previous approach, we created a model of the effect of mannequin death on the educational and psychological state of learners. We offer the final model as a guide to future research regarding the learner experience of mannequin death.

  18. An evolutionary approach to real-time moment magnitude estimation via inversion of displacement spectra

    Science.gov (United States)

    Caprio, M.; Lancieri, M.; Cua, G. B.; Zollo, A.; Wiemer, S.

    2011-01-01

    We present an evolutionary approach for magnitude estimation for earthquake early warning based on real-time inversion of displacement spectra. The Spectrum Inversion (SI) method estimates magnitude and its uncertainty by inferring the shape of the entire displacement spectral curve based on the part of the spectra constrained by available data. The method consists of two components: 1) estimating seismic moment by finding the low frequency plateau Ω0, the corner frequency fc and attenuation factor (Q) that best fit the observed displacement spectra assuming a Brune ω2 model, and 2) estimating magnitude and its uncertainty based on the estimate of seismic moment. A novel characteristic of this method is that is does not rely on empirically derived relationships, but rather involves direct estimation of quantities related to the moment magnitude. SI magnitude and uncertainty estimates are updated each second following the initial P detection. We tested the SI approach on broadband and strong motion waveforms data from 158 Southern California events, and 25 Japanese events for a combined magnitude range of 3 ≤ M ≤ 7. Based on the performance evaluated on this dataset, the SI approach can potentially provide stable estimates of magnitude within 10 seconds from the initial earthquake detection.

  19. An "Ensemble Approach" to Modernizing Extreme Precipitation Estimation for Dam Safety Decision-Making

    Science.gov (United States)

    Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.

    2017-12-01

    To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically-based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal to better understand and characterizing extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions. The process of decision making in the

  20. Theoretical study on the inverse modeling of deep body temperature measurement

    International Nuclear Information System (INIS)

    Huang, Ming; Chen, Wenxi

    2012-01-01

    We evaluated the theoretical aspects of monitoring the deep body temperature distribution with the inverse modeling method. A two-dimensional model was built based on anatomical structure to simulate the human abdomen. By integrating biophysical and physiological information, the deep body temperature distribution was estimated from cutaneous surface temperature measurements using an inverse quasilinear method. Simulations were conducted with and without the heat effect of blood perfusion in the muscle and skin layers. The results of the simulations showed consistently that the noise characteristics and arrangement of the temperature sensors were the major factors affecting the accuracy of the inverse solution. With temperature sensors of 0.05 °C systematic error and an optimized 16-sensor arrangement, the inverse method could estimate the deep body temperature distribution with an average absolute error of less than 0.20 °C. The results of this theoretical study suggest that it is possible to reconstruct the deep body temperature distribution with the inverse method and that this approach merits further investigation. (paper)
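
    As a rough illustration of the inverse step described above (the paper's forward model is a 2-D bioheat simulation; here a random matrix A stands in for the sensitivity of 16 skin sensors to deep temperatures, and a plain Tikhonov penalty on deviations from a 37 °C prior stands in for the inverse quasilinear method):

```python
# Minimal sketch of a regularized linear inversion from noisy surface
# temperatures to deep temperatures. A, noise level and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_deep, n_sensors = 30, 16
A = rng.random((n_sensors, n_deep)) / n_deep           # stand-in forward operator
t_prior = np.full(n_deep, 37.0)                        # prior core temperature (deg C)
t_true = t_prior + 0.5 * rng.standard_normal(n_deep)
t_skin = A @ t_true + 0.05 * rng.standard_normal(n_sensors)   # 0.05 deg C sensor noise

# Tikhonov-regularized estimate of the deviation from the prior
lam = 1e-2
residual = t_skin - A @ t_prior
d_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_deep), A.T @ residual)
t_hat = t_prior + d_hat
print("mean absolute error (deg C):", np.mean(np.abs(t_hat - t_true)))
```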

  1. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research

  2. A Theoretical Framework for Soft-Information-Based Synchronization in Iterative (Turbo) Receivers

    Directory of Open Access Journals (Sweden)

    Lottici Vincenzo

    2005-01-01

    Full Text Available This contribution considers turbo synchronization, that is to say, the use of soft data information to estimate parameters like carrier phase, frequency, or timing offsets of a modulated signal within an iterative data demodulator. In turbo synchronization, the receiver exploits the soft decisions computed at each turbo decoding iteration to provide a reliable estimate of some signal parameters. The aim of our paper is to show that such a "turbo-estimation" approach can be regarded as a special case of the expectation-maximization (EM) algorithm. This leads to a general theoretical framework for turbo synchronization that allows one to derive parameter estimation procedures for carrier phase and frequency offset, as well as for timing offset and signal amplitude. The proposed mathematical framework is illustrated by simulation results reported for the particular case of carrier phase and frequency offset estimation of a turbo-coded 16-QAM signal.
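
    The EM structure the paper formalizes can be shown in a few lines. The sketch below is an assumption-laden toy, not the paper's algorithm: it uses uncoded QPSK instead of turbo-coded 16-QAM, a simple Gaussian soft demapper in place of the turbo decoder's soft outputs, and estimates only the carrier phase, alternating soft symbol estimation (E-step) with a phase update (M-step):

```python
# Toy EM-style carrier phase estimation with soft symbol decisions (QPSK).
import numpy as np

rng = np.random.default_rng(1)
const = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)     # QPSK alphabet
s = const[rng.integers(0, 4, 500)]
true_phase, sigma2 = 0.3, 0.05
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
r = s * np.exp(1j * true_phase) + noise

theta = 0.0
for _ in range(10):
    z = r * np.exp(-1j * theta)                               # derotate with current estimate
    # E-step: posterior mean of each transmitted symbol (soft decision)
    w = np.exp(-np.abs(z[:, None] - const[None, :]) ** 2 / sigma2)
    s_soft = (w @ const) / w.sum(axis=1)
    # M-step: phase maximizing the expected log-likelihood
    theta = np.angle(np.sum(r * np.conj(s_soft)))
print(f"estimated phase {theta:.3f} rad (true {true_phase} rad)")
```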

  3. WOMEN, FOOTBALL AND EUROPEAN INTEGRATION. AIMS AND QUESTIONS, METHODOLOGICAL AND THEORETICAL APPROACHES

    Directory of Open Access Journals (Sweden)

    Gertrud Pfister

    2013-12-01

    Full Text Available The aim of this article is to introduce a new research topic and provide information about a European research project focusing on football as a means of European integration. Using the results of available studies by the author and other scholars, it is discussed whether and how women can participate in football cultures and contribute to a European identity. Based on theoretical approaches to national identity, gender and socialization, as well as on the analysis of various intersections between gender, football and fandom, it can be concluded that women are still outsiders in the world of football and that it is doubtful whether female players and fans will contribute decisively to Europeanization processes.

  4. ANN Based Approach for Estimation of Construction Costs of Sports Fields

    Directory of Open Access Journals (Sweden)

    Michał Juszczyk

    2018-01-01

    Full Text Available Cost estimates are essential for the success of construction projects. Neural networks, as tools of artificial intelligence, offer significant potential in this field. Applying neural networks, however, requires respective studies due to the specifics of different kinds of facilities. This paper presents a proposed approach to the estimation of construction costs of sports fields based on neural networks. The general applicability of artificial neural networks to the formulated cost estimation problem is investigated. The applicability of multilayer perceptron networks is confirmed by the results of the initial training of a set of various artificial neural networks. Moreover, one network was tailored to map the relationship between the total cost of construction works and selected cost predictors characteristic of sports fields. Its prediction quality and accuracy were assessed positively. The research results legitimize the proposed approach.

  5. Comparison of theoretical estimates and experimental measurements of fatigue crack growth under severe thermal shock conditions (part two - theoretical assessment and comparison with experiment)

    International Nuclear Information System (INIS)

    Green, D.; Marsh, D.; Parker, R.

    1984-01-01

    This paper reports the theoretical assessment of cracking which may occur when a severe cycle comprising alternate upshocks and downshocks is applied to an axisymmetric feature with an internal, partial-penetration weld and crevice. The experimental observations of cracking are reported separately. Good agreement was noted even though extensive cyclic plasticity occurred at the location of cracking. It is concluded that the LEFM solution correlated with the experiment mainly because of the axisymmetric geometry, which allows a large hydrostatic stress to exist at the internal weld crevice end. Thus the stress at the crevice can approach the singular solution required for LEFM correlations without contributing to yielding.

  6. Estimating construction and demolition debris generation using a materials flow analysis approach.

    Science.gov (United States)

    Cochran, K M; Townsend, T G

    2010-11-01

    The magnitude and composition of a region's construction and demolition (C&D) debris should be understood when developing rules, policies and strategies for managing this segment of the solid waste stream. In the US, several national estimates have been conducted using a weight-per-construction-area approximation; national estimates using alternative procedures such as those used for other segments of the solid waste stream have not been reported for C&D debris. This paper presents an evaluation of a materials flow analysis (MFA) approach for estimating C&D debris generation and composition for a large region (the US). The consumption of construction materials in the US and typical waste factors used for construction materials purchasing were used to estimate the mass of solid waste generated as a result of construction activities. Debris from demolition activities was predicted from various historical construction materials consumption data and estimates of average service lives of the materials. The MFA approach estimated that approximately 610-780 × 10^6 Mg of C&D debris was generated in 2002. This predicted mass exceeds previous estimates using other C&D debris predictive methodologies and reflects the large waste stream that exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
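
    The bookkeeping behind the MFA approach reduces to two sums, sketched below with invented placeholder materials, waste factors and retirement rates (the paper's actual data tables are far richer):

```python
# Hedged sketch of MFA-style C&D debris accounting. All figures are made up.
construction_consumption_Mg = {"concrete": 300e6, "wood": 80e6, "drywall": 25e6}
waste_factor = {"concrete": 0.05, "wood": 0.10, "drywall": 0.12}       # purchasing overage
historical_stock_Mg = {"concrete": 200e6, "wood": 60e6, "drywall": 15e6}
retiring_fraction = {"concrete": 0.01, "wood": 0.02, "drywall": 0.02}  # end of service life

construction_debris = sum(c * waste_factor[m]
                          for m, c in construction_consumption_Mg.items())
demolition_debris = sum(s * retiring_fraction[m]
                        for m, s in historical_stock_Mg.items())
total = construction_debris + demolition_debris
print(f"C&D debris estimate: {total / 1e6:.0f} x 10^6 Mg")
```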

  7. Pushing for the Extreme: Estimation of Poisson Distribution from Low Count Unreplicated Data—How Close Can We Get?

    Directory of Open Access Journals (Sweden)

    Peter Tiňo

    2013-04-01

    Full Text Available Studies of learning algorithms typically concentrate on situations where a potentially ever-growing training sample is available. Yet, there can be situations (e.g., detection of differentially expressed genes on unreplicated data or estimation of time delay in non-stationary gravitationally lensed photon streams) where only extremely small samples can be used in order to perform an inference. On unreplicated data, the inference has to be performed on the smallest sample possible—a sample of size 1. We study whether anything useful can be learnt in such extreme situations by concentrating on a Bayesian approach that can account for possible prior information on expected counts. We perform a detailed information theoretic study of such Bayesian estimation and quantify the effect of Bayesian averaging on its first two moments. Finally, to analyze the potential benefits of the Bayesian approach, we also consider Maximum Likelihood (ML) estimation as a baseline approach. We show both theoretically and empirically that Bayesian model averaging can be potentially beneficial.
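
    The heart of the comparison fits in a few lines: for a single count x drawn from a Poisson distribution, the ML estimate is x itself, while a conjugate Gamma(a, b) prior gives the posterior Gamma(a + x, b + 1) with mean (a + x)/(b + 1). The sketch below uses assumed hyperparameters purely for illustration:

```python
# Single-observation Poisson rate estimation: ML vs. Bayesian posterior mean.
import numpy as np

rng = np.random.default_rng(7)
lam_true = 3.0
a, b = 2.0, 1.0                       # assumed Gamma prior hyperparameters
x = rng.poisson(lam_true)             # the one available count (sample of size 1)

lam_ml = x                            # maximum likelihood estimate
lam_bayes = (a + x) / (b + 1.0)       # posterior mean under the Gamma prior
print(f"x = {x}, ML = {lam_ml}, Bayesian posterior mean = {lam_bayes:.2f}")
```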

  8. Technical Note: A comparison of two empirical approaches to estimate in-stream net nutrient uptake

    Science.gov (United States)

    von Schiller, D.; Bernal, S.; Martí, E.

    2011-04-01

    To establish the relevance of in-stream processes on nutrient export at the catchment scale, it is important to accurately estimate whole-reach net nutrient uptake rates that consider both uptake and release processes. Two empirical approaches have been used in the literature to estimate these rates: (a) the mass balance approach, which considers changes in ambient nutrient loads corrected by groundwater inputs between two stream locations separated by a certain distance, and (b) the spiralling approach, which is based on the patterns of longitudinal variation in ambient nutrient concentrations along a reach, following the nutrient spiralling concept. In this study, we compared the estimates of in-stream net nutrient uptake rates of nitrate (NO3) and ammonium (NH4) and the associated uncertainty obtained with these two approaches under different ambient conditions, using a data set of monthly samplings in two contrasting stream reaches during two hydrological years. Overall, the rates calculated with the mass balance approach tended to be higher than those calculated with the spiralling approach only at high ambient nitrogen (N) concentrations. Uncertainty associated with these estimates also differed between the two approaches, especially for NH4, due to the general lack of significant longitudinal patterns in concentration. The advantages and disadvantages of each approach are discussed.
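
    In outline, the two estimators compare as follows. The numbers below are invented, and the exponential decline model follows the standard spiralling formulation, not necessarily the exact variant used in the paper:

```python
# Sketch: mass-balance net uptake vs. spiralling (longitudinal decline) fit.
import numpy as np

# (a) Mass balance over the reach, corrected for groundwater inputs
Q_up, C_up = 10.0, 0.50          # upstream discharge (L/s) and concentration (mg/L)
Q_gw, C_gw = 1.0, 0.30           # groundwater input along the reach
Q_dn, C_dn = 11.0, 0.45          # downstream
reach_area_m2 = 200.0
net_uptake = (Q_up * C_up + Q_gw * C_gw - Q_dn * C_dn) / reach_area_m2

# (b) Spiralling: fit C(x) = C0 * exp(-k x); uptake length Sw = 1/k
x = np.array([0.0, 25.0, 50.0, 75.0, 100.0])     # distance along reach (m)
C = np.array([0.50, 0.48, 0.47, 0.46, 0.45])
slope, _ = np.polyfit(x, np.log(C), 1)
Sw = -1.0 / slope                                 # nutrient uptake length (m)
print(f"mass balance: {net_uptake:.4f} mg s^-1 m^-2, uptake length: {Sw:.0f} m")
```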

  9. Alternative sources of power generation, incentives and regulatory mandates: a theoretical approach to the Colombian case

    International Nuclear Information System (INIS)

    Zapata, Carlos M; Zuluaga Monica M; Dyner, Isaac

    2005-01-01

    Alternative energy generation sources are becoming relevant in several countries worldwide because of technological improvements and environmental concerns. In this paper, the most common problems of renewable energy sources are reviewed, incentives and regulatory mandates from several countries are presented, and a first theoretical approach to a renewable energy incentive system in Colombia is discussed. The paper focuses on theoretical aspects and on international experience with incentives used to accelerate the diffusion of renewable energies; these features are analyzed with a view toward a special incentive system for renewable energies in Colombia. It is concluded that Colombia is likely to apply indirect incentives such as low interest rates, tax exemptions and the like, and that these incentives would be limited to supporting electricity production by generating organizations.

  10. A Methodological Demonstration of Set-theoretical Approach to Social Media Maturity Models Using Necessary Condition Analysis

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor; methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach by applying the Necessary Condition Analysis (NCA) technique to derive maturity stages and stage boundary conditions. The ontology is to view stages (boundaries) in maturity models as a collection of necessary conditions. Using social media maturity data, we demonstrate the strength of our approach and evaluate some of the arguments presented by previous conceptually focused social media maturity models.

  11. COST AND TIME ESTIMATES DURING THE SUPPLIER SELECTION OF AN INFORMATION SYSTEM FOR LEGAL AREA: A CASE STUDY COMPARING TRADITIONAL AND AGILE PROJECT APPROACHES

    Directory of Open Access Journals (Sweden)

    Vieira, G. L. S.

    2017-06-01

    Full Text Available Considering a direct correlation between the level of detail in project requirements and project performance, this paper evaluates whether the adoption of more extensive and detailed cost, time and scope estimation processes, based on both traditional and agile practices and executed concurrently with the supplier selection stage, could guarantee greater accuracy in these estimates, thus increasing project success rates. Based on a case study of an information system project implementation in the legal area of a large Brazilian company, five suppliers had their proposals analyzed and compared in terms of the costs and deadlines involved, as well as the project management processes used in their estimates. From the obtained results, it was possible to observe that not all companies follow, at least during the prospecting phase, the management processes described in their service proposals, as theory would suggest. Another important finding was that the proposals involving, at least partially, agile approach concepts were more likely to justify their estimates. These proposals also presented lower values when compared to those less adherent to the theoretical concepts, such as those based on traditional approaches.

  12. Intelligent cognitive radio jamming - a game-theoretical approach

    Science.gov (United States)

    Dabcevic, Kresimir; Betancourt, Alejandro; Marcenaro, Lucio; Regazzoni, Carlo S.

    2014-12-01

    Cognitive radio (CR) promises to be a solution for the spectrum underutilization problem. However, security issues pertaining to cognitive radio technology are still an understudied topic. One of the prevailing issues is intelligent radio frequency (RF) jamming attacks, where adversaries are able to exploit on-the-fly reconfigurability potentials and learning mechanisms of cognitive radios in order to devise and deploy advanced jamming tactics. In this paper, we use a game-theoretical approach to analyze jamming/anti-jamming behavior between cognitive radio systems. A non-zero-sum game with incomplete information on an opponent's strategy and payoff is modelled as an extension of a Markov decision process (MDP). Learning algorithms based on adaptive payoff play and fictitious play are considered. A combination of frequency hopping and power alteration is deployed as an anti-jamming scheme. A real-life software-defined radio (SDR) platform is used in order to perform measurements useful for quantifying the jamming impacts, as well as to infer relevant hardware-related properties. Results of these measurements are then used as parameters for the modelled jamming/anti-jamming game and are compared to the Nash equilibrium of the game. Simulation results indicate, among other things, the benefit provided to the jammer when it employs the spectrum sensing algorithm in proactive frequency hopping and power alteration schemes.

  13. THEORETICAL AND METHODOLOGICAL APPROACHES TO THE STUDY OF THE IMPACT OF INFORMATION TECHNOLOGY ON SOCIAL CONNECTIONS AMONG YOUTH

    Directory of Open Access Journals (Sweden)

    Sofia Alexandrovna Zverkova

    2015-11-01

    Full Text Available The urgency of this topic is due to the virtualization of communication in modern society, especially among young people, which affects social relations and social support. A more in-depth study of the network virtualization of the social relations of society is needed, owing to the ambiguous consequences of this phenomenon among the youth. Purpose. To analyze classic and contemporary theoretical and methodological approaches to the study of social ties and social support in terms of technological progress. Results. The article presents a sociological analysis of theoretical and methodological approaches to the study of problems of interaction and social support among youth through strong and weak social ties in cyberspace and in the real world. Practical implications. The analysis provides the opportunity for a wide-ranging examination of social relations in various fields of sociology, such as the sociology of youth and the sociology of communications.

  14. Developing Theoretical-Methodological Approaches to Assessment of Export Potential of Ukrainian Enterprises

    Directory of Open Access Journals (Sweden)

    Matyushenko Igor Yu.

    2016-02-01

    Full Text Available The article is aimed at studying the existing theoretical-methodological approaches to the analysis and assessment of export potential. The views of scientists regarding the categorial content of the concept of «export potential» have been considered, and a definition of this economic category has been suggested. The main types of analytical procedures for assessment have been classified, and several methodical approaches proposed by individual authors to determine the level of export potential have been analyzed. The export potential of a hypothetical enterprise has been calculated using the selected assessment methodologies. The urgency of improving and refining existing methods to enable more detailed and quantitative analysis has been substantiated. It is suggested that a prognosis assessment of the export potential of enterprises be implemented by combining the results of several methodologies into an aggregate indicator of export potential efficiency. A prognosis model for the dynamics of the export potential of a hypothetical enterprise has been built, and the value of the aggregate indicator has been calculated on the basis of the three selected valuation methodologies.

  15. Information theoretic analysis of canny edge detection in visual communication

    Science.gov (United States)

    Jiang, Bo; Rahman, Zia-ur

    2011-06-01

    In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that do impact the quality of the acquired image and thus the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In this paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.

  16. A Theoretical Assessment of the Formation of IT clusters in Kazakhstan: Approaches and Positive Effects

    OpenAIRE

    Anel A. Kireyeva

    2016-01-01

    Abstract The aim of this research is to develop new theoretical approaches to the formation of IT clusters in order to strengthen the trend of innovative industrialization and the competitiveness of the country. In keeping with the previous literature, this study is motivated by the novelty of the problem concerning the formation of IT clusters, which can become a driving force of transformation through interaction, improved efficiency and the introduction of advanced technology. In this research,...

  17. Approaches and methods for econometric analysis of market power

    DEFF Research Database (Denmark)

    Perekhozhuk, Oleksandr; Glauben, Thomas; Grings, Michael

    2017-01-01

    This study discusses two widely used approaches in the New Empirical Industrial Organization (NEIO) literature and examines the strengths and weaknesses of the Production-Theoretic Approach (PTA) and the General Identification Method (GIM) for the econometric analysis of market power in agricultural and food markets. We provide a framework that may help researchers to evaluate and improve structural models of market power. Starting with the specification of the approaches in question, we compare published empirical studies of market power with respect to the choice of the applied approach, functional forms, estimation methods and derived estimates of the degree of market power. Thereafter, we use our framework to evaluate several structural models based on PTA and GIM to measure oligopsony power in the Ukrainian dairy industry. The PTA-based results suggest that the estimated parameters...

  18. Best estimate LB LOCA approach based on advanced thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Sauvage, J.Y.; Gandrille, J.L.; Gaurrand, M.; Rochwerger, D.; Thibaudeau, J.; Viloteau, E.

    2004-01-01

    Improvements achieved in thermal-hydraulics with the development of Best Estimate computer codes have led a number of Safety Authorities to advocate realistic analyses instead of conservative calculations. The potential of a Best Estimate approach for the analysis of LOCAs urged FRAMATOME to enter early into the development, with CEA and EDF, of the 2nd-generation code CATHARE, and then of an LBLOCA BE methodology with BWNT following the Code Scaling, Applicability and Uncertainty (CSAU) procedure. CATHARE and TRAC are the basic tools for the LOCA studies which will be performed by FRAMATOME according to either a deterministic better estimate (dbe) methodology or a Statistical Best Estimate (SBE) methodology. (author)

  19. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    Science.gov (United States)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it is another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, the climate data of the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracies in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.

  20. Estimating absolute configurational entropies of macromolecules: the minimally coupled subspace approach.

    Directory of Open Access Journals (Sweden)

    Ulf Hensen

    Full Text Available We develop a general minimally coupled subspace approach (MCSA to compute absolute entropies of macromolecules, such as proteins, from computer generated canonical ensembles. Our approach overcomes limitations of current estimates such as the quasi-harmonic approximation which neglects non-linear and higher-order correlations as well as multi-minima characteristics of protein energy landscapes. Here, Full Correlation Analysis, adaptive kernel density estimation, and mutual information expansions are combined and high accuracy is demonstrated for a number of test systems ranging from alkanes to a 14 residue peptide. We further computed the configurational entropy for the full 67-residue cofactor of the TATA box binding protein illustrating that MCSA yields improved results also for large macromolecular systems.

  1. Information and crystal structure estimation

    International Nuclear Information System (INIS)

    Wilkins, S.W.; Commonwealth Scientific and Industrial Research Organization, Clayton; Varghese, J.N.; Steenstrup, S.

    1984-01-01

    The conceptual foundations of a general information-theoretic based approach to X-ray structure estimation are reexamined with a view to clarifying some of the subtleties inherent in the approach and to enhancing the scope of the method. More particularly, general reasons for choosing the minimum of the Shannon-Kullback measure for information as the criterion for inference are discussed, and it is shown that the minimum information (or maximum entropy) principle enters the present treatment of the structure estimation problem in at least two quite separate ways, and that three formally similar but conceptually quite different expressions for relative information appear at different points in the theory. One of these is the general Shannon-Kullback expression, while the second is a derived form pertaining only under the restrictive assumptions of the present stochastic model for allowed structures, and the third is a measure of the additional information involved in accepting a fluctuation relative to an arbitrary mean structure. (orig.)

  2. A Theoretical Approach to Understanding Population Dynamics with Seasonal Developmental Durations

    Science.gov (United States)

    Lou, Yijun; Zhao, Xiao-Qiang

    2017-04-01

    There is a growing body of biological investigation to understand the impacts of seasonally changing environmental conditions on population dynamics in various research fields such as single population growth and disease transmission. On the other hand, understanding the population dynamics subject to seasonally changing weather conditions plays a fundamental role in predicting the trends of population patterns and disease transmission risks under scenarios of climate change. With the host-macroparasite interaction as a motivating example, we propose a synthesized approach for investigating population dynamics subject to seasonal environmental variations from a theoretical point of view, involving model development, basic reproduction ratio formulation and computation, and rigorous mathematical analysis. The resultant model with periodic delay presents a novel term related to the rate of change of the developmental duration, bringing new challenges to dynamics analysis. By investigating a periodic semiflow on a suitably chosen phase space, the global dynamics of a threshold type is established: all solutions either go to zero when the basic reproduction ratio is less than one, or stabilize at a positive periodic state when the ratio is greater than one. The synthesized approach developed here is applicable to broader contexts of investigating biological systems with seasonal developmental durations.

  3. Lateral Load Capacity of Piles: A Comparative Study Between Indian Standards and Theoretical Approach

    Science.gov (United States)

    Jayasree, P. K.; Arun, K. V.; Oormila, R.; Sreelakshmi, H.

    2018-05-01

    As per Indian Standards, laterally loaded piles are usually analysed using the method adopted by IS 2911-2010 (Part 1/Section 2). But practising engineers are of the opinion that the IS method is very conservative in design. This work aims at determining the extent to which the conventional IS design approach is conservative. This is done through a comparative study between the IS approach and a theoretical model based on Vesic's equation. Bore log details for six different bridges were collected from the Kerala Public Works Department. Cast-in-situ fixed-head piles embedded in three soil conditions, both end-bearing and friction piles, were considered and analyzed separately. Piles were also modelled in STAAD.Pro software based on the IS approach, and the results were validated using the Matlock and Reese equation (In Proceedings of the fifth international conference on soil mechanics and foundation engineering, 1961). The results were presented as the percentage variation in values of bending moment and deflection obtained by the different methods. The results obtained from the mathematical model based on Vesic's equation and those obtained as per the IS approach were compared, and the IS method was found to be uneconomical and conservative.

  4. Theoretical aspects of an electrostatic aerosol filter for civilian turbofan engines

    Directory of Open Access Journals (Sweden)

    Valeriu DRAGAN

    2012-03-01

    Full Text Available The paper addresses the problem of aerosol filtration in turbofan engines. The current problem with very fine aerosol admission is the impossibility of mechanical filtration; another aspect of the problem is the high mass flow of air to be filtered. Left unattended, aerosol admission can, and usually does, lead to clogging of turbine cooling passages and can damage the engine completely. The approach is theoretical and relies on the principles of electrostatic dust collectors known in other industries. An estimative equation is deduced in order to quantify the electrical charge required to obtain the desired filtration. Although the device still needs more theoretical and experimental work, it could one day be used as a means of increasing the safety of airplanes passing through an aerosol-laden mass of air.

  5. Theoretical epidemiology applied to health physics: estimation of the risk of radiation-induced breast cancer

    International Nuclear Information System (INIS)

    Sutherland, J.V.

    1983-01-01

    Indirect estimation of low-dose radiation hazards is possible using the multihit model of carcinogenesis. This model is based on cancer incidence data collected over many decades on tens of millions of people. Available data on human radiation effects can be introduced into the modeling process without the requirement that these data precisely define the model to be used. This reduction in the information demanded from the limited data on human radiation effects allows a more rational approach to estimation of low-dose radiation hazards and helps to focus attention on research directed towards understanding the process of carcinogenesis, rather than on repeating human or animal experiments that cannot provide sufficient data to resolve the low-dose estimation problem. Assessment of the risk of radiation-induced breast cancer provides an excellent example of the utility of multihit modeling procedures

  6. ON THE APPLICATION OF PARTIAL BARRIERS FOR SPINNING MACHINE NOISE CONTROL: A THEORETICAL AND EXPERIMENTAL APPROACH

    Directory of Open Access Journals (Sweden)

    M. R. Monazzam, A. Nezafat

    2007-04-01

    Full Text Available Noise is one of the most serious challenges in modern communities. In some industries, owing to the nature of the process, this challenge is more threatening. This paper describes a means of noise control for a spinning machine based on experimental measurements. The advantages and disadvantages of the control procedure are also discussed, along with the different factors which may affect the performance of the barrier in this situation. To provide a good estimation of the control measure, a theoretical formula is also described and compared with the field data. Good agreement between the results of the field measurements and the presented theoretical model was achieved. No obvious noise reduction was obtained from partial indoor barriers in low-absorbent enclosed spaces, since reflection from multiple hard surfaces is the dominant factor in the tested environment. Finally, the environmental conditions and standards necessary for attaining the ideal results are explained.

  7. Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.

    Science.gov (United States)

    Azzari, Lucio; Foi, Alessandro

    2014-08-01

    We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixtures of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
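
    For contrast with the heterogeneous-patch idea, the conventional baseline mentioned above can be sketched in a few lines: build local mean/variance pairs from homogeneous blocks and fit the signal-dependent model var = a*mean + b (the synthetic image and parameters are assumptions, not the authors' algorithm):

```python
# Baseline scatterplot method for signal-dependent noise estimation.
import numpy as np

rng = np.random.default_rng(3)
# Piecewise-constant "clean" image so that each 8x8 block is homogeneous
clean = np.repeat(np.repeat(rng.uniform(10, 200, (32, 32)), 8, axis=0), 8, axis=1)
a_true, b_true = 0.5, 4.0
noisy = clean + rng.standard_normal(clean.shape) * np.sqrt(a_true * clean + b_true)

# Local mean/variance pairs from non-overlapping 8x8 blocks
blocks = noisy.reshape(32, 8, 32, 8).swapaxes(1, 2).reshape(-1, 64)
means = blocks.mean(axis=1)
variances = blocks.var(axis=1, ddof=1)

a_hat, b_hat = np.polyfit(means, variances, 1)   # linear fit: var ~ a*mean + b
print(f"estimated a = {a_hat:.2f} (true {a_true}), b = {b_hat:.2f} (true {b_true})")
```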

  8. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    Science.gov (United States)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparison to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a critical particle size of incipient motion modified to account for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of size-dependent sediment mobility.
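
    The core of the heuristic, splitting a lognormal pre-entrainment PSD at a threshold size, can be sketched directly (the geometric mean, geometric standard deviation and threshold below are assumed values, not calibrated ones):

```python
# Lognormal PSD split at the incipient-motion threshold size.
import numpy as np
from scipy.stats import lognorm

gm_mm, gsd = 8.0, 2.0                       # geometric mean and geometric std of bed PSD
psd = lognorm(s=np.log(gsd), scale=gm_mm)   # lognormal parameterized in ln-space
d_threshold_mm = 12.0                       # assumed mixed-size-corrected threshold

mobile_fraction = psd.cdf(d_threshold_mm)   # mass fraction finer than the threshold
print(f"fraction of bed sediment entrained: {mobile_fraction:.2f}")

# The entrained PSD is the pre-entrainment PSD truncated below the threshold;
# e.g. the median grain size of the entrained sediment:
d50_entrained = psd.ppf(0.5 * mobile_fraction)
print(f"median entrained grain size: {d50_entrained:.1f} mm")
```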

  9. Fault Estimation for Fuzzy Delay Systems: A Minimum Norm Least Squares Solution Approach.

    Science.gov (United States)

    Huang, Sheng-Juan; Yang, Guang-Hong

    2017-09-01

    This paper mainly focuses on the problem of fault estimation for a class of Takagi-Sugeno fuzzy systems with state delays. A minimum norm least squares solution (MNLSS) approach is first introduced to establish a fault estimation compensator, which is able to optimize the fault estimator. Compared with most of the existing fault estimation methods, the MNLSS-based fault estimation method can effectively decrease the effect of state errors on the accuracy of fault estimation. Finally, three examples are given to illustrate the effectiveness and merits of the proposed method.

  10. Game-theoretic interference coordination approaches for dynamic spectrum access

    CERN Document Server

    Xu, Yuhua

    2016-01-01

    Written by experts in the field, this book is based on recent research findings in dynamic spectrum access for cognitive radio networks. It establishes a game-theoretic framework and presents cutting-edge technologies for distributed interference coordination. With game-theoretic formulation and the designed distributed learning algorithms, it provides insights into the interactions between multiple decision-makers and the converging stable states. Researchers, scientists and engineers in the field of cognitive radio networks will benefit from the book, which provides valuable information, useful methods and practical algorithms for use in emerging 5G wireless communication.

  11. A super-resolution approach for uncertainty estimation of PIV measurements

    NARCIS (Netherlands)

    Sciacchitano, A.; Wieneke, B.; Scarano, F.

    2012-01-01

    A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of

  12. Use of the superpopulation approach to estimate breeding population size: An example in asynchronously breeding birds

    Science.gov (United States)

    Williams, K.A.; Frederick, P.C.; Nichols, J.D.

    2011-01-01

    Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.

  13. METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE

    Directory of Open Access Journals (Sweden)

    Татьяна Александровна Коркина

    2013-08-01

    Full Text Available Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people who effectively and reliably carry out labour activity in a variety of conditions. The article presents a methodical approach to the estimation of an employee's professionalism, both from the standpoint of the external manifestations of the reliability and effectiveness of the work and from that of the personal characteristics of the employee which determine the results of his work. This approach includes the assessment of the level of qualification and motivation of the employee for each key job function, as well as the final results of its implementation against the criteria of efficiency and reliability. The proposed methodological approach to the estimation of an employee's professionalism allows the identification of «bottlenecks» in the structure of his labour functions and the definition of directions for developing the professional qualities of the worker, to ensure the required level of reliability and efficiency of the obtained results.DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11

  14. Estimating productivity costs using the friction cost approach in practice: a systematic review.

    Science.gov (United States)

    Kigozi, Jesse; Jowett, Sue; Lewis, Martyn; Barton, Pelham; Coast, Joanna

    2016-01-01

    The choice of the most appropriate approach to valuing productivity loss has received much debate in the literature. The friction cost approach has been proposed as a more appropriate alternative to the human capital approach when valuing productivity loss, although its application remains limited. This study reviews the application of the friction cost approach in health economic studies and examines how its use varies in practice across different country settings. A systematic review was performed to identify economic evaluation studies that estimated productivity costs using the friction cost approach and were published in English from 1996 to 2013. A standard template was developed and used to extract information from studies meeting the inclusion criteria. The search yielded 46 studies from 12 countries. Of these, 28 were from the Netherlands. Thirty-five studies reported the length of the friction period used, with only 16 stating explicitly the source of the friction period. Nine studies reported the elasticity correction factor used. The reported friction cost approach methods used to derive productivity costs varied in quality across studies from different countries. Few health economic studies have estimated productivity costs using the friction cost approach. The estimation and reporting of productivity costs using this method appear to differ in quality by country. The review reveals gaps and a lack of clarity in the reporting of methods for friction cost evaluation. Generating reporting guidelines and country-specific parameters for the friction cost approach is recommended if increased application and accuracy of the method are to be realized.
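
    A worked toy calculation shows the mechanics the review examines (all figures are invented; real friction periods and elasticity correction factors are country-specific parameters):

```python
# Friction cost approach: value productivity loss only up to the friction period.
absence_days = 120            # total sick-leave days for the worker
friction_period_days = 90     # time needed to replace the worker (assumed)
daily_wage_eur = 150.0        # gross daily wage cost (assumed)
elasticity = 0.8              # correction for less-than-proportional output loss

days_counted = min(absence_days, friction_period_days)
friction_cost = days_counted * daily_wage_eur * elasticity
human_capital_cost = absence_days * daily_wage_eur   # for comparison
print(f"friction cost: EUR {friction_cost:,.0f}; human capital: EUR {human_capital_cost:,.0f}")
```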

  15. A catalytic approach to estimate the redox potential of heme-peroxidases

    International Nuclear Information System (INIS)

    Ayala, Marcela; Roman, Rosa; Vazquez-Duhalt, Rafael

    2007-01-01

    The redox potential of heme-peroxidases varies according to a combination of structural components within the active site and its vicinities. For each peroxidase, this redox potential imposes a thermodynamic threshold to the range of oxidizable substrates. However, the instability of enzymatic intermediates during the catalytic cycle precludes the use of direct voltammetry to measure the redox potential of most peroxidases. Here we describe a novel approach to estimate the redox potential of peroxidases, which directly depends on the catalytic performance of the activated enzyme. Selected p-substituted phenols are used as substrates for the estimations. The results obtained with this catalytic approach correlate well with the oxidative capacity predicted by the redox potential of the Fe(III)/Fe(II) couple

  16. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating the technical efficiency of decision-making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow et al. and the Battese and Coelli approaches. In order to compare the alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  17. Theoretical approaches to maternal-infant interaction: which approach best discriminates between mothers with and without postpartum depression?

    Science.gov (United States)

    Logsdon, M Cynthia; Mittelberg, Meghan; Morrison, David; Robertson, Ashley; Luther, James F; Wisniewski, Stephen R; Confer, Andrea; Eng, Heather; Sit, Dorothy K Y; Wisner, Katherine L

    2014-12-01

    The purpose of this study was to determine which of four common approaches to coding maternal-infant interaction best discriminates between mothers with and without postpartum depression. After extensive training, four research assistants coded 83 three-minute videotapes of maternal-infant interaction at 12-month postpartum visits. Four theoretical approaches to coding (the Maternal Behavior Q-Sort, the Dyadic Mini Code, the Ainsworth Maternal Sensitivity Scale, and the Child-Caregiver Mutual Regulation Scale) were used. Twelve-month data were chosen to allow the maximum possible exposure of the infant to maternal depression during the first postpartum year. The videotapes were created in a laboratory with standard procedures. Inter-rater reliabilities for each coding method ranged from .7 to .9. The coders were blind to the depression status of the mother. Twenty-seven of the women had major depressive disorder during the 12-month postpartum period. Receiver operating characteristic analysis indicated that none of the four methods of analyzing maternal-infant interaction discriminated between mothers with and without major depressive disorder. Limitations of the study include the cross-sectional design and the low number of women with major depressive disorder. Further analysis should include data from videotapes at earlier postpartum time periods, and alternative coding approaches should be considered. Nurses should continue to examine culturally appropriate ways in which new mothers can be supported in how to best nurture their babies. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Numerical Methods Application for Reinforced Concrete Elements-Theoretical Approach for Direct Stiffness Matrix Method

    Directory of Open Access Journals (Sweden)

    Sergiu Ciprian Catinas

    2015-07-01

    Full Text Available A detailed theoretical and practical investigation of reinforced concrete elements is warranted by the recent techniques and methods being implemented in the construction market. Moreover, a theoretical study is needed for a better and faster approach nowadays, given the rapid development of computational techniques. This paper presents a study of implementing the direct stiffness matrix method in a static calculation, capable of addressing phenomena related to different stages of loading, rapid change of cross-section area and physical properties. The method is in demand because FEM (the Finite Element Method) is currently the only alternative for such a calculation, and FEM is considered expensive from the point of view of time and computational resources. The main goal of such a method is to create the moment-curvature diagram for the cross section being analyzed. The paper also presents some of the most important techniques and new ideas for creating the moment-curvature graph in the cross sections considered.

  19. Information theoretic quantification of diagnostic uncertainty.

    Science.gov (United States)

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
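
    A compact numeric illustration of the framework (the pre-test probability and test characteristics below are hypothetical): binary entropy quantifies diagnostic uncertainty before and after a Bayes update.

```python
# Diagnostic uncertainty as binary entropy, before and after a positive test.
import math

def binary_entropy(p):
    """Uncertainty in bits for a binary event of probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

pre = 0.20                     # pre-test probability of disease (assumed)
sens, spec = 0.90, 0.85        # sensitivity and specificity (assumed)

# Bayes' rule for a positive result
post = sens * pre / (sens * pre + (1 - spec) * (1 - pre))
print(f"post-test probability: {post:.2f}")
print(f"entropy before: {binary_entropy(pre):.2f} bits, after: {binary_entropy(post):.2f} bits")
# Note: an unexpected positive can move the probability toward 0.5 and thus
# *increase* entropy; new data does not guarantee less diagnostic uncertainty.
```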

  20. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
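
    The simulation design is easy to reproduce in outline (the noise levels below are arbitrary): perturb a perfectly correlated pair of variables with increasing noise and record how far the attainable r falls below 1.

```python
# How data uncertainty caps the maximum attainable correlation coefficient.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 1.0, 1000)
for noise_sd in (0.0, 0.1, 0.3, 0.5):
    y = x + noise_sd * rng.standard_normal(x.size)   # "measured" values
    r = np.corrcoef(x, y)[0, 1]
    print(f"noise sd {noise_sd:.1f}: attainable r ~ {r:.3f}")
```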

  1. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Abstract Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted-ALOHA-based protocol. Keywords RFID tag cardinality estimation maximum likelihood detection error...
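
    Stripped of the detection-error model that is the paper's contribution, the baseline ML estimator can be sketched as follows: simulate one framed-slotted-ALOHA read round, then maximize the multinomial likelihood of the observed empty/singleton/collision slot counts over candidate tag counts (frame size and tag count are assumptions):

```python
# Baseline ML tag-cardinality estimation from one framed-slotted-ALOHA round.
import numpy as np

L = 64                                     # frame size in slots (assumed)
rng = np.random.default_rng(5)

def read_round(n_tags):
    """Each tag picks a slot uniformly; return (empty, singleton, collision) counts."""
    slots = np.bincount(rng.integers(0, L, n_tags), minlength=L)
    return (slots == 0).sum(), (slots == 1).sum(), (slots >= 2).sum()

def log_likelihood(n, counts):
    p0 = (1 - 1 / L) ** n                  # P(slot empty)
    p1 = (n / L) * (1 - 1 / L) ** (n - 1)  # P(slot holds exactly one tag)
    p2 = max(1.0 - p0 - p1, 1e-300)        # P(collision), clamped away from zero
    c0, c1, c2 = counts
    return c0 * np.log(p0) + c1 * np.log(p1) + c2 * np.log(p2)

counts = read_round(100)                   # ground truth: 100 tags
candidates = np.arange(1, 500)
n_hat = candidates[np.argmax([log_likelihood(n, counts) for n in candidates])]
print(f"(empty, singleton, collision) = {counts}, ML estimate = {n_hat}")
```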

  2. Hankin and Reeves' approach to estimating fish abundance in small streams: limitations and alternatives

    Science.gov (United States)

    William L. Thompson

    2003-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled...
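
    Assumption (1) concerns the removal estimator itself; in the common two-pass form the point estimate follows directly from the two catches (counts below are invented):

```python
# Two-pass removal estimator (Seber two-catch form) behind assumption (1).
c1, c2 = 60, 24                  # fish removed on first and second pass
N_hat = c1 ** 2 / (c1 - c2)      # estimated abundance: 3600 / 36 = 100
p_hat = 1 - c2 / c1              # implied per-pass capture probability: 0.6
print(f"N_hat = {N_hat:.0f}, capture probability = {p_hat:.2f}")
# When capture probability is low or varies between passes, N_hat is biased
# low, which is the core limitation discussed above.
```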

  3. The application of mean field theory to image motion estimation.

    Science.gov (United States)

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  4. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that determine the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  5. An approach of parameter estimation for non-synchronous systems

    International Nuclear Information System (INIS)

    Xu Daolin; Lu Fangfang

    2005-01-01

    Synchronization-based parameter estimation is simple and effective but is only applicable to synchronous systems. To overcome this limitation, we propose a technique whereby the parameters of an unknown physical process (possibly a non-synchronous system) can be identified from a time series via a minimization procedure based on a synchronization control. The feasibility of this approach is illustrated in several chaotic systems.

  6. Organizational approach to estimating public resistance at proposed disposal sites for radioactive and hazardous wastes

    International Nuclear Information System (INIS)

    Payne, B.A.

    1982-01-01

    This paper was intended to present an organizational approach to predicting collective action and then to apply that approach to the issue of siting of a nuclear or other hazardous waste repository. Borrowing largely from two previously developed models (one by Perry et al. at Battelle's Human Affairs Research Center and one by Charles Tilly), I developed a theoretical model. Indicators were identified for many of the variables, but they are not easily measured, requiring a number of decisions on thresholds which were not clarified in the paper. What remains is further discussion of these measurement problems, evaluation of the confirmation status of the propositions, and empirical tests of the model. In the meantime, however, the discussion should provide assessors of public resistance with a theoretical basis for their thinking and a guide to some revealing indicators of the potential for collective action

  7. Theoretical Computer Science

    DEFF Research Database (Denmark)

    2002-01-01

    The proceedings contains 8 papers from the Conference on Theoretical Computer Science. Topics discussed include: query by committee, linear separation and random walks; hardness results for neural network approximation problems; a geometric approach to leveraging weak learners; mind change...

  8. A probabilistic approach to Radiological Environmental Impact Assessment

    International Nuclear Information System (INIS)

    Avila, Rodolfo; Larsson, Carl-Magnus

    2001-01-01

    Since a radiological environmental impact assessment typically relies on limited data and poorly based extrapolation methods, point estimations, as implied by a deterministic approach, do not suffice. To be of practical use for risk management, it is necessary to quantify the uncertainty margins of the estimates as well. In this paper we discuss how to work out a probabilistic approach for dealing with uncertainties in assessments of the radiological risks to non-human biota of a radioactive contamination. Possible strategies for deriving the relevant probability distribution functions from available empirical data and theoretical knowledge are outlined

  9. Estimating the size of non-observed economy in Croatia using the MIMIC approach

    OpenAIRE

    Vjekoslav Klaric

    2011-01-01

    This paper gives a quick overview of the approaches that have been used in the research of shadow economy, starting with the defi nitions of the terms “shadow economy” and “non-observed economy”, with the accent on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the nonobserved economy. After a glance at the ...

  10. Locating sensors for detecting source-to-target patterns of special nuclear material smuggling: a spatial information theoretic approach.

    Science.gov (United States)

    Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong

    2010-01-01

    In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.

  11. Locating Sensors for Detecting Source-to-Target Patterns of Special Nuclear Material Smuggling: A Spatial Information Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Xuesong Zhou

    2010-08-01

    Full Text Available In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.

  12. Automatic Sky View Factor Estimation from Street View Photographs—A Big Data Approach

    Directory of Open Access Journals (Sweden)

    Jianming Liang

    2017-04-01

    Full Text Available Hemispherical (fisheye photography is a well-established approach for estimating the sky view factor (SVF. High-resolution urban models from LiDAR and oblique airborne photogrammetry can provide continuous SVF estimates over a large urban area, but such data are not always available and are difficult to acquire. Street view panoramas have become widely available in urban areas worldwide: Google Street View (GSV maintains a global network of panoramas excluding China and several other countries; Baidu Street View (BSV and Tencent Street View (TSV focus their panorama acquisition efforts within China, and have covered hundreds of cities therein. In this paper, we approach this issue from a big data perspective by presenting and validating a method for automatic estimation of SVF from massive amounts of street view photographs. Comparisons were made with SVF estimates derived from two independent sources: a LiDAR-based Digital Surface Model (DSM and an oblique airborne photogrammetry-based 3D city model (OAP3D, resulting in a correlation coefficient of 0.863 and 0.987, respectively. The comparisons demonstrated the capacity of the proposed method to provide reliable SVF estimates. Additionally, we present an application of the proposed method with about 12,000 GSV panoramas to characterize the spatial distribution of SVF over Manhattan Island in New York City. Although this is a proof-of-concept study, it has shown the potential of the proposed approach to assist urban climate and urban planning research. However, further development is needed before this approach can be finally delivered to the urban climate and urban planning communities for practical applications.

  13. RiD: A New Approach to Estimate the Insolvency Risk

    Directory of Open Access Journals (Sweden)

    Marco Aurélio dos Santos Sanfins

    2014-10-01

    Full Text Available Given the recent international crises and the increasing number of defaults, several researchers have attempted to develop metrics that calculate the probability of insolvency with higher accuracy. The approaches commonly used, however, do not consider the credit risk nor the severity of the distance between receivables and obligations among different periods. In this paper we mathematically present an approach that allow us to estimate the insolvency risk by considering not only future receivables and obligations, but the severity of the distance between them and the quality of the respective receivables. Using Monte Carlo simulations and hypothetical examples, we show that our metric is able to estimate the insolvency risk with high accuracy. Moreover, our results suggest that in the absence of a smooth distribution between receivables and obligations, there is a non-null insolvency risk even when the present value of receivables is larger than the present value of the obligations.

  14. Exploring super-gaussianity towards robust information-theoretical time delay estimation

    DEFF Research Database (Denmark)

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos

    2013-01-01

    the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced...

  15. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding computational approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)

  16. The power of theoretical knowledge.

    Science.gov (United States)

    Alligood, Martha Raile

    2011-10-01

    Nursing theoretical knowledge has demonstrated powerful contributions to education, research, administration and professional practice for guiding nursing thought and action. That knowledge has shifted the primary focus of the nurse from nursing functions to the person. Theoretical views of the person raise new questions, create new approaches and instruments for nursing research, and expand nursing scholarship throughout the world.

  17. Estimation of mean-reverting oil prices: a laboratory approach

    International Nuclear Information System (INIS)

    Bjerksund, P.; Stensland, G.

    1993-12-01

    Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs

  18. A Generalized Estimating Equations Approach to Model Heterogeneity and Time Dependence in Capture-Recapture Studies

    Directory of Open Access Journals (Sweden)

    Akanda Md. Abdus Salam

    2017-03-01

    Full Text Available Individual heterogeneity in capture probabilities and time dependence are fundamentally important for estimating the closed animal population parameters in capture-recapture studies. A generalized estimating equations (GEE approach accounts for linear correlation among capture-recapture occasions, and individual heterogeneity in capture probabilities in a closed population capture-recapture individual heterogeneity and time variation model. The estimated capture probabilities are used to estimate animal population parameters. Two real data sets are used for illustrative purposes. A simulation study is carried out to assess the performance of the GEE estimator. A Quasi-Likelihood Information Criterion (QIC is applied for the selection of the best fitting model. This approach performs well when the estimated population parameters depend on the individual heterogeneity and the nature of linear correlation among capture-recapture occasions.

  19. Theoretical approach for plasma series resonance effect in geometrically symmetric dual radio frequency plasma

    International Nuclear Information System (INIS)

    Bora, B.; Bhuyan, H.; Favre, M.; Wyndham, E.; Chuaqui, H.

    2012-01-01

    Plasma series resonance (PSR) effect is well known in geometrically asymmetric capacitively couple radio frequency plasma. However, plasma series resonance effect in geometrically symmetric plasma has not been properly investigated. In this work, a theoretical approach is made to investigate the plasma series resonance effect and its influence on Ohmic and stochastic heating in geometrically symmetric discharge. Electrical asymmetry effect by means of dual frequency voltage waveform is applied to excite the plasma series resonance. The results show considerable variation in heating with phase difference between the voltage waveforms, which may be applicable in controlling the plasma parameters in such plasma.

  20. A holistic approach to age estimation in refugee children.

    Science.gov (United States)

    Sypek, Scott A; Benson, Jill; Spanner, Kate A; Williams, Jan L

    2016-06-01

    Many refugee children arriving in Australia have an inaccurately documented date of birth (DOB). A medical assessment of a child's age is often requested when there is a concern that their documented DOB is incorrect. This study's aim was to assess the accuracy a holistic age assessment tool (AAT) in estimating the age of refugee children newly settled in Australia. A holistic AAT that combines medical and non-medical approaches was used to estimate the ages of 60 refugee children with a known DOB. The tool used four components to assess age: an oral narrative, developmental assessment, anthropometric measures and pubertal assessment. Assessors were blinded to the true age of the child. Correlation coefficients for the actual and estimated age were calculated for the tool overall and individual components. The correlation coefficient between the actual and estimated age from the AAT was very strong at 0.9802 (boys 0.9748, girls 0.9876). The oral narrative component of the tool performed best (R = 0.9603). Overall, 86.7% of age estimates were within 1 year of the true age. The range of differences was -1.43 to 3.92 years with a standard deviation of 0.77 years (9.24 months). The AAT is a holistic, simple and safe instrument that can be used to estimate age in refugee children with results comparable with radiological methods currently used. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  1. Game Theoretical Approach to Supply Chain Microfinance

    OpenAIRE

    Sim , Jaehun; Prabhu , Vittaldas ,

    2013-01-01

    Part 1: Sustainable Production; International audience; This paper considers a supply chain microfinance model in which a manufacturer acts as a lender and a raw material supplier as a borrower. Using a game theoretical analysis, the study investigates how investment levels, raw material prices, and profit margins are influenced by loan interest rates under two types of decentralized channel policies: manufacturer Stackelberg and vertical Nash game. In addition, the study shows how the profit...

  2. A combined telemetry - tag return approach to estimate fishing and natural mortality rates of an estuarine fish

    Science.gov (United States)

    Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.

    2009-01-01

    A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ?? standard error: 0.04 ?? 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.

  3. Regional economic activity and absenteeism: a new approach to estimating the indirect costs of employee productivity loss.

    Science.gov (United States)

    Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron

    2015-02-01

    This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output per labor hour-based method and extends employer-level results to the region. This new approach was applied to the employed population of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output per labor hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.

  4. Model-free information-theoretic approach to infer leadership in pairs of zebrafish.

    Science.gov (United States)

    Butail, Sachit; Mwaffo, Violet; Porfiri, Maurizio

    2016-04-01

    Collective behavior affords several advantages to fish in avoiding predators, foraging, mating, and swimming. Although fish schools have been traditionally considered egalitarian superorganisms, a number of empirical observations suggest the emergence of leadership in gregarious groups. Detecting and classifying leader-follower relationships is central to elucidate the behavioral and physiological causes of leadership and understand its consequences. Here, we demonstrate an information-theoretic approach to infer leadership from positional data of fish swimming. In this framework, we measure social interactions between fish pairs through the mathematical construct of transfer entropy, which quantifies the predictive power of a time series to anticipate another, possibly coupled, time series. We focus on the zebrafish model organism, which is rapidly emerging as a species of choice in preclinical research for its genetic similarity to humans and reduced neurobiological complexity with respect to mammals. To overcome experimental confounds and generate test data sets on which we can thoroughly assess our approach, we adapt and calibrate a data-driven stochastic model of zebrafish motion for the simulation of a coupled dynamical system of zebrafish pairs. In this synthetic data set, the extent and direction of the coupling between the fish are systematically varied across a wide parameter range to demonstrate the accuracy and reliability of transfer entropy in inferring leadership. Our approach is expected to aid in the analysis of collective behavior, providing a data-driven perspective to understand social interactions.

  5. Region innovation and investment development: conceptual theoretical approach and business solutions

    Directory of Open Access Journals (Sweden)

    Zozulya D.M.

    2017-01-01

    Full Text Available The article describes essential problems of the region business innovation and investment development under current conditions, issues of crisis restrictions negotiation and innovation-driven economy formation. The relevance of the research is defined by the need of effective tools creation for business innovation and investment development and support, which can be applied, first, to increase efficiency of the region industrial activity, then improve production competitiveness on the innovative basis, overcome existing problems and provide sustainable innovation development in the region. The results of conducted research are represented in the article including region innovation and investment development concept model made up by the authors on the basis of system theoretical approach. The tools of the region innovation development defined in the concept model are briefly reviewed in the article. The most important of them include engineering marketing (marketing of scientific and technical innovations, strategic planning, benchmarking, place marketing and business process modeling.

  6. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various

  7. Mechanisms of Enzyme-Catalyzed Reduction of Two Carcinogenic Nitro-Aromatics, 3-Nitrobenzanthrone and Aristolochic Acid I: Experimental and Theoretical Approaches

    Directory of Open Access Journals (Sweden)

    Marie Stiborová

    2014-06-01

    Full Text Available This review summarizes the results found in studies investigating the enzymatic activation of two genotoxic nitro-aromatics, an environmental pollutant and carcinogen 3-nitrobenzanthrone (3-NBA and a natural plant nephrotoxin and carcinogen aristolochic acid I (AAI, to reactive species forming covalent DNA adducts. Experimental and theoretical approaches determined the reasons why human NAD(PH:quinone oxidoreductase (NQO1 and cytochromes P450 (CYP 1A1 and 1A2 have the potential to reductively activate both nitro-aromatics. The results also contributed to the elucidation of the molecular mechanisms of these reactions. The contribution of conjugation enzymes such as N,O-acetyltransferases (NATs and sulfotransferases (SULTs to the activation of 3-NBA and AAI was also examined. The results indicated differences in the abilities of 3-NBA and AAI metabolites to be further activated by these conjugation enzymes. The formation of DNA adducts generated by both carcinogens during their reductive activation by the NOQ1 and CYP1A1/2 enzymes was investigated with pure enzymes, enzymes present in subcellular cytosolic and microsomal fractions, selective inhibitors, and animal models (including knock-out and humanized animals. For the theoretical approaches, flexible in silico docking methods as well as ab initio calculations were employed. The results summarized in this review demonstrate that a combination of experimental and theoretical approaches is a useful tool to study the enzyme-mediated reaction mechanisms of 3-NBA and AAI reduction.

  8. Mechanisms of Enzyme-Catalyzed Reduction of Two Carcinogenic Nitro-Aromatics, 3-Nitrobenzanthrone and Aristolochic Acid I: Experimental and Theoretical Approaches

    Science.gov (United States)

    Stiborová, Marie; Frei, Eva; Schmeiser, Heinz H.; Arlt, Volker M.; Martínek, Václav

    2014-01-01

    This review summarizes the results found in studies investigating the enzymatic activation of two genotoxic nitro-aromatics, an environmental pollutant and carcinogen 3-nitrobenzanthrone (3-NBA) and a natural plant nephrotoxin and carcinogen aristolochic acid I (AAI), to reactive species forming covalent DNA adducts. Experimental and theoretical approaches determined the reasons why human NAD(P)H:quinone oxidoreductase (NQO1) and cytochromes P450 (CYP) 1A1 and 1A2 have the potential to reductively activate both nitro-aromatics. The results also contributed to the elucidation of the molecular mechanisms of these reactions. The contribution of conjugation enzymes such as N,O-acetyltransferases (NATs) and sulfotransferases (SULTs) to the activation of 3-NBA and AAI was also examined. The results indicated differences in the abilities of 3-NBA and AAI metabolites to be further activated by these conjugation enzymes. The formation of DNA adducts generated by both carcinogens during their reductive activation by the NOQ1 and CYP1A1/2 enzymes was investigated with pure enzymes, enzymes present in subcellular cytosolic and microsomal fractions, selective inhibitors, and animal models (including knock-out and humanized animals). For the theoretical approaches, flexible in silico docking methods as well as ab initio calculations were employed. The results summarized in this review demonstrate that a combination of experimental and theoretical approaches is a useful tool to study the enzyme-mediated reaction mechanisms of 3-NBA and AAI reduction. PMID:24918288

  9. Estimating the size of non-observed economy in Croatia using the MIMIC approach

    Directory of Open Access Journals (Sweden)

    Vjekoslav Klarić

    2011-03-01

    Full Text Available This paper gives a quick overview of the approaches that have been used in the research of shadow economy, starting with the definitions of the terms “shadow economy” and “non-observed economy”, with the accent on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the nonobserved economy. After a glance at the theory behind it, the MIMIC model is then applied to the Croatian economy. Considering the described characteristics of different methods, a previous estimate of the size of the non-observed economy in Croatia is chosen to provide benchmark values for the MIMIC model. Using those, the estimates of the size of non-observed economy in Croatia during the period 1998-2009 are obtained.

  10. Cost estimation: An expert-opinion approach. [cost analysis of research projects using the Delphi method (forecasting)

    Science.gov (United States)

    Buffalano, C.; Fogleman, S.; Gielecki, M.

    1976-01-01

    A methodology is outlined which can be used to estimate the costs of research and development projects. The approach uses the Delphi technique a method developed by the Rand Corporation for systematically eliciting and evaluating group judgments in an objective manner. The use of the Delphi allows for the integration of expert opinion into the cost-estimating process in a consistent and rigorous fashion. This approach can also signal potential cost-problem areas. This result can be a useful tool in planning additional cost analysis or in estimating contingency funds. A Monte Carlo approach is also examined.

  11. A Project Management Approach to Using Simulation for Cost Estimation on Large, Complex Software Development Projects

    Science.gov (United States)

    Mizell, Carolyn; Malone, Linda

    2007-01-01

    It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with any high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.

  12. Resilience or Flexibility– A Theoretical Approach on Romanian Development Regions

    Directory of Open Access Journals (Sweden)

    Roxana Voicu – Dorobanțu

    2015-09-01

    Full Text Available The paper describes a theoretical contextualization of flexibility, sustainability, durability and resilience, in the context of the sustainable development goals. The main purpose is to identify the theoretical handles that may be used in the creation of a flexibility indicator. Thus, research questions related to the theoretical differentiation between durable and sustainable, flexible and resilient are answered. Further on, the paper describes the situation of the Romanian regions in terms of development indicators, based on Eurostat data, as a premise for further research on the possibility of their leapfrogging. This work was financially supported through the project “Routes of academic excellence in doctoral and post-doctoral research- REACH” co-financed through the European Social Fund, by Sectoral Operational Programme Human Resources Development 2007-2013, contract no POSDRU/59/1.5/S/137926.

  13. Modified generalized method of moments for a robust estimation of polytomous logistic model

    Directory of Open Access Journals (Sweden)

    Xiaoshan Wang

    2014-07-01

    Full Text Available The maximum likelihood estimation (MLE method, typically used for polytomous logistic regression, is prone to bias due to both misclassification in outcome and contamination in the design matrix. Hence, robust estimators are needed. In this study, we propose such a method for nominal response data with continuous covariates. A generalized method of weighted moments (GMWM approach is developed for dealing with contaminated polytomous response data. In this approach, distances are calculated based on individual sample moments. And Huber weights are applied to those observations with large distances. Mellow-type weights are also used to downplay leverage points. We describe theoretical properties of the proposed approach. Simulations suggest that the GMWM performs very well in correcting contamination-caused biases. An empirical application of the GMWM estimator on data from a survey demonstrates its usefulness.

  14. Estimation of optical rotation of γ-alkylidenebutenolide, cyclopropylamine, cyclopropyl-methanol and cyclopropenone based compounds by a Density Functional Theory (DFT) approach.

    Science.gov (United States)

    Shahzadi, Iram; Shaukat, Aqsa; Zara, Zeenat; Irfan, Muhammad; Eliasson, Bertil; Ayub, Khurshid; Iqbal, Javed

    2017-10-01

    Computing the optical rotation of organic molecules can be a real challenge, and various theoretical approaches have been developed in this regard. A benchmark study of optical rotation of various classes of compounds was carried out by Density Functional Theory (DFT) methods. The aim of the present research study was to find out the best-suited functional and basis set to estimate the optical rotations of selected compounds with respect to experimental literature values. Six DFT functional LSDA, BVP86, CAM-B3LYP, B3PW91, and PBE were applied on 22 different compounds. Furthermore, six different basis sets, i.e., 3-21G, 6-31G, aug-cc-pVDZ, aug-cc-pVTZ, DGDZVP, and DGDZVP2 were also applied with the best-suited functional B3LYP. After rigorous effort, it can be safely said that the best combination of functional and basis set is B3LYP/aug-cc-pVTZ for the estimation of optical rotation for selected compounds. © 2017 Wiley Periodicals, Inc.

  15. Discrete Choice Experiments: A Guide to Model Specification, Estimation and Software.

    Science.gov (United States)

    Lancsar, Emily; Fiebig, Denzil G; Hole, Arne Risa

    2017-07-01

    We provide a user guide on the analysis of data (including best-worst and best-best data) generated from discrete-choice experiments (DCEs), comprising a theoretical review of the main choice models followed by practical advice on estimation and post-estimation. We also provide a review of standard software. In providing this guide, we endeavour to not only provide guidance on choice modelling but to do so in a way that provides a 'way in' for researchers to the practicalities of data analysis. We argue that choice of modelling approach depends on the research questions, study design and constraints in terms of quality/quantity of data and that decisions made in relation to analysis of choice data are often interdependent rather than sequential. Given the core theory and estimation of choice models is common across settings, we expect the theoretical and practical content of this paper to be useful to researchers not only within but also beyond health economics.

  16. Novel approaches to the estimation of intake and bioavailability of radiocaesium in ruminants grazing forested areas

    International Nuclear Information System (INIS)

    Mayes, R.W.; Lamb, C.S.; Beresford, N.A.

    1994-01-01

    It is difficult to measure transfer of radiocaesium to the tissues of forest ruminants because they can potentially ingest a wide range of plant types. Measurements on undomesticated forest ruminants incur further difficulties. Existing techniques of estimating radiocaesium intake are imprecise when applied to forest systems. New approaches to measure this parameter are discussed. Two methods of intake estimation are described and evaluated. In the first method, radiocaesium intake is estimated from the radiocaesium activity concentrations of plants, combined with estimates of dry-matter (DM) intake and plant species composition of the diet, using plant and orally-dosed hydrocarbons (n-alkanes) as markers. The second approach estimates the total radiocaesium intake of an animal from the rate of excretion of radiocaesium in the faeces and an assumed value for the apparent absorption coefficient. Estimates of radiocaesium intake, using these approaches, in lactating goats and adult sheep were used to calculate transfer coefficients for milk and muscle; these compared favourably with transfer coefficients previously obtained under controlled experimental conditions. Potential variations in bioavailability of dietary radiocaesium sources to forest ruminants have rarely been considered. Approaches that can be used to describe bioavailability, including the true absorption coefficient and in vitro extractability, are outlined

  17. Universal Approach to Estimate Perfluorocarbons Emissions During Individual High-Voltage Anode Effect for Prebaked Cell Technologies

    Science.gov (United States)

    Dion, Lukas; Gaboury, Simon; Picard, Frédéric; Kiss, Laszlo I.; Poncsak, Sandor; Morais, Nadia

    2018-04-01

    Recent investigations on aluminum electrolysis cell demonstrated limitations to the commonly used tier-3 slope methodology to estimate perfluorocarbon (PFC) emissions from high-voltage anode effects (HVAEs). These limitations are greater for smelters with a reduced HVAE frequency. A novel approach is proposed to estimate the specific emissions using a tier 2 model resulting from individual HVAE instead of estimating monthly emissions for pot lines with the slope methodology. This approach considers the nonlinear behavior of PFC emissions as a function of the polarized anode effect duration but also integrates the change in behavior attributed to cell productivity. Validation was performed by comparing the new approach and the slope methodology with measurement campaigns from different smelters. The results demonstrate a good agreement between measured and estimated emissions as well as more accurately reflect individual HVAE dynamics occurring over time. Finally, the possible impact of this approach for the aluminum industry is discussed.

  18. A novel multi-model probability battery state of charge estimation approach for electric vehicles using H-infinity algorithm

    International Nuclear Information System (INIS)

    Lin, Cheng; Mu, Hao; Xiong, Rui; Shen, Weixiang

    2016-01-01

    Highlights: • A novel multi-model probability battery SOC fusion estimation approach was proposed. • The linear matrix inequality-based H∞ technique is employed to estimate the SOC. • The Bayes theorem has been employed to realize the optimal weight for the fusion. • The robustness of the proposed approach is verified by different batteries. • The results show that the proposed method can promote global estimation accuracy. - Abstract: Due to the strong nonlinearity and complex time-variant property of batteries, the existing state of charge (SOC) estimation approaches based on a single equivalent circuit model (ECM) cannot provide the accurate SOC for the entire discharging period. This paper aims to present a novel SOC estimation approach based on a multiple ECMs fusion method for improving the practical application performance. In the proposed approach, three battery ECMs, namely the Thevenin model, the double polarization model and the 3rd order RC model, are selected to describe the dynamic voltage of lithium-ion batteries and the genetic algorithm is then used to determine the model parameters. The linear matrix inequality-based H-infinity technique is employed to estimate the SOC from the three models and the Bayes theorem-based probability method is employed to determine the optimal weights for synthesizing the SOCs estimated from the three models. Two types of lithium-ion batteries are used to verify the feasibility and robustness of the proposed approach. The results indicate that the proposed approach can improve the accuracy and reliability of the SOC estimation against uncertain battery materials and inaccurate initial states.

  19. A simple approach to estimate soil organic carbon and soil co/sub 2/ emission

    International Nuclear Information System (INIS)

    Abbas, F.

    2013-01-01

    SOC (Soil Organic Carbon) and soil CO/sub 2/ (Carbon Dioxide) emission are among the indicator of carbon sequestration and hence global climate change. Researchers in developed countries benefit from advance technologies to estimate C (Carbon) sequestration. However, access to the latest technologies has always been challenging in developing countries to conduct such estimates. This paper presents a simple and comprehensive approach for estimating SOC and soil CO/sub 2/ emission from arable- and forest soils. The approach includes various protocols that can be followed in laboratories of the research organizations or academic institutions equipped with basic research instruments and technology. The protocols involve soil sampling, sample analysis for selected properties, and the use of a worldwide tested Rothamsted carbon turnover model. With this approach, it is possible to quantify SOC and soil CO/sub 2/ emission over short- and long-term basis for global climate change assessment studies. (author)

  20. A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations

    Directory of Open Access Journals (Sweden)

    Aranzazu Berbey Alvarez

    2015-04-01

    Full Text Available Passenger flow modeling and station dwelling time estimation are significant elements for railway mass transit planning, but system operators usually have limited information to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied for the estimation of the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based in the principle of maximum entropy. The approach considers passengers’ preferences to assign a level of congestion in each car of the train in function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.

  1. A Theoretical Analysis of the Mission Statement Based on the Axiological Approach

    Directory of Open Access Journals (Sweden)

    Marius-Costel EŞI

    2016-12-01

    Full Text Available The aim of this work is focused on a theoretical analysis of formulating the mission statement of business organizations in relation to the idea of the organizational axiological core. On one hand, we consider the CSR-Corporate Social Responsibility which, in our view, must be brought into direct connection both with the moral entrepreneurship (which should support the philosophical perspective of the statement of business organizations mission and the purely economic entrepreneurship based on profit maximization (which should support the pragmatic perspective. On the other hand, an analysis of the moral concepts which should underpin business is becoming fundamental, in our view, as far as the idea of the social specific value of the social entrepreneurship is evidenced. Therefore, our approach highlights a number of epistemic explanations in relation to the actual practice dimension.

  2. Modified economic order quantity (EOQ model for items with imperfect quality: Game-theoretical approaches

    Directory of Open Access Journals (Sweden)

    Milad Elyasi

    2014-04-01

    Full Text Available In the recent decade, studying the economic order quantity (EOQ models with imperfect quality has appealed to many researchers. Only few papers are published discussing EOQ models with imperfect items in a supply chain. In this paper, a two-echelon decentralized supply chain consisting of a manufacture and a supplier that both face just in time (JIT inventory problem is considered. It is sought to find the optimal number of the shipments and the quantity of each shipment in a way that minimizes the both manufacturer’s and the supplier’s cost functions. To the authors’ best knowledge, this is the first paper that deals with imperfect items in a decentralized supply chain. Thereby, three different game theoretical solution approaches consisting of two non-cooperative games and a cooperative game are proposed. Comparing the results of three different scenarios with those of the centralized model, the conclusions are drawn to obtain the best approach.

  3. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma.

    Science.gov (United States)

    Yu, Jinhua; Shi, Zhifeng; Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan; Chen, Liang; Mao, Ying

    2017-08-01

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated to IDH1 states. • Area under the ROC curve of the proposed estimation method reached 0.86.

  4. Theoretical analysis of heat transfer in, and electrical performance of, a milliwatt radioisotopic powered thermoelectric generator

    International Nuclear Information System (INIS)

    Biver, C.J.

    1975-01-01

    A simplified, theoretical model has been made for a radioisotope-powered milliwatt thermoelectric generator (RTG). Calculations of unit heat transfer and electrical performance characteristics are made in two ways: (a) using discrete values of input physical parameters for an individual unit; and (b) using a statistical simulation (Monte Carlo) approach for estimating the variation in performance in a group of N-units. The statistical simulation approach is useful in: (a) estimating the allowable range of input parameters conducive to the production design meeting specifications in a group of N-units; and (b) determining particular parameters that must be significantly restricted in variation to achieve desired performance. The available experimental data, as compared with the discrete value calculations, are in quite good agreement (within 5 percent generally). (U.S.)

  5. Approaches to estimating decommissioning costs

    International Nuclear Information System (INIS)

    Smith, R.I.

    1990-07-01

    The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

  6. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    Science.gov (United States)

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.

  7. Unsteady force estimation using a Lagrangian drift-volume approach

    Science.gov (United States)

    McPhaden, Cameron J.; Rival, David E.

    2018-04-01

    A novel Lagrangian force estimation technique for unsteady fluid flows has been developed, using the concept of a Darwinian drift volume to measure unsteady forces on accelerating bodies. The construct of added mass in viscous flows, calculated from a series of drift volumes, is used to calculate the reaction force on an accelerating circular flat plate, containing highly-separated, vortical flow. The net displacement of fluid contained within the drift volumes is, through Darwin's drift-volume added-mass proposition, equal to the added mass of the plate and provides the reaction force of the fluid on the body. The resultant unsteady force estimates from the proposed technique are shown to align with the measured drag force associated with a rapid acceleration. The critical aspects of understanding unsteady flows, relating to peak and time-resolved forces, often lie within the acceleration phase of the motions, which are well-captured by the drift-volume approach. Therefore, this Lagrangian added-mass estimation technique opens the door to fluid-dynamic analyses in areas that, until now, were inaccessible by conventional means.

  8. Estimation of stature from hand impression: a nonconventional approach.

    Science.gov (United States)

    Ahemad, Nasir; Purkait, Ruma

    2011-05-01

    Stature is used for constructing a biological profile that assists with the identification of an individual. So far, little attention has been paid to the fact that stature can be estimated from hand impressions left at scene of crime. The present study based on practical observations adopted a new methodology of measuring hand length from the depressed area between hypothenar and thenar region on the proximal surface of the palm. Stature and bilateral hand impressions were obtained from 503 men of central India. Seventeen dimensions of hand were measured on the impression. Linear regression equations derived showed hand length followed by palm length are best estimates of stature. Testing the practical utility of the suggested method on latent prints of 137 subjects, a statistically insignificant result was obtained when known and estimated stature derived from latent prints was compared. The suggested approach points to a strong possibility of its usage in crime scene investigation, albeit the fact that validation studies in real-life scenarios are performed. © 2011 American Academy of Forensic Sciences.

  9. Experimental validation of pulsed column inventory estimators

    International Nuclear Information System (INIS)

    Beyerlein, A.L.; Geldard, J.F.; Weh, R.; Eiben, K.; Dander, T.; Hakkila, E.A.

    1991-01-01

    Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contractors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze this data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs

  10. An information-theoretic approach to assess practical identifiability of parametric dynamical systems.

    Science.gov (United States)

    Pant, Sanjay; Lombardi, Damiano

    2015-10-01

    A new approach for assessing parameter identifiability of dynamical systems in a Bayesian setting is presented. The concept of Shannon entropy is employed to measure the inherent uncertainty in the parameters. The expected reduction in this uncertainty is seen as the amount of information one expects to gain about the parameters due to the availability of noisy measurements of the dynamical system. Such expected information gain is interpreted in terms of the variance of a hypothetical measurement device that can measure the parameters directly, and is related to practical identifiability of the parameters. If the individual parameters are unidentifiable, correlation between parameter combinations is assessed through conditional mutual information to determine which sets of parameters can be identified together. The information theoretic quantities of entropy and information are evaluated numerically through a combination of Monte Carlo and k-nearest neighbour methods in a non-parametric fashion. Unlike many methods to evaluate identifiability proposed in the literature, the proposed approach takes the measurement-noise into account and is not restricted to any particular noise-structure. Whilst computationally intensive for large dynamical systems, it is easily parallelisable and is non-intrusive as it does not necessitate re-writing of the numerical solvers of the dynamical system. The application of such an approach is presented for a variety of dynamical systems--ranging from systems governed by ordinary differential equations to partial differential equations--and, where possible, validated against results previously published in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey

    2014-01-06

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.

  12. Multilevel variance estimators in MLMC and application for random obstacle problems

    KAUST Repository

    Chernov, Alexey; Bierig, Claudio

    2014-01-01

    The Multilevel Monte Carlo Method (MLMC) is a recently established sampling approach for uncertainty propagation for problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently as the cost required for solution of one forward problem for a fixed deterministic set of parameters. We comment on fast and stable evaluation of the estimators suitable for parallel large scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.

  13. Multifractal rainfall extremes: Theoretical analysis and practical estimation

    International Nuclear Information System (INIS)

    Langousis, Andreas; Veneziano, Daniele; Furcolo, Pierluigi; Lepore, Chiara

    2009-01-01

    We study the extremes generated by a multifractal model of temporal rainfall and propose a practical method to estimate the Intensity-Duration-Frequency (IDF) curves. The model assumes that rainfall is a sequence of independent and identically distributed multiplicative cascades of the beta-lognormal type, with common duration D. When properly fitted to data, this simple model was found to produce accurate IDF results [Langousis A, Veneziano D. Intensity-duration-frequency curves from scaling representations of rainfall. Water Resour Res 2007;43. (doi:10.1029/2006WR005245)]. Previous studies also showed that the IDF values from multifractal representations of rainfall scale with duration d and return period T under either d → 0 or T → ∞, with different scaling exponents in the two cases. We determine the regions of the (d, T)-plane in which each asymptotic scaling behavior applies in good approximation, find expressions for the IDF values in the scaling and non-scaling regimes, and quantify the bias when estimating the asymptotic power-law tail of rainfall intensity from finite-duration records, as was often done in the past. Numerically calculated exact IDF curves are compared to several analytic approximations. The approximations are found to be accurate and are used to propose a practical IDF estimation procedure.

  14. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    An integrated approach to estimate the storage reliability is proposed. • A non-parametric measure to estimate the number of failures and the reliability at each testing time is presented. • E-Baysian method to estimate the failure probability is introduced. • The possible initial failures in storage are introduced. • The non-parametric estimates of failure numbers can be used into the parametric models.

  15. Combined Yamamoto approach for simultaneous estimation of adsorption isotherm and kinetic parameters in ion-exchange chromatography.

    Science.gov (United States)

    Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand

    2015-09-25

    Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed-up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for a Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Performance evaluation of the spectral centroid downshift method for attenuation estimation.

    Science.gov (United States)

    Samimi, Kayvan; Varghese, Tomy

    2015-05-01

    Estimation of frequency-dependent ultrasonic attenuation is an important aspect of tissue characterization. Along with other acoustic parameters studied in quantitative ultrasound, the attenuation coefficient can be used to differentiate normal and pathological tissue. The spectral centroid downshift (CDS) method is one the most common frequencydomain approaches applied to this problem. In this study, a statistical analysis of this method's performance was carried out based on a parametric model of the signal power spectrum in the presence of electronic noise. The parametric model used for the power spectrum of received RF data assumes a Gaussian spectral profile for the transmit pulse, and incorporates effects of attenuation, windowing, and electronic noise. Spectral moments were calculated and used to estimate second-order centroid statistics. A theoretical expression for the variance of a maximum likelihood estimator of attenuation coefficient was derived in terms of the centroid statistics and other model parameters, such as transmit pulse center frequency and bandwidth, RF data window length, SNR, and number of regression points. Theoretically predicted estimation variances were compared with experimentally estimated variances on RF data sets from both computer-simulated and physical tissue-mimicking phantoms. Scan parameter ranges for this study were electronic SNR from 10 to 70 dB, transmit pulse standard deviation from 0.5 to 4.1 MHz, transmit pulse center frequency from 2 to 8 MHz, and data window length from 3 to 17 mm. Acceptable agreement was observed between theoretical predictions and experimentally estimated values with differences smaller than 0.05 dB/cm/MHz across the parameter ranges investigated. This model helps predict the best attenuation estimation variance achievable with the CDS method, in terms of said scan parameters.

  17. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    Directory of Open Access Journals (Sweden)

    AMAD HAMZA

    2016-10-01

    Full Text Available In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (Reverberation Time.For estimation a very well-known method is used; MLE (Maximum Likelihood Estimation. Distribution of the decay rate is the core of the proposed method and can be achieved from the analysis of decay curve of the energy of the sound or from enclosure impulse response. In a pre-existing state of the art method Laplace distribution is used to model reverberation decay. The method proposed in this paper make use of the Rayleigh distribution and a spotting approach for modelling decay rate and identifying region of free decay in reverberant signal respectively. Motivation for the paper was deduced from the fact, when the reverberant speech RT falls in specific range then the signals decay rate impersonate Rayleigh distribution. On the basis of results of the experiments carried out for numerous reverberant signal it is clear that the performance and accuracy of the proposed method is better than other pre-existing methods

  18. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    International Nuclear Information System (INIS)

    Hamza, A.; Jan, T.; Ali, A.

    2016-01-01

    In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (Reverberation Time). For estimation a very well-known method is used; MLE (Maximum Likelihood Estimation). Distribution of the decay rate is the core of the proposed method and can be achieved from the analysis of decay curve of the energy of the sound or from enclosure impulse response. In a pre-existing state of the art method Laplace distribution is used to model reverberation decay. The method proposed in this paper make use of the Rayleigh distribution and a spotting approach for modelling decay rate and identifying region of free decay in reverberant signal respectively. Motivation for the paper was deduced from the fact, when the reverberant speech RT falls in specific range then the signals decay rate impersonate Rayleigh distribution. On the basis of results of the experiments carried out for numerous reverberant signal it is clear that the performance and accuracy of the proposed method is better than other pre-existing methods. (author)

  19. Signal recognition and parameter estimation of BPSK-LFM combined modulation

    Science.gov (United States)

    Long, Chao; Zhang, Lin; Liu, Yu

    2015-07-01

    Intra-pulse analysis plays an important role in electronic warfare. Intra-pulse feature abstraction focuses on primary parameters such as instantaneous frequency, modulation, and symbol rate. In this paper, automatic modulation recognition and feature extraction for combined BPSK-LFM modulation signals based on decision theoretic approach is studied. The simulation results show good recognition effect and high estimation precision, and the system is easy to be realized.

  20. Microbial growth yield estimates from thermodynamics and its importance for degradation of pesticides and formation of biogenic non-extractable residues

    DEFF Research Database (Denmark)

    Brock, Andreas Libonati; Kästner, M.; Trapp, Stefan

    2017-01-01

    NER. Formation of microbial mass can be estimated from the microbial growth yield, but experimental data is rare. Instead, we suggest using prediction methods for the theoretical yield based on thermodynamics. Recently, we presented the Microbial Turnover to Biomass (MTB) method that needs a minimum...... and using the released CO2 as a measure for microbial activity, we predicted a range for the formation of biogenic NER. For the majority of the pesticides, a considerable fraction of the NER was estimated to be biogenic. This novel approach provides a theoretical foundation applicable to the evaluation...

  1. A new approach to a global fit of the CKM matrix

    Energy Technology Data Exchange (ETDEWEB)

    Hoecker, A.; Lacker, H.; Laplace, S. [Laboratoire de l' Accelerateur Lineaire, 91 - Orsay (France); Le Diberder, F. [Laboratoire de Physique Nucleaire et des Hautes Energies, 75 - Paris (France)

    2001-05-01

    We report on a new approach to a global CKM matrix analysis taking into account most recent experimental and theoretical results. The statistical framework (Rfit) developed in this paper advocates frequentist statistics. Other approaches, such as Bayesian statistics or the 95% CL scan method are also discussed. We emphasize the distinction of a model testing and a model dependent, metrological phase in which the various parameters of the theory are estimated. Measurements and theoretical parameters entering the global fit are thoroughly discussed, in particular with respect to their theoretical uncertainties. Graphical results for confidence levels are drawn in various one and two-dimensional parameter spaces. Numerical results are provided for all relevant CKM parameterizations, the CKM elements and theoretical input parameters. Predictions for branching ratios of rare K and B meson decays are obtained. A simple, predictive SUSY extension of the Standard Model is discussed. (authors)

  2. An approach to the estimation of the value of agricultural residues used as biofuels

    International Nuclear Information System (INIS)

    Kumar, A.; Purohit, P.; Rana, S.; Kandpal, T.C.

    2002-01-01

    A simple demand side approach for estimating the monetary value of agricultural residues used as biofuels is proposed. Some of the important issues involved in the use of biomass feedstocks in coal-fired boilers are briefly discussed along with their implications for the maximum acceptable price estimates for the agricultural residues. Results of some typical calculations are analysed along with the estimates obtained on the basis of a supply side approach (based on production cost) developed earlier. The prevailing market prices of some agricultural residues used as feedstocks for briquetting are also indicated. The results obtained can be used as preliminary indicators for identifying niche areas for immediate/short-term utilization of agriculture residues in boilers for process heating and power generation. (author)

  3. Equivalence among three alternative approaches to estimating live tree carbon stocks in the eastern United States

    Science.gov (United States)

    Coeli M. Hoover; James E. Smith

    2017-01-01

    Assessments of forest carbon are available via multiple alternate tools or applications and are in use to address various regulatory and reporting requirements. The various approaches to making such estimates may or may not be entirely comparable. Knowing how the estimates produced by some commonly used approaches vary across forest types and regions allows users of...

  4. Combination of real options and game-theoretic approach in investment analysis

    Science.gov (United States)

    Arasteh, Abdollah

    2016-09-01

    Investments in technology create a large amount of capital investments by major companies. Assessing such investment projects is identified as critical to the efficient assignment of resources. Viewing investment projects as real options, this paper expands a method for assessing technology investment decisions in the linkage existence of uncertainty and competition. It combines the game-theoretic models of strategic market interactions with a real options approach. Several key characteristics underlie the model. First, our study shows how investment strategies rely on competitive interactions. Under the force of competition, firms hurry to exercise their options early. The resulting "hurry equilibrium" destroys the option value of waiting and involves violent investment behavior. Second, we get best investment policies and critical investment entrances. This suggests that integrating will be unavoidable in some information product markets. The model creates some new intuitions into the forces that shape market behavior as noticed in the information technology industry. It can be used to specify best investment policies for technology innovations and adoptions, multistage R&D, and investment projects in information technology.

  5. Theoretical approach to the institutionalization of forms of governance resource provision of innovative activity

    Directory of Open Access Journals (Sweden)

    M. S. Asmolova

    2016-01-01

    Full Text Available Knowledge economy research due to the actualization of the role of knowledge and information. Management, its impact and the institutionalization of management resource provision designed to overcome the problems inherent in the present stage of development. An important research direction is to carry out theoretical analysis of economic resources in the context of their occurrence, development and improvement. This assertion has identified the need to consider the theoretical approach to the institutionalization of forms of resource management software innovation and analysis and typology of approaches by different parameters on the basis of analysis of a large number of sources. The features of the concept of institutionalization as defined phenomenon in a time perspective. In an analysis conducted by scientists used studies from different periods in the development of economic science. The analysis of numerous professional and scientific research led to the conclusion that knowledge and information should be dis-regarded as a new type of economic production factors. Separately, analyzed the impact of globalization processes that have affected the scientific and innovative sphere. Allocated to a separate study by side issues of innovative development of the Russian economy, which prevents the unresolved improve the competitiveness of the national economic and inhibits the formation of regional and national innovation system, restraining the transition to an innovative model of development. Citing as evidence of the deepening of economic globalization, the role of new information technologies and the formation of a single information space. Noting the fact that if the earlier science developed to deepen knowledge on the basis of the social division of Sciences, in the coming century should happen deepening of knowledge on the basis of their socialization.

  6. Theoretical study of molecular vibrations in electron momentum spectroscopy experiments on furan: An analytical versus a molecular dynamical approach

    International Nuclear Information System (INIS)

    Morini, Filippo; Deleuze, Michael S.; Watanabe, Noboru; Takahashi, Masahiko

    2015-01-01

    The influence of thermally induced nuclear dynamics (molecular vibrations) in the initial electronic ground state on the valence orbital momentum profiles of furan has been theoretically investigated using two different approaches. The first of these approaches employs the principles of Born-Oppenheimer molecular dynamics, whereas the so-called harmonic analytical quantum mechanical approach resorts to an analytical decomposition of contributions arising from quantized harmonic vibrational eigenstates. In spite of their intrinsic differences, the two approaches enable consistent insights into the electron momentum distributions inferred from new measurements employing electron momentum spectroscopy and an electron impact energy of 1.2 keV. Both approaches point out in particular an appreciable influence of a few specific molecular vibrations of A 1 symmetry on the 9a 1 momentum profile, which can be unravelled from considerations on the symmetry characteristics of orbitals and their energy spacing

  7. An EKF-based approach for estimating leg stiffness during walking.

    Science.gov (United States)

    Ochoa-Diaz, Claudia; Menegaz, Henrique M; Bó, Antônio P L; Borges, Geovany A

    2013-01-01

    The spring-like behavior is an inherent condition for human walking and running. Since leg stiffness k(leg) is a parameter that cannot be directly measured, many techniques has been proposed in order to estimate it, most of them using force data. This paper intends to address this problem using an Extended Kalman Filter (EKF) based on the Spring-Loaded Inverted Pendulum (SLIP) model. The formulation of the filter only uses as measurement information the Center of Mass (CoM) position and velocity, no a priori information about the stiffness value is known. From simulation results, it is shown that the EKF-based approach can generate a reliable stiffness estimation for walking.

  8. Investigating the Importance of the Pocket-estimation Method in Pocket-based Approaches: An Illustration Using Pocket-ligand Classification.

    Science.gov (United States)

    Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie

    2017-09-01

    Small molecules interact with their protein target on surface cavities known as binding pockets. Pocket-based approaches are very useful in all of the phases of drug design. Their first step is estimating the binding pocket based on protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and to develop pharmacological profiling models. We found pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results that can have an impact on the detected correspondence between pocket and ligand profiles. Thus, we highlighted the importance of the pocket-estimation method choice in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Parametric estimation in the wave buoy analogy - an elaborated approach based on energy considerations

    DEFF Research Database (Denmark)

    Montazeri, Najmeh; Nielsen, Ulrik Dam

    2014-01-01

    the ship’s wave-induced responses based on different statistical inferences including parametric and non-parametric approaches. This paper considers a concept to improve the estimate obtained by the parametric method for sea state estimation. The idea is illustrated by an analysis made on full-scale...

  10. Estimation of petroleum assets: contribution of the options theory

    International Nuclear Information System (INIS)

    Chesney, M.

    1999-01-01

    With the development of data on real options, the theoretical and practical research on projects estimation has advanced considerably. The analogy between investment advisabilities and options allow to take into account the flexibility which is inherent in any project as well as the choices which are available to investors. The advantages of this approach are shown in this paper. An example of application in the field of petroleum industry is given. (O.M.)

  11. Using cohort change ratios to estimate life expectancy in populations with negligible migration: A new approach

    Directory of Open Access Journals (Sweden)

    David A. Swanson

    2012-07-01

    Full Text Available Census survival methods are the oldest and most widely applicable methods of estimating adult mortality, and for populations with negligible migration they can provide excellent results. The reason for this ubiquity is threefold: (1 their data requirements are minimal in that only two successive age distributions are needed; (2 the two successive age distributions are usually easily obtained from census counts; and (3 the method is straightforward in that it requires neither a great deal of judgment nor “data-fitting” techniques to implement. This ubiquity is in contrast to other methods, which require more data, as well as judgment and, often, data fitting. In this short note, the new approach we demonstrate is that life expectancy at birth can be computed by using census survival rates in combination with an identity whereby the radix of a life table is equal to 1 (l0 = 1.00. We point out that our suggested method is less involved than the existing approach. We compare estimates using our approach against other estimates, and find it works reasonably well. As well as some nuances and cautions, we discuss the benefits of using this approach to estimate life expectancy, including the ability to develop estimates of average remaining life at any age. We believe that the technique is worthy of consideration for use in estimating life expectancy in populations that experience negligible migration.

  12. Using cohort change ratios to estimate life expectancy in populations with negligible migration: A new approach

    Directory of Open Access Journals (Sweden)

    Lucky Tedrow

    2012-01-01

    Full Text Available Census survival methods are the oldest and most widely applicable methods of estimating adult mortality, and for populations with negligible migration they can provide excellent results. The reason for this ubiquity is threefold: (1 their data requirements are minimal in that only two successive age distributions are needed; (2 the two successive age distributions are usually easily obtained from census counts; and (3 the method is straightforward in that it requires neither a great deal of judgment nor “data-fitting” techniques to implement. This ubiquity is in contrast to other methods, which require more data, as well as judgment and, often, data fitting. In this short note, the new approach we demonstrate is that life expectancy at birth can be computed by using census survival rates in combination with an identity whereby the radix of a life table is equal to 1 (l0 = 1.00. We point out that our suggested method is less involved than the existing approach. We compare estimates using our approach against other estimates, and find it works reasonably well. As well as some nuances and cautions, we discuss the benefits of using this approach to estimate life expectancy, including the ability to develop estimates of average remaining life at any age. We believe that the technique is worthy of consideration for use in estimating life expectancy in populations that experience negligible migration.

  13. Aspects of using a best-estimate approach for VVER safety analysis in reactivity initiated accidents

    Energy Technology Data Exchange (ETDEWEB)

    Ovdiienko, Iurii; Bilodid, Yevgen; Ieremenko, Maksym [State Scientific and Technical Centre on Nuclear and Radiation, Safety (SSTC N and RS), Kyiv (Ukraine); Loetsch, Thomas [TUEV SUED Industrie Service GmbH, Energie und Systeme, Muenchen (Germany)

    2016-09-15

    At present time, Ukraine faces the problem of small margins of acceptance criteria in connection with the implementation of a conservative approach for safety evaluations. The problem is particularly topical conducting feasibility analysis of power up-rating for Ukrainian nuclear power plants. Such situation requires the implementation of a best-estimate approach on the basis of an uncertainty analysis. For some kind of accidents, such as loss-of-coolant accident (LOCA), the best estimate approach is, more or less, developed and established. However, for reactivity initiated accident (RIA) analysis an application of best estimate method could be problematical. A regulatory document in Ukraine defines a nomenclature of neutronics calculations and so called ''generic safety parameters'' which should be used as boundary conditions for all VVER-1000 (V-320) reactors in RIA analysis. In this paper the ideas of uncertainty evaluations of generic safety parameters in RIA analysis in connection with the use of the 3D neutron kinetic code DYN3D and the GRS SUSA approach are presented.

  14. Remaining Useful Life Estimation of Li-ion Battery for Energy Storage System Using Markov Chain Monte Carlo Method

    International Nuclear Information System (INIS)

    Kim, Dongjin; Kim, Seok Goo; Choi, Jooho; Lee, Jaewook; Song, Hwa Seob; Park, Sang Hui

    2016-01-01

    Remaining useful life (RUL) estimation of the Li-ion battery has gained great interest because it is necessary for quality assurance, operation planning, and determination of the exchange period. This paper presents the RUL estimation of an Li-ion battery for an energy storage system using exponential function for the degradation model and Markov Chain Monte Carlo (MCMC) approach for parameter estimation. The MCMC approach is dependent upon information such as model initial parameters and input setting parameters which highly affect the estimation result. To overcome this difficulty, this paper offers a guideline for model initial parameters based on the regression result, and MCMC input parameters derived by comparisons with a thorough search of theoretical results

  15. Remaining Useful Life Estimation of Li-ion Battery for Energy Storage System Using Markov Chain Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dongjin; Kim, Seok Goo; Choi, Jooho; Lee, Jaewook [Korea Aerospace Univ., Koyang (Korea, Republic of); Song, Hwa Seob; Park, Sang Hui [Hyosung Corporation, Seoul (Korea, Republic of)

    2016-10-15

    Remaining useful life (RUL) estimation of the Li-ion battery has gained great interest because it is necessary for quality assurance, operation planning, and determination of the exchange period. This paper presents the RUL estimation of an Li-ion battery for an energy storage system using exponential function for the degradation model and Markov Chain Monte Carlo (MCMC) approach for parameter estimation. The MCMC approach is dependent upon information such as model initial parameters and input setting parameters which highly affect the estimation result. To overcome this difficulty, this paper offers a guideline for model initial parameters based on the regression result, and MCMC input parameters derived by comparisons with a thorough search of theoretical results.

  16. THEORETICAL AND METHODOLOGICAL APPROACHES TO REGIONAL COMPETITION INVESTIGATION

    Directory of Open Access Journals (Sweden)

    A.I. Tatarkin

    2006-03-01

    Full Text Available The article is dedicated to theoretical-methodological issues of regional economy competitiveness investigation. Economic essence of regional competitiveness is analyzed, its definition is given. The factors that determine relations of competition on medium and macrolevels are proved. The basic differences between world-economical and inter-regional communications are formulated. The specific features of globalization processes as form of competitive struggle are considered.

  17. Theoretical analysis of balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2012-01-01

    In this paper we present theoretical analysis of model reduction of linear switched systems based on balanced truncation, presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using L2 gain, (2) we provide a system theoretic interpretation of grammians and their singu......In this paper we present theoretical analysis of model reduction of linear switched systems based on balanced truncation, presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using L2 gain, (2) we provide a system theoretic interpretation of grammians...... for showing this independence is realization theory of linear switched systems. [1] H. R. Shaker and R. Wisniewski, "Generalized gramian framework for model/controller order reduction of switched systems", International Journal of Systems Science, Vol. 42, Issue 8, 2011, 1277-1291. [2] H. R. Shaker and R....... Wisniewski, "Switched Systems Reduction Framework Based on Convex Combination of Generalized Gramians", Journal of Control Science and Engineering, 2009....

  18. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)

  19. Molecular acidity: An accurate description with information-theoretic approach in density functional reactivity theory.

    Science.gov (United States)

    Cao, Xiaofang; Rong, Chunying; Zhong, Aiguo; Lu, Tian; Liu, Shubin

    2018-01-15

    Molecular acidity is one of the important physiochemical properties of a molecular system, yet its accurate calculation and prediction are still an unresolved problem in the literature. In this work, we propose to make use of the quantities from the information-theoretic (IT) approach in density functional reactivity theory and provide an accurate description of molecular acidity from a completely new perspective. To illustrate our point, five different categories of acidic series, singly and doubly substituted benzoic acids, singly substituted benzenesulfinic acids, benzeneseleninic acids, phenols, and alkyl carboxylic acids, have been thoroughly examined. We show that using IT quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy, one is able to simultaneously predict experimental pKa values of these different categories of compounds. Because of the universality of the quantities employed in this work, which are all density dependent, our approach should be general and be applicable to other systems as well. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.

  20. The strong coupling constant: its theoretical derivation from a geometric approach to hadron structure

    International Nuclear Information System (INIS)

    Recami, E.; Tonin-Zanchin, V.

    1991-01-01

    Since more than a decade, a bi-scale, unified approach to strong and gravitational interactions has been proposed, that uses the geometrical methods of general relativity, and yielded results similar to strong gravity theory's. We fix our attention, in this note, on hadron structure, and show that also the strong interaction strength α s, ordinarily called the (perturbative) coupling-constant square, can be evaluated within our theory, and found to decrease (increase) as the distance r decreases (increases). This yields both the confinement of the hadron constituents for large values of r, and their asymptotic freedom [for small values of r inside the hadron]: in qualitative agreement with the experimental evidence. In other words, our approach leads us, on a purely theoretical ground, to a dependence of α s on r which had been previously found only on phenomenological and heuristical grounds. We expect the above agreement to be also quantitative, on the basis of a few checks performed in this paper, and of further work of ours about calculating meson mass-spectra. (author)

  1. H∞ Channel Estimation for DS-CDMA Systems: A Partial Difference Equation Approach

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2013-01-01

    Full Text Available In the communications literature, a number of different algorithms have been proposed for channel estimation problems with the statistics of the channel noise and observation noise exactly known. In practical systems, however, the channel parameters are often estimated using training sequences which lead to the statistics of the channel noise difficult to obtain. Moreover, the received signals are corrupted not only by the ambient noises but also by multiple-access interferences, so the statistics of observation noises is also difficult to obtain. In this paper, we will investigate the H∞ channel estimation problem for direct-sequence code-division multiple-access (DS-CDMA communication systems with time-varying multipath fading channels. The channel estimator is designed by applying a partial difference equation approach together with the innovation analysis theory. This method can give a sufficient and necessary condition for the existence of an H∞ channel estimator.

  2. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.

  3. A combined vision-inertial fusion approach for 6-DoF object pose estimation

    Science.gov (United States)

    Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2015-02-01

    The estimation of the 3D position and orientation of moving objects (`pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains as an open challenge to provide a low-cost, easy to deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located to have the object to be tracked visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.

  4. Estimating petroleum products demand elasticities in Nigeria. A multivariate cointegration approach

    International Nuclear Information System (INIS)

    Iwayemi, Akin; Adenikinju, Adeola; Babatunde, M. Adetunji

    2010-01-01

    This paper formulates and estimates petroleum products demand functions in Nigeria at both aggregative and product level for the period 1977 to 2006 using multivariate cointegration approach. The estimated short and long-run price and income elasticities confirm conventional wisdom that energy consumption responds positively to changes in GDP and negatively to changes in energy price. However, the price and income elasticities of demand varied according to product type. Kerosene and gasoline have relatively high short-run income and price elasticities compared to diesel. Overall, the results show petroleum products to be price and income inelastic. (author)

  5. Estimating petroleum products demand elasticities in Nigeria. A multivariate cointegration approach

    Energy Technology Data Exchange (ETDEWEB)

    Iwayemi, Akin; Adenikinju, Adeola; Babatunde, M. Adetunji [Department of Economics, University of Ibadan, Ibadan (Nigeria)

    2010-01-15

    This paper formulates and estimates petroleum products demand functions in Nigeria at both aggregative and product level for the period 1977 to 2006 using multivariate cointegration approach. The estimated short and long-run price and income elasticities confirm conventional wisdom that energy consumption responds positively to changes in GDP and negatively to changes in energy price. However, the price and income elasticities of demand varied according to product type. Kerosene and gasoline have relatively high short-run income and price elasticities compared to diesel. Overall, the results show petroleum products to be price and income inelastic. (author)

  6. FRACTURE MECHANICS APPROACH TO ESTIMATE FATIGUE LIVES OF WELDED LAP-SHEAR SPECIMENS

    Energy Technology Data Exchange (ETDEWEB)

    Lam, P.; Michigan, J.

    2014-04-25

    A full range of stress intensity factor solutions for a kinked crack is developed as a function of weld width and the sheet thickness. When used with the associated main crack solutions (global stress intensity factors) in terms of the applied load and specimen geometry, the fatigue lives can be estimated for the laser-welded lap-shear specimens. The estimations are in good agreement with the experimental data. A classical solution for an infinitesimal kink is also employed in the approach. However, the life predictions tend to overestimate the actual fatigue lives. The traditional life estimations with the structural stress along with the experimental stress-fatigue life data (S-N curve) are also provided. In this case, the estimations only agree with the experimental data under higher load conditions.

  7. Information-theoretic methods for estimating of complicated probability distributions

    CERN Document Server

    Zong, Zhi

    2006-01-01

    Mixing up various disciplines frequently produces something that are profound and far-reaching. Cybernetics is such an often-quoted example. Mix of information theory, statistics and computing technology proves to be very useful, which leads to the recent development of information-theory based methods for estimating complicated probability distributions. Estimating probability distribution of a random variable is the fundamental task for quite some fields besides statistics, such as reliability, probabilistic risk analysis (PSA), machine learning, pattern recognization, image processing, neur

  8. A Game Theoretic Approach for Balancing Energy Consumption in Clustered Wireless Sensor Networks.

    Science.gov (United States)

    Yang, Liu; Lu, Yinzhi; Xiong, Lian; Tao, Yang; Zhong, Yuanchang

    2017-11-17

    Clustering is an effective topology control method in wireless sensor networks (WSNs), since it can enhance the network lifetime and scalability. To prolong the network lifetime in clustered WSNs, an efficient cluster head (CH) optimization policy is essential to distribute the energy among sensor nodes. Recently, game theory has been introduced to model clustering. Each sensor node is considered as a rational and selfish player which will play a clustering game with an equilibrium strategy. Then it decides whether to act as the CH according to this strategy for a tradeoff between providing required services and energy conservation. However, how to get the equilibrium strategy while maximizing the payoff of sensor nodes has rarely been addressed to date. In this paper, we present a game theoretic approach for balancing energy consumption in clustered WSNs. With our novel payoff function, realistic sensor behaviors can be captured well. The energy heterogeneity of nodes is considered by incorporating a penalty mechanism in the payoff function, so the nodes with more energy will compete for CHs more actively. We have obtained the Nash equilibrium (NE) strategy of the clustering game through convex optimization. Specifically, each sensor node can achieve its own maximal payoff when it makes the decision according to this strategy. Through plenty of simulations, our proposed game theoretic clustering is proved to have a good energy balancing performance and consequently the network lifetime is greatly enhanced.

  9. A dynamical systems approach for estimating phase interactions between rhythms of different frequencies from experimental data.

    Science.gov (United States)

    Onojima, Takayuki; Goto, Takahiro; Mizuhara, Hiroaki; Aoyagi, Toshio

    2018-01-01

    Synchronization of neural oscillations as a mechanism of brain function is attracting increasing attention. Neural oscillation is a rhythmic neural activity that can be easily observed by noninvasive electroencephalography (EEG). Neural oscillations show the same frequency and cross-frequency synchronization for various cognitive and perceptual functions. However, it is unclear how this neural synchronization is achieved by a dynamical system. If neural oscillations are weakly coupled oscillators, the dynamics of neural synchronization can be described theoretically using a phase oscillator model. We propose an estimation method to identify the phase oscillator model from real data of cross-frequency synchronized activities. The proposed method can estimate the coupling function governing the properties of synchronization. Furthermore, we examine the reliability of the proposed method using time-series data obtained from numerical simulation and an electronic circuit experiment, and show that our method can estimate the coupling function correctly. Finally, we estimate the coupling function between EEG oscillation and the speech sound envelope, and discuss the validity of these results.

  10. Research in theoretical nuclear physics

    International Nuclear Information System (INIS)

    Udagawa, T.

    1993-11-01

    This report describes the accomplishments in basic research in nuclear physics carried out by the theoretical nuclear physics group in the Department of Physics at the University of Texas at Austin, during the period of November 1, 1992 to October 31, 1993. The work done covers three separate areas, low-energy nuclear reactions, intermediate energy physics, and nuclear structure studies. Although the subjects are thus spread among different areas, they are based on two techniques developed in previous years. These techniques are a powerful method for continuum-random-phase-approximation (CRPA) calculations of nuclear response and the breakup-fusion (BF) approach to incomplete fusion reactions, which calculation on a single footing of various incomplete fusion reaction cross sections within the framework of direct reaction theories. The approach was developed as a part of a more general program for establishing an approach to describing all different types of nuclear reactions, i.e., complete fusion, incomplete fusion and direct reactions, in a systematic way based on single theoretical framework

  11. Different approaches to estimation of reactor pressure vessel material embrittlement

    Directory of Open Access Journals (Sweden)

    V. M. Revka

    2013-03-01

    Full Text Available The surveillance test data for the nuclear power plant which is under operation in Ukraine have been used to estimate WWER-1000 reactor pressure vessel (RPV material embrittlement. The beltline materials (base and weld metal were characterized using Charpy impact and fracture toughness test methods. The fracture toughness test data were analyzed according to the standard ASTM 1921-05. The pre-cracked Charpy specimens were tested to estimate a shift of reference temperature T0 due to neutron irradiation. The maximum shift of reference temperature T0 is 84 °C. A radiation embrittlement rate AF for the RPV material was estimated using fracture toughness test data. In addition the AF factor based on the Charpy curve shift (ΔTF has been evaluated. A comparison of the AF values estimated according to different approaches has shown there is a good agreement between the radiation shift of Charpy impact and fracture toughness curves for weld metal with high nickel content (1,88 % wt. Therefore Charpy impact test data can be successfully applied to estimate the fracture toughness curve shift and therefore embrittlement rate. Furthermore it was revealed that radiation embrittlement rate for weld metal is higher than predicted by a design relationship. The enhanced embrittlement is most probably related to simultaneously high nickel and high manganese content in weld metal.

  12. Approach of the estimation for the highest energy of the gamma rays

    International Nuclear Information System (INIS)

    Dumitrescu, Gheorghe

    2004-01-01

    In the last decade there was under debate the issue concerning the composition of the ultra high energy cosmic rays and some authors suggested that the light composition seems to be a relating issue. There was another debate concerning the limit of the energy of gamma rays. The bottom-up approaches suggest a limit at 10 15 eV. Some top-down approaches rise this limit at about 10 20 eV or above. The present paper provides an approach to estimate the limit of the energy of gamma rays using the recent paper of Claus W. Turtur. (author)

  13. Simplified approach for estimating large early release frequency

    International Nuclear Information System (INIS)

    Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.

    1998-04-01

    The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB

  14. A new approach for estimating the density of liquids.

    Science.gov (United States)

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-05

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.

  15. An efficient algebraic approach to observability analysis in state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)

    2010-03-15

    An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)

  16. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE of the species tree (topology, branch lengths, and population sizes from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M -1. The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT, the MP-EST method is robust to a small amount of HGT in the

  17. Value drivers: an approach for estimating health and disease management program savings.

    Science.gov (United States)

    Phillips, V L; Becker, Edmund R; Howard, David H

    2013-12-01

    Health and disease management (HDM) programs have faced challenges in documenting savings related to their implementation. The objective of this eliminate study was to describe OptumHealth's (Optum) methods for estimating anticipated savings from HDM programs using Value Drivers. Optum's general methodology was reviewed, along with details of 5 high-use Value Drivers. The results showed that the Value Driver approach offers an innovative method for estimating savings associated with HDM programs. The authors demonstrated how real-time savings can be estimated for 5 Value Drivers commonly used in HDM programs: (1) use of beta-blockers in treatment of heart disease, (2) discharge planning for high-risk patients, (3) decision support related to chronic low back pain, (4) obesity management, and (5) securing transportation for primary care. The validity of savings estimates is dependent on the type of evidence used to gauge the intervention effect, generating changes in utilization and, ultimately, costs. The savings estimates derived from the Value Driver method are generally reasonable to conservative and provide a valuable framework for estimating financial impacts from evidence-based interventions.

  18. A multi-method and multi-scale approach for estimating city-wide anthropogenic heat fluxes

    Science.gov (United States)

    Chow, Winston T. L.; Salamanca, Francisco; Georgescu, Matei; Mahalov, Alex; Milne, Jeffrey M.; Ruddell, Benjamin L.

    2014-12-01

    A multi-method approach estimating summer waste heat emissions from anthropogenic activities (QF) was applied for a major subtropical city (Phoenix, AZ). These included detailed, quality-controlled inventories of city-wide population density and traffic counts to estimate waste heat emissions from population and vehicular sources respectively, and also included waste heat simulations derived from urban electrical consumption generated by a coupled building energy - regional climate model (WRF-BEM + BEP). These component QF data were subsequently summed and mapped through Geographic Information Systems techniques to enable analysis over local (i.e. census-tract) and regional (i.e. metropolitan area) scales. Through this approach, local mean daily QF estimates compared reasonably versus (1.) observed daily surface energy balance residuals from an eddy covariance tower sited within a residential area and (2.) estimates from inventory methods employed in a prior study, with improved sensitivity to temperature and precipitation variations. Regional analysis indicates substantial variations in both mean and maximum daily QF, which varied with urban land use type. Average regional daily QF was ∼13 W m-2 for the summer period. Temporal analyses also indicated notable differences using this approach with previous estimates of QF in Phoenix over different land uses, with much larger peak fluxes averaging ∼50 W m-2 occurring in commercial or industrial areas during late summer afternoons. The spatio-temporal analysis of QF also suggests that it may influence the form and intensity of the Phoenix urban heat island, specifically through additional early evening heat input, and by modifying the urban boundary layer structure through increased turbulence.

  19. Aperture Array Photonic Metamaterials: Theoretical approaches, numerical techniques and a novel application

    Science.gov (United States)

    Lansey, Eli

    Optical or photonic metamaterials that operate in the infrared and visible frequency regimes show tremendous promise for solving problems in renewable energy, infrared imaging, and telecommunications. However, many of the theoretical and simulation techniques used at lower frequencies are not applicable to this higher-frequency regime. Furthermore, technological and financial limitations of photonic metamaterial fabrication increases the importance of reliable theoretical models and computational techniques for predicting the optical response of photonic metamaterials. This thesis focuses on aperture array metamaterials. That is, a rectangular, circular, or other shaped cavity or hole embedded in, or penetrating through a metal film. The research in the first portion of this dissertation reflects our interest in developing a fundamental, theoretical understanding of the behavior of light's interaction with these aperture arrays, specifically regarding enhanced optical transmission. We develop an approximate boundary condition for metals at optical frequencies, and a comprehensive, analytical explanation of the physics underlying this effect. These theoretical analyses are augmented by computational techniques in the second portion of this thesis, used both for verification of the theoretical work, and solving more complicated structures. Finally, the last portion of this thesis discusses the results from designing, fabricating and characterizing a light-splitting metamaterial.

  20. External Validity in the Study of Human Development: Theoretical and Methodological Issues

    Science.gov (United States)

    Hultsch, David F.; Hickey, Tom

    1978-01-01

    An examination of the concept of external validity from two theoretical perspectives: a traditional mechanistic approach and a dialectical organismic approach. Examines the theoretical and methodological implications of these perspectives. (BD)

  1. ANALYSIS OF THEORETICAL AND METHODOLOGICAL APPROACHES TO DESIGN OF ELECTRONIC TEXTBOOKS FOR STUDENTS OF HIGHER AGRICULTURAL EDUCATIONAL INSTITUTIONS

    Directory of Open Access Journals (Sweden)

    Olena Yu. Balalaieva

    2017-06-01

    Full Text Available The article deals with theoretical and methodological approaches to the design of electronic textbook, in particular systems, competence, activity, personality oriented, technological one, that in complex reflect the general trends in the formation of a new educational paradigm, distinctive features of which lie in constructing the heuristic searching model of the learning process, focusing on developmental teaching, knowledge integration, skills development for the independent information search and processing, technification of the learning process. The approach in this study is used in a broad sense as a synthesis of the basic ideas, views, principles that determine the overall research strategy. The main provisions of modern approaches to design are not antagonistic, they should be applied in a complex, taking into account the advantages of each of them and leveling shortcomings for the development of optimal concept of electronic textbook. The model of electronic textbook designing and components of methodology for its using based on these approaches are described.

  2. Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions

    OpenAIRE

    Adrian Jinich; Dmitrij Rappoport; Ian Dunn; Benjamin Sanchez-Lengeling; Roberto Olivares-Amaya; Elad Noor; Arren Bar Even; Alán Aspuru-Guzik

    2014-01-01

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfe...

  3. Theoretical vs. measured risk estimates for the external exposure to ionizing radiation pathway - a case study of a major industrial site

    International Nuclear Information System (INIS)

    Dundon, S.T.

    1996-01-01

    Two methods of estimating the risk to industrial receptors to ionizing radiation are presented here. The first method relies on the use of the U.S. Environmental Protection Agency (EPA) external exposure slope factor combined with default exposure parameters for industrial land uses. The second method employs measured exposure rate date and site-specific exposure durations combined with the BEIR V radiological risk coefficient to estimate occupational risk. The uncertainties in each method are described qualitatively. Site-specific information was available for the exposure duration and the exposure frequency as well as historic dosimetry information. Risk estimates were also generated for the current regulatory cleanup level (removal risks included) and for a no action scenario. The study showed that uncertainties for risks calculated using measured exposure rates and site-specific exposure parameters were much lower and defendable than using EPA slope factors combined with default exposure parameters. The findings call into question the use of a uniform cleanup standard for depleted uranium that does not account for site-specific land uses and relies on theoretical models rather than measured exposure rate information

  4. CLUSTER DEVELOPMENT OF ECONOMY OF REGION: THEORETICAL OPPORTUNITIES AND PRACTICAL EXPERIENCE

    Directory of Open Access Journals (Sweden)

    O.A. Romanova

    2007-12-01

    Full Text Available In clause theoretical approaches to formation industrial cluster кластеров in regions of the Russian Federation are considered. Оn the basis of which the methodological scheme of the project of cluster creation is offered. On an example hi-tech cluster “Titanic valley”, created in Sverdlovsk area, basic elements of its formation reveal: a substantiation of use cluster forms of the organization of business, an estimation of preconditions of creation, the description of the cluster purposes, problems, structures; mechanism of management and stages of realization of the project of cluster creation, measures of the state support.

  5. Event-based state estimation a stochastic perspective

    CERN Document Server

    Shi, Dawei; Chen, Tongwen

    2016-01-01

    This book explores event-based estimation problems. It shows how several stochastic approaches are developed to maintain estimation performance when sensors perform their updates at slower rates only when needed. The self-contained presentation makes this book suitable for readers with no more than a basic knowledge of probability analysis, matrix algebra and linear systems. The introduction and literature review provide information, while the main content deals with estimation problems from four distinct angles in a stochastic setting, using numerous illustrative examples and comparisons. The text elucidates both theoretical developments and their applications, and is rounded out by a review of open problems. This book is a valuable resource for researchers and students who wish to expand their knowledge and work in the area of event-triggered systems. At the same time, engineers and practitioners in industrial process control will benefit from the event-triggering technique that reduces communication costs ...

  6. A brute-force spectral approach for wave estimation using measured vessel motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.

    2018-01-01

    , and the procedure is simple in its mathematical formulation. The actual formulation is extending another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds......The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct - brute-force - approach......, and the estimation procedure will therefore be appealing to applications related to realtime, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave...

  7. Estimating a WTP-based value of a QALY: the 'chained' approach.

    Science.gov (United States)

    Robinson, Angela; Gyrd-Hansen, Dorte; Bacon, Philomena; Baker, Rachel; Pennington, Mark; Donaldson, Cam

    2013-09-01

    A major issue in health economic evaluation is that of the value to place on a quality adjusted life year (QALY), commonly used as a measure of health care effectiveness across Europe. This critical policy issue is reflected in the growing interest across Europe in development of more sound methods to elicit such a value. EuroVaQ was a collaboration of researchers from 9 European countries, the main aim being to develop more robust methods to determine the monetary value of a QALY based on surveys of the general public. The 'chained' approach of deriving a societal willingness-to-pay (WTP) based monetary value of a QALY used the following basic procedure. First, utility values were elicited for health states using the standard gamble (SG) and time trade off (TTO) methods. Second, a monetary value to avoid some risk/duration of that health state was elicited and the implied WTP per QALY estimated. We developed within EuroVaQ an adaptation to the 'chained approach' that attempts to overcome problems documented previously (in particular the tendency to arrive at exceedingly high WTP per QALY values). The survey was administered via Internet panels in each participating country and almost 22,000 responses achieved. Estimates of the value of a QALY varied across question and were, if anything, on the low side with the (trimmed) 'all country' mean WTP per QALY ranging from $18,247 to $34,097. Untrimmed means were considerably higher and medians considerably lower in each case. We conclude that the adaptation to the chained approach described here is a potentially useful technique for estimating WTP per QALY. A number of methodological challenges do still exist, however, and there is scope for further refinement. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. A game-theoretic framework for estimating a health purchaser's willingness-to-pay for health and for expansion.

    Science.gov (United States)

    Yaesoubi, Reza; Roberts, Stephen D

    2010-12-01

    A health purchaser's willingness-to-pay (WTP) for health is defined as the amount of money the health purchaser (e.g. a health maximizing public agency or a profit maximizing health insurer) is willing to spend for an additional unit of health. In this paper, we propose a game-theoretic framework for estimating a health purchaser's WTP for health in markets where the health purchaser offers a menu of medical interventions, and each individual in the population selects the intervention that maximizes her prospect. We discuss how the WTP for health can be employed to determine medical guidelines, and to price new medical technologies, such that the health purchaser is willing to implement them. The framework further introduces a measure for WTP for expansion, defined as the amount of money the health purchaser is willing to pay per person in the population served by the health provider to increase the consumption level of the intervention by one percent without changing the intervention price. This measure can be employed to find how much to invest in expanding a medical program through opening new facilities, advertising, etc. Applying the proposed framework to colorectal cancer screening tests, we estimate the WTP for health and the WTP for expansion of colorectal cancer screening tests for the 2005 US population.

  9. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    Science.gov (United States)

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely

  10. Hydropower assessment of Bolivia—A multisource satellite data and hydrologic modeling approach

    Science.gov (United States)

    Velpuri, Naga Manohar; Pervez, Shahriar; Cushing, W. Matthew

    2016-11-28

    This study produced a geospatial database for use in a decision support system by the Bolivian authorities to investigate further development and investment potentials in sustainable hydropower in Bolivia. The study assessed theoretical hydropower of all 1-kilometer (km) stream segments in the country using multisource satellite data and a hydrologic modeling approach. With the assessment covering the 2 million square kilometer (km2) region influencing Bolivia’s drainage network, the potential hydropower figures are based on theoretical yield assuming that the systems generating the power are 100 percent efficient. There are several factors to consider when determining the real-world or technical power potential of a hydropower system, and these factors can vary depending on local conditions. Since this assessment covers a large area, it was necessary to reduce these variables to the two that can be modeled consistently throughout the region, streamflow or discharge, and elevation drop or head. First, the Shuttle Radar Topography Mission high-resolution 30-meter (m) digital elevation model was used to identify stream segments with greater than 10 km2 of upstream drainage. We applied several preconditioning processes to the 30-m digital elevation model to reduce errors and improve the accuracy of stream delineation and head height estimation. A total of 316,500 1-km stream segments were identified and used in this study to assess the total theoretical hydropower potential of Bolivia. Precipitation observations from a total of 463 stations obtained from the Bolivian Servicio Nacional de Meteorología e Hidrología (Bolivian National Meteorology and Hydrology Service) and the Brazilian Agência Nacional de Águas (Brazilian National Water Agency) were used to validate six different gridded precipitation estimates for Bolivia obtained from various sources. Validation results indicated that gridded precipitation estimates from the Tropical Rainfall Measuring Mission

  11. A combined experimental and theoretical approach to establish the relationship between shear force and clay platelet delamination in melt-processed polypropylene nanocomposites

    CSIR Research Space (South Africa)

    Bandyopadhyay, J

    2014-04-01

    Full Text Available In this article, a combined experimental and theoretical approach has been proposed to establish a relationship between the required shear force and the degree of delamination of clay tactoids during the melt-processing of polymer nanocomposites...

  12. Theoretical and methodological approaches to economic competitiveness (Part I

    Directory of Open Access Journals (Sweden)

    Macari Vadim

    2013-01-01

    Full Text Available The article is based on study of several representative bibliographical sources,it is tried to examine, to order from logical and scientific point of view some of the most common theoretical and methodological understandings of the essence, definition, phenomenon, types, haracteristics and indices of economic competitiveness.

  13. Theoretical and methodological approaches to economic competitiveness (part II

    Directory of Open Access Journals (Sweden)

    Macari Vadim

    2013-02-01

    Full Text Available This article, on the basis of the study of many representative bibliographic sources, examines and tries to order from logical and scientific point of view some of the most common theoretical and methodological treatments of the essence, definition, phenomenon, types, characteristics and indices of economic competitiveness.

  14. THEORETICAL AND METHODOLOGICAL APPROACHES TO ECONOMIC COMPETITIVENESS (Part II

    Directory of Open Access Journals (Sweden)

    Vadim MACARI

    2013-02-01

    Full Text Available This article, on the basis of the study of many representative bibliographic sources, examines and tries to order from logical and scientific point of view some of the most common theoretical and methodological treatments of the essence, definition, phenomenon, types, characteristics and indices of economic competitiveness.

  15. THEORETICAL AND METHODOLOGICAL APPROACHES TO ECONOMIC COMPETITIVENESS (Part I

    Directory of Open Access Journals (Sweden)

    Vadim MACARI

    2013-01-01

    Full Text Available The article is based on study of several representative bibliographical sources,it is tried to examine, to order from logical and scientific point of view some of the most common theoretical and methodological understandings of the essence, definition, phenomenon, types, characteristics and indices of economic competitiveness.

  16. Photogrammetric Resection Approach Using Straight Line Features for Estimation of Cartosat-1 Platform Parameters

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2008-08-01

    Full Text Available The classical calibration or space resection is the fundamental task in photogrammetry. The lack of sufficient knowledge of interior and exterior orientation parameters lead to unreliable results in the photogrammetric process. There are several other available methods using lines, which consider the determination of exterior orientation parameters, with no mention to the simultaneous determination of inner orientation parameters. Normal space resection methods solve the problem using control points, whose coordinates are known both in image and object reference systems. The non-linearity of the model and the problems, in point location in digital images are the main drawbacks of the classical approaches. The line based approach to overcome these problems includes usage of lines in the number of observations that can be provided, which improve significantly the overall system redundancy. This paper addresses mathematical model relating to both image and object reference system for solving the space resection problem which is generally used for upgrading the exterior orientation parameters. In order to solve the dynamic camera calibration parameters, a sequential estimator (Kalman Filtering is applied; in an iterative process to the image. For dynamic case, e.g. an image sequence of moving objects, a state prediction and a covariance matrix for the next instant is obtained using the available estimates and the system model. Filtered state estimates can be computed from these predicted estimates using the Kalman Filtering approach and basic physical sensor model for each instant of time. The proposed approach is tested with three real data sets and the result suggests that highly accurate space resection parameters can be obtained with or without using the control points and progressive processing time reduction.

  17. A short course in quantum information theory. An approach from theoretical physics. 2. ed.

    International Nuclear Information System (INIS)

    Diosi, Lajos

    2011-01-01

    This short and concise primer takes the vantage point of theoretical physics and the unity of physics. It sets out to strip the burgeoning field of quantum information science to its basics by linking it to universal concepts in physics. An extensive lecture rather than a comprehensive textbook, this volume is based on courses delivered over several years to advanced undergraduate and beginning graduate students, but essentially it addresses anyone with a working knowledge of basic quantum physics. Readers will find these lectures a most adequate entry point for theoretical studies in this field. For the second edition, the authors has succeeded in adding many new topics while sticking to the conciseness of the overall approach. A new chapter on qubit thermodynamics has been added, while new sections and subsections have been incorporated in various chapter to deal with weak and time-continuous measurements, period-finding quantum algorithms and quantum error corrections. From the reviews of the first edition: ''The best things about this book are its brevity and clarity. In around 100 pages it provides a tutorial introduction to quantum information theory, including problems and solutions.. it's worth a look if you want to quickly get up to speed with the language and central concepts of quantum information theory, including the background classical information theory.'' (Craig Savage, Australian Physics, Vol. 44 (2), 2007). (orig.)

  18. Development of a matrix approach to estimate soil clean-up levels for BTEX compounds

    International Nuclear Information System (INIS)

    Erbas-White, I.; San Juan, C.

    1993-01-01

    A draft state-of-the-art matrix approach has been developed for the State of Washington to estimate clean-up levels for benzene, toluene, ethylbenzene and xylene (BTEX) in deep soils based on an endangerment approach to groundwater. Derived soil clean-up levels are estimated using a combination of two computer models, MULTIMED and VLEACH. The matrix uses a simple scoring system that is used to assign a score at a given site based on the parameters such as depth to groundwater, mean annual precipitation, type of soil, distance to potential groundwater receptor and the volume of contaminated soil. The total score is then used to obtain a soil clean-up level from a table. The general approach used involves the utilization of computer models to back-calculate soil contaminant levels in the vadose zone that would create that particular contaminant concentration in groundwater at a given receptor. This usually takes a few iterations of trial runs to estimate the clean-up levels since the models use the soil clean-up levels as ''input'' and the groundwater levels as ''output.'' The selected contaminant levels in groundwater are Model Toxic control Act (MTCA) values used in the State of Washington

  19. Hybrid empirical--theoretical approach to modeling uranium adsorption

    International Nuclear Information System (INIS)

    Hull, Larry C.; Grossman, Christopher; Fjeld, Robert A.; Coates, John T.; Elzerman, Alan W.

    2004-01-01

    An estimated 330 metric tons of U are buried in the radioactive waste Subsurface Disposal Area (SDA) at the Idaho National Engineering and Environmental Laboratory (INEEL). An assessment of U transport parameters is being performed to decrease the uncertainty in risk and dose predictions derived from computer simulations of U fate and transport to the underlying Snake River Plain Aquifer. Uranium adsorption isotherms were measured for 14 sediment samples collected from sedimentary interbeds underlying the SDA. The adsorption data were fit with a Freundlich isotherm. The Freundlich n parameter is statistically identical for all 14 sediment samples and the Freundlich K f parameter is correlated to sediment surface area (r 2 =0.80). These findings suggest an efficient approach to material characterization and implementation of a spatially variable reactive transport model that requires only the measurement of sediment surface area. To expand the potential applicability of the measured isotherms, a model is derived from the empirical observations by incorporating concepts from surface complexation theory to account for the effects of solution chemistry. The resulting model is then used to predict the range of adsorption conditions to be expected in the vadose zone at the SDA based on the range in measured pore water chemistry. Adsorption in the deep vadose zone is predicted to be stronger than in near-surface sediments because the total dissolved carbonate decreases with depth

  20. Representing electrons a biographical approach to theoretical entities

    CERN Document Server

    Arabatzis, Theodore

    2006-01-01

    Both a history and a metahistory, Representing Electrons focuses on the development of various theoretical representations of electrons from the late 1890s to 1925 and the methodological problems associated with writing about unobservable scientific entities. Using the electron-or rather its representation-as a historical actor, Theodore Arabatzis illustrates the emergence and gradual consolidation of its representation in physics, its career throughout old quantum theory, and its appropriation and reinterpretation by chemists. As Arabatzis develops this novel biographical

  1. Evaluation of alternative model-data fusion approaches in water balance estimation across Australia

    Science.gov (United States)

    van Dijk, A. I. J. M.; Renzullo, L. J.

    2009-04-01

    Australia's national agencies are developing a continental modelling system to provide a range of water information services. It will include rolling water balance estimation to underpin national water accounts, water resources assessments that interpret current water resources availability and trends in a historical context, and water resources predictions coupled to climate and weather forecasting. The nation-wide coverage, currency, accuracy, and consistency required means that remote sensing will need to play an important role along with in-situ observations. Different approaches to blending models and observations can be considered. Integration of on-ground and remote sensing data into land surface models in atmospheric applications often involves state updating through model-data assimilation techniques. By comparison, retrospective water balance estimation and hydrological scenario modelling to date has mostly relied on static parameter fitting against observations and has made little use of earth observation. The model-data fusion approach most appropriate for a continental water balance estimation system will need to consider the trade-off between computational overhead and the accuracy gains achieved when using more sophisticated synthesis techniques and additional observations. This trade-off was investigated using a landscape hydrological model and satellite-based estimates of soil moisture and vegetation properties for aseveral gauged test catchments in southeast Australia.

  2. Theoretical clarity is not “Manicheanism”

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2011-01-01

    It is argued that in order to establish a new theoretical approach to information science it is necessary to express disagreement with some established views. The “social turn” in information science is not just exemplified in relation to the works of Marcia Bates but in relation to many different...... researchers in the field. Therefore it should not be taken personally, and the debate should focus on the substance. Marcia Bates has contributed considerably to information science. In spite of this some of her theoretical points of departure may be challenged. It is important to seek theoretical clarity...... and this may involve a degree of schematic confrontation that should not be confused with theoretical one-sidedness, “Manicheanism” or lack of respect....

  3. Concept of the Cooling System of the ITS for ALICE: Technical Proposals, Theoretical Estimates, Experimental Results

    CERN Document Server

    Godisov, O N; Yudkin, M I; Gerasimov, S F; Feofilov, G A

    1994-01-01

    Contradictory demands raised by the application of different types of sensitive detectors in 5 layers of the Inner Tracking System (ITS) for ALICE stipulate the simultaneous use of different schemes of heat drain: gaseous cooling of the 1st layer (uniform heat production over the sensitive surface) and evaporative cooling for the 2nd-5th layers (localised heat production). The last system is also a must for the thermostabilization of Si-drift detectors within 0.1 degree C. Theoretical estimates of gaseous, evaporative and liquid cooling systems are done for all ITS layers. The results of the experiments done for evaporative and liquid heat drain systems are presented and discussed. The major technical problems of the evaporative systems' design are being considered: i) control of liquid supply; ii) vapour pressure control. Two concepts of the evaporative systems are proposed: 1) One channel system for joint transfer of two phases (liquid + gas); 2) Two channels system with separate transfer of phases. Both sy...

  4. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    Science.gov (United States)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

    This paper presents a newly proposed data assimilation method for historical snow water equivalent SWE estimation using remotely sensed fractional snow-covered area fSCA. The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated and thus more challenging for SWE estimation than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBSRMSE was approx.54%of that seen in the EnBS, while for snow courses the PBSRMSE was approx.79%of the EnBS. Sensitivity tests show relative insensitivity for both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity for the EnBS to the mean prior precipitation input, especially in the case where significant prior biases exist.

  5. Nuclear energy policy analysis under uncertainties : applications of new utility theoretic approaches

    International Nuclear Information System (INIS)

    Ra, Ki Yong

    1992-02-01

    For the purpose of analyzing the nuclear energy policy under uncertainties, new utility theoretic approaches were applied. The main discoveries of new utility theories are that, firstly, the consequences can affect the perceived probabilities, secondly, the utilities are not fixed but can change, and finally, utilities and probabilities thus should be combined dependently to determine the overall worth of risky option. These conclusions were applied to develop the modified expected utility model and to establish the probabilistic nuclear safety criterion. The modified expected utility model was developed in order to resolve the inconsistencies between the expected utility model and the actual decision behaviors. Based on information theory and Bayesian inference, the modified probabilities were obtained as the stated probabilities times substitutional factors. The model theoretically predicts that the extreme value outcomes are perceived as to be more likely to occur than medium value outcomes. This prediction is consistent with the first finding of new utility theories that the consequences can after the perceived probabilities. And further with this theoretical prediction, the decision behavior of buying lottery ticket, of paying for insurance and of nuclear catastrophic risk aversion can well be explained. Through the numerical application, it is shown that the developed model can well explain the common consequence effect, common ratio effect and reflection effect. The probabilistic nuclear safety criterion for core melt frequency was established: Firstly, the distribution of the public's safety goal (DPSG) was proposed for representing the public's group preference under risk. Secondly, a new probabilistic safety criterion (PSC) was established, in which the DPSG was used as a benchmark for evaluating the results of probabilistic safety assessment. Thirdly, a log-normal distribution was proposed as the appropriate DPSG for core melt frequency using the

  6. Theoretical estimation of proton induced X-ray emission yield of the trace elements present in the lung and breast cancer

    International Nuclear Information System (INIS)

    Manjunatha, H.C.; Sowmya, N.

    2013-01-01

    X-rays may be produced following the excitation of target atoms induced by an energetic incident ion beam of protons. Proton induced X-ray emission (PIXE) analysis has been used for many years for the determination of elemental composition of materials using X-rays. Recent interest in the proton induced X-ray emission cross section has arisen due to their importance in the rapidly expanding field of PIXE analysis. One of the steps in the analysis is to fit the measured X-ray spectrum with theoretical spectrum. The theoretical cross section and yields are essential for the evaluation of spectrum. We have theoretically evaluated the PIXE cross sections for trace elements in the lung and breast cancer tissues such as Cl, K, Ca,Ti, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Br, Rb, P, S, Sr, Hg and Pb. The estimated cross section is used in the evaluation of Proton induced X-ray emission spectrum for the given trace elements.We have also evaluated the Proton induced X-ray emission yields in the thin and thick target of the given trace elements. The evaluated Proton induced X-ray emission cross-section, spectrum and yields are graphically represented. Some of these values are also tabulated. Proton induced X-ray emission cross sections and a yield for the given trace elements varies with the energy. PIXE yield depends on a real density and does not on thickness of the target. (author)

  7. The estimated economic burden of genital herpes in the United States. An analysis using two costing approaches

    Directory of Open Access Journals (Sweden)

    Fisman David N

    2001-06-01

    Full Text Available Abstract Background Only limited data exist on the costs of genital herpes (GH in the USA. We estimated the economic burden of GH in the USA using two different costing approaches. Methods The first approach was a cross-sectional survey of a sample of primary and secondary care physicians, analyzing health care resource utilization. The second approach was based on the analysis of a large administrative claims data set. Both approaches were used to generate the number of patients with symptomatic GH seeking medical treatment, the average medical expenditures and estimated national costs. Costs were valued from a societal and a third party payer's perspective in 1996 US dollars. Results In the cross-sectional study, based on an estimated 3.1 million symptomatic episodes per year in the USA, the annual direct medical costs were estimated at a maximum of $984 million. Of these costs, 49.7% were caused by drug expenditures, 47.7% by outpatient medical care and 2.6% by hospital costs. Indirect costs accounted for further $214 million. The analysis of 1,565 GH cases from the claims database yielded a minimum national estimate of $283 million direct medical costs. Conclusions GH appears to be an important public health problem from the health economic point of view. The observed difference in direct medical costs may be explained with the influence of compliance to treatment and possible undersampling of subpopulations in the claims data set. The present study demonstrates the validity of using different approaches in estimating the economic burden of a specific disease to the health care system.

  8. Precipitation areal-reduction factor estimation using an annual-maxima centered approach

    Science.gov (United States)

    Asquith, W.H.; Famiglietti, J.S.

    2000-01-01

    The adjustment of precipitation depth of a point storm to an effective (mean) depth over a watershed is important for characterizing rainfall-runoff relations and for cost-effective designs of hydraulic structures when design storms are considered. A design storm is the precipitation point depth having a specified duration and frequency (recurrence interval). Effective depths are often computed by multiplying point depths by areal-reduction factors (ARF). ARF range from 0 to 1, vary according to storm characteristics, such as recurrence interval; and are a function of watershed characteristics, such as watershed size, shape, and geographic location. This paper presents a new approach for estimating ARF and includes applications for the 1-day design storm in Austin, Dallas, and Houston, Texas. The approach, termed 'annual-maxima centered,' specifically considers the distribution of concurrent precipitation surrounding an annual-precipitation maxima, which is a feature not seen in other approaches. The approach does not require the prior spatial averaging of precipitation, explicit determination of spatial correlation coefficients, nor explicit definition of a representative area of a particular storm in the analysis. The annual-maxima centered approach was designed to exploit the wide availability of dense precipitation gauge data in many regions of the world. The approach produces ARF that decrease more rapidly than those from TP-29. Furthermore, the ARF from the approach decay rapidly with increasing recurrence interval of the annual-precipitation maxima. (C) 2000 Elsevier Science B.V.The adjustment of precipitation depth of a point storm to an effective (mean) depth over a watershed is important for characterizing rainfall-runoff relations and for cost-effective designs of hydraulic structures when design storms are considered. A design storm is the precipitation point depth having a specified duration and frequency (recurrence interval). Effective depths are

  9. Theoretical Bases of the Model of Interaction of the Government and Local Government Creation

    Directory of Open Access Journals (Sweden)

    Nikolay I. Churinov

    2015-09-01

    Full Text Available Article is devoted to questions of understanding of a theoretical component: systems of interaction of bodies of different levels of the government. Author researches historical basis of the studied subject by research of foreign and domestic scientific experience in area of the theory of the state and the law. Much attention is paid to the scientific aspect of the question. By empirical approach interpretation of the theory of interaction of public authorities and local government, and also subjective estimated opinion of the author is given.

  10. Group theoretic approach for solving the problem of diffusion of a drug through a thin membrane

    Science.gov (United States)

    Abd-El-Malek, Mina B.; Kassem, Magda M.; Meky, Mohammed L. M.

    2002-03-01

    The transformation group theoretic approach is applied to study the diffusion process of a drug through a skin-like membrane which tends to partially absorb the drug. Two cases are considered for the diffusion coefficient. The application of one parameter group reduces the number of independent variables by one, and consequently the partial differential equation governing the diffusion process with the boundary and initial conditions is transformed into an ordinary differential equation with the corresponding conditions. The obtained differential equation is solved numerically using the shooting method, and the results are illustrated graphically and in tables.

  11. The production of scientific videos: a theoretical approach

    Directory of Open Access Journals (Sweden)

    Carlos Ernesto Gavilondo Rodriguez

    2016-12-01

    Full Text Available The article presents the results of theoretical research on the production of scientific videos and its application to the teaching-learning process carried out in schools in the city of Guayaquil, Ecuador. It is located within the production line and Audiovisual Communication. Creation of scientific videos, from the Communication major with a concentration in audiovisual production and multimedia of the Salesian Polytechnic University. For the realization of the article it was necessary to use key terms that helped subsequently to data collection. used terms such as: audiovisual production, understood as the production of content for audiovisual media; the following term used audiovisual communication is recognized as the process in which there is an exchange of messages through an audible and / or visual system; and the last term we use is scientifically video, which is one that uses audiovisual resources to obtain relevant and reliable information.As part of the theoretical results a methodological proposal for the video production is presented for educational purposes. In conclusion set out, first, that from the communicative statement in recent times, current social relations, constitute a successful context of possibilities shown to education to generate meeting points between the world of the everyday and the knowledge. Another indicator validated as part of the investigation, is that teachers surveyed use the potential of the audiovisual media, and supported them, deploy alternatives for use. 

  12. A Theoretical and Methodological Evaluation of Leadership Research.

    Science.gov (United States)

    Lashbrook, Velma J.; Lashbrook, William B.

    This paper isolates some of the strengths and weaknesses of leadership research by evaluating it from both a theoretical and methodological perspective. The seven theories or approaches examined are: great man, trait, situational, style, functional, social influence, and interaction positions. General theoretical, conceptual, and measurement…

  13. Evaluation of a segment-based LANDSAT full-frame approach to corp area estimation

    Science.gov (United States)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined which utilize other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of he full-frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  14. Stress transfer from pile group in saturated and unsaturated soil using theoretical and experimental approaches

    Directory of Open Access Journals (Sweden)

    al-Omari Raid R.

    2017-01-01

    Full Text Available Piles are often used in groups, and the behavior of pile groups under the applied loads is generally different from that of single pile due to the interaction of neighboring piles, therefore, one of the main objectives of this paper is to investigate the influence of pile group (bearing capacity, load transfer sharing for pile shaft and tip in comparison to that of single piles. Determination of the influence of load transfer from the pile group to the surrounding soil and the mechanism of this transfer with increasing the load increment on the tip and pile shaft for the soil in saturated and unsaturated state (when there is a negative pore water pressure. Different basic properties are used that is (S = 90%, γd = 15 kN / m3, S = 90%, γd = 17 kN / m3 and S = 60%, γd =15 kN / m3. Seven model piles were tested, these was: single pile (compression and pull out test, 2×1, 3×1, 2×2, 3×2 and 3×3 group. The stress was measured with 5 cm diameter soil pressure transducer positioned at a depth of 5 cm below the pile tip for all pile groups. The measured stresses below the pile tip using a soil pressure transducer positioned at a depth of 0.25L (where L is the pile length below the pile tip are compared with those calculated using theoretical and conventional approaches. These methods are: the conventional 2V:1H method and the method used the theory of elasticity. The results showed that the method of measuring the soil stresses with soil pressure transducer adopted in this study, gives in general, good results of stress transfer compared with the results obtained from the theoretical and conventional approaches.

  15. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)

  16. Towards integrating control and information theories from information-theoretic measures to control performance limitations

    CERN Document Server

    Fang, Song; Ishii, Hideaki

    2017-01-01

    This book investigates the performance limitation issues in networked feedback systems. The fact that networked feedback systems consist of control and communication devices and systems calls for the integration of control theory and information theory. The primary contributions of this book lie in two aspects: the newly-proposed information-theoretic measures and the newly-discovered control performance limitations. We first propose a number of information notions to facilitate the analysis. Using those notions, classes of performance limitations of networked feedback systems, as well as state estimation systems, are then investigated. In general, the book presents a unique, cohesive treatment of performance limitation issues of networked feedback systems via an information-theoretic approach. This book is believed to be the first to treat the aforementioned subjects systematically and in a unified manner, offering a unique perspective differing from existing books.

  17. Analysing Buyers' and Sellers' Strategic Interactions in Marketplaces: An Evolutionary Game Theoretic Approach

    Science.gov (United States)

    Vytelingum, Perukrishnen; Cliff, Dave; Jennings, Nicholas R.

    We develop a new model to analyse the strategic behaviour of buyers and sellers in market mechanisms. In particular, we wish to understand how the different strategies they adopt affect their economic efficiency in the market and to understand the impact of these choices on the overall efficiency of the marketplace. To this end, we adopt a two-population evolutionary game theoretic approach, where we consider how the behaviours of both buyers and sellers evolve in marketplaces. In so doing, we address the shortcomings of the previous state-of-the-art analytical model that assumes that buyers and sellers have to adopt the same mixed strategy in the market. Finally, we apply our model in one of the most common market mechanisms, the Continuous Double Auction, and demonstrate how it allows us to provide new insights into the strategic interactions of such trading agents.

  18. Artificial neural network approach to spatial estimation of wind velocity data

    International Nuclear Information System (INIS)

    Oztopal, Ahmet

    2006-01-01

    In any regional wind energy assessment, equal wind velocity or energy lines provide a common basis for meaningful interpretations that furnish essential information for proper design purposes. In order to achieve regional variation descriptions, there are methods of optimum interpolation with classical weighting functions or variogram methods in Kriging methodology. Generally, the weighting functions are logically and geometrically deduced in a deterministic manner, and hence, they are imaginary first approximations for regional variability assessments, such as wind velocity. Geometrical weighting functions are necessary for regional estimation of the regional variable at a location with no measurement, which is referred to as the pivot station from the measurements of a set of surrounding stations. In this paper, weighting factors of surrounding stations necessary for the prediction of a pivot station are presented by an artificial neural network (ANN) technique. The wind speed prediction results are compared with measured values at a pivot station. Daily wind velocity measurements in the Marmara region from 1993 to 1997 are considered for application of the ANN methodology. The model is more appropriate for winter period daily wind velocities, which are significant for energy generation in the study area. Trigonometric point cumulative semivariogram (TPCSV) approach results are compared with the ANN estimations for the same set of data by considering the correlation coefficient (R). Under and over estimation problems in objective analysis can be avoided by the ANN approach

  19. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generation of genetic algorithms, required for searching optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.

  20. Post-classification approaches to estimating change in forest area using remotely sense auxiliary data.

    Science.gov (United States)

    Ronald E. McRoberts

    2014-01-01

    Multiple remote sensing-based approaches to estimating gross afforestation, gross deforestation, and net deforestation are possible. However, many of these approaches have severe data requirements in the form of long time series of remotely sensed data and/or large numbers of observations of land cover change to train classifiers and assess the accuracy of...

  1. Domino effects within a chemical cluster: a game-theoretical modeling approach by using Nash-equilibrium.

    Science.gov (United States)

    Reniers, Genserik; Dullaert, Wout; Karel, Soudan

    2009-08-15

    Every company situated within a chemical cluster faces domino effect risks, whose magnitude depends on every company's own risk management strategies and on those of all others. Preventing domino effects is therefore very important to avoid catastrophes in the chemical process industry. Given that chemical companies are interlinked by domino effect accident links, there is some likelihood that even if certain companies fully invest in domino effects prevention measures, they can nonetheless experience an external domino effect caused by an accident which occurred in another chemical enterprise of the cluster. In this article a game-theoretic approach to interpret and model behaviour of chemical plants within chemical clusters while negotiating and deciding on domino effects prevention investments is employed.

  2. The current status of theoretically based approaches to the prediction of the critical heat flux in flow boiling

    International Nuclear Information System (INIS)

    Weisman, J.

    1991-01-01

    This paper reports on the phenomena governing the critical heat flux in flow boiling. Inducts which vary with the flow pattern. Separate models are needed for dryout in annular flow, wall overheating in plug or slug flow and formation of a vapor blanket in dispersed flow. The major theories and their current status are described for the annular and dispersed regions. The need for development of the theoretical approach in the plug and slug flow region is indicated

  3. Dynamic Load on a Pipe Caused by Acetylene Detonations – Experiments and Theoretical Approaches

    Directory of Open Access Journals (Sweden)

    Axel Sperber

    1999-01-01

    Full Text Available The load acting on the wall of a pipe by a detonation, which is travelling through, is not yet well characterized. The main reasons are the limited amount of sufficiently accurate pressure time history data and the requirement of considering the dynamics of the system. Laser vibrometry measurements were performed to determine the dynamic response of the pipe wall on a detonation. Different modelling approaches were used to quantify, theoretically, the radial displacements of the pipe wall. There is good agreement between measured and predicted values of vibration frequencies and the propagation velocities of transverse waves. Discrepancies mainly due to wave propagation effects were found in the amplitudes of the radial velocities. They might be overcome by the use of a dynamic load factor or improved modelling methods.

  4. A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2016-12-01

    Full Text Available Fractal compression is the lossy compression technique in the field of gray/color image and video compression. It gives high compression ratio, better image quality with fast decoding time but improvement in encoding time is a challenge. This review paper/article presents the analysis of most significant existing approaches in the field of fractal based gray/color images and video compression, different block matching motion estimation approaches for finding out the motion vectors in a frame based on inter-frame coding and intra-frame coding i.e. individual frame coding and automata theory based coding approaches to represent an image/sequence of images. Though different review papers exist related to fractal coding, this paper is different in many sense. One can develop the new shape pattern for motion estimation and modify the existing block matching motion estimation with automata coding to explore the fractal compression technique with specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for the beginners in the domain of video compression.

  5. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of binary probit regression model are commonly estimated by using Maximum Likelihood Estimation (MLE) method. However, MLE method has limitation if the binary data contains separation. Separation is the condition where there are one or several independent variables that exactly grouped the categories in binary response. It will result the estimators of MLE method become non-convergent, so that they cannot be used in modeling. One of the effort to resolve the separation is using Firths approach instead. This research has two aims. First, to identify the chance of separation occurrence in binary probit regression model between MLE method and Firths approach. Second, to compare the performance of binary probit regression model estimator that obtained by MLE method and Firths approach using RMSE criteria. Those are performed using simulation method and under different sample size. The results showed that the chance of separation occurrence in MLE method for small sample size is higher than Firths approach. On the other hand, for larger sample size, the probability decreased and relatively identic between MLE method and Firths approach. Meanwhile, Firths estimators have smaller RMSE than MLEs especially for smaller sample sizes. But for larger sample sizes, the RMSEs are not much different. It means that Firths estimators outperformed MLE estimator.

  6. An Integrated Approach for Characterization of Uncertainty in Complex Best Estimate Safety Assessment

    International Nuclear Information System (INIS)

    Pourgol-Mohamad, Mohammad; Modarres, Mohammad; Mosleh, Ali

    2013-01-01

    This paper discusses an approach called Integrated Methodology for Thermal-Hydraulics Uncertainty Analysis (IMTHUA) to characterize and integrate a wide range of uncertainties associated with the best estimate models and complex system codes used for nuclear power plant safety analyses. Examples of applications include complex thermal hydraulic and fire analysis codes. In identifying and assessing uncertainties, the proposed methodology treats the complex code as a 'white box', thus explicitly treating internal sub-model uncertainties in addition to the uncertainties related to the inputs to the code. The methodology accounts for uncertainties related to experimental data used to develop such sub-models, and efficiently propagates all uncertainties during best estimate calculations. Uncertainties are formally analyzed and probabilistically treated using a Bayesian inference framework. This comprehensive approach presents the results in a form usable in most other safety analyses such as the probabilistic safety assessment. The code output results are further updated through additional Bayesian inference using any available experimental data, for example from thermal hydraulic integral test facilities. The approach includes provisions to account for uncertainties associated with user-specified options, for example for choices among alternative sub-models, or among several different correlations. Complex time-dependent best-estimate calculations are computationally intense. The paper presents approaches to minimize computational intensity during the uncertainty propagation. Finally, the paper will report effectiveness and practicality of the methodology with two applications to a complex thermal-hydraulics system code as well as a complex fire simulation code. In case of multiple alternative models, several techniques, including dynamic model switching, user-controlled model selection, and model mixing, are discussed. (authors)

  7. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    Science.gov (United States)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses the recent implemented approaches to improve the rainfall estimation from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated in the IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Mission GPM) to create a precipitation product in 0.1x0.1degree resolution over the chosen domain 50N to 50S every 30 minutes. Although PERSIANN-CCS has a high spatial and temporal resolution, it overestimates or underestimates due to some limitations.PERSIANN-CCS can estimate rainfall based on the extracted information from IR channels at three different temperature threshold levels (220, 235, and 253k). This algorithm relies only on infrared data to estimate rainfall indirectly from this channel which cause missing the rainfall from warm clouds and false estimation for no precipitating cold clouds. In this research the effectiveness of using other channels of GOES satellites such as visible and water vapors has been investigated. By using multi-sensors the precipitation can be estimated based on the extracted information from multiple channels. Also, instead of using the exponential function for estimating rainfall from cloud top temperature, the probabilistic method has been used. Using probability distributions of precipitation rates instead of deterministic values has improved the rainfall estimation for different type of clouds.

  8. Supplementary Appendix for: Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.

    2016-01-01

    In this supplementary appendix we provide proofs and additional simulation results that complement the paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  9. Estimating oil price 'Value at Risk' using the historical simulation approach

    International Nuclear Information System (INIS)

    David Cabedo, J.; Moya, Ismael

    2003-01-01

    In this paper we propose using Value at Risk (VaR) for oil price risk quantification. VaR provides an estimation for the maximum oil price change associated with a likelihood level, and can be used for designing risk management strategies. We analyse three VaR calculation methods: the historical simulation standard approach, the historical simulation with ARMA forecasts (HSAF) approach, developed in this paper, and the variance-covariance method based on autoregressive conditional heteroskedasticity models forecasts. The results obtained indicate that HSAF methodology provides a flexible VaR quantification, which fits the continuous oil price movements well and provides an efficient risk quantification

  10. Estimating oil price 'Value at Risk' using the historical simulation approach

    International Nuclear Information System (INIS)

    Cabedo, J.D.; Moya, I.

    2003-01-01

    In this paper we propose using Value at Risk (VaR) for oil price risk quantification. VaR provides an estimation for the maximum oil price change associated with a likelihood level, and can be used for designing risk management strategies. We analyse three VaR calculation methods: the historical simulation standard approach, the historical simulation with ARMA forecasts (HSAF) approach. developed in this paper, and the variance-covariance method based on autoregressive conditional heteroskedasticity models forecasts. The results obtained indicate that HSAF methodology provides a flexible VaR quantification, which fits the continuous oil price movements well and provides an efficient risk quantification. (author)

  11. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    Science.gov (United States)

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.

  12. Resource Allocation for Multicell Device-to-Device Communications in Cellular Network: A Game Theoretic Approach

    Directory of Open Access Journals (Sweden)

    Jun Huang

    2015-08-01

    Full Text Available Device-to-Device (D2D communication has recently emerged as a promising technology to improve the capacity and coverage of cellular systems. To successfully implement D2D communications underlaying a cellular network, resource allocation for D2D links plays a critical role. While most of prior resource allocation mechanisms for D2D communications have focused on interference within a single-cell system, this paper investigates the resource allocation problem for a multicell cellular network in which a D2D link reuses available spectrum resources of multiple cells. A repeated game theoretic approach is proposed to address the problem. In this game, the base stations (BSs act as players that compete for resource supply of D2D, and the utility of each player is formulated as revenue collected from both cellular and D2D users using resources. Extensive simulations are conducted to verify the proposed approach and the results show that it can considerably enhance the system performance in terms of sum rate and sum rate gain.

  13. Distinguishing prognostic and predictive biomarkers: An information theoretic approach.

    Science.gov (United States)

    Sechidis, Konstantinos; Papangelou, Konstantinos; Metcalfe, Paul D; Svensson, David; Weatherall, James; Brown, Gavin

    2018-05-02

    The identification of biomarkers to support decision-making is central to personalised medicine, in both clinical and research scenarios. The challenge can be seen in two halves: identifying predictive markers, which guide the development/use of tailored therapies; and identifying prognostic markers, which guide other aspects of care and clinical trial planning, i.e. prognostic markers can be considered as covariates for stratification. Mistakenly assuming a biomarker to be predictive, when it is in fact largely prognostic (and vice-versa) is highly undesirable, and can result in financial, ethical and personal consequences. We present a framework for data-driven ranking of biomarkers on their prognostic/predictive strength, using a novel information theoretic method. This approach provides a natural algebra to discuss and quantify the individual predictive and prognostic strength, in a self-consistent mathematical framework. Our contribution is a novel procedure, INFO+, which naturally distinguishes the prognostic vs predictive role of each biomarker and handles higher order interactions. In a comprehensive empirical evaluation INFO+ outperforms more complex methods, most notably when noise factors dominate, and biomarkers are likely to be falsely identified as predictive, when in fact they are just strongly prognostic. Furthermore, we show that our methods can be 1-3 orders of magnitude faster than competitors, making it useful for biomarker discovery in 'big data' scenarios. Finally, we apply our methods to identify predictive biomarkers on two real clinical trials, and introduce a new graphical representation that provides greater insight into the prognostic and predictive strength of each biomarker. R implementations of the suggested methods are available at https://github.com/sechidis. konstantinos.sechidis@manchester.ac.uk. Supplementary data are available at Bioinformatics online.

  14. SUSTAINABLE TOURISM AND ITS FORMS - A THEORETICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Bac Dorin

    2013-07-01

    Full Text Available From the second half of the twentieth century, the importance of the tourism industry to the world economy continued to grow, reaching today impressive figures: receipts of almost $ 1,000 billion and direct employment for over 70 million people (WTTC 2012, without taking into account the multiplier effect (according to the same statistics of WTTC, if considering the multiplier effect, the values are: $ 5,990 billion in tourism receipts, and 253.5 million jobs. We can say that tourism: has a higher capacity to generate and distribute incomes compared to other sectors; has a high multiplier effect; determines a high level of consumption of varied products and services. In this context, voices began to emerge, which presented the problems and challenges generated by the tourism activity. Many regions are facing real problems generated by tourism entrepreneurs and tourists who visit the community. Therefore, at the end of the last century, there were authors who sought to define a new form of tourism, which eliminated the negative impacts and increased the positive ones. As a generic term they used alternative tourism, but because of the ambiguity of the term, they tried to find a more precise term, which would define the concept easier. Thus emerged: ecotourism, rural tourism, Pro Poor Tourism etc.. All these forms have been introduced under the umbrella concept of sustainable tourism. In the present paper we will take a theoretical approach, in order to present some forms of sustainable tourism. During our research we covered the ideas and concepts promoted by several authors and academics but also some international organizations with focus on tourism. We considered these forms of tourism, as they respect all the rules of sustainable tourism and some of them have great potential to grow in both developed and emerging countries. The forms of sustainable tourism we identified are: ecotourism, pro-poor tourism, volunteer tourism and slow tourism. In

  15. THEORETICAL APPROACHES TO THE DEFINITION OF MOTIVATION OF PROFESSIONAL ACTIVITY OF PUBLIC SERVANTS

    Directory of Open Access Journals (Sweden)

    E. V. Vashalomidze

    2016-01-01

    Full Text Available The relevance of the topic chosen due to the presence of performance motivation development problems of civil servants, including their motivation for continuous professional development, as one of the main directions of development of the civil service in general, approved by the relevant Presidential Decree on 2016–2018 years. In the first part of this article provides a brief analytical overview and an assessment of content and process of theoretical and methodological approaches to solving the problems of motivation of the personnel of socio-economic systems. In the second part of the article on the basis of the research proposed motivating factors in the development of the approaches set out in the first part of the article.The purpose / goal. The aim of the article is to provide methodological assistance to academic institutions involved in the solution of scientific and practical problems of motivation of civil servants to the continuous professional development in accordance with the Presidential Decree of 11 August 2016 № 408.Methodology. The methodological basis of this article are: a comprehensive analysis of normative legal provision of state of the Russian Federation; systematic approach and historical analysis of the theory and methodology of solving problems of staff motivation; method of expert evaluations; the scientific method of analogies.Conclusions / relevance. The practical significance of the article is in the operational delivery of the scientific and methodological assistance to the implementation of the Russian Federation "On the main directions of the state civil service of the Russian Federation in the years 2016–2018" Presidential Decree of 11 August number 403 regarding the establishment of mechanisms to motivate civil servants to continuous professional development.

  16. The Formation of Instruments of Management of Industrial Enterprises According to the Theoretical and Functional Approaches

    Directory of Open Access Journals (Sweden)

    Raiko Diana V.

    2018-03-01

    Full Text Available The article is aimed at the substantiation based on the analysis of the company theories of the basic theoretical provisions on the formation of industrial enterprise management instruments. The article determines that the subject of research in theories is enterprise, the object is the process of management of potential according to the forms of business organization and technology of partnership relations, the goal is high financial results, stabilization of the activity, and social responsibility. The publication carries out an analysis of enterprise theories on the determining of its essence as a socio-economic system in the following directions: technical preparation of production, economic theory and law, theory of systems, marketing-management. As a result of the research, the general set of functions has been identified – the socio-economic functions of enterprise by groups: information-legal, production, marketing-management, social responsibility. When building management instruments, it is suggested to take into consideration the direct and inverse relationships of enterprise at all levels of management – micro, meso and macro. On this ground, the authors have developed provisions on formation of instruments of management of industrial enterprises according to two approachestheoretical and functional.

  17. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  18. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns with the computational methods namely the Gauss-Newton method, Gradient algorithm methods (Newton-Raphson method, Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, the Method of Quadratic Hill-Climbing) based on numerical analysis to estimate parameters of nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the Gradient-Algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems. However this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique namely Gauss-Newton method which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et.al [10] proposed numerical methods for parameter estimation in DAE’s (Differential algebraic equation). Isabel Reis Dos Santos et al [11], Introduced weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale non smooth convex minimization the Hager and Zhang (HZ) conjugate gradient Method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al [12].

  19. A spatial approach to the modelling and estimation of areal precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Skaugen, T

    1996-12-31

    In hydroelectric power technology it is important that the mean precipitation that falls in an area can be calculated. This doctoral thesis studies how the morphology of rainfall, described by the spatial statistical parameters, can be used to improve interpolation and estimation procedures. It attempts to formulate a theory which includes the relations between the size of the catchment and the size of the precipitation events in the modelling of areal precipitation. The problem of estimating and modelling areal precipitation can be formulated as the problem of estimating an inhomogeneously distributed flux of a certain spatial extent being measured at points in a randomly placed domain. The information contained in the different morphology of precipitation types is used to improve estimation procedures of areal precipitation, by interpolation (kriging) or by constructing areal reduction factors. A new approach to precipitation modelling is introduced where the analysis of the spatial coverage of precipitation at different intensities plays a key role in the formulation of a stochastic model for extreme areal precipitation and in deriving the probability density function of areal precipitation. 127 refs., 30 figs., 13 tabs.

  20. Theoretical approaches to digital services and digital democracy

    DEFF Research Database (Denmark)

    Hoff, Jens Villiam; Scheele, Christian Elling

    2014-01-01

    The purpose of this paper is to develop a theoretical framework, which can be used in the analysis of all types of (political-administrative) web applications. Through a discussion and criticism of social construction of technology (SCOT), an earlier version of this model based on new medium theory...... are translated into specific (political-administrative) practices, and how these practices are produced through the interplay between discourses, actors and technology. However, the new version places practices more firmly at the centre of the model, as practices, following Reckwitz, is seen as the enactment...

  1. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro...... to converge (4.3 h on an Intel(R) 3.2 GHz machine using a non-parallelized code). We also show that our approach allows to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h....

  2. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set......In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of algebraic equations as the basis for parameter estimation.These approaches are illustrated using estimations of kinetic constants from reaction system models....

  3. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.

  4. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying

    2018-01-11

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.

  5. A hybrid system approach to airspeed, angle of attack and sideslip estimation in Unmanned Aerial Vehicles

    KAUST Repository

    Shaqura, Mohammad; Claudel, Christian

    2015-01-01

    , low power autopilots in real-time. The computational method is based on a hybrid decomposition of the modes of operation of the UAV. A Bayesian approach is considered for estimation, in which the estimated airspeed, angle of attack and sideslip

  6. Broadening the Study of Participation in the Life Sciences: How Critical Theoretical and Mixed-Methodological Approaches Can Enhance Efforts to Broaden Participation

    Science.gov (United States)

    Metcalf, Heather

    2016-01-01

    This research methods Essay details the usefulness of critical theoretical frameworks and critical mixed-methodological approaches for life sciences education research on broadening participation in the life sciences. First, I draw on multidisciplinary research to discuss critical theory and methodologies. Then, I demonstrate the benefits of these…

  7. Game theoretic analysis of physical protection system design

    International Nuclear Information System (INIS)

    Canion, B.; Schneider, E.; Bickel, E.; Hadlock, C.; Morton, D.

    2013-01-01

    The physical protection system (PPS) of a fictional small modular reactor (SMR) facility have been modeled as a platform for a game theoretic approach to security decision analysis. To demonstrate the game theoretic approach, a rational adversary with complete knowledge of the facility has been modeled attempting a sabotage attack. The adversary adjusts his decisions in response to investments made by the defender to enhance the security measures. This can lead to a conservative physical protection system design. Since defender upgrades were limited by a budget, cost benefit analysis may be conducted upon security upgrades. One approach to cost benefit analysis is the efficient frontier, which depicts the reduction in expected consequence per incremental increase in the security budget

  8. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  9. Cartography, new technologies and geographic education: theoretical approaches to research the field

    Science.gov (United States)

    Seneme do Canto, Tânia

    2018-05-01

    In order to understand the roles that digital mapping can play in cartographic and geographic education, this paper discusses the theoretical and methodological approach used in a research that is undertaking in the education of geography teachers. To develop the study, we found in the works of Lankshear and Knobel (2013) a notion of new literacies that allows us looking at the practices within digital mapping in a sociocultural perspective. From them, we conclude that in order to understand the changes that digital cartography is able to foment in geography teaching, it is necessary to go beyond the substitution of means in the classroom and being able to explore what makes the new mapping practices different from others already consolidated in geography teaching. Therefore, we comment on some features of new forms of cartographic literacy that are in full development with digital technologies, but which are not determined solely by their use. The ideas of Kitchin and Dodge (2007) and Del Casino Junior and Hanna (2006) are also an important reference for the research. Methodologically, this approach helps us to understand that in the seek to comprehend maps and their meanings, irrespective of the medium used, we are dealing with a process of literacy that is very particular and emergent because it involves not only the characteristics of the map artifact and of the individual that produces or consumes it, but depends mainly on a diversity of interconnections that are being built between them (map and individual) and the world.

  10. Set-membership estimations for the evolution of infectious diseases in heterogeneous populations.

    Science.gov (United States)

    Tsachev, Tsvetomir; Veliov, Vladimir M; Widder, Andreas

    2017-04-01

    The paper presents an approach for set-membership estimation of the state of a heterogeneous population in which an infectious disease is spreading. The population state may consist of susceptible, infected, recovered, etc. groups, where the individuals are heterogeneous with respect to traits, relevant to the particular disease. Set-membership estimations in this context are reasonable, since only vague information about the distribution of the population along the space of heterogeneity is available in practice. The presented approach comprises adapted versions of methods which are known in estimation and control theory, and involve solving parametrized families of optimization problems. Since the models of disease spreading in heterogeneous populations involve distributed systems (with non-local dynamics and endogenous boundary conditions), these problems are non-standard. The paper develops the needed theoretical instruments and a solution scheme. SI and SIR models of epidemic diseases are considered as case studies and the results reveal qualitative properties that may be of interest.

  11. Estimation of Snow Parameters from Dual-Wavelength Airborne Radar

    Science.gov (United States)

    Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew

    1997-01-01

    Estimation of snow characteristics from airborne radar measurements would complement In-situ measurements. While In-situ data provide more detailed information than radar, they are limited in their space-time sampling. In the absence of significant cloud water contents, dual-wavelength radar data can be used to estimate 2 parameters of a drop size distribution if the snow density is assumed. To estimate, rather than assume, a snow density is difficult, however, and represents a major limitation in the radar retrieval. There are a number of ways that this problem can be investigated: direct comparisons with in-situ measurements, examination of the large scale characteristics of the retrievals and their comparison to cloud model outputs, use of LDR measurements, and comparisons to the theoretical results of Passarelli(1978) and others. In this paper we address the first approach and, in part, the second.

  12. Combined action of ionizing radiation with another factor: common rules and theoretical approach

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Roh, Changhyun; Komarova, Ludmila N.; Petin, Vladislav G.

    2013-01-01

    Two or more factors can simultaneously make their combined effects on the biological objects. This study has focused on theoretical approach to synergistic interaction due to the combined action of radiation and another factor on cell inactivation. A mathematical model for the synergistic interaction of different environmental agents was suggested for quantitative prediction of irreversibly damaged cells after combined exposures. The model takes into account the synergistic interaction of agents and based on the supposition that additional effective damages responsible for the synergy are irreversible and originated from an interaction of ineffective sub lesions. The experimental results regarding the irreversible component of radiation damage of diploid yeast cells simultaneous exposed to heat with ionizing radiation or UV light are presented. A good agreement of experimental results with model predictions was demonstrated. The importance of the results obtained for the interpretation of the mechanism of synergistic interaction of various environmental factors is discussed. (author)

  13. Combined action of ionizing radiation with another factor: common rules and theoretical approach

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Kyu; Roh, Changhyun, E-mail: jkkim@kaeri.re.kr [Korea Atomic Energy Research Institute, Jeongeup (Korea, Republic of); Komarova, Ludmila N.; Petin, Vladislav G., E-mail: vgpetin@yahoo.com [Medical Radiological Research Center, Obninsk (Russian Federation)

    2013-07-01

    Two or more factors can simultaneously make their combined effects on the biological objects. This study has focused on theoretical approach to synergistic interaction due to the combined action of radiation and another factor on cell inactivation. A mathematical model for the synergistic interaction of different environmental agents was suggested for quantitative prediction of irreversibly damaged cells after combined exposures. The model takes into account the synergistic interaction of agents and based on the supposition that additional effective damages responsible for the synergy are irreversible and originated from an interaction of ineffective sub lesions. The experimental results regarding the irreversible component of radiation damage of diploid yeast cells simultaneous exposed to heat with ionizing radiation or UV light are presented. A good agreement of experimental results with model predictions was demonstrated. The importance of the results obtained for the interpretation of the mechanism of synergistic interaction of various environmental factors is discussed. (author)

  14. Cross-cultural undergraduate medical education in North America: theoretical concepts and educational approaches.

    Science.gov (United States)

    Reitmanova, Sylvia

    2011-04-01

    Cross-cultural undergraduate medical education in North America lacks conceptual clarity. Consequently, school curricula are unsystematic, nonuniform, and fragmented. This article provides a literature review about available conceptual models of cross-cultural medical education. The clarification of these models may inform the development of effective educational programs to enable students to provide better quality care to patients from diverse sociocultural backgrounds. The approaches to cross-cultural health education can be organized under the rubric of two specific conceptual models: cultural competence and critical culturalism. The variation in the conception of culture adopted in these two models results in differences in all curricular components: learning outcomes, content, educational strategies, teaching methods, student assessment, and program evaluation. Medical schools could benefit from more theoretical guidance on the learning outcomes, content, and educational strategies provided to them by governing and licensing bodies. More student assessments and program evaluations are needed in order to appraise the effectiveness of cross-cultural undergraduate medical education.

  15. How the 2SLS/IV estimator can handle equality constraints in structural equation models: a system-of-equations approach.

    Science.gov (United States)

    Nestler, Steffen

    2014-05-01

    Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.

  16. Understanding small biomolecule-biomaterial interactions: a review of fundamental theoretical and experimental approaches for biomolecule interactions with inorganic surfaces.

    Science.gov (United States)

    Costa, Dominique; Garrain, Pierre-Alain; Baaden, Marc

    2013-04-01

    Interactions between biomolecules and inorganic surfaces play an important role in natural environments and in industry, including a wide variety of conditions: marine environment, ship hulls (fouling), water treatment, heat exchange, membrane separation, soils, mineral particles at the earth's surface, hospitals (hygiene), art and buildings (degradation and biocorrosion), paper industry (fouling) and more. To better control the first steps leading to adsorption of a biomolecule on an inorganic surface, it is mandatory to understand the adsorption mechanisms of biomolecules of several sizes at the atomic scale, that is, the nature of the chemical interaction between the biomolecule and the surface and the resulting biomolecule conformations once adsorbed at the surface. This remains a challenging and unsolved problem. Here, we review the state of art in experimental and theoretical approaches. We focus on metallic biomaterial surfaces such as TiO(2) and stainless steel, mentioning some remarkable results on hydroxyapatite. Experimental techniques include atomic force microscopy, surface plasmon resonance, quartz crystal microbalance, X-ray photoelectron spectroscopy, fluorescence microscopy, polarization modulation infrared reflection absorption spectroscopy, sum frequency generation and time of flight secondary ion mass spectroscopy. Theoretical models range from detailed quantum mechanical representations to classical forcefield-based approaches. Copyright © 2012 Wiley Periodicals, Inc.

  17. A Hybrid Chaotic and Number Theoretic Approach for Securing DICOM Images

    Directory of Open Access Journals (Sweden)

    Jeyamala Chandrasekaran

    2017-01-01

    Full Text Available The advancements in telecommunication and networking technologies have led to the increased popularity and widespread usage of telemedicine. Telemedicine involves storage and exchange of large volume of medical records for remote diagnosis and improved health care services. Images in medical records are characterized by huge volume, high redundancy, and strong correlation among adjacent pixels. This research work proposes a novel idea of integrating number theoretic approach with Henon map for secure and efficient encryption. Modular exponentiation of the primitive roots of the chosen prime in the range of its residual set is employed in the generation of two-dimensional array of keys. The key matrix is permuted and chaotically controlled by Henon map to decide the encryption keys for every pixel of DICOM image. The proposed system is highly secure because of the randomness introduced due to the application of modular exponentiation key generation and application of Henon maps for permutation of keys. Experiments have been conducted to analyze key space, key sensitivity, avalanche effect, correlation distribution, entropy, and histograms. The corresponding results confirm the strength of the proposed design towards statistical and differential crypt analysis. The computational requirements for encryption/decryption have been reduced significantly owing to the reduced number of computations in the process of encryption/decryption.

  18. Dimensional accuracy of ceramic self-ligating brackets and estimates of theoretical torsional play.

    Science.gov (United States)

    Lee, Youngran; Lee, Dong-Yul; Kim, Yoon-Ji R

    2016-09-01

    To ascertain the dimensional accuracies of some commonly used ceramic self-ligation brackets and the amount of torsional play in various bracket-archwire combinations. Four types of 0.022-inch slot ceramic self-ligating brackets (upper right central incisor), three types of 0.018-inch ceramic self-ligating brackets (upper right central incisor), and three types of rectangular archwires (0.016 × 0.022-inch beta-titanium [TMA] (Ormco, Orange, Calif), 0.016 × 0.022-inch stainless steel [SS] (Ortho Technology, Tampa, Fla), and 0.019 × 0.025-inch SS (Ortho Technology)) were measured using a stereomicroscope to determine slot widths and wire cross-sectional dimensions. The mean acquired dimensions of the brackets and wires were applied to an equation devised by Meling to estimate torsional play angle (γ). In all bracket systems, the slot tops were significantly wider than the slot bases (P brackets, and Clippy-Cs (Tomy, Futaba, Fukushima, Japan) among the 0.018-inch brackets. The Damon Clear (Ormco) bracket had the smallest dimensional error (0.542%), whereas the 0.022-inch Empower Clear (American Orthodontics, Sheboygan, Wis) bracket had the largest (3.585%). The largest amount of theoretical play is observed using the Empower Clear (American Orthodontics) 0.022-inch bracket combined with the 0.016 × 0.022-inch TMA wire (Ormco), whereas the least amount occurs using the 0.018 Clippy-C (Tomy) combined with 0.016 × 0.022-inch SS wire (Ortho Technology).

  19. Revealing life-history traits by contrasting genetic estimations with predictions of effective population size.

    Science.gov (United States)

    Greenbaum, Gili; Renan, Sharon; Templeton, Alan R; Bouskila, Amos; Saltz, David; Rubenstein, Daniel I; Bar-David, Shirli

    2017-12-22

    Effective population size, a central concept in conservation biology, is now routinely estimated from genetic surveys and can also be theoretically predicted from demographic, life-history, and mating-system data. By evaluating the consistency of theoretical predictions with empirically estimated effective size, insights can be gained regarding life-history characteristics and the relative impact of different life-history traits on genetic drift. These insights can be used to design and inform management strategies aimed at increasing effective population size. We demonstrated this approach by addressing the conservation of a reintroduced population of Asiatic wild ass (Equus hemionus). We estimated the variance effective size (N ev ) from genetic data (N ev =24.3) and formulated predictions for the impacts on N ev of demography, polygyny, female variance in lifetime reproductive success (RS), and heritability of female RS. By contrasting the genetic estimation with theoretical predictions, we found that polygyny was the strongest factor affecting genetic drift because only when accounting for polygyny were predictions consistent with the genetically measured N ev . The comparison of effective-size estimation and predictions indicated that 10.6% of the males mated per generation when heritability of female RS was unaccounted for (polygyny responsible for 81% decrease in N ev ) and 19.5% mated when female RS was accounted for (polygyny responsible for 67% decrease in N ev ). Heritability of female RS also affected N ev ; hf2=0.91 (heritability responsible for 41% decrease in N ev ). The low effective size is of concern, and we suggest that management actions focus on factors identified as strongly affecting Nev, namely, increasing the availability of artificial water sources to increase number of dominant males contributing to the gene pool. This approach, evaluating life-history hypotheses in light of their impact on effective population size, and contrasting

  20. On the information-theoretic approach to G\\"odel's incompleteness theorem

    OpenAIRE

    D'Abramo, Germano

    2002-01-01

    In this paper we briefly review and analyze three published proofs of Chaitin's theorem, the celebrated information-theoretic version of G\\"odel's incompleteness theorem. Then, we discuss our main perplexity concerning a key step common to all these demonstrations.

  1. A group theoretic approach to quantum information

    CERN Document Server

    Hayashi, Masahito

    2017-01-01

    This textbook is the first one addressing quantum information from the viewpoint of group symmetry. Quantum systems have a group symmetrical structure. This structure enables to handle systematically quantum information processing. However, there is no other textbook focusing on group symmetry for quantum information although there exist many textbooks for group representation. After the mathematical preparation of quantum information, this book discusses quantum entanglement and its quantification by using group symmetry. Group symmetry drastically simplifies the calculation of several entanglement measures although their calculations are usually very difficult to handle. This book treats optimal information processes including quantum state estimation, quantum state cloning, estimation of group action and quantum channel etc. Usually it is very difficult to derive the optimal quantum information processes without asymptotic setting of these topics. However, group symmetry allows to derive these optimal solu...

  2. A GAME THEORETIC ANALYSIS OF U.S. RICE EXPORT POLICY: THE CASE OF JAPAN AND KOREA

    OpenAIRE

    Lee, Dae-Seob; Kennedy, P. Lynn

    2002-01-01

    As a result of the Uruguay Round (UR), the impact on the international rice market is dramatic.The major U.S. benefit of the UR has been the access to the Japanese market. However, the U.S. share of this import market has been unstable and the share of Korean rice market is nearly zero prior to February 2002. Econometric estimation and Political Preference Function (PPF) approach are incorporated into a game theoretic analysis to analyze U.S. export potential to Japan and Korea.

  3. The cost of hovering and forward flight in a nectar-feeding bat, Glossophaga soricina, estimated from aerodynamic theory

    DEFF Research Database (Denmark)

    Norberg, U M; Kunz, T H; Steffensen, J F

    1993-01-01

    Energy expenditure during flight in animals can best be understood and quantified when both theoretical and empirical approaches are used concurrently. This paper examines one of four methods that we have used to estimate the cost of flight in a neotropical nectar-feeding bat Glossophaga soricina...... on metabolic power requirements estimated from nectar intake gives a mechanical efficiency of 0.15 for hovering flight and 0.11 for forward flight near the minimum power speed....

  4. Approaches in estimation of external cost for fuel cycles in the ExternE project

    International Nuclear Information System (INIS)

    Afanas'ev, A.A.; Maksimenko, B.N.

    1998-01-01

    The purposes, content and main results of studies realized within the frameworks of the International Project ExternE which is the first comprehensive attempt to develop general approach to estimation of external cost for different fuel cycles based on utilization of nuclear and fossil fuels, as well as on renewable power sources are discussed. The external cost of a fuel cycle is treated as social and environmental expenditures which are not taken into account by energy producers and consumers, i.e. these are expenditures not included into commercial cost nowadays. The conclusion on applicability of the approach suggested for estimation of population health hazards and environmental impacts connected with electric power generation growth (expressed in money or some other form) is made

  5. Estimating a planetary magnetic field with time-dependent global MHD simulations using an adjoint approach

    Directory of Open Access Journals (Sweden)

    C. Nabert

    2017-05-01

    Full Text Available The interaction of the solar wind with a planetary magnetic field causes electrical currents that modify the magnetic field distribution around the planet. We present an approach to estimating the planetary magnetic field from in situ spacecraft data using a magnetohydrodynamic (MHD simulation approach. The method is developed with respect to the upcoming BepiColombo mission to planet Mercury aimed at determining the planet's magnetic field and its interior electrical conductivity distribution. In contrast to the widely used empirical models, global MHD simulations allow the calculation of the strongly time-dependent interaction process of the solar wind with the planet. As a first approach, we use a simple MHD simulation code that includes time-dependent solar wind and magnetic field parameters. The planetary parameters are estimated by minimizing the misfit of spacecraft data and simulation results with a gradient-based optimization. As the calculation of gradients with respect to many parameters is usually very time-consuming, we investigate the application of an adjoint MHD model. This adjoint MHD model is generated by an automatic differentiation tool to compute the gradients efficiently. The computational cost for determining the gradient with an adjoint approach is nearly independent of the number of parameters. Our method is validated by application to THEMIS (Time History of Events and Macroscale Interactions during Substorms magnetosheath data to estimate Earth's dipole moment.

  6. Asymptotic analysis for a simple explicit estimator in Barndorff-Nielsen and Shephard stochastic volatility models

    DEFF Research Database (Denmark)

    Hubalek, Friedrich; Posedel, Petra

    expressions for the asymptotic covariance matrix. We develop in detail the martingale estimating function approach for a bivariate model, that is not a diffusion, but admits jumps. We do not use ergodicity arguments. We assume that both, logarithmic returns and instantaneous variance are observed...... on a discrete grid of fixed width, and the observation horizon tends to infinity. This anaysis is a starting point and benchmark for further developments concerning optimal martingale estimating functions, and for theoretical and empirical investigations, that replace the (actually unobserved) variance process...

  7. THE THEORETICAL AND METHODICAL APPROACH TO AN ASSESSMENT OF A LEVEL OF DEVELOPMENT OF THE ENTERPRISE IN CONDITIONS OF GLOBALIZATION

    Directory of Open Access Journals (Sweden)

    Tatiana Shved

    2016-11-01

    Full Text Available The subject of this article is theoretical, methodical and practical aspects of enterprise development in conditions of globalization. The purpose of this research is to provide theoretical and methodical approach to an assessment of a level of development of the enterprise, which is based on the relationship between the factors and influence, illustrating the effect of the internal and external environment of enterprises functioning, and indicates the level of development of the enterprise. Methodology. Theoretical basis of the study was the examination and rethinking of the main achievements of world and domestic science on the development of enterprises. To achieve the objectives of the research following methods were used: systemic and structural analysis for the formation of methodical approaches to the selection of the factors, influencing the development of enterprises; abstract and logical – for the formulation of conclusions and proposals; the method of valuation and expert assessments to the implementation of the proposed theoretical and methodical approach to an assessment of a level of development of the enterprise in conditions of globalization. Results of the research is the proposed theoretical and methodical to an assessment of a level of development of the enterprise in conditions of globalization, which is associated with the idea of development of the enterprise as a system with inputs–factors, influencing on the development , and outputs – indicators of the level of enterprise development within these factors. So, the chosen factors – resources, financial-economic activity, innovation and investment activities, competition, government influence, and foreign trade. Indicators that express these factors, are capital productivity, labour productivity, material efficiency within the first factor; the profitability of the activity, the coefficient of current assets, the total liquidity coefficient, financial stability

  8. Uncertainty estimates for theoretical atomic and molecular data

    International Nuclear Information System (INIS)

    Chung, H-K; Braams, B J; Bartschat, K; Császár, A G; Drake, G W F; Kirchner, T; Kokoouline, V; Tennyson, J

    2016-01-01

    Sources of uncertainty are reviewed for calculated atomic and molecular data that are important for plasma modeling: atomic and molecular structures and cross sections for electron-atom, electron-molecule, and heavy particle collisions. We concentrate on model uncertainties due to approximations to the fundamental many-body quantum mechanical equations and we aim to provide guidelines to estimate uncertainties as a routine part of computations of data for structure and scattering. (topical review)

  9. δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions

    Directory of Open Access Journals (Sweden)

    Hengrong Ju

    2014-01-01

    Full Text Available Decision-theoretic rough set is a useful extension of the rough set model that introduces decision costs into the probabilistic approximations of the target. However, Yao's decision-theoretic rough set is based on the classical indiscernibility relation; such a relation may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, which is based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision-monotonicity and cost decreasing, two different algorithms are designed to compute reducts, respectively. The comparisons between these two algorithms show the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion can generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, and it follows that the uncertainty which comes from the boundary region can be decreased; (2) with respect to the reducts based on the decision-monotonicity criterion, the reducts based on the cost minimum criterion can obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
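
    The construction can be illustrated numerically. The sketch below is one plausible reading based only on the abstract (the toy data, the tolerance-class definition, and the thresholds alpha and beta are assumptions, not the paper's definitions): objects whose attribute vectors differ by at most δ form δ-cut classes, and cost-derived thresholds split objects into positive, boundary, and negative regions.

      import numpy as np

      X = np.array([[0.10, 0.20], [0.12, 0.22], [0.50, 0.90],
                    [0.52, 0.88], [0.11, 0.85], [0.90, 0.10]])
      target = np.array([1, 1, 0, 0, 1, 0])       # decision-class membership
      delta, alpha, beta = 0.1, 0.7, 0.3          # cut level and cost-derived thresholds

      def delta_class(i):
          # Objects indiscernible from object i: max attribute difference <= delta.
          return np.where(np.abs(X - X[i]).max(axis=1) <= delta)[0]

      positive, boundary, negative = [], [], []
      for i in range(len(X)):
          p = target[delta_class(i)].mean()       # conditional probability Pr(D | [x]_delta)
          if p >= alpha:
              positive.append(i)                  # lower-approximation (acceptance) region
          elif p > beta:
              boundary.append(i)                  # deferment region
          else:
              negative.append(i)                  # rejection region
      print("POS:", positive, "BND:", boundary, "NEG:", negative)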

  10. Theoretical models for recombination in expanding gas

    International Nuclear Information System (INIS)

    Avron, Y.; Kahane, S.

    1978-09-01

    In laser isotope separation of atomic uranium, one is confronted with the theoretical problem of estimating the concentration of thermally ionized uranium atoms. To investigate this problem, theoretical models for recombination in an expanding gas, in the absence of local thermal equilibrium, have been constructed. The expansion of the gas is described by soluble models of the hydrodynamic equation, and the recombination by rate equations. General results for the freezing effect are obtained over the relevant ranges of the gas parameters. The impossibility of thermal equilibrium in expanding two-component systems is proven.
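
    The freezing effect can be reproduced with a minimal stand-in for the paper's soluble models (the power-law expansion, the alpha ∝ T^(-1/2) scaling, and all numbers below are illustrative assumptions): the ion fraction obeys a two-body recombination rate equation while density and temperature fall with time, so recombination shuts off before the gas can fully recombine.

      import numpy as np
      from scipy.integrate import solve_ivp

      n0, T0, t0 = 1e24, 3000.0, 1e-6        # initial density [m^-3], temperature [K], time [s]
      alpha0 = 1e-17                         # recombination coefficient at T0 [m^3/s]

      def rhs(t, y):
          n = n0 * (t0 / t) ** 3             # free expansion: n ~ t^-3
          T = T0 * (t0 / t) ** 2             # adiabatic cooling: T ~ t^-2
          alpha = alpha0 * (T0 / T) ** 0.5   # assumed alpha ~ T^(-1/2)
          return [-alpha * n * y[0] ** 2]    # two-body recombination of the ion fraction

      sol = solve_ivp(rhs, (t0, 1e-2), [1.0], rtol=1e-8)
      print("frozen ion fraction:", sol.y[0, -1])   # plateaus near 1/11 instead of decaying to 0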

  11. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.

    Science.gov (United States)

    Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M

    2015-11-13

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C⁻¹ on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach
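
    The bookkeeping described above can be sketched as follows (pool sizes, rate constants, and the Q10 value are invented placeholders, not the fitted PInc-PanTher parameters): carbon does not decompose at all while frozen, and thawed carbon decays first-order in three pools at temperature-dependent rates.

      import numpy as np

      pools = np.array([5.0, 30.0, 65.0])      # fast/slow/passive C stocks [kg C m^-2]
      k_ref = np.array([1.0, 0.05, 0.001])     # decay rates at 5 degC [1/yr], placeholders
      q10 = 2.5                                # assumed temperature sensitivity

      def annual_loss(pools, soil_T):
          # One year of decomposition; no decomposition while the soil is frozen.
          if soil_T <= 0.0:
              return pools, 0.0
          k = k_ref * q10 ** ((soil_T - 5.0) / 10.0)
          dC = pools * (1.0 - np.exp(-k))      # first-order decay over one year
          return pools - dC, dC.sum()

      T_series = np.linspace(-2.0, 8.0, 91)    # stylised warming trajectory, 2010-2100
      total_loss = 0.0
      for T in T_series:
          pools, loss = annual_loss(pools, T)
          total_loss += loss
      print(f"cumulative C loss 2010-2100: {total_loss:.1f} kg C m^-2")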

  12. Theoretical treatment of equilibrium data and evaluation of diffusion coefficients in extraction of uranium

    Energy Technology Data Exchange (ETDEWEB)

    Manohar, Smitha; Theyyunni, T K [Process Engineering and Systems Development Division, Bhabha Atomic Research Centre, Mumbai (India)]; Ragunathan, T S [Department of Chemical Engineering, Indian Inst. of Tech., Mumbai (India)]

    1994-06-01

    A meaningful approach to the calculation of the performance of solvent extraction contactors in the PUREX process requires a good understanding of the equilibrium distribution of the important constituents, namely uranyl nitrate and nitric acid. The published literature refers to empirical correlations of the distribution data, generally in the form of polynomials. Attempts are made to describe the distribution data in a form which is especially convenient for numerical computation while retaining theoretical significance. Attempts are also made to derive suitable equations which would aid in the estimation of diffusion coefficients in the uranyl nitrate-nitric acid-TBP/diluent system. (author). 2 tabs.
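
    As an illustration of the kind of empirical correlation referred to above (the data points and the polynomial form are synthetic, not the reported fits), a low-order polynomial in the aqueous concentration can represent the distribution coefficient in a form convenient for stagewise contactor calculations.

      import numpy as np

      aq = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.9, 1.2])   # aqueous U [mol/L], synthetic
      D = np.array([8.5, 7.9, 6.6, 4.8, 3.5, 2.2, 1.5])     # distribution coefficient, synthetic

      coeffs = np.polyfit(aq, np.log(D), deg=2)   # low-order polynomial fit of log D
      D_fit = np.exp(np.polyval(coeffs, aq))
      print("max relative error: %.2f%%" % (100 * np.max(np.abs(D_fit / D - 1))))

      def organic_conc(c_aq):
          # Equilibrium organic-phase concentration from the fitted correlation.
          return c_aq * np.exp(np.polyval(coeffs, c_aq))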

  13. Scientific-theoretical research approach to practical theology in ...

    African Journals Online (AJOL)

    All of them work with practical theological hermeneutics. The basic hermeneutic approach of Daniël Louw is widened with an integrated approach by Richard R. Osmer in which practical theology as a hermeneutic discipline also includes the empirical aspect which the action theory approach has contributed to the ...

  14. Merging expert and empirical data for rare event frequency estimation: Pool homogenisation for empirical Bayes models

    International Nuclear Information System (INIS)

    Quigley, John; Hardman, Gavin; Bedford, Tim; Walls, Lesley

    2011-01-01

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the Empirical Bayes estimator. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a Homogeneous Poisson Process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with greater pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed Empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
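
    A simplified numerical reading of the idea is sketched below (the gamma-Poisson shrinkage used here is a generic Empirical Bayes device rather than necessarily the paper's exact estimator, and all counts, exposures, and factors are invented): the elicited homogenisation factors rescale each pooled event so the pool shares a common underlying rate, and the rare event's estimate shrinks toward the homogenised pool mean.

      import numpy as np

      counts = np.array([0, 3, 7, 2, 11])                 # event counts (index 0 = rare event)
      expos = np.array([2.0, 40.0, 65.0, 30.0, 90.0])     # exposure times [yr]
      h = np.array([1.0, 0.8, 1.3, 0.9, 1.1])             # elicited homogenisation factors

      rates = counts / (expos * h)                        # homogenised rate estimates
      mean_rate = rates.mean()                            # pool mean after homogenisation
      var_rate = rates.var(ddof=1)

      # Gamma(a, b) prior matched by moments to the homogenised pool
      b = mean_rate / max(var_rate, 1e-12)
      a = mean_rate * b

      # Posterior mean for the rare event: a credibility-weighted average of
      # its own data and the homogenised pool
      i = 0
      lam_eb = (a + counts[i]) / (b + expos[i] * h[i])
      w = expos[i] * h[i] / (b + expos[i] * h[i])         # weight on the event's own data
      print(f"EB rate = {lam_eb:.4f}/yr, weight on own data = {w:.2f}")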

  15. Theoretical approach to study the light particles induced production routes of 22Na

    International Nuclear Information System (INIS)

    Eslami, M.; Kakavand, T.; Mirzaii, M.

    2015-01-01

    Highlights: • Excitation functions of 22Na via thirty-three different reactions. • Various theoretical frameworks, along with adjustments, are employed in the calculations. • Results are given over the energy range from threshold up to 100 MeV. • Results are compared with each other and with corresponding experimental data. - Abstract: To create a roadmap for the industrial-scale production of sodium-22, various production routes for this radioisotope, involving light charged-particle-induced reactions at bombarding energies from threshold up to a maximum of 100 MeV, have been calculated. The excitation functions are calculated using various nuclear models. Pre-equilibrium calculations have been made in the framework of the hybrid and geometry-dependent hybrid models using the ALICE/ASH code, and in the framework of the exciton model using the TALYS-1.4 code. To calculate the compound-nucleus evaporation process, both the Weisskopf–Ewing and Hauser–Feshbach theories have been employed. The cross sections have also been estimated separately with five different level density models over the whole projectile energy range. The calculations from the different codes are compared with each other and with experimental data, and the results are discussed.

  16. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.
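
    The selection problem can be made concrete with a toy example (the random system below, and the brute-force Monte Carlo scoring that stands in for the paper's theoretical mean-squared-error metric and iterative search, are assumptions): with three health parameters but only two sensors, one asks which two-parameter tuner subset minimizes the error in the full health vector.

      import itertools
      import numpy as np

      rng = np.random.default_rng(1)
      n_h, n_y, steps, trials = 3, 2, 200, 30      # health params, sensors, filter steps, runs
      H = rng.normal(size=(n_y, n_h))              # sensor sensitivities to health parameters
      Rv = 0.05 * np.eye(n_y)                      # measurement noise covariance

      def mse_for_subset(S):
          # Monte-Carlo MSE in the FULL health vector when the filter tunes only
          # the sub-vector S; with static parameters the Kalman filter reduces
          # to recursive least squares on the columns H[:, S].
          S = list(S)
          Hs = H[:, S]
          err = 0.0
          for _ in range(trials):
              h_true = rng.normal(size=n_h)
              P = 10.0 * np.eye(len(S))            # prior covariance of the tuners
              h_hat_S = np.zeros(len(S))
              for _ in range(steps):
                  y = H @ h_true + rng.multivariate_normal(np.zeros(n_y), Rv)
                  K = P @ Hs.T @ np.linalg.inv(Hs @ P @ Hs.T + Rv)   # Kalman gain
                  h_hat_S = h_hat_S + K @ (y - Hs @ h_hat_S)
                  P = (np.eye(len(S)) - K @ Hs) @ P
              h_hat = np.zeros(n_h)
              h_hat[S] = h_hat_S                   # untuned parameters stay at nominal zero
              err += np.sum((h_hat - h_true) ** 2)
          return err / trials

      scores = {S: mse_for_subset(S) for S in itertools.combinations(range(n_h), n_y)}
      best = min(scores, key=scores.get)
      print("best tuner subset:", best, "mean-squared error: %.3f" % scores[best])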

  17. Estimation of the Maximum Theoretical Productivity of Fed-Batch Bioreactors

    Energy Technology Data Exchange (ETDEWEB)

    Bomble, Yannick J [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; St. John, Peter C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Crowley, Michael F [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]

    2017-10-18

    A key step towards the development of an integrated biorefinery is the screening of economically viable processes, which depends sharply on the yields and productivities that can be achieved by an engineered microorganism. In this study, we extend an earlier method, which used dynamic optimization to find the maximum theoretical productivity of batch cultures, to explicitly include fed-batch bioreactors. In addition to optimizing the intracellular distribution of metabolites between cell growth and product formation, we calculate the optimal control trajectory of feed rate versus time. We further analyze how sensitive the productivity is to substrate uptake and growth parameters.
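
    The fed-batch formulation can be sketched with a direct approach (the Monod kinetics, yields, bounds, and the explicit-Euler/L-BFGS-B discretisation below are placeholder choices, not the authors' model or solver): the feed-rate trajectory is discretised into piecewise-constant intervals and end-of-batch productivity is maximised over it.

      import numpy as np
      from scipy.optimize import minimize

      T, N, sub = 40.0, 20, 10            # batch time [h], control intervals, Euler substeps
      dt = T / (N * sub)
      mu_max, Ks, qp = 0.3, 0.5, 0.1      # Monod growth and specific production constants
      Yxs, Yps, Sf = 0.5, 0.8, 400.0      # yields [g/g] and feed concentration [g/L]

      def neg_productivity(F):
          # Integrate V, X, S, P under the piecewise-constant feed profile F and
          # return the negative end-of-batch productivity (product mass per hour).
          V, X, S, P = 1.0, 0.1, 20.0, 0.0
          for Fi in F:
              for _ in range(sub):
                  mu = mu_max * S / (Ks + S)          # substrate-limited growth
                  qs = qp * S / (Ks + S)              # production also needs substrate
                  D = Fi / V                          # dilution rate from feeding
                  dX = (mu - D) * X
                  dS = -mu * X / Yxs - qs * X / Yps + D * (Sf - S)
                  dP = qs * X - D * P
                  V += dt * Fi
                  X += dt * dX
                  S = max(S + dt * dS, 0.0)
                  P += dt * dP
          return -(P * V) / T

      res = minimize(neg_productivity, x0=np.full(N, 0.02), method="L-BFGS-B",
                     bounds=[(0.0, 0.2)] * N)
      print("optimal feed profile [L/h]:", np.round(res.x, 3))
      print("maximum productivity: %.2f g product/h" % -res.fun)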

  18. Theoretical Fundamentals of Human Factor

    OpenAIRE

    Nicoleta Maria Ienciu

    2012-01-01

    The purpose of this paper is to identify the theoretical approaches presented by the literature on the human factor. In order to achieve this objective, we performed qualitative research by analyzing the content of several papers published in internationally renowned journals, classified according to the journal ranking list provided by the Association of Business Schools (UK), in relation to the theories approached within them. Our findings suggest that from all ident...

  19. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Science.gov (United States)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date, despite many attempts, there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting the concentration of glucose accurately from complex biological media.
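
    The adaptive modelling scheme itself cannot be reproduced from the abstract; as background only, the sketch below shows the classical multi-wavelength Beer-Lambert starting point that such light-tissue interaction models generalise (every number in it is an invented placeholder): absorbance at several wavelengths is a linear mixture of chromophore spectra, and least squares recovers the concentrations.

      import numpy as np

      rng = np.random.default_rng(3)
      # Assumed extinction coefficients of three chromophores (water, haemoglobin,
      # glucose) at six near-infrared wavelengths -- invented placeholder numbers.
      E = np.array([[0.10, 1.20, 0.010],
                    [0.15, 0.90, 0.014],
                    [0.30, 0.60, 0.020],
                    [0.55, 0.40, 0.026],
                    [0.80, 0.30, 0.031],
                    [1.10, 0.25, 0.035]])
      path = 1.0                                        # optical path length [cm]
      c_true = np.array([50.0, 2.0, 5.0])               # concentrations [mM], illustrative

      A = E @ c_true * path + rng.normal(0.0, 1e-3, 6)  # noisy absorbance measurements
      c_hat, *_ = np.linalg.lstsq(E * path, A, rcond=None)
      print("estimated glucose: %.3f mM (true %.3f mM)" % (c_hat[2], c_true[2]))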

  20. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Energy Technology Data Exchange (ETDEWEB)

    Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)]

    2007-10-15

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date, despite many attempts, there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting the concentration of glucose accurately from complex biological media.